Factory:RISC-V:Base / risc-v-kernel
Note: the diffs of some files below are truncated because they were too big.
Changes of Revision 5
View file
_service:tar_scm:kernel.spec
Changed
@@ -9,7 +9,7 @@
 Name: kernel
 Version: 5.10.0
-Release: 9
+Release: 11
 Summary: Linux Kernel for RISC-V
 URL: http://www.kernel.org/
 License: GPLv2
@@ -19,6 +19,7 @@
 Source2: cpupower.service
 Source3: cpupower.config
 Patch1: 0001-fix-task-size-min-macro.patch
+Patch2: 0002-backport-Make-mmap-with-PROT_WRITE-imply-PROT_READ.patch
 BuildRequires: module-init-tools, patch >= 2.5.4, bash >= 2.03, tar
 BuildRequires: bzip2, xz, findutils, gzip, m4, perl, make >= 3.78, diffutils, gawk
@@ -142,6 +143,7 @@
 cd linux-%{KernelVer}
 %patch1 -p1
+%patch2 -p1
 touch .scmversion
@@ -461,6 +463,12 @@
 %endif
 
 %changelog
+* Thu Jan 19 2023 mingzheng <xingmingzheng@iscas.ac.cn> - 5.10.0-11-riscv64
+- submit the backport patch to make mmap() with PROT_WRITE implied PROT_READ
+
+* Thu Dec 22 2022 laokz <zhangkai@iscas.ac.cn> - 5.10.0-10-riscv64
+- add some defconfig for FAT, clamav, NFS server, FUSE, firewalld
+
 * Wed Dec 07 2022 laokz <zhangkai@iscas.ac.cn> - 5.10.0-9-riscv64
 - submit @geasscore TASK_SIZE_MIN macro patch to fix EFI driver failure
View file
_service:tar_scm:0002-backport-Make-mmap-with-PROT_WRITE-imply-PROT_READ.patch
Added
@@ -0,0 +1,66 @@
+From 75a0cfefa0f051fba68682d2b18c26be6cd1a06e Mon Sep 17 00:00:00 2001
+From: Mingzheng Xing <xingmingzheng@iscas.ac.cn>
+Date: Thu, 19 Jan 2023 12:31:34 +0800
+Subject: [PATCH] backport: Make mmap() with PROT_WRITE imply PROT_READ
+
+commit 8aeb7b17f04ef40f620c763502e2b644c5c73efd
+Merge: c45fc916c2b2 9e2e6042a7ec
+Author: Palmer Dabbelt <palmer@rivosinc.com>
+Date:   Thu Oct 13 12:49:12 2022 -0700
+
+    RISC-V: Make mmap() with PROT_WRITE imply PROT_READ
+
+    Commit 2139619bcad7 ("riscv: mmap with PROT_WRITE but no PROT_READ is
+    invalid") made mmap() reject mappings with only PROT_WRITE set in an
+    attempt to fix an observed inconsistency in behavior when attempting
+    to read from a PROT_WRITE-only mapping. The root cause of this behavior
+    was actually that while RISC-V's protection_map maps VM_WRITE to
+    readable PTE permissions (since write-only PTEs are considered reserved
+    by the privileged spec), the page fault handler considered loads from
+    VM_WRITE-only VMAs illegal accesses. Fix the underlying cause by
+    handling faults in VM_WRITE-only VMAs (patch 1) and then re-enable
+    use of mmap(PROT_WRITE) (patch 2), making RISC-V's behavior consistent
+    with all other architectures that don't support write-only PTEs.
+
+    * remotes/palmer/riscv-wonly:
+      riscv: Allow PROT_WRITE-only mmap()
+      riscv: Make VM_WRITE imply VM_READ
+
+    Link: https://lore.kernel.org/r/20220915193702.2201018-1-abrestic@rivosinc.com/
+    Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
+---
+ arch/riscv/kernel/sys_riscv.c | 3 ---
+ arch/riscv/mm/fault.c         | 3 ++-
+ 2 files changed, 2 insertions(+), 4 deletions(-)
+
+diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
+index 8a7880b9c433..bb402685057a 100644
+--- a/arch/riscv/kernel/sys_riscv.c
++++ b/arch/riscv/kernel/sys_riscv.c
+@@ -18,9 +18,6 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
+ 	if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
+ 		return -EINVAL;
+
+-	if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))
+-		return -EINVAL;
+-
+ 	return ksys_mmap_pgoff(addr, len, prot, flags, fd,
+ 			       offset >> (PAGE_SHIFT - page_shift_offset));
+ }
+diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
+index 3c8b9e433c67..8f84bbe0ac33 100644
+--- a/arch/riscv/mm/fault.c
++++ b/arch/riscv/mm/fault.c
+@@ -167,7 +167,8 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
+ 		}
+ 		break;
+ 	case EXC_LOAD_PAGE_FAULT:
+-		if (!(vma->vm_flags & VM_READ)) {
++		/* Write implies read */
++		if (!(vma->vm_flags & (VM_READ | VM_WRITE))) {
+ 			return true;
+ 		}
+ 		break;
+--
+2.34.1
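For context, the user-visible effect of this backport can be checked with a small userspace probe. This is a hedged sketch, not part of the revision: it maps a page with PROT_WRITE only and then reads it back, which works once the fault handler treats VM_WRITE as implying VM_READ.

    /* Hedged sketch, not part of this revision. Before the series, the
     * mmap() call itself failed with EINVAL on RISC-V; with it, the
     * mapping is created and the load succeeds, since write-only PTEs
     * are readable and loads from VM_WRITE-only VMAs are now allowed. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        char *p = mmap(NULL, 4096, PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap(PROT_WRITE)");   /* EINVAL on unpatched kernels */
            return 1;
        }
        p[0] = 'x';                       /* store is always legal here */
        printf("read back: %c\n", p[0]);  /* load works once write implies read */
        return 0;
    }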
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/Documentation/ABI/testing/sysfs-bus-vdpa
Added
@@ -0,0 +1,37 @@
+What:		/sys/bus/vdpa/driver_autoprobe
+Date:		March 2020
+Contact:	virtualization@lists.linux-foundation.org
+Description:
+		This file determines whether new devices are immediately bound
+		to a driver after the creation. It initially contains 1, which
+		means the kernel automatically binds devices to a compatible
+		driver immediately after they are created.
+
+		Writing "0" to this file disable this feature, any other string
+		enable it.
+
+What:		/sys/bus/vdpa/driver_probe
+Date:		March 2020
+Contact:	virtualization@lists.linux-foundation.org
+Description:
+		Writing a device name to this file will cause the kernel binds
+		devices to a compatible driver.
+
+		This can be useful when /sys/bus/vdpa/driver_autoprobe is
+		disabled.
+
+What:		/sys/bus/vdpa/drivers/.../bind
+Date:		March 2020
+Contact:	virtualization@lists.linux-foundation.org
+Description:
+		Writing a device name to this file will cause the driver to
+		attempt to bind to the device. This is useful for overriding
+		default bindings.
+
+What:		/sys/bus/vdpa/drivers/.../unbind
+Date:		March 2020
+Contact:	virtualization@lists.linux-foundation.org
+Description:
+		Writing a device name to this file will cause the driver to
+		attempt to unbind from the device. This may be useful when
+		overriding default bindings.
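As a hedged illustration of the ABI documented above (not part of the revision), the following sketch disables autoprobe and then binds one device by hand through driver_probe; the device name "vdpa0" is a hypothetical example.

    /* Hedged sketch, not part of this revision: exercises the
     * /sys/bus/vdpa attributes described in the ABI file above. */
    #include <stdio.h>

    static int write_attr(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");
        if (!f)
            return -1;
        fputs(val, f);
        return fclose(f);
    }

    int main(void)
    {
        /* "0" disables autoprobe; any other string enables it. */
        if (write_attr("/sys/bus/vdpa/driver_autoprobe", "0"))
            perror("driver_autoprobe");
        /* With autoprobe off, bind a device explicitly ("vdpa0" is hypothetical). */
        if (write_attr("/sys/bus/vdpa/driver_probe", "vdpa0"))
            perror("driver_probe");
        return 0;
    }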
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/Documentation/ABI/testing/sysfs-platform-intel-ifs
Added
@@ -0,0 +1,39 @@
+What:		/sys/devices/virtual/misc/intel_ifs_<N>/run_test
+Date:		April 21 2022
+KernelVersion:	5.19
+Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
+Description:	Write <cpu#> to trigger IFS test for one online core.
+		Note that the test is per core. The cpu# can be
+		for any thread on the core. Running on one thread
+		completes the test for the core containing that thread.
+		Example: to test the core containing cpu5: echo 5 >
+		/sys/devices/platform/intel_ifs.<N>/run_test
+
+What:		/sys/devices/virtual/misc/intel_ifs_<N>/status
+Date:		April 21 2022
+KernelVersion:	5.19
+Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
+Description:	The status of the last test. It can be one of "pass", "fail"
+		or "untested".
+
+What:		/sys/devices/virtual/misc/intel_ifs_<N>/details
+Date:		April 21 2022
+KernelVersion:	5.19
+Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
+Description:	Additional information regarding the last test. The details file reports
+		the hex value of the SCAN_STATUS MSR. Note that the error_code field
+		may contain driver defined software code not defined in the Intel SDM.
+
+What:		/sys/devices/virtual/misc/intel_ifs_<N>/image_version
+Date:		April 21 2022
+KernelVersion:	5.19
+Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
+Description:	Version (hexadecimal) of loaded IFS binary image. If no scan image
+		is loaded reports "none".
+
+What:		/sys/devices/virtual/misc/intel_ifs_<N>/reload
+Date:		April 21 2022
+KernelVersion:	5.19
+Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
+Description:	Write "1" (or "y" or "Y") to reload the IFS image from
+		/lib/firmware/intel/ifs/ff-mm-ss.scan.
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/Documentation/admin-guide/sysctl/vm.rst
Changed
@@ -65,6 +65,7 @@
 - page-cluster
 - panic_on_oom
 - percpu_pagelist_fraction
+- percpu_max_batchsize
 - stat_interval
 - stat_refresh
 - numa_stat
@@ -856,6 +857,15 @@
 sysctl, it will revert to this default behavior.
 
 
+percpu_max_batchsize
+========================
+
+This is used to setup the max batch and high size of percpu in each zone.
+The default value is set to (256 * 1024) / PAGE_SIZE.
+The max value is limited to (512 * 1024) / PAGE_SIZE.
+The min value is limited to (64 * 1024) / PAGE_SIZE.
+
+
 stat_interval
 =============
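Since vm sysctls are exposed under /proc/sys/vm/, the new knob should be readable there by its name. A hedged minimal sketch (not part of the revision) that reads it back:

    /* Hedged sketch, not part of this revision: reads the new
     * vm.percpu_max_batchsize knob via procfs. Per the documentation
     * above, the value is in pages, e.g. (256 * 1024) / PAGE_SIZE. */
    #include <stdio.h>

    int main(void)
    {
        long val;
        FILE *f = fopen("/proc/sys/vm/percpu_max_batchsize", "r");
        if (!f || fscanf(f, "%ld", &val) != 1) {
            perror("percpu_max_batchsize");
            return 1;
        }
        fclose(f);
        printf("percpu max batchsize: %ld pages\n", val);
        return 0;
    }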
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/Documentation/x86/ifs.rst
Added
@@ -0,0 +1,2 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. kernel-doc:: drivers/platform/x86/intel/ifs/ifs.h
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/Documentation/x86/index.rst
Changed
@@ -34,6 +34,7 @@
    usb-legacy-support
    i386/index
    x86_64/index
+   ifs
    sva
    sgx
    elf_auxvec
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/MAINTAINERS
Changed
@@ -8966,6 +8966,14 @@
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux.git
 F:	drivers/idle/intel_idle.c
 
+INTEL IN FIELD SCAN (IFS) DEVICE
+M:	Jithu Joseph <jithu.joseph@intel.com>
+R:	Ashok Raj <ashok.raj@intel.com>
+R:	Tony Luck <tony.luck@intel.com>
+S:	Maintained
+F:	drivers/platform/x86/intel/ifs
+F:	include/trace/events/intel_ifs.h
+
 INTEL INTEGRATED SENSOR HUB DRIVER
 M:	Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
 M:	Jiri Kosina <jikos@kernel.org>
@@ -18674,6 +18682,7 @@
 M:	Jason Wang <jasowang@redhat.com>
 L:	virtualization@lists.linux-foundation.org
 S:	Maintained
+F:	Documentation/ABI/testing/sysfs-bus-vdpa
 F:	Documentation/devicetree/bindings/virtio/
 F:	drivers/block/virtio_blk.c
 F:	drivers/crypto/virtio/
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/configs/hi3516dv300_smp_defconfig
Added
@@ -0,0 +1,2181 @@
+#
+# Automatically generated file; DO NOT EDIT.
+# Linux/arm 4.19.90 Kernel Configuration
+#
+
+#
+# Compiler: arm-himix410-linux-gcc (HC&C V1R3C00SPC200B042_20210105) 7.3.0
+#
+CONFIG_CC_IS_GCC=y
+CONFIG_GCC_VERSION=70300
+CONFIG_CLANG_VERSION=0
+CONFIG_CC_HAS_ASM_GOTO=y
+CONFIG_IRQ_WORK=y
+CONFIG_BUILDTIME_EXTABLE_SORT=y
+
+#
+# General setup
+#
+CONFIG_INIT_ENV_ARG_LIMIT=32
+# CONFIG_COMPILE_TEST is not set
+CONFIG_LOCALVERSION=""
+# CONFIG_LOCALVERSION_AUTO is not set
+CONFIG_BUILD_SALT=""
+CONFIG_HAVE_KERNEL_GZIP=y
+CONFIG_HAVE_KERNEL_LZMA=y
+CONFIG_HAVE_KERNEL_XZ=y
+CONFIG_HAVE_KERNEL_LZO=y
+CONFIG_HAVE_KERNEL_LZ4=y
+CONFIG_KERNEL_GZIP=y
+# CONFIG_KERNEL_LZMA is not set
+# CONFIG_KERNEL_XZ is not set
+# CONFIG_KERNEL_LZO is not set
+# CONFIG_KERNEL_LZ4 is not set
+CONFIG_DEFAULT_HOSTNAME="(none)"
+# CONFIG_SWAP is not set
+CONFIG_SYSVIPC=y
+CONFIG_SYSVIPC_SYSCTL=y
+CONFIG_CROSS_MEMORY_ATTACH=y
+CONFIG_USELIB=y
+
+#
+# IRQ subsystem
+#
+CONFIG_GENERIC_IRQ_PROBE=y
+CONFIG_GENERIC_IRQ_SHOW=y
+CONFIG_GENERIC_IRQ_SHOW_LEVEL=y
+CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
+CONFIG_GENERIC_IRQ_MIGRATION=y
+CONFIG_HARDIRQS_SW_RESEND=y
+CONFIG_IRQ_DOMAIN=y
+CONFIG_IRQ_DOMAIN_HIERARCHY=y
+CONFIG_HANDLE_DOMAIN_IRQ=y
+CONFIG_IRQ_FORCED_THREADING=y
+CONFIG_SPARSE_IRQ=y
+# CONFIG_GENERIC_IRQ_DEBUGFS is not set
+CONFIG_GENERIC_IRQ_MULTI_HANDLER=y
+CONFIG_ARCH_CLOCKSOURCE_DATA=y
+CONFIG_GENERIC_TIME_VSYSCALL=y
+CONFIG_GENERIC_CLOCKEVENTS=y
+CONFIG_ARCH_HAS_TICK_BROADCAST=y
+CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
+
+#
+# Timers subsystem
+#
+CONFIG_TICK_ONESHOT=y
+CONFIG_HZ_PERIODIC=y
+# CONFIG_NO_HZ_IDLE is not set
+# CONFIG_NO_HZ_FULL is not set
+# CONFIG_NO_HZ is not set
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_PREEMPT_NONE=y
+# CONFIG_PREEMPT_VOLUNTARY is not set
+# CONFIG_PREEMPT is not set
+
+#
+# CPU/Task time and stats accounting
+#
+CONFIG_TICK_CPU_ACCOUNTING=y
+# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
+# CONFIG_IRQ_TIME_ACCOUNTING is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_CPU_ISOLATION=y
+
+#
+# RCU Subsystem
+#
+CONFIG_TREE_RCU=y
+# CONFIG_RCU_EXPERT is not set
+CONFIG_SRCU=y
+CONFIG_TREE_SRCU=y
+CONFIG_RCU_STALL_COMMON=y
+CONFIG_RCU_NEED_SEGCBLIST=y
+# CONFIG_IKCONFIG is not set
+CONFIG_LOG_BUF_SHIFT=17
+CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
+CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13
+CONFIG_GENERIC_SCHED_CLOCK=y
+CONFIG_CGROUPS=y
+# CONFIG_MEMCG is not set
+# CONFIG_BLK_CGROUP is not set
+CONFIG_CGROUP_SCHED=y
+CONFIG_FAIR_GROUP_SCHED=y
+# CONFIG_CFS_BANDWIDTH is not set
+# CONFIG_RT_GROUP_SCHED is not set
+# CONFIG_CGROUP_PIDS is not set
+# CONFIG_CGROUP_RDMA is not set
+# CONFIG_CGROUP_FREEZER is not set
+# CONFIG_CPUSETS is not set
+# CONFIG_CGROUP_DEVICE is not set
+# CONFIG_CGROUP_CPUACCT is not set
+# CONFIG_CGROUP_PERF is not set
+# CONFIG_CGROUP_DEBUG is not set
+CONFIG_NAMESPACES=y
+CONFIG_UTS_NS=y
+CONFIG_IPC_NS=y
+# CONFIG_USER_NS is not set
+CONFIG_PID_NS=y
+# CONFIG_CHECKPOINT_RESTORE is not set
+# CONFIG_SCHED_AUTOGROUP is not set
+# CONFIG_SYSFS_DEPRECATED is not set
+# CONFIG_RELAY is not set
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_INITRAMFS_SOURCE=""
+# CONFIG_RD_GZIP is not set
+# CONFIG_RD_BZIP2 is not set
+# CONFIG_RD_LZMA is not set
+# CONFIG_RD_XZ is not set
+# CONFIG_RD_LZO is not set
+# CONFIG_RD_LZ4 is not set
+CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+CONFIG_SYSCTL=y
+CONFIG_ANON_INODES=y
+CONFIG_HAVE_UID16=y
+# CONFIG_EXPERT is not set
+CONFIG_UID16=y
+CONFIG_MULTIUSER=y
+CONFIG_SYSFS_SYSCALL=y
+CONFIG_FHANDLE=y
+CONFIG_POSIX_TIMERS=y
+CONFIG_PRINTK=y
+CONFIG_PRINTK_NMI=y
+CONFIG_BUG=y
+CONFIG_ELF_CORE=y
+CONFIG_BASE_FULL=y
+CONFIG_FUTEX=y
+CONFIG_FUTEX_PI=y
+CONFIG_EPOLL=y
+CONFIG_SIGNALFD=y
+CONFIG_TIMERFD=y
+CONFIG_EVENTFD=y
+CONFIG_SHMEM=y
+CONFIG_AIO=y
+CONFIG_ADVISE_SYSCALLS=y
+CONFIG_MEMBARRIER=y
+CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_ALL is not set
+CONFIG_KALLSYMS_BASE_RELATIVE=y
+# CONFIG_BPF_SYSCALL is not set
+# CONFIG_USERFAULTFD is not set
+CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
+CONFIG_RSEQ=y
+# CONFIG_EMBEDDED is not set
+CONFIG_HAVE_PERF_EVENTS=y
+CONFIG_PERF_USE_VMALLOC=y
+
+#
+# Kernel Performance Events And Counters
+#
+CONFIG_PERF_EVENTS=y
+# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
+CONFIG_VM_EVENT_COUNTERS=y
+CONFIG_SLUB_DEBUG=y
+CONFIG_COMPAT_BRK=y
+# CONFIG_SLAB is not set
+CONFIG_SLUB=y
+CONFIG_SLAB_MERGE_DEFAULT=y
+# CONFIG_SLAB_FREELIST_RANDOM is not set
+# CONFIG_SLAB_FREELIST_HARDENED is not set
+CONFIG_SLUB_CPU_PARTIAL=y
+# CONFIG_PROFILING is not set
+CONFIG_ARM=y
+CONFIG_ARM_HAS_SG_CHAIN=y
+CONFIG_MIGHT_HAVE_PCI=y
+CONFIG_SYS_SUPPORTS_APM_EMULATION=y
+CONFIG_HAVE_PROC_CPU=y
+CONFIG_STACKTRACE_SUPPORT=y
+CONFIG_LOCKDEP_SUPPORT=y
+CONFIG_TRACE_IRQFLAGS_SUPPORT=y
+CONFIG_RWSEM_XCHGADD_ALGORITHM=y
+CONFIG_FIX_EARLYCON_MEM=y
+CONFIG_GENERIC_HWEIGHT=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
+CONFIG_ARCH_SUPPORTS_UPROBES=y
+CONFIG_ARM_PATCH_PHYS_VIRT=y
+CONFIG_GENERIC_BUG=y
+CONFIG_PGTABLE_LEVELS=2
+
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt
Added
+(directory)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/Kconfig
Added
@@ -0,0 +1,258 @@
+config ARCH_HISI_BVT
+	bool "Hisilicon BVT SoC Support"
+	select ARM_AMBA
+	select ARM_GIC if ARCH_MULTI_V7
+	select ARM_VIC if ARCH_MULTI_V5
+	select ARM_TIMER_SP804
+	select POWER_RESET
+	select POWER_SUPPLY
+
+if ARCH_HISI_BVT
+
+menu "Hisilicon BVT platform type"
+
+config ARCH_HI3521DV200
+	bool "Hisilicon Hi3521DV200 Cortex-A7 family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select PINCTRL
+	select POWER_RESET_HISI
+	help
+	  Support for Hisilicon Hi3521DV200 Soc family.
+
+config ARCH_HI3520DV500
+	bool "Hisilicon Hi3520DV500 Cortex-A7 family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select PINCTRL
+	select POWER_RESET_HISI
+	help
+	  Support for Hisilicon Hi3520DV500 Soc family.
+
+config ARCH_HI3516A
+	bool "Hisilicon Hi3516A Cortex-A7(Single) family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select ARM_GIC
+	select ARCH_HAS_RESET_CONTROLLER
+	select RESET_CONTROLLER
+	help
+	  Support for Hisilicon Hi3516A Soc family.
+
+config ARCH_HI3516CV500
+	bool "Hisilicon Hi3516CV500 Cortex-A7 family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select PINCTRL
+	select POWER_RESET_HISI
+	help
+	  Support for Hisilicon Hi3516CV500 Soc family.
+
+config ARCH_HI3516DV300
+	bool "Hisilicon Hi3516DV300 Cortex-A7 family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select PINCTRL
+	select POWER_RESET_HISI
+	help
+	  Support for Hisilicon Hi3516DV300 Soc family.
+
+config ARCH_HI3516EV200
+	bool "Hisilicon Hi3516EV200 Cortex-A7 family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select PINCTRL
+	select POWER_RESET_HISI
+	help
+	  Support for Hisilicon Hi3516EV200 Soc family.
+
+config ARCH_HI3516EV300
+	bool "Hisilicon Hi3516EV300 Cortex-A7 family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select PINCTRL
+	select POWER_RESET_HISI
+	help
+	  Support for Hisilicon Hi3516EV300 Soc family.
+
+config ARCH_HI3518EV300
+	bool "Hisilicon Hi3518EV300 Cortex-A7 family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select PINCTRL
+	select POWER_RESET_HISI
+	help
+	  Support for Hisilicon Hi3518EV300 Soc family.
+
+config ARCH_HI3516DV200
+	bool "Hisilicon Hi3516DV200 Cortex-A7 family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select PINCTRL
+	select POWER_RESET_HISI
+	help
+	  Support for Hisilicon Hi3516DV200 Soc family.
+config ARCH_HI3556V200
+	bool "Hisilicon Hi3556V200 Cortex-A7 family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select PINCTRL
+	select POWER_RESET_HISI
+	help
+	  Support for Hisilicon Hi3556V200 Soc family.
+
+config ARCH_HI3559V200
+	bool "Hisilicon Hi3559V200 Cortex-A7 family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select PINCTRL
+	select POWER_RESET_HISI
+	help
+	  Support for Hisilicon Hi3559V200 Soc family.
+
+config ARCH_HI3518EV20X
+	bool "Hisilicon Hi3518ev20x ARM926T(Single) family"
+	depends on ARCH_MULTI_V5
+	select PINCTRL
+	select PINCTRL_SINGLE
+	help
+	  Support for Hisilicon Hi3518ev20x Soc family.
+
+config ARCH_HI3536DV100
+	bool "Hisilicon Hi3536DV100 Cortex-A7(Single) family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select PINCTRL
+	help
+	  Support for Hisilicon Hi3536DV100 Soc family.
+
+config ARCH_HI3521A
+	bool "Hisilicon Hi3521A A7(Single) family"
+	depends on ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select ARM_GIC
+	select PINCTRL
+	select PINCTRL_SINGLE
+	help
+	  Support for Hisilicon Hi3521a Soc family.
+
+config ARCH_HI3531A
+	bool "Hisilicon Hi3531A A9 family" if ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select ARM_GIC
+	select CACHE_L2X0
+	select PINCTRL
+	select PINCTRL_SINGLE
+	select HAVE_ARM_SCU if SMP
+	select NEED_MACH_IO_H if PCI
+	help
+	  Support for Hisilicon Hi3531a Soc family.
+
+config ARCH_HI3556AV100
+	bool "Hisilicon Hi3556AV100 Cortex-a53 family" if ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select ARM_CCI
+	select ARCH_HAS_RESET_CONTROLLER
+	select RESET_CONTROLLER
+	select PMC if SMP
+	help
+	  Support for Hisilicon Hi3556AV100 Soc family
+if ARCH_HI3556AV100
+
+config PMC
+	bool
+	depends on ARCH_HI3556AV100
+	help
+	  support power control for Hi3556AV100 Cortex-a53
+
+endif
+
+config ARCH_HI3519AV100
+	bool "Hisilicon Hi3519AV100 Cortex-a53 family" if ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select ARM_CCI
+	select ARM_GIC
+	select ARCH_HAS_RESET_CONTROLLER
+	select RESET_CONTROLLER
+	select NEED_MACH_IO_H if PCI
+	select PMC if SMP
+	help
+	  Support for Hisilicon Hi3519AV100 Soc family
+if ARCH_HI3519AV100
+
+config PMC
+	bool
+	depends on ARCH_HI3519AV100
+	help
+	  support power control for Hi3519AV100 Cortex-a53
+
+endif
+
+config ARCH_HI3568V100
+	bool "Hisilicon Hi3568V100 Cortex-a53 family" if ARCH_MULTI_V7
+	select HAVE_ARM_ARCH_TIMER
+	select ARM_CCI
+	select ARM_GIC
+	select ARCH_HAS_RESET_CONTROLLER
+	select RESET_CONTROLLER
+	select NEED_MACH_IO_H if PCI
+	select PMC if SMP
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/Makefile
Added
@@ -0,0 +1,27 @@
+#
+# Makefile for Hisilicon processors family
+#
+
+obj-$(CONFIG_ARCH_HI3521DV200) += mach-hi3521dv200.o
+obj-$(CONFIG_ARCH_HI3520DV500) += mach-hi3521dv200.o
+obj-$(CONFIG_ARCH_HI3516A) += mach-hi3516a.o
+obj-$(CONFIG_ARCH_HI3516CV500) += mach-hi3516cv500.o
+obj-$(CONFIG_ARCH_HI3516EV200) += mach-hi3516ev200.o
+obj-$(CONFIG_ARCH_HI3516EV300) += mach-hi3516ev300.o
+obj-$(CONFIG_ARCH_HI3518EV300) += mach-hi3518ev300.o
+obj-$(CONFIG_ARCH_HI3516DV200) += mach-hi3516dv200.o
+obj-$(CONFIG_ARCH_HI3516DV300) += mach-hi3516dv300.o
+obj-$(CONFIG_ARCH_HI3556V200) += mach-hi3556v200.o
+obj-$(CONFIG_ARCH_HI3559V200) += mach-hi3559v200.o
+obj-$(CONFIG_ARCH_HI3562V100) += mach-hi3559v200.o
+obj-$(CONFIG_ARCH_HI3566V100) += mach-hi3559v200.o
+obj-$(CONFIG_ARCH_HI3518EV20X) += mach-hi3518ev20x.o
+obj-$(CONFIG_ARCH_HI3536DV100) += mach-hi3536dv100.o
+obj-$(CONFIG_ARCH_HI3521A) += mach-hi3521a.o
+obj-$(CONFIG_ARCH_HI3531A) += mach-hi3531a.o
+obj-$(CONFIG_ARCH_HI3556AV100) += mach-hi3556av100.o
+obj-$(CONFIG_ARCH_HI3519AV100) += mach-hi3519av100.o
+obj-$(CONFIG_ARCH_HI3568V100) += mach-hi3519av100.o
+
+
+obj-$(CONFIG_SMP) += platsmp.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/Makefile.boot
Added
@@ -0,0 +1,7 @@
+ifeq ($(CONFIG_ARCH_HISI_BVT_AMP), y)
+zreladdr-$(CONFIG_ARCH_HISI_BVT) := $(CONFIG_AMP_ZRELADDR)
+else
+zreladdr-$(CONFIG_ARCH_HISI_BVT) := $(CONFIG_HI_ZRELADDR)
+endif
+params_phys-$(CONFIG_ARCH_HISI_BVT) := $(CONFIG_HI_PARAMS_PHYS)
+initrd_phys-$(CONFIG_ARCH_HISI_BVT) := $(CONFIG_HI_INITRD_PHYS)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/include
Added
+(directory)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/include/mach
Added
+(directory)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/include/mach/hi3516dv300_io.h
Added
@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef __HI3516DV300_IO_H
+#define __HI3516DV300_IO_H
+
+/*
+ * phy: 0x20000000 ~ 0x20700000
+ * vir: 0xFE100000 ~ 0xFE800000
+ */
+#define HI3516DV300_IOCH2_PHYS 0x20000000
+#define IO_OFFSET_HIGH 0xDE100000
+#define HI3516DV300_IOCH2_VIRT (HI3516DV300_IOCH2_PHYS + IO_OFFSET_HIGH)
+#define HI3516DV300_IOCH2_SIZE 0x700000
+
+/* phy: 0x10000000 ~ 0x100E0000
+ * vir: 0xFE000000 ~ 0xFE0E0000
+ */
+#define HI3516DV300_IOCH1_PHYS 0x10000000
+#define IO_OFFSET_LOW 0xEE000000
+#define HI3516DV300_IOCH1_VIRT (HI3516DV300_IOCH1_PHYS + IO_OFFSET_LOW)
+#define HI3516DV300_IOCH1_SIZE 0xE0000
+
+#define IO_ADDRESS(x) ((x) >= HI3516DV300_IOCH2_PHYS ? (x) + IO_OFFSET_HIGH \
+		: (x) + IO_OFFSET_LOW)
+
+#define __io_address(n) ((void __iomem __force *)IO_ADDRESS(n))
+
+#endif
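The header maps two physical I/O windows onto fixed virtual windows by adding a per-window offset; IO_ADDRESS() picks the offset by comparing against the base of the higher window. A hedged host-side sketch (not part of the revision) that checks the arithmetic:

    /* Hedged sketch, not part of this revision: replicates the
     * IO_ADDRESS() selection logic from the header and verifies both
     * window bases land on the documented virtual addresses. */
    #include <assert.h>
    #include <stdint.h>

    #define IOCH2_PHYS     0x20000000u
    #define IO_OFFSET_HIGH 0xDE100000u
    #define IO_OFFSET_LOW  0xEE000000u
    #define IO_ADDRESS(x)  ((x) >= IOCH2_PHYS ? (x) + IO_OFFSET_HIGH \
                                              : (x) + IO_OFFSET_LOW)

    int main(void)
    {
        assert(IO_ADDRESS(0x20000000u) == 0xFE100000u); /* high window base */
        assert(IO_ADDRESS(0x10000000u) == 0xFE000000u); /* low window base */
        return 0;
    }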
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/include/mach/hi3516dv300_platform.h
Added
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef __HI3516DV300_CHIP_REGS_H__
+#define __HI3516DV300_CHIP_REGS_H__
+
+#endif /* End of __HI3516DV300_CHIP_REGS_H__ */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/include/mach/io.h
Added
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef __ASM_ARM_ARCH_IO_H
+#define __ASM_ARM_ARCH_IO_H
+
+#ifdef CONFIG_ARCH_HI3516A
+#include <mach/hi3516a_io.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3518EV20X
+#include <mach/hi3518ev20x_io.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3536DV100
+#include <mach/hi3536dv100_io.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3521A
+#include <mach/hi3521a_io.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3531A
+#include <mach/hi3531a_io.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3516CV500
+#include <mach/hi3516cv500_io.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3516DV300
+#include <mach/hi3516dv300_io.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3556V200
+#include <mach/hi3556v200_io.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3559V200
+#include <mach/hi3559v200_io.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3562V100
+#include <mach/hi3559v200_io.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3566V100
+#include <mach/hi3559v200_io.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3519AV100
+#include <mach/hi3519av100_io.h>
+#endif
+
+#endif
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/include/mach/platform.h
Added
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef __HISI_PLATFORM_H__
+#define __HISI_PLATFORM_H__
+
+#ifdef CONFIG_ARCH_HI3536DV100
+#include <mach/hi3536dv100_platform.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3521A
+#include <mach/hi3521a_platform.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3531A
+#include <mach/hi3531a_platform.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3516DV300
+#include <mach/hi3516dv300_platform.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3516CV500
+#include <mach/hi3516cv500_platform.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3556V200
+#include <mach/hi3556v200_platform.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3559V200
+#include <mach/hi3559v200_platform.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3562V100
+#include <mach/hi3559v200_platform.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3566V100
+#include <mach/hi3559v200_platform.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3556AV100
+#include <mach/hi3556av100_platform.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3519AV100
+#include <mach/hi3519av100_platform.h>
+#endif
+
+#ifdef CONFIG_ARCH_HI3568V100
+#include <mach/hi3519av100_platform.h>
+#endif
+
+#endif /* End of __HISI_PLATFORM_H__ */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/mach-common.h
Added
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef __SMP_COMMON_H
+#define __SMP_COMMON_H
+
+#ifdef CONFIG_SMP
+void hi35xx_set_cpu(unsigned int cpu, bool enable);
+void __init hi35xx_smp_prepare_cpus(unsigned int max_cpus);
+int hi35xx_boot_secondary(unsigned int cpu, struct task_struct *idle);
+#endif /* CONFIG_SMP */
+#endif /* __SMP_COMMON_H */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/mach-hi3516dv300.c
Added
@@ -0,0 +1,69 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2016-2017 HiSilicon Technologies Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+*/
+
+#include <linux/of_address.h>
+#include <asm/smp_scu.h>
+
+#include "mach-common.h"
+
+#ifdef CONFIG_SMP
+
+#define REG_CPU_SRST_CRG 0x78
+#define CPU1_SRST_REQ BIT(2)
+#define DBG1_SRST_REQ BIT(4)
+
+void hi35xx_set_cpu(unsigned int cpu, bool enable)
+{
+	struct device_node *np = NULL;
+	unsigned int regval;
+	void __iomem *crg_base;
+
+	np = of_find_compatible_node(NULL, NULL, "hisilicon,hi3516dv300-clock");
+	if (!np) {
+		pr_err("failed to find hisilicon clock node\n");
+		return;
+	}
+
+	crg_base = of_iomap(np, 0);
+	if (!crg_base) {
+		pr_err("failed to map address\n");
+		return;
+	}
+
+	if (enable) {
+		/* clear the slave cpu reset */
+		regval = readl(crg_base + REG_CPU_SRST_CRG);
+		regval &= ~CPU1_SRST_REQ;
+		writel(regval, (crg_base + REG_CPU_SRST_CRG));
+	} else {
+		regval = readl(crg_base + REG_CPU_SRST_CRG);
+		regval |= (DBG1_SRST_REQ | CPU1_SRST_REQ);
+		writel(regval, (crg_base + REG_CPU_SRST_CRG));
+	}
+	iounmap(crg_base);
+}
+
+static const struct smp_operations hi35xx_smp_ops __initconst = {
+	.smp_prepare_cpus = hi35xx_smp_prepare_cpus,
+	.smp_boot_secondary = hi35xx_boot_secondary,
+};
+
+CPU_METHOD_OF_DECLARE(hi3516dv300_smp, "hisilicon,hi3516dv300",
+		      &hi35xx_smp_ops);
+#endif /* CONFIG_SMP */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/platsmp.c
Added
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2013 Linaro Ltd.
+ * Copyright (c) 2013 Hisilicon Limited.
+ * Based on arch/arm/mach-vexpress/platsmp.c, Copyright (C) 2002 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ */
+
+#include <linux/io.h>
+#include <linux/smp.h>
+#include <asm/smp_scu.h>
+
+#include "mach-common.h"
+
+#define HI35XX_BOOT_ADDRESS 0x00000000
+
+void __init hi35xx_smp_prepare_cpus(unsigned int max_cpus)
+{
+	unsigned long base = 0;
+	void __iomem *scu_base = NULL;
+
+	if (scu_a9_has_base()) {
+		base = scu_a9_get_base();
+		scu_base = ioremap(base, PAGE_SIZE);
+		if (!scu_base) {
+			pr_err("ioremap(scu_base) failed\n");
+			return;
+		}
+
+		scu_enable(scu_base);
+		iounmap(scu_base);
+	}
+}
+
+void hi35xx_set_scu_boot_addr(phys_addr_t start_addr, phys_addr_t jump_addr)
+{
+	void __iomem *virt;
+
+	virt = ioremap(start_addr, PAGE_SIZE);
+	if (!virt) {
+		pr_err("ioremap(start_addr) failed\n");
+		return;
+	}
+
+	writel_relaxed(0xe51ff004, virt); /* ldr pc, [rc, #-4] */
+	writel_relaxed(jump_addr, virt + 4); /* pc jump phy address */
+	iounmap(virt);
+}
+
+int hi35xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
+{
+	phys_addr_t jumpaddr;
+
+	jumpaddr = virt_to_phys(secondary_startup);
+	hi35xx_set_scu_boot_addr(HI35XX_BOOT_ADDRESS, jumpaddr);
+	hi35xx_set_cpu(cpu, true);
+	arch_send_wakeup_ipi_mask(cpumask_of(cpu));
+	return 0;
+}
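The boot trampoline written above is two words: opcode 0xe51ff004 decodes as "ldr pc, [pc, #-4]" (the in-source comment's "rc" appears to be a typo for "pc"), followed by the physical jump target. Because the A32 PC reads as the instruction address plus 8, the load fetches the very next word. A hedged host-side sketch, not part of the revision, modeling that layout:

    /* Hedged sketch, not part of this revision: models the two-word
     * trampoline hi35xx_set_scu_boot_addr() writes at address 0. */
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t mem[2];
        uint32_t jump = 0x80008000;  /* hypothetical secondary_startup phys */

        mem[0] = 0xe51ff004;         /* ldr pc, [pc, #-4] */
        mem[1] = jump;               /* literal the load fetches */

        /* On A32, PC reads as <insn addr> + 8, so [pc, #-4] is word 1. */
        uint32_t pc_read = 0 + 8;
        assert(mem[(pc_read - 4) / 4] == jump);
        return 0;
    }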
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/configs/openeuler_defconfig
Changed
@@ -139,7 +139,7 @@
 CONFIG_CGROUP_V1_WRITEBACK=y
 CONFIG_CGROUP_SCHED=y
 CONFIG_QOS_SCHED=y
-CONFIG_QOS_SCHED_SMT_EXPELLER=y
+# CONFIG_QOS_SCHED_SMT_EXPELLER is not set
 CONFIG_FAIR_GROUP_SCHED=y
 CONFIG_CFS_BANDWIDTH=y
 CONFIG_RT_GROUP_SCHED=y
@@ -2776,6 +2776,11 @@
 CONFIG_NGBE_DEBUG_FS=y
 # CONFIG_NGBE_POLL_LINK_STATUS is not set
 CONFIG_NGBE_SYSFS=y
+CONFIG_TXGBE=m
+CONFIG_TXGBE_HWMON=y
+CONFIG_TXGBE_DEBUG_FS=y
+# CONFIG_TXGBE_POLL_LINK_STATUS is not set
+CONFIG_TXGBE_SYSFS=y
 # CONFIG_JME is not set
 # CONFIG_NET_VENDOR_MARVELL is not set
 CONFIG_NET_VENDOR_MELLANOX=y
@@ -3400,7 +3405,6 @@
 CONFIG_TCG_TIS_ST33ZP24_SPI=m
 # CONFIG_XILLYBUS is not set
 CONFIG_PIN_MEMORY_DEV=m
-CONFIG_HISI_SVM=m
 # CONFIG_RANDOM_TRUST_CPU is not set
 # CONFIG_RANDOM_TRUST_BOOTLOADER is not set
 # end of Character devices
@@ -7294,6 +7298,7 @@
 # CONFIG_CORESIGHT_CTI is not set
 CONFIG_CORESIGHT_TRBE=m
 CONFIG_ULTRASOC_SMB=m
+CONFIG_ACPI_TRBE=y
 # end of arm64 Debugging
 
 #
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/include/asm/acpi.h
Changed
@@ -42,6 +42,9 @@
 #define ACPI_MADT_GICC_SPE (offsetof(struct acpi_madt_generic_interrupt, \
 	spe_interrupt) + sizeof(u16))
 
+#define ACPI_MADT_GICC_TRBE (offsetof(struct acpi_madt_generic_interrupt, \
+	trbe_interrupt) + sizeof(u16))
+
 /* Basic configuration for ACPI */
 #ifdef CONFIG_ACPI
 pgprot_t __acpi_get_mem_attribute(phys_addr_t addr);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/include/asm/resctrl.h
Changed
@@ -545,5 +545,23 @@
 DEFINE_INLINE_CTRL_FEATURE_ENABLE_FUNC(caMax);
 DEFINE_INLINE_CTRL_FEATURE_ENABLE_FUNC(caPrio);
 
+/**
+ * rdtgroup_remove - the helper to remove resource group safely
+ * @rdtgrp: resource group to remove
+ *
+ * On resource group creation via a mkdir, an extra kernfs_node reference is
+ * taken to ensure that the rdtgroup structure remains accessible for the
+ * rdtgroup_kn_unlock() calls where it is removed.
+ *
+ * Drop the extra reference here, then free the rdtgroup structure.
+ *
+ * Return: void
+ */
+static inline void rdtgroup_remove(struct rdtgroup *rdtgrp)
+{
+	kernfs_put(rdtgrp->kn);
+	kfree(rdtgrp);
+}
+
 #endif
 #endif /* _ASM_ARM64_RESCTRL_H */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kernel/Makefile
Changed
@@ -54,6 +54,7 @@
 obj-$(CONFIG_ARMV8_DEPRECATED) += armv8_deprecated.o
 obj-$(CONFIG_ACPI) += acpi.o
 obj-$(CONFIG_ACPI_NUMA) += acpi_numa.o
+obj-$(CONFIG_ACPI_TRBE) += acpi_trbe.o
 obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL) += acpi_parking_protocol.o
 obj-$(CONFIG_PARAVIRT) += paravirt.o paravirt-spinlocks.o
 obj-$(CONFIG_PARAVIRT_SPINLOCKS) += paravirt.o paravirt-spinlocks.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kernel/acpi_trbe.c
Added
@@ -0,0 +1,81 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * ACPI probing code for ARM Trace Buffer Extension.
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#include <linux/acpi.h>
+#include <linux/coresight.h>
+#include <linux/platform_device.h>
+#include <linux/init.h>
+
+static struct resource trbe_resources[] = {
+	{
+		/* irq */
+		.flags = IORESOURCE_IRQ,
+	}
+};
+
+static struct platform_device trbe_dev = {
+	.name = ARMV9_TRBE_PDEV_NAME,
+	.id = -1,
+	.resource = trbe_resources,
+	.num_resources = ARRAY_SIZE(trbe_resources)
+};
+
+static void arm_trbe_acpi_register_device(void)
+{
+	int cpu, hetid, irq, ret;
+	bool first = true;
+	u16 gsi = 0;
+
+	/*
+	 * Sanity check all the GICC tables for the same interrupt number.
+	 * For now, we only support homogeneous machines.
+	 */
+	for_each_possible_cpu(cpu) {
+		struct acpi_madt_generic_interrupt *gicc;
+
+		gicc = acpi_cpu_get_madt_gicc(cpu);
+		if (gicc->header.length < ACPI_MADT_GICC_TRBE)
+			return;
+
+		if (first) {
+			gsi = gicc->trbe_interrupt;
+			if (!gsi)
+				return;
+			hetid = find_acpi_cpu_topology_hetero_id(cpu);
+			first = false;
+		} else if ((gsi != gicc->trbe_interrupt) ||
+			   (hetid != find_acpi_cpu_topology_hetero_id(cpu))) {
+			pr_warn("ACPI: TRBE must be homogeneous\n");
+			return;
+		}
+	}
+
+	irq = acpi_register_gsi(NULL, gsi, ACPI_LEVEL_SENSITIVE,
+				ACPI_ACTIVE_HIGH);
+	if (irq < 0) {
+		pr_warn("ACPI: TRBE Unable to register interrupt: %d\n", gsi);
+		return;
+	}
+
+	trbe_resources[0].start = irq;
+	ret = platform_device_register(&trbe_dev);
+	if (ret < 0) {
+		pr_warn("ACPI: TRBE: Unable to register device\n");
+		acpi_unregister_gsi(gsi);
+	}
+}
+
+static int arm_acpi_trbe_init(void)
+{
+	if (acpi_disabled)
+		return 0;
+
+	arm_trbe_acpi_register_device();
+
+	return 0;
+}
+device_initcall(arm_acpi_trbe_init)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kernel/armv8_deprecated.c
Changed
@@ -208,10 +208,12 @@
 			  loff_t *ppos)
 {
 	int ret = 0;
-	struct insn_emulation *insn = container_of(table->data, struct insn_emulation, current_mode);
-	enum insn_emulation_mode prev_mode = insn->current_mode;
+	struct insn_emulation *insn;
+	enum insn_emulation_mode prev_mode;
 
 	mutex_lock(&insn_emulation_mutex);
+	insn = container_of(table->data, struct insn_emulation, current_mode);
+	prev_mode = insn->current_mode;
 	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
 
 	if (ret || !write || prev_mode == insn->current_mode)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kernel/mpam/mpam_ctrlmon.c
Changed
@@ -804,7 +804,6 @@
 	if (IS_ERR(kn_subdir))
 		return PTR_ERR(kn_subdir);
 
-	kernfs_get(kn_subdir);
 	ret = resctrl_group_kn_set_ugid(kn_subdir);
 	if (ret)
 		return ret;
@@ -830,7 +829,6 @@
 	*kn_info = kernfs_create_dir(parent_kn, "info", parent_kn->mode, NULL);
 	if (IS_ERR(*kn_info))
 		return PTR_ERR(*kn_info);
-	kernfs_get(*kn_info);
 
 	ret = resctrl_group_add_files(*kn_info, RF_TOP_INFO);
 	if (ret)
@@ -865,12 +863,6 @@
 		}
 	}
 
-	/*
-	 * This extra ref will be put in kernfs_remove() and guarantees
-	 * that @rdtgrp->kn is always accessible.
-	 */
-	kernfs_get(*kn_info);
-
 	ret = resctrl_group_kn_set_ugid(*kn_info);
 	if (ret)
 		goto out_destroy;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kernel/mpam/mpam_resctrl.c
Changed
@@ -310,11 +310,15 @@
 		return -EINVAL;
 	}
 
-	if (kstrtoul(buf, rr->ctrl_features[type].base, &data))
+	if (kstrtoul(buf, rr->ctrl_features[type].base, &data)) {
+		rdt_last_cmd_printf("Non-hex character in the mask %s\n", buf);
 		return -EINVAL;
+	}
 
-	if (data >= rr->ctrl_features[type].max_wd)
+	if (data >= rr->ctrl_features[type].max_wd) {
+		rdt_last_cmd_puts("Mask out of range\n");
 		return -EINVAL;
+	}
 
 	cfg->new_ctrl[type] = data;
 	cfg->have_new_ctrl = true;
@@ -338,25 +342,31 @@
 	switch (rr->ctrl_features[type].evt) {
 	case QOS_MBA_MAX_EVENT_ID:
 	case QOS_MBA_PBM_EVENT_ID:
-		if (kstrtoul(buf, rr->ctrl_features[type].base, &data))
-			return -EINVAL;
-		data = (data < r->mbw.min_bw) ? r->mbw.min_bw : data;
-		data = roundup(data, r->mbw.bw_gran);
-		break;
 	case QOS_MBA_MIN_EVENT_ID:
-		if (kstrtoul(buf, rr->ctrl_features[type].base, &data))
+		if (kstrtoul(buf, rr->ctrl_features[type].base, &data)) {
+			rdt_last_cmd_printf("Non-decimal digit in MB value %s\n", buf);
 			return -EINVAL;
-		/* for mbw min feature, 0 of setting is allowed */
+		}
+		if (data < r->mbw.min_bw) {
+			rdt_last_cmd_printf("MB value %ld out of range [%d,%d]\n", data,
+				r->mbw.min_bw, rr->ctrl_features[type].max_wd - 1);
+			return -EINVAL;
+		}
 		data = roundup(data, r->mbw.bw_gran);
 		break;
 	default:
-		if (kstrtoul(buf, rr->ctrl_features[type].base, &data))
+		if (kstrtoul(buf, rr->ctrl_features[type].base, &data)) {
+			rdt_last_cmd_printf("Non-decimal digit in MB value %s\n", buf);
 			return -EINVAL;
+		}
 		break;
 	}
 
-	if (data >= rr->ctrl_features[type].max_wd)
+	if (data >= rr->ctrl_features[type].max_wd) {
+		rdt_last_cmd_printf("MB value %ld out of range [%d,%d]\n", data,
+			r->mbw.min_bw, rr->ctrl_features[type].max_wd - 1);
 		return -EINVAL;
+	}
 
 	cfg->new_ctrl[type] = data;
 	cfg->have_new_ctrl = true;
@@ -1335,7 +1345,7 @@
 	    (rdtgrp->flags & RDT_DELETED)) {
 		current->closid = 0;
 		current->rmid = 0;
-		kfree(rdtgrp);
+		rdtgroup_remove(rdtgrp);
 	}
 
 	preempt_disable();
@@ -2280,6 +2290,8 @@
 	case QOS_MBA_MAX_EVENT_ID:
 		range = MBW_MAX_BWA_FRACT(res->class->bwa_wd);
 		mpam_cfg->mbw_max = (resctrl_cfg * range) / (MAX_MBA_BW - 1);
+		/* correct mbw_max if remainder is too large */
+		mpam_cfg->mbw_max += ((resctrl_cfg * range) % (MAX_MBA_BW - 1)) / range;
 		mpam_cfg->mbw_max =
 			(mpam_cfg->mbw_max > range) ? range : mpam_cfg->mbw_max;
 		mpam_set_feature(mpam_feat_mbw_max, &mpam_cfg->valid);
@@ -2287,6 +2299,8 @@
 	case QOS_MBA_MIN_EVENT_ID:
 		range = MBW_MAX_BWA_FRACT(res->class->bwa_wd);
 		mpam_cfg->mbw_min = (resctrl_cfg * range) / (MAX_MBA_BW - 1);
+		/* correct mbw_min if remainder is too large */
+		mpam_cfg->mbw_min += ((resctrl_cfg * range) % (MAX_MBA_BW - 1)) / range;
 		mpam_cfg->mbw_min =
 			(mpam_cfg->mbw_min > range) ? range : mpam_cfg->mbw_min;
 		mpam_set_feature(mpam_feat_mbw_min, &mpam_cfg->valid);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kernel/mpam/mpam_setup.c
Changed
@@ -172,6 +172,8 @@
 		list_del(&d->list);
 		dom = container_of(d, struct mpam_resctrl_dom, resctrl_dom);
 		kfree(dom);
+
+		res->resctrl_res.dom_num--;
 	}
 
 	mpam_resctrl_clear_default_cpu(cpu);
@@ -417,6 +419,9 @@
 	 * of 1 would appear too fine to make percentage conversions.
 	 */
 	r->mbw.bw_gran = GRAN_MBA_BW;
+	/* do not allow mbw_max/min below mbw.bw_gran */
+	if (r->mbw.min_bw < r->mbw.bw_gran)
+		r->mbw.min_bw = r->mbw.bw_gran;
 
 	/* We will only pick a class that can monitor and control */
 	r->alloc_capable = true;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kvm/hyp/include/hyp/switch.h
Changed
@@ -223,7 +223,11 @@
 	    esr_ec != ESR_ELx_EC_SVE)
 		return false;
 
-	vcpu->stat.fp_asimd_exit_stat++;
+	if (esr_ec == ESR_ELx_EC_FP_ASIMD)
+		vcpu->stat.fp_asimd_exit_stat++;
+	else /* SVE trap */
+		vcpu->stat.sve_exit_stat++;
+
 	/* Don't handle SVE traps for non-SVE vcpus here: */
 	if (!sve_guest)
 		if (esr_ec != ESR_ELx_EC_FP_ASIMD)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/mm/init.c
Changed
@@ -727,14 +727,20 @@
 	}
 }
 
+#ifdef CONFIG_ASCEND_CHARGE_MIGRATE_HUGEPAGES
+extern int enable_charge_mighp;
+#endif
+
+#ifdef CONFIG_ARM64_PSEUDO_NMI
+extern bool enable_pseudo_nmi;
+#endif
+
 void ascend_enable_all_features(void)
 {
 	if (IS_ENABLED(CONFIG_ASCEND_DVPP_MMAP))
 		enable_mmap_dvpp = 1;
 
 #ifdef CONFIG_ASCEND_CHARGE_MIGRATE_HUGEPAGES
-	extern int enable_charge_mighp;
-
 	enable_charge_mighp = 1;
 #endif
 
@@ -743,8 +749,6 @@
 #endif
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
-	extern bool enable_pseudo_nmi;
-
 	enable_pseudo_nmi = true;
 #endif
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/configs/openeuler_defconfig
Changed
@@ -158,7 +158,7 @@
 CONFIG_CGROUP_V1_WRITEBACK=y
 CONFIG_CGROUP_SCHED=y
 CONFIG_QOS_SCHED=y
-CONFIG_QOS_SCHED_SMT_EXPELLER=y
+# CONFIG_QOS_SCHED_SMT_EXPELLER is not set
 CONFIG_FAIR_GROUP_SCHED=y
 CONFIG_CFS_BANDWIDTH=y
 CONFIG_RT_GROUP_SCHED=y
@@ -2743,12 +2743,16 @@
 CONFIG_FM10K=m
 # CONFIG_IGC is not set
 CONFIG_NET_VENDOR_NETSWIFT=y
-CONFIG_TXGBE=m
 CONFIG_NGBE=m
 CONFIG_NGBE_HWMON=y
 CONFIG_NGBE_DEBUG_FS=y
 # CONFIG_NGBE_POLL_LINK_STATUS is not set
 CONFIG_NGBE_SYSFS=y
+CONFIG_TXGBE=m
+CONFIG_TXGBE_HWMON=y
+CONFIG_TXGBE_DEBUG_FS=y
+# CONFIG_TXGBE_POLL_LINK_STATUS is not set
+CONFIG_TXGBE_SYSFS=y
 # CONFIG_JME is not set
 # CONFIG_NET_VENDOR_MARVELL is not set
 CONFIG_NET_VENDOR_MELLANOX=y
@@ -6504,6 +6508,7 @@
 # CONFIG_GREYBUS is not set
 # CONFIG_STAGING is not set
 CONFIG_X86_PLATFORM_DEVICES=y
+CONFIG_INTEL_IFS=m
 CONFIG_ACPI_WMI=m
 CONFIG_WMI_BMOF=m
 # CONFIG_ALIENWARE_WMI is not set
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/events/amd/uncore.c
Changed
@@ -12,11 +12,11 @@
 #include <linux/init.h>
 #include <linux/cpu.h>
 #include <linux/cpumask.h>
+#include <linux/cpufeature.h>
+#include <linux/smp.h>
 
-#include <asm/cpufeature.h>
 #include <asm/perf_event.h>
 #include <asm/msr.h>
-#include <asm/smp.h>
 
 #define NUM_COUNTERS_NB 4
 #define NUM_COUNTERS_L2 4
@@ -537,7 +537,7 @@
 	if (amd_uncore_llc) {
 		uncore = *per_cpu_ptr(amd_uncore_llc, cpu);
-		uncore->id = per_cpu(cpu_llc_id, cpu);
+		uncore->id = get_llc_id(cpu);
 
 		uncore = amd_uncore_find_online_sibling(uncore, amd_uncore_llc);
 		*per_cpu_ptr(amd_uncore_llc, cpu) = uncore;
@@ -755,11 +755,9 @@
 fail_llc:
 	if (boot_cpu_has(X86_FEATURE_PERFCTR_NB))
 		perf_pmu_unregister(&amd_nb_pmu);
-	if (amd_uncore_llc)
-		free_percpu(amd_uncore_llc);
+	free_percpu(amd_uncore_llc);
 fail_nb:
-	if (amd_uncore_nb)
-		free_percpu(amd_uncore_nb);
+	free_percpu(amd_uncore_nb);
 
 	return ret;
 }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/include/asm/cpu.h
Changed
@@ -66,4 +66,23 @@
 #else
 static inline void init_ia32_feat_ctl(struct cpuinfo_x86 *c) {}
 #endif
+
+struct ucode_cpu_info;
+
+int intel_cpu_collect_info(struct ucode_cpu_info *uci);
+
+static inline bool intel_cpu_signatures_match(unsigned int s1, unsigned int p1,
+					      unsigned int s2, unsigned int p2)
+{
+	if (s1 != s2)
+		return false;
+
+	/* Processor flags are either both 0 ... */
+	if (!p1 && !p2)
+		return true;
+
+	/* ... or they intersect. */
+	return p1 & p2;
+}
+
 #endif /* _ASM_X86_CPU_H */
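The matching rule above is small enough to check standalone. A hedged sketch, not part of the revision, with hypothetical signature and platform-flag values:

    /* Hedged sketch, not part of this revision: signatures must be
     * equal; platform flags match if both are zero or their bitmasks
     * intersect, mirroring intel_cpu_signatures_match() above. */
    #include <assert.h>
    #include <stdbool.h>

    static bool signatures_match(unsigned int s1, unsigned int p1,
                                 unsigned int s2, unsigned int p2)
    {
        if (s1 != s2)
            return false;
        if (!p1 && !p2)       /* processor flags are either both 0 ... */
            return true;
        return p1 & p2;       /* ... or they intersect */
    }

    int main(void)
    {
        assert(signatures_match(0x906ea, 0x2, 0x906ea, 0x22));  /* bit 1 shared */
        assert(!signatures_match(0x906ea, 0x2, 0x906ea, 0x4));  /* disjoint pf */
        assert(!signatures_match(0x906ea, 0, 0x806ec, 0));      /* different sig */
        return 0;
    }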
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/include/asm/cpufeature.h
Changed
@@ -31,7 +31,6 @@
 	CPUID_7_ECX,
 	CPUID_8000_0007_EBX,
 	CPUID_7_EDX,
-	CPUID_8000_001F_EAX,
 };
 
 #ifdef CONFIG_X86_FEATURE_NAMES
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/include/asm/msr-index.h
Changed
@@ -78,6 +78,8 @@
 
 /* Abbreviated from Intel SDM name IA32_CORE_CAPABILITIES */
 #define MSR_IA32_CORE_CAPS 0x000000cf
+#define MSR_IA32_CORE_CAPS_INTEGRITY_CAPS_BIT 2
+#define MSR_IA32_CORE_CAPS_INTEGRITY_CAPS BIT(MSR_IA32_CORE_CAPS_INTEGRITY_CAPS_BIT)
 #define MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT_BIT 5
 #define MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT BIT(MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT_BIT)
 
@@ -192,6 +194,11 @@
 #define MSR_IA32_POWER_CTL 0x000001fc
 #define MSR_IA32_POWER_CTL_BIT_EE 19
 
+/* Abbreviated from Intel SDM name IA32_INTEGRITY_CAPABILITIES */
+#define MSR_INTEGRITY_CAPS 0x000002d9
+#define MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT 4
+#define MSR_INTEGRITY_CAPS_PERIODIC_BIST BIT(MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT)
+
 #define MSR_LBR_NHM_FROM 0x00000680
 #define MSR_LBR_NHM_TO 0x000006c0
 #define MSR_LBR_CORE_FROM 0x00000040
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/include/asm/processor.h
Changed
@@ -829,6 +829,8 @@
 
 DECLARE_PER_CPU(u64, msr_misc_features_shadow);
 
+extern u16 get_llc_id(unsigned int cpu);
+
 #ifdef CONFIG_CPU_SUP_AMD
 extern u16 amd_get_nb_id(int cpu);
 extern u32 amd_get_nodes_per_socket(void);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kernel/cpu/amd.c
Changed
@@ -445,7 +445,7 @@
 
 		node = numa_cpu_node(cpu);
 		if (node == NUMA_NO_NODE)
-			node = per_cpu(cpu_llc_id, cpu);
+			node = get_llc_id(cpu);
 
 		/*
 		 * On multi-fabric platform (e.g. Numascale NumaChip) a
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kernel/cpu/common.c
Changed
@@ -79,6 +79,12 @@
 /* Last level cache ID of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID;
 
+u16 get_llc_id(unsigned int cpu)
+{
+	return per_cpu(cpu_llc_id, cpu);
+}
+EXPORT_SYMBOL_GPL(get_llc_id);
+
 /* correctly size the local cpu masks */
 void __init setup_cpu_local_masks(void)
 {
@@ -957,9 +963,6 @@
 	if (c->extended_cpuid_level >= 0x8000000a)
 		c->x86_capability[CPUID_8000_000A_EDX] = cpuid_edx(0x8000000a);
 
-	if (c->extended_cpuid_level >= 0x8000001f)
-		c->x86_capability[CPUID_8000_001F_EAX] = cpuid_eax(0x8000001f);
-
 	init_scattered_cpuid_features(c);
 	init_speculation_control(c);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kernel/cpu/hygon.c
Changed
@@ -235,12 +235,12 @@
 		u32 ecx;
 
 		ecx = cpuid_ecx(0x8000001e);
-		nodes_per_socket = ((ecx >> 8) & 7) + 1;
+		__max_die_per_package = nodes_per_socket = ((ecx >> 8) & 7) + 1;
 	} else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) {
 		u64 value;
 
 		rdmsrl(MSR_FAM10H_NODE_ID, value);
-		nodes_per_socket = ((value >> 3) & 7) + 1;
+		__max_die_per_package = nodes_per_socket = ((value >> 3) & 7) + 1;
 	}
 
 	if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) &&
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kernel/cpu/intel.c
Changed
@@ -181,6 +181,38 @@
 	return false;
 }
 
+int intel_cpu_collect_info(struct ucode_cpu_info *uci)
+{
+	unsigned int val[2];
+	unsigned int family, model;
+	struct cpu_signature csig = { 0 };
+	unsigned int eax, ebx, ecx, edx;
+
+	memset(uci, 0, sizeof(*uci));
+
+	eax = 0x00000001;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+	csig.sig = eax;
+
+	family = x86_family(eax);
+	model = x86_model(eax);
+
+	if (model >= 5 || family > 6) {
+		/* get processor flags from MSR 0x17 */
+		native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
+		csig.pf = 1 << ((val[1] >> 18) & 7);
+	}
+
+	csig.rev = intel_get_microcode_revision();
+
+	uci->cpu_sig = csig;
+	uci->valid = 1;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(intel_cpu_collect_info);
+
 static void early_init_intel(struct cpuinfo_x86 *c)
 {
 	u64 misc_enable;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kernel/cpu/microcode/intel.c
Changed
@@ -45,20 +45,6 @@
 /* last level cache size per core */
 static int llc_size_per_core;
 
-static inline bool cpu_signatures_match(unsigned int s1, unsigned int p1,
-					unsigned int s2, unsigned int p2)
-{
-	if (s1 != s2)
-		return false;
-
-	/* Processor flags are either both 0 ... */
-	if (!p1 && !p2)
-		return true;
-
-	/* ... or they intersect. */
-	return p1 & p2;
-}
-
 /*
  * Returns 1 if update has been found, 0 otherwise.
  */
@@ -69,7 +55,7 @@
 	struct extended_signature *ext_sig;
 	int i;
 
-	if (cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf))
+	if (intel_cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf))
 		return 1;
 
 	/* Look for ext. headers: */
@@ -80,7 +66,7 @@
 	ext_sig = (void *)ext_hdr + EXT_HEADER_SIZE;
 
 	for (i = 0; i < ext_hdr->count; i++) {
-		if (cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf))
+		if (intel_cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf))
 			return 1;
 		ext_sig++;
 	}
@@ -342,37 +328,6 @@
 	return patch;
 }
 
-static int collect_cpu_info_early(struct ucode_cpu_info *uci)
-{
-	unsigned int val[2];
-	unsigned int family, model;
-	struct cpu_signature csig = { 0 };
-	unsigned int eax, ebx, ecx, edx;
-
-	memset(uci, 0, sizeof(*uci));
-
-	eax = 0x00000001;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	csig.sig = eax;
-
-	family = x86_family(eax);
-	model = x86_model(eax);
-
-	if ((model >= 5) || (family > 6)) {
-		/* get processor flags from MSR 0x17 */
-		native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
-		csig.pf = 1 << ((val[1] >> 18) & 7);
-	}
-
-	csig.rev = intel_get_microcode_revision();
-
-	uci->cpu_sig = csig;
-	uci->valid = 1;
-
-	return 0;
-}
-
 static void show_saved_mc(void)
 {
 #ifdef DEBUG
@@ -386,7 +341,7 @@
 		return;
 	}
 
-	collect_cpu_info_early(&uci);
+	intel_cpu_collect_info(&uci);
 
 	sig = uci.cpu_sig.sig;
 	pf = uci.cpu_sig.pf;
@@ -495,7 +450,7 @@
 	struct ucode_cpu_info uci;
 
 	if (delay_ucode_info) {
-		collect_cpu_info_early(&uci);
+		intel_cpu_collect_info(&uci);
 		print_ucode_info(&uci, current_mc_date);
 		delay_ucode_info = 0;
 	}
@@ -597,7 +552,7 @@
 	if (!(cp.data && cp.size))
 		return 0;
 
-	collect_cpu_info_early(&uci);
+	intel_cpu_collect_info(&uci);
 
 	scan_microcode(cp.data, cp.size, &uci, true);
 
@@ -630,7 +585,7 @@
 	if (!(cp.data && cp.size))
 		return NULL;
 
-	collect_cpu_info_early(uci);
+	intel_cpu_collect_info(uci);
 
 	return scan_microcode(cp.data, cp.size, uci, false);
 }
@@ -705,7 +660,7 @@
 	struct microcode_intel *p;
 	struct ucode_cpu_info uci;
 
-	collect_cpu_info_early(&uci);
+	intel_cpu_collect_info(&uci);
 
 	p = find_patch(&uci);
 	if (!p)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kernel/cpu/scattered.c
Changed
@@ -44,6 +44,11 @@
 	{ X86_FEATURE_PROC_FEEDBACK, CPUID_EDX, 11, 0x80000007, 0 },
 	{ X86_FEATURE_MBA, CPUID_EBX, 6, 0x80000008, 0 },
 	{ X86_FEATURE_PERFMON_V2, CPUID_EAX, 0, 0x80000022, 0 },
+	{ X86_FEATURE_SME, CPUID_EAX, 0, 0x8000001f, 0 },
+	{ X86_FEATURE_SEV, CPUID_EAX, 1, 0x8000001f, 0 },
+	{ X86_FEATURE_VM_PAGE_FLUSH, CPUID_EAX, 2, 0x8000001f, 0 },
+	{ X86_FEATURE_SEV_ES, CPUID_EAX, 3, 0x8000001f, 0 },
+	{ X86_FEATURE_SME_COHERENT, CPUID_EAX, 10, 0x8000001f, 0 },
 	{ 0, 0, 0, 0, 0 }
 };
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kvm/vmx/nested.c
Changed
@@ -4543,6 +4543,17 @@
 
 	vmx_switch_vmcs(vcpu, &vmx->vmcs01);
 
+	/*
+	 * If IBRS is advertised to the vCPU, KVM must flush the indirect
+	 * branch predictors when transitioning from L2 to L1, as L1 expects
+	 * hardware (KVM in this case) to provide separate predictor modes.
+	 * Bare metal isolates VMX root (host) from VMX non-root (guest), but
+	 * doesn't isolate different VMCSs, i.e. in this case, doesn't provide
+	 * separate modes for L2 vs L1.
+	 */
+	if (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
+		indirect_branch_prediction_barrier();
+
 	/* Update any VMCS fields that might have changed while L2 ran */
 	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
 	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kvm/vmx/vmx.c
Changed
@@ -1454,8 +1454,10 @@
 
 		/*
 		 * No indirect branch prediction barrier needed when switching
-		 * the active VMCS within a guest, e.g. on nested VM-Enter.
-		 * The L1 VMM can protect itself with retpolines, IBPB or IBRS.
+		 * the active VMCS within a vCPU, unless IBRS is advertised to
+		 * the vCPU. To minimize the number of IBPBs executed, KVM
+		 * performs IBPB on nested VM-Exit (a single nested transition
+		 * may switch the active VMCS multiple times).
 		 */
 		if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
 			indirect_branch_prediction_barrier();
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/bfq-cgroup.c
Changed
@@ -613,6 +613,10 @@
 	struct bfq_group *bfqg;
 
 	while (blkg) {
+		if (!blkg->online) {
+			blkg = blkg->parent;
+			continue;
+		}
 		bfqg = blkg_to_bfqg(blkg);
 		if (bfqg->online) {
 			bio_associate_blkg_from_css(bio, &blkg->blkcg->css);
@@ -907,6 +911,9 @@
 	unsigned long flags;
 	int i;
 
+	if (!bfqg->online)
+		return;
+
 	spin_lock_irqsave(&bfqd->lock, flags);
 
 	if (!entity) /* root group */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/bfq-iosched.c
Changed
@@ -2775,6 +2775,15 @@
 	    bfqq != bfqd->in_service_queue)
 		bfq_del_bfqq_busy(bfqd, bfqq, false);
 
+	/*
+	 * __bfq_bic_change_cgroup() just reset bic->bfqq so that a new bfqq
+	 * will be created to handle new io, while old bfqq will stay around
+	 * until all the requests are completed. It's unsafe to keep bfqq->bic
+	 * since they are not related anymore.
+	 */
+	if (bfqq_process_refs(bfqq) == 1)
+		bfqq->bic = NULL;
+
 	bfq_put_queue(bfqq);
 }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/blk-core.c
Changed
@@ -1304,6 +1304,37 @@
 	}
 }
 
+static void blk_account_io_latency(struct request *req, u64 now, const int sgrp)
+{
+#ifdef CONFIG_64BIT
+	u64 stat_time;
+	struct request_wrapper *rq_wrapper;
+
+	if (!(req->rq_flags & RQF_FROM_BLOCK)) {
+		part_stat_add(req->part, nsecs[sgrp], now - req->start_time_ns);
+		return;
+	}
+
+	rq_wrapper = request_to_wrapper(req);
+	stat_time = READ_ONCE(rq_wrapper->stat_time_ns);
+	/*
+	 * This might fail if 'stat_time_ns' is updated
+	 * in blk_mq_check_inflight_with_stat().
+	 */
+	if (likely(now > stat_time &&
+		   cmpxchg64(&rq_wrapper->stat_time_ns, stat_time, now)
+		   == stat_time)) {
+		u64 duration = stat_time ? now - stat_time :
+			now - req->start_time_ns;
+
+		part_stat_add(req->part, nsecs[sgrp], duration);
+	}
+#else
+	part_stat_add(req->part, nsecs[sgrp], now - req->start_time_ns);
+
+#endif
+}
+
 void blk_account_io_done(struct request *req, u64 now)
 {
 	/*
@@ -1315,36 +1346,15 @@
 	    !(req->rq_flags & RQF_FLUSH_SEQ)) {
 		const int sgrp = op_stat_group(req_op(req));
 		struct hd_struct *part;
-#ifdef CONFIG_64BIT
-		u64 stat_time;
-		struct request_wrapper *rq_wrapper = request_to_wrapper(req);
-#endif
 
 		part_stat_lock();
 		part = req->part;
 		update_io_ticks(part, jiffies, true);
 		part_stat_inc(part, ios[sgrp]);
-#ifdef CONFIG_64BIT
-		stat_time = READ_ONCE(rq_wrapper->stat_time_ns);
-		/*
-		 * This might fail if 'stat_time_ns' is updated
-		 * in blk_mq_check_inflight_with_stat().
-		 */
-		if (likely(now > stat_time &&
-			   cmpxchg64(&rq_wrapper->stat_time_ns, stat_time, now)
-			   == stat_time)) {
-			u64 duation = stat_time ? now - stat_time :
-				now - req->start_time_ns;
-
-			part_stat_add(req->part, nsecs[sgrp], duation);
-		}
-#else
-		part_stat_add(part, nsecs[sgrp], now - req->start_time_ns);
-#endif
+		blk_account_io_latency(req, now, sgrp);
 		if (precise_iostat)
 			part_stat_local_dec(part, in_flight[rq_data_dir(req)]);
 		part_stat_unlock();
-
 		hd_struct_put(part);
 	}
 }
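The factored-out helper keeps the lock-free accounting pattern: the completion path only charges the interval [stat_time, now) if it wins the cmpxchg64() race to advance stat_time_ns. A hedged userspace model of that pattern, not part of the revision, using C11 atomics in place of cmpxchg64():

    /* Hedged sketch, not part of this revision: the completer accounts
     * an interval only if it atomically advances stat_time_ns, so a
     * concurrent stats reader updating the same field never causes
     * double counting. */
    #include <assert.h>
    #include <stdatomic.h>
    #include <stdint.h>

    static _Atomic uint64_t stat_time_ns;

    static uint64_t account(uint64_t start, uint64_t now)
    {
        uint64_t stat = atomic_load(&stat_time_ns);

        /* Claim [stat ?: start, now); fails if someone moved stat_time_ns. */
        if (now > stat &&
            atomic_compare_exchange_strong(&stat_time_ns, &stat, now))
            return stat ? now - stat : now - start;
        return 0;
    }

    int main(void)
    {
        assert(account(100, 500) == 400); /* first interval from start */
        assert(account(100, 900) == 400); /* next interval from 500 */
        return 0;
    }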
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/blk-flush.c
Changed
@@ -333,7 +333,7 @@
 	flush_rq->cmd_flags = REQ_OP_FLUSH | REQ_PREFLUSH;
 	flush_rq->cmd_flags |= (flags & REQ_DRV) | (flags & REQ_FAILFAST_MASK);
-	flush_rq->rq_flags |= RQF_FLUSH_SEQ;
+	flush_rq->rq_flags |= RQF_FLUSH_SEQ | RQF_FROM_BLOCK;
 	flush_rq->rq_disk = first_rq->rq_disk;
 	flush_rq->end_io = flush_end_io;
 	/*
@@ -470,7 +470,8 @@
 		gfp_t flags)
 {
 	struct blk_flush_queue *fq;
-	int rq_sz = sizeof(struct request_wrapper);
+	struct request_wrapper *wrapper;
+	int rq_sz = sizeof(struct request) + sizeof(struct request_wrapper);
 
 	fq = kzalloc_node(sizeof(*fq), flags, node);
 	if (!fq)
@@ -479,10 +480,11 @@
 	spin_lock_init(&fq->mq_flush_lock);
 
 	rq_sz = round_up(rq_sz + cmd_size, cache_line_size());
-	fq->flush_rq = kzalloc_node(rq_sz, flags, node);
-	if (!fq->flush_rq)
+	wrapper = kzalloc_node(rq_sz, flags, node);
+	if (!wrapper)
 		goto fail_rq;
 
+	fq->flush_rq = (struct request *)(wrapper + 1);
 	INIT_LIST_HEAD(&fq->flush_queue[0]);
 	INIT_LIST_HEAD(&fq->flush_queue[1]);
 	INIT_LIST_HEAD(&fq->flush_data_in_flight);
@@ -501,7 +503,7 @@
 	if (!fq)
 		return;
 
-	kfree(fq->flush_rq);
+	kfree(request_to_wrapper(fq->flush_rq));
 	kfree(fq);
}
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/blk-iocost.c
Changed
@@ -2747,8 +2747,13 @@
 	struct ioc_pcpu_stat *ccs;
 	u64 on_q_ns, rq_wait_ns, size_nsec;
 	int pidx, rw;
+	struct request_wrapper *rq_wrapper;
 
-	if (!ioc->enabled || !rq->alloc_time_ns || !rq->start_time_ns)
+	if (WARN_ON_ONCE(!(rq->rq_flags & RQF_FROM_BLOCK)))
+		return;
+
+	rq_wrapper = request_to_wrapper(rq);
+	if (!ioc->enabled || !rq_wrapper->alloc_time_ns || !rq->start_time_ns)
 		return;
 
 	switch (req_op(rq) & REQ_OP_MASK) {
@@ -2764,8 +2769,8 @@
 		return;
 	}
 
-	on_q_ns = ktime_get_ns() - rq->alloc_time_ns;
-	rq_wait_ns = rq->start_time_ns - rq->alloc_time_ns;
+	on_q_ns = ktime_get_ns() - rq_wrapper->alloc_time_ns;
+	rq_wait_ns = rq->start_time_ns - rq_wrapper->alloc_time_ns;
 	size_nsec = div64_u64(calc_size_vtime_cost(rq, ioc), VTIME_PER_NSEC);
 
 	ccs = get_cpu_ptr(ioc->pcpu_stat);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/blk-mq-debugfs.c
Changed
@@ -360,8 +360,9 @@
 	blk_flags_show(m, rq->cmd_flags & ~REQ_OP_MASK, cmd_flag_name,
 		       ARRAY_SIZE(cmd_flag_name));
 	seq_puts(m, ", .rq_flags=");
-	blk_flags_show(m, (__force unsigned int)rq->rq_flags, rqf_name,
-		       ARRAY_SIZE(rqf_name));
+	blk_flags_show(m,
+		       (__force unsigned int)(rq->rq_flags & ~RQF_FROM_BLOCK),
+		       rqf_name, ARRAY_SIZE(rqf_name));
 	seq_printf(m, ", .state=%s", blk_mq_rq_state_name(blk_mq_rq_state(rq)));
 	seq_printf(m, ", .tag=%d, .internal_tag=%d", rq->tag,
 		   rq->internal_tag);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/blk-mq.c
Changed
@@ -115,9 +115,8 @@
 	struct request_wrapper *rq_wrapper;
 
 	mi->inflight[rq_data_dir(rq)]++;
-	if (!rq->part)
+	if (!rq->part || !(rq->rq_flags & RQF_FROM_BLOCK))
 		return true;
-
 	/*
 	 * If the request is started after 'part->stat_time' is set,
 	 * don't update 'nsces' here.
@@ -375,7 +374,7 @@
 	rq->q = data->q;
 	rq->mq_ctx = data->ctx;
 	rq->mq_hctx = data->hctx;
-	rq->rq_flags = 0;
+	rq->rq_flags = RQF_FROM_BLOCK;
 	rq->cmd_flags = data->cmd_flags;
 	if (data->flags & BLK_MQ_REQ_PM)
 		rq->rq_flags |= RQF_PM;
@@ -387,7 +386,7 @@
 	rq->rq_disk = NULL;
 	rq->part = NULL;
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
-	rq->alloc_time_ns = alloc_time_ns;
+	request_to_wrapper(rq)->alloc_time_ns = alloc_time_ns;
 #endif
 	request_to_wrapper(rq)->stat_time_ns = 0;
 	if (blk_mq_need_time_stamp(rq))
@@ -2601,8 +2600,9 @@
 	 * rq_size is the size of the request plus driver payload, rounded
 	 * to the cacheline size
 	 */
-	rq_size = round_up(sizeof(struct request_wrapper) + set->cmd_size,
-			   cache_line_size());
+	rq_size = round_up(sizeof(struct request) +
+			   sizeof(struct request_wrapper) + set->cmd_size,
+			   cache_line_size());
 	left = rq_size * depth;
 
 	for (i = 0; i < depth; ) {
@@ -2642,7 +2642,7 @@
 		to_do = min(entries_per_page, depth - i);
 		left -= to_do * rq_size;
 		for (j = 0; j < to_do; j++) {
-			struct request *rq = p;
+			struct request *rq = p + sizeof(struct request_wrapper);
 
 			tags->static_rqs[i] = rq;
 			if (blk_mq_init_request(set, rq, hctx_idx, node)) {
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/blk-mq.h
Changed
@@ -37,6 +37,20 @@
 	struct kobject		kobj;
 } ____cacheline_aligned_in_smp;
 
+struct request_wrapper {
+	/* Time that I/O was counted in part_get_stat_info(). */
+	u64 stat_time_ns;
+#ifdef CONFIG_BLK_RQ_ALLOC_TIME
+	/* Time that the first bio started allocating this request. */
+	u64 alloc_time_ns;
+#endif
+} ____cacheline_aligned;
+
+static inline struct request_wrapper *request_to_wrapper(void *rq)
+{
+	return rq - sizeof(struct request_wrapper);
+}
+
 void blk_mq_exit_queue(struct request_queue *q);
 int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
 void blk_mq_wake_waiters(struct request_queue *q);
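request_to_wrapper() works only because every request in this tree is carved out of a single allocation laid out as [request_wrapper][request][driver payload], so stepping back sizeof(struct request_wrapper) from any request pointer lands on the hidden per-request fields. The payoff is that sizeof(struct request) is unchanged, which presumably preserves kABI for existing consumers while still growing per-request state. A runnable userspace sketch of the same layout trick, with purely illustrative names:

    #include <stdio.h>
    #include <stdlib.h>

    struct wrapper { unsigned long long stat_time_ns; };	/* hidden header */
    struct request { int tag; };				/* public object */

    static struct wrapper *to_wrapper(void *rq)
    {
    	/* the kernel version does void* arithmetic (a GCC extension);
    	 * plain C needs the char* cast */
    	return (struct wrapper *)((char *)rq - sizeof(struct wrapper));
    }

    int main(void)
    {
    	/* one allocation: [wrapper][request], as blk_alloc_flush_queue() does */
    	struct wrapper *w = calloc(1, sizeof(*w) + sizeof(struct request));
    	struct request *rq = (struct request *)(w + 1);

    	to_wrapper(rq)->stat_time_ns = 42;
    	printf("%llu\n", w->stat_time_ns);	/* prints 42 */
    	free(w);
    	return 0;
    }

The ____cacheline_aligned on the wrapper keeps its size a multiple of the cache line, so the request that follows it keeps the alignment the rest of blk-mq assumes.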
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/elevator.c
Changed
@@ -630,7 +630,8 @@
 	if (q->tag_set && q->tag_set->flags & BLK_MQ_F_NO_SCHED_BY_DEFAULT)
 		return NULL;
 
-	if (q->nr_hw_queues != 1)
+	if (q->nr_hw_queues != 1 &&
+	    !blk_mq_is_sbitmap_shared(q->tag_set->flags))
 		return NULL;
 
 	return elevator_get(q, "mq-deadline", false);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/acpi/apei/einj.c
Changed
@@ -545,6 +545,8 @@
 	    != REGION_INTERSECTS) &&
 	     (region_intersects(base_addr, size, IORESOURCE_MEM,
 				IORES_DESC_PERSISTENT_MEMORY)
 	    != REGION_INTERSECTS) &&
+	     (region_intersects(base_addr, size, IORESOURCE_MEM,
+				IORES_DESC_SOFT_RESERVED)
+	    != REGION_INTERSECTS) &&
 	    !arch_is_platform_page(base_addr)))
 		return -EINVAL;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/android/binder.c
Changed
@@ -6081,6 +6081,7 @@
 	.open = binder_open,
 	.flush = binder_flush,
 	.release = binder_release,
+	.may_pollfree = true,
 };
 
 static int __init init_binder_device(const char *name)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/android/binder_alloc.c
Changed
@@ -212,7 +212,7 @@
 	mm = alloc->vma_vm_mm;
 
 	if (mm) {
-		mmap_read_lock(mm);
+		mmap_write_lock(mm);
 		vma = alloc->vma;
 	}
 
@@ -270,7 +270,7 @@
 		trace_binder_alloc_page_end(alloc, index);
 	}
 	if (mm) {
-		mmap_read_unlock(mm);
+		mmap_write_unlock(mm);
 		mmput(mm);
 	}
 	return 0;
@@ -303,7 +303,7 @@
 	}
 err_no_vma:
 	if (mm) {
-		mmap_read_unlock(mm);
+		mmap_write_unlock(mm);
 		mmput(mm);
 	}
 	return vma ? -ENOMEM : -ESRCH;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/block/virtio_blk.c
Changed
@@ -933,7 +933,7 @@
 	mutex_lock(&vblk->vdev_mutex);
 
 	/* Stop all the virtqueues. */
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 
 	/* Virtqueues are stopped, nothing can use vblk->vdev anymore. */
 	vblk->vdev = NULL;
@@ -953,7 +953,7 @@
 	struct virtio_blk *vblk = vdev->priv;
 
 	/* Ensure we don't receive any more interrupts */
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 
 	/* Make sure no work handler is accessing the device. */
 	flush_work(&vblk->config_work);
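This and the other virtio hunks in this revision replace open-coded vdev->config->reset(vdev) calls with the virtio_reset_device() helper backported from upstream's virtio-hardening work. As far as I recall, the helper is essentially the thin wrapper below, so the conversion is behavior-preserving; treat this as a paraphrase of upstream, not verbatim code:

    /* roughly what drivers/virtio/virtio.c exports upstream */
    void virtio_reset_device(struct virtio_device *dev)
    {
    	dev->config->reset(dev);
    }
    EXPORT_SYMBOL_GPL(virtio_reset_device);

Funneling every reset through one symbol gives later kernels a single place to add ordering or synchronization around device reset instead of re-auditing each driver.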
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/block/zram/zram_drv.c
Changed
@@ -1983,31 +1983,53 @@
 static int zram_remove(struct zram *zram)
 {
 	struct block_device *bdev;
+	bool claimed;
 
 	bdev = bdget_disk(zram->disk, 0);
 	if (!bdev)
 		return -ENOMEM;
 
 	mutex_lock(&bdev->bd_mutex);
-	if (bdev->bd_openers || zram->claim) {
+	if (bdev->bd_openers) {
 		mutex_unlock(&bdev->bd_mutex);
 		bdput(bdev);
 		return -EBUSY;
 	}
 
-	zram->claim = true;
+	claimed = zram->claim;
+	if (!claimed)
+		zram->claim = true;
 	mutex_unlock(&bdev->bd_mutex);
 
 	zram_debugfs_unregister(zram);
 
-	/* Make sure all the pending I/O are finished */
-	fsync_bdev(bdev);
-	zram_reset_device(zram);
+	if (claimed) {
+		/*
+		 * If we were claimed by reset_store(), del_gendisk() will
+		 * wait until reset_store() is done, so nothing need to do.
+		 */
+		;
+	} else {
+		/* Make sure all the pending I/O are finished */
+		fsync_bdev(bdev);
+		zram_reset_device(zram);
+	}
 	bdput(bdev);
 
 	pr_info("Removed device: %s\n", zram->disk->disk_name);
 
 	del_gendisk(zram->disk);
+
+	/* del_gendisk drains pending reset_store */
+	WARN_ON_ONCE(claimed && zram->claim);
+
+	/*
+	 * disksize_store() may be called in between zram_reset_device()
+	 * and del_gendisk(), so run the last reset to avoid leaking
+	 * anything allocated with disksize_store()
+	 */
+	zram_reset_device(zram);
+
 	blk_cleanup_queue(zram->disk->queue);
 	put_disk(zram->disk);
 	kfree(zram);
@@ -2085,7 +2107,7 @@
 
 static int zram_remove_cb(int id, void *ptr, void *data)
 {
-	zram_remove(ptr);
+	WARN_ON_ONCE(zram_remove(ptr));
 	return 0;
 }
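The rework above serializes zram_remove() against a concurrent reset_store(): whichever side sets zram->claim first owns the reset, the loser skips it and relies on del_gendisk() to drain the pending writer. A condensed userspace analogue of that handshake (illustrative only, not kernel code):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static bool claim;			/* mirrors zram->claim */

    static void remove_device(void)
    {
    	bool claimed;

    	pthread_mutex_lock(&lock);
    	claimed = claim;		/* did reset_store() win the race? */
    	if (!claimed)
    		claim = true;		/* no: we own the teardown */
    	pthread_mutex_unlock(&lock);

    	if (!claimed) {
    		/* fsync + reset, i.e. fsync_bdev()/zram_reset_device() */
    	}
    	/* del_gendisk() equivalent: waits out any in-flight reset_store() */
    }

The final zram_reset_device() after del_gendisk() then mops up anything a racing disksize_store() allocated in the window between reset and disk removal.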
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/char/Kconfig
Changed
@@ -478,16 +478,6 @@
 	help
 	  pin memory driver
 
-config HISI_SVM
-	tristate "Hisilicon svm driver"
-	depends on ARM64 && ARM_SMMU_V3 && MMU_NOTIFIER
-	default m
-	help
-	  This driver provides character-level access to Hisilicon
-	  SVM chipset. Typically, you can bind a task to the
-	  svm and share the virtual memory with hisilicon svm device.
-	  When in doubt, say "N".
-
 config RANDOM_TRUST_CPU
 	bool "Initialize RNG using CPU RNG instructions"
 	default y
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/char/Makefile
Changed
@@ -48,4 +48,3 @@
 obj-$(CONFIG_POWERNV_OP_PANEL)	+= powernv-op-panel.o
 obj-$(CONFIG_ADI)		+= adi.o
 obj-$(CONFIG_PIN_MEMORY_DEV)	+= pin_memory.o
-obj-$(CONFIG_HISI_SVM)		+= svm.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/char/hw_random/virtio-rng.c
Changed
@@ -134,7 +134,7 @@
 	vi->hwrng_removed = true;
 	vi->data_avail = 0;
 	complete(&vi->have_data);
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	vi->busy = false;
 	if (vi->hwrng_register_done)
 		hwrng_unregister(&vi->hwrng);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/char/svm.c
Deleted
@@ -1,1772 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Copyright (c) 2017-2018 Hisilicon Limited. - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - */ - -#include <asm/esr.h> -#include <linux/mmu_context.h> - -#include <linux/delay.h> -#include <linux/err.h> -#include <linux/interrupt.h> -#include <linux/io.h> -#include <linux/iommu.h> -#include <linux/miscdevice.h> -#include <linux/mman.h> -#include <linux/mmu_notifier.h> -#include <linux/module.h> -#include <linux/of.h> -#include <linux/of_address.h> -#include <linux/of_device.h> -#include <linux/platform_device.h> -#include <linux/ptrace.h> -#include <linux/security.h> -#include <linux/slab.h> -#include <linux/uaccess.h> -#include <linux/sched.h> -#include <linux/hugetlb.h> -#include <linux/sched/mm.h> -#include <linux/msi.h> -#include <linux/acpi.h> - -#define SVM_DEVICE_NAME "svm" -#define ASID_SHIFT 48 - -#define SVM_IOCTL_REMAP_PROC 0xfff4 -#define SVM_IOCTL_UNPIN_MEMORY 0xfff5 -#define SVM_IOCTL_PIN_MEMORY 0xfff7 -#define SVM_IOCTL_GET_PHYS 0xfff9 -#define SVM_IOCTL_LOAD_FLAG 0xfffa -#define SVM_IOCTL_SET_RC 0xfffc -#define SVM_IOCTL_PROCESS_BIND 0xffff - -#define CORE_SID 0 - -#define SVM_IOCTL_RELEASE_PHYS32 0xfff3 -#define SVM_REMAP_MEM_LEN_MAX (16 * 1024 * 1024) -#define MMAP_PHY32_MAX (16 * 1024 * 1024) - -static int probe_index; -static LIST_HEAD(child_list); -static DECLARE_RWSEM(svm_sem); -static struct rb_root svm_process_root = RB_ROOT; -static struct mutex svm_process_mutex; - -struct core_device { - struct device dev; - struct iommu_group *group; - struct iommu_domain *domain; - u8 smmu_bypass; - struct list_head entry; -}; - -struct svm_device { - unsigned long long id; - struct miscdevice miscdev; - struct device *dev; - phys_addr_t l2buff; - unsigned long l2size; -}; - -struct svm_bind_process { - pid_t vpid; - u64 ttbr; - u64 tcr; - int pasid; - u32 flags; -#define SVM_BIND_PID (1 << 0) -}; - -/* - *svm_process is released in svm_notifier_release() when mm refcnt - *goes down zero. We should access svm_process only in the context - *where mm_struct is valid, which means we should always get mm - *refcnt first. - */ -struct svm_process { - struct pid *pid; - struct mm_struct *mm; - unsigned long asid; - struct rb_node rb_node; - struct mmu_notifier notifier; - /* For postponed release */ - struct rcu_head rcu; - int pasid; - struct mutex mutex; - struct rb_root sdma_list; - struct svm_device *sdev; - struct iommu_sva *sva; -}; - -struct svm_sdma { - struct rb_node node; - unsigned long addr; - int nr_pages; - struct page **pages; - atomic64_t ref; -}; - -struct svm_proc_mem { - u32 dev_id; - u32 len; - u64 pid; - u64 vaddr; - u64 buf; -}; - -static char *svm_cmd_to_string(unsigned int cmd) -{ - switch (cmd) { - case SVM_IOCTL_PROCESS_BIND: - return "bind"; - case SVM_IOCTL_GET_PHYS: - return "get phys"; - case SVM_IOCTL_SET_RC: - return "set rc"; - case SVM_IOCTL_PIN_MEMORY: - return "pin memory"; - case SVM_IOCTL_UNPIN_MEMORY: - return "unpin memory"; - case SVM_IOCTL_REMAP_PROC: - return "remap proc"; - case SVM_IOCTL_LOAD_FLAG: - return "load flag"; - case SVM_IOCTL_RELEASE_PHYS32: - return "release phys"; - default: - return "unsupported"; - } - - return NULL; -} - -/* - * image word of slot - * SVM_IMAGE_WORD_INIT: initial value, indicating that the slot is not used. 
- * SVM_IMAGE_WORD_VALID: valid data is filled in the slot - * SVM_IMAGE_WORD_DONE: the DMA operation is complete when the TS uses this address, - * so, this slot can be freed. - */ -#define SVM_IMAGE_WORD_INIT 0x0 -#define SVM_IMAGE_WORD_VALID 0xaa55aa55 -#define SVM_IMAGE_WORD_DONE 0x55ff55ff - -/* - * The length of this structure must be 64 bytes, which is the agreement with the TS. - * And the data type and sequence cannot be changed, because the TS core reads data - * based on the data type and sequence. - * image_word: slot status. For details, see SVM_IMAGE_WORD_xxx - * pid: pid of process which ioctl svm device to get physical addr, it is used for - * verification by TS. - * data_type: used to determine the data type by TS. Currently, data type must be - * SVM_VA2PA_TYPE_DMA. - * char data[48]: for the data type SVM_VA2PA_TYPE_DMA, the DMA address is stored. - */ -struct svm_va2pa_slot { - int image_word; - int resv; - int pid; - int data_type; - union { - char user_defined_data[48]; - struct { - unsigned long phys; - unsigned long len; - char reserved[32]; - }; - }; -}; - -struct svm_va2pa_trunk { - struct svm_va2pa_slot *slots; - int slot_total; - int slot_used; - unsigned long *bitmap; - struct mutex mutex; -}; - -struct svm_va2pa_trunk va2pa_trunk; - -#define SVM_VA2PA_TRUNK_SIZE_MAX 0x3200000 -#define SVM_VA2PA_MEMORY_ALIGN 64 -#define SVM_VA2PA_SLOT_SIZE sizeof(struct svm_va2pa_slot) -#define SVM_VA2PA_TYPE_DMA 0x1 -#define SVM_MEM_REG "va2pa trunk" -#define SVM_VA2PA_CLEAN_BATCH_NUM 0x80
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/char/virtio_console.c
Changed
@@ -1967,7 +1967,7 @@
 	flush_work(&portdev->config_work);
 
 	/* Disable interrupts for vqs */
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	/* Finish up work that's lined up */
 	if (use_multiport(portdev))
 		cancel_work_sync(&portdev->control_work);
@@ -2149,7 +2149,7 @@
 
 	portdev = vdev->priv;
 
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 
 	if (use_multiport(portdev))
 		virtqueue_disable_cb(portdev->c_ivq);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/Kconfig
Changed
@@ -22,6 +22,38 @@ help Build the clock driver for hi3660. +config COMMON_CLK_HI3531DV200 + tristate "Hi3531DV200 Clock Driver" + depends on ARCH_HI3531DV200 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for Hi3531DV200. + +config COMMON_CLK_HI3535AV100 + tristate "Hi3535AV100 Clock Driver" + depends on ARCH_HI3535AV100 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for Hi3535AV100. + +config COMMON_CLK_HI3521DV200 + tristate "Hi3521DV200 Clock Driver" + depends on ARCH_HI3521DV200 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3521DV200. + +config COMMON_CLK_HI3520DV500 + tristate "Hi3520DV500 Clock Driver" + depends on ARCH_HI3520DV500 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3520DV500. + config COMMON_CLK_HI3670 bool "Hi3670 Clock Driver" depends on ARCH_HISI || COMPILE_TEST @@ -37,6 +69,166 @@ help Build the clock driver for hi3798cv200. +config COMMON_CLK_HI3516A + tristate "Hi3516A Clock Driver" + depends on ARCH_HI3516A || COMPILE_TEST + select RESET_HISI + default ARCH_HISI + help + Build the clock driver for hi3516A. + +config COMMON_CLK_HI3516CV500 + tristate "Hi3516CV500 Clock Driver" + depends on ARCH_HI3516CV500 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3516CV500. + +config COMMON_CLK_HI3516EV200 + tristate "Hi3516EV200 Clock Driver" + depends on ARCH_HI3516EV200 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI + help + Build the clock driver for hi3516EV200. + +config COMMON_CLK_HI3516EV300 + tristate "Hi3516EV300 Clock Driver" + depends on ARCH_HI3516EV300 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI + help + Build the clock driver for hi3516EV300. + +config COMMON_CLK_HI3518EV300 + tristate "Hi3518EV300 Clock Driver" + depends on ARCH_HI3518EV300 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI + help + Build the clock driver for hi3518EV300. + +config COMMON_CLK_HI3516DV200 + tristate "Hi3516DV200 Clock Driver" + depends on ARCH_HI3516DV200 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI + help + Build the clock driver for hi3516DV200. + +config COMMON_CLK_HI3516DV300 + tristate "Hi3516DV300 Clock Driver" + depends on ARCH_HI3516DV300 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3516DV300. + +config COMMON_CLK_HI3556V200 + tristate "Hi3556V200 Clock Driver" + depends on ARCH_HI3556V200 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3556V200. + +config COMMON_CLK_HI3559V200 + tristate "Hi3559V200 Clock Driver" + depends on ARCH_HI3559V200 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3559V200. + +config COMMON_CLK_HI3562V100 + tristate "Hi3562V100 Clock Driver" + depends on ARCH_HI3562V100 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3562V100. + +config COMMON_CLK_HI3566V100 + tristate "Hi3566V100 Clock Driver" + depends on ARCH_HI3566V100 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3566V100. + +config COMMON_CLK_HI3518EV20X + tristate "Hi3518EV20X Clock Driver" + depends on ARCH_HI3518EV20X || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3516A. 
+ +config COMMON_CLK_HI3536DV100 + tristate "Hi3536DV100 Clock Driver" + depends on ARCH_HI3536DV100 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI + help + Build the clock driver for hi3536DV100. + +config COMMON_CLK_HI3559AV100 + tristate "Hi3559AV100 Clock Driver" + depends on ARCH_HI3559AV100 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3559av100. + +config COMMON_CLK_HI3569V100 + tristate "Hi3569V100 Clock Driver" + depends on ARCH_HI3569V100 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3569v100. + +config COMMON_CLK_HI3521A + tristate "Hi3521A Clock Driver" + depends on ARCH_HI3521A || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3521A. + +config COMMON_CLK_HI3531A + tristate "Hi3531A Clock Driver" + depends on ARCH_HI3531A || COMPILE_TEST + select RESET_HISI + default ARCH_HISI_BVT + help + Build the clock driver for hi3531A. + +config COMMON_CLK_HI3556AV100 + tristate "Hi3556AV100 Clock Driver" + depends on ARCH_HI3556AV100 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI + help + Build the clock driver for hi3556av100. + +config COMMON_CLK_HI3519AV100 + tristate "Hi3519AV100 Clock Driver" + depends on ARCH_HI3519AV100 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI + help + Build the clock driver for hi3519av100. + +config COMMON_CLK_HI3568V100 + tristate "Hi3568V100 Clock Driver" + depends on ARCH_HI3568V100 || COMPILE_TEST + select RESET_HISI + default ARCH_HISI
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/Makefile
Changed
@@ -14,6 +14,7 @@
 obj-$(CONFIG_COMMON_CLK_HI3670)	+= clk-hi3670.o
 obj-$(CONFIG_COMMON_CLK_HI3798CV200)	+= crg-hi3798cv200.o
 obj-$(CONFIG_COMMON_CLK_HI6220)	+= clk-hi6220.o
+obj-$(CONFIG_COMMON_CLK_HI3516DV300)	+= clk-hi3516dv300.o
 obj-$(CONFIG_RESET_HISI)	+= reset.o
 obj-$(CONFIG_STUB_CLK_HI6220)	+= clk-hi6220-stub.o
 obj-$(CONFIG_STUB_CLK_HI3660)	+= clk-hi3660-stub.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/clk-hi3516dv300.c
Added
@@ -0,0 +1,272 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2016-2017 HiSilicon Technologies Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; either version 2 of the License, or (at your + * option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + * + */ + +#include <dt-bindings/clock/hi3516dv300-clock.h> +#include <linux/clk-provider.h> +#include <linux/module.h> +#include <linux/of_device.h> +#include <linux/platform_device.h> +#include "clk.h" +#include "crg.h" +#include "reset.h" + +static struct hisi_fixed_rate_clock hi3516dv300_fixed_rate_clks[] __initdata = { + { HI3516DV300_FIXED_3M, "3m", NULL, 0, 3000000, }, + { HI3516DV300_FIXED_6M, "6m", NULL, 0, 6000000, }, + { HI3516DV300_FIXED_12M, "12m", NULL, 0, 12000000, }, + { HI3516DV300_FIXED_24M, "24m", NULL, 0, 24000000, }, + { HI3516DV300_FIXED_25M, "25m", NULL, 0, 25000000, }, + { HI3516DV300_FIXED_50M, "50m", NULL, 0, 50000000, }, + { HI3516DV300_FIXED_54M, "54m", NULL, 0, 54000000, }, + { HI3516DV300_FIXED_83P3M, "83.3m", NULL, 0, 83300000, }, + { HI3516DV300_FIXED_100M, "100m", NULL, 0, 100000000, }, + { HI3516DV300_FIXED_125M, "125m", NULL, 0, 125000000, }, + { HI3516DV300_FIXED_150M, "150m", NULL, 0, 150000000, }, + { HI3516DV300_FIXED_163M, "163m", NULL, 0, 163000000, }, + { HI3516DV300_FIXED_200M, "200m", NULL, 0, 200000000, }, + { HI3516DV300_FIXED_250M, "250m", NULL, 0, 250000000, }, + { HI3516DV300_FIXED_257M, "257m", NULL, 0, 257000000, }, + { HI3516DV300_FIXED_300M, "300m", NULL, 0, 300000000, }, + { HI3516DV300_FIXED_324M, "324m", NULL, 0, 324000000, }, + { HI3516DV300_FIXED_342M, "342m", NULL, 0, 342000000, }, + { HI3516DV300_FIXED_342M, "375m", NULL, 0, 375000000, }, + { HI3516DV300_FIXED_396M, "396m", NULL, 0, 396000000, }, + { HI3516DV300_FIXED_400M, "400m", NULL, 0, 400000000, }, + { HI3516DV300_FIXED_448M, "448m", NULL, 0, 448000000, }, + { HI3516DV300_FIXED_500M, "500m", NULL, 0, 500000000, }, + { HI3516DV300_FIXED_540M, "540m", NULL, 0, 540000000, }, + { HI3516DV300_FIXED_600M, "600m", NULL, 0, 600000000, }, + { HI3516DV300_FIXED_750M, "750m", NULL, 0, 750000000, }, + { HI3516DV300_FIXED_1000M, "1000m", NULL, 0, 1000000000, }, + { HI3516DV300_FIXED_1500M, "1500m", NULL, 0, 1500000000UL, }, +}; + +static const char *sysaxi_mux_p[] __initconst = { + "24m", "200m", "300m" +}; +static const char *sysapb_mux_p[] __initconst = {"24m", "50m"}; +static const char *uart_mux_p[] __initconst = {"24m", "6m"}; +static const char *fmc_mux_p[] __initconst = {"24m", "100m", "150m", + "163m", "200m", "257m", "300m", "396m"}; +static const char *eth_mux_p[] __initconst = {"100m", "54m"}; +static const char *mmc_mux_p[] __initconst = {"100m", "50m", "25m"}; +static const char *pwm_mux_p[] __initconst = {"3m", "50m", "24m", "24m"}; + +static u32 sysaxi_mux_table[] = {0, 1, 2}; +static u32 sysapb_mux_table[] = {0, 1}; +static u32 uart_mux_table[] = {0, 1}; +static u32 fmc_mux_table[] = {0, 1, 2, 3, 4, 5, 6, 7}; +static u32 eth_mux_table[] = {0, 1}; +static u32 mmc_mux_table[] = {1, 2, 3}; +static u32 
pwm_mux_table[] = {0, 1, 2, 3}; + +static struct hisi_mux_clock hi3516dv300_mux_clks[] __initdata = { + { + HI3516DV300_SYSAXI_CLK, "sysaxi_mux", sysaxi_mux_p, + ARRAY_SIZE(sysaxi_mux_p), + CLK_SET_RATE_PARENT, 0x80, 6, 2, 0, sysaxi_mux_table, + }, + { + HI3516DV300_SYSAPB_CLK, "sysapb_mux", sysapb_mux_p, + ARRAY_SIZE(sysapb_mux_p), + CLK_SET_RATE_PARENT, 0x80, 10, 1, 0, sysapb_mux_table, + }, + { + HI3516DV300_FMC_MUX, "fmc_mux", fmc_mux_p, ARRAY_SIZE(fmc_mux_p), + CLK_SET_RATE_PARENT, 0x144, 2, 3, 0, fmc_mux_table, + }, + { + HI3516DV300_MMC0_MUX, "mmc0_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p), + CLK_SET_RATE_PARENT, 0x148, 2, 2, 0, mmc_mux_table, + }, + { + HI3516DV300_MMC1_MUX, "mmc1_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p), + CLK_SET_RATE_PARENT, 0x160, 2, 2, 0, mmc_mux_table, + }, + { + HI3516DV300_MMC2_MUX, "mmc2_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p), + CLK_SET_RATE_PARENT, 0x154, 2, 2, 0, mmc_mux_table, + }, + { + HI3516DV300_UART_MUX, "uart_mux0", uart_mux_p, + ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1bc, 18, 1, 0, uart_mux_table, + }, + { + HI3516DV300_UART1_MUX, "uart_mux1", uart_mux_p, + ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1bc, 19, 1, 0, uart_mux_table, + }, + { + HI3516DV300_UART2_MUX, "uart_mux2", uart_mux_p, + ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1bc, 20, 1, 0, uart_mux_table, + }, + { + HI3516DV300_UART3_MUX, "uart_mux3", uart_mux_p, + ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1bc, 21, 1, 0, uart_mux_table, + }, + { + HI3516DV300_UART4_MUX, "uart_mux4", uart_mux_p, + ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1bc, 22, 1, 0, uart_mux_table, + }, + { + HI3516DV300_PWM_MUX, "pwm_mux", pwm_mux_p, + ARRAY_SIZE(pwm_mux_p), + CLK_SET_RATE_PARENT, 0x1bc, 8, 2, 0, pwm_mux_table, + }, + /* ethernet clock select */ + { + HI3516DV300_ETH_MUX, "eth_mux", eth_mux_p, ARRAY_SIZE(eth_mux_p), + CLK_SET_RATE_PARENT, 0x16c, 7, 1, 0, eth_mux_table, + }, +}; + +static struct hisi_fixed_factor_clock hi3516dv300_fixed_factor_clks[] __initdata + = { + { + HI3516DV300_SYSAXI_CLK, "clk_sysaxi", "sysaxi_mux", 1, 4, + CLK_SET_RATE_PARENT + }, +}; + +static struct hisi_gate_clock hi3516dv300_gate_clks[] __initdata = { + { + HI3516DV300_FMC_CLK, "clk_fmc", "fmc_mux", + CLK_SET_RATE_PARENT, 0x144, 1, 0, + }, + { + HI3516DV300_MMC0_CLK, "clk_mmc0", "mmc0_mux", + CLK_SET_RATE_PARENT, 0x148, 1, 0, + }, + { + HI3516DV300_MMC1_CLK, "clk_mmc1", "mmc1_mux", + CLK_SET_RATE_PARENT, 0x160, 1, 0, + }, + { + HI3516DV300_MMC2_CLK, "clk_mmc2", "mmc2_mux", + CLK_SET_RATE_PARENT, 0x154, 1, 0, + }, + { + HI3516DV300_UART0_CLK, "clk_uart0", "uart_mux0", + CLK_SET_RATE_PARENT, 0x1b8, 0, 0, + }, + { + HI3516DV300_UART1_CLK, "clk_uart1", "uart_mux1", + CLK_SET_RATE_PARENT, 0x1b8, 1, 0, + }, + { + HI3516DV300_UART2_CLK, "clk_uart2", "uart_mux2", + CLK_SET_RATE_PARENT, 0x1b8, 2, 0, + }, + { + HI3516DV300_UART3_CLK, "clk_uart3", "uart_mux3", + CLK_SET_RATE_PARENT, 0x1b8, 3, 0, + }, + { + HI3516DV300_UART4_CLK, "clk_uart4", "uart_mux4", + CLK_SET_RATE_PARENT, 0x1b8, 4, 0, + }, + { + HI3516DV300_I2C0_CLK, "clk_i2c0", "50m", + CLK_SET_RATE_PARENT, 0x1b8, 11, 0, + }, + { + HI3516DV300_I2C1_CLK, "clk_i2c1", "50m", + CLK_SET_RATE_PARENT, 0x1b8, 12, 0, + }, + { + HI3516DV300_I2C2_CLK, "clk_i2c2", "50m", + CLK_SET_RATE_PARENT, 0x1b8, 13, 0, + },
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/clk-hi3519av100.c
Added
@@ -0,0 +1,561 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Hi3519A Clock Driver + * + * Copyright (c) 2016-2017 HiSilicon Technologies Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; either version 2 of the License, or (at your + * option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + * + */ + +#include <linux/of_address.h> +#include <dt-bindings/clock/hi3519av100-clock.h> +#include <linux/slab.h> +#include <linux/delay.h> +#include "clk.h" +#include "reset.h" + +struct hi3519av100_pll_clock { + u32 id; + const char *name; + const char *parent_name; + u32 ctrl_reg1; + u8 frac_shift; + u8 frac_width; + u8 postdiv1_shift; + u8 postdiv1_width; + u8 postdiv2_shift; + u8 postdiv2_width; + u32 ctrl_reg2; + u8 fbdiv_shift; + u8 fbdiv_width; + u8 refdiv_shift; + u8 refdiv_width; +}; + +struct hi3519av100_clk_pll { + struct clk_hw hw; + u32 id; + void __iomem *ctrl_reg1; + u8 frac_shift; + u8 frac_width; + u8 postdiv1_shift; + u8 postdiv1_width; + u8 postdiv2_shift; + u8 postdiv2_width; + void __iomem *ctrl_reg2; + u8 fbdiv_shift; + u8 fbdiv_width; + u8 refdiv_shift; + u8 refdiv_width; +}; + +static struct hi3519av100_pll_clock hi3519av100_pll_clks[] __initdata = { + { + HI3519AV100_APLL_CLK, "apll", NULL, 0x0, 0, 24, 24, 3, 28, 3, + 0x4, 0, 12, 12, 6 + }, +}; + +#define to_pll_clk(_hw) container_of(_hw, struct hi3519av100_clk_pll, hw) + +/* soc clk config */ +static struct hisi_fixed_rate_clock hi3519av100_fixed_rate_clks[] __initdata = { + { HI3519AV100_FIXED_2376M, "2376m", NULL, 0, 2376000000UL, }, + { HI3519AV100_FIXED_1188M, "1188m", NULL, 0, 1188000000, }, + { HI3519AV100_FIXED_594M, "594m", NULL, 0, 594000000, }, + { HI3519AV100_FIXED_297M, "297m", NULL, 0, 297000000, }, + { HI3519AV100_FIXED_148P5M, "148p5m", NULL, 0, 148500000, }, + { HI3519AV100_FIXED_74P25M, "74p25m", NULL, 0, 74250000, }, + { HI3519AV100_FIXED_792M, "792m", NULL, 0, 792000000, }, + { HI3519AV100_FIXED_475M, "475m", NULL, 0, 475000000, }, + { HI3519AV100_FIXED_340M, "340m", NULL, 0, 340000000, }, + { HI3519AV100_FIXED_72M, "72m", NULL, 0, 72000000, }, + { HI3519AV100_FIXED_400M, "400m", NULL, 0, 400000000, }, + { HI3519AV100_FIXED_200M, "200m", NULL, 0, 200000000, }, + { HI3519AV100_FIXED_54M, "54m", NULL, 0, 54000000, }, + { HI3519AV100_FIXED_27M, "27m", NULL, 0, 1188000000, }, + { HI3519AV100_FIXED_37P125M, "37p125m", NULL, 0, 37125000, }, + { HI3519AV100_FIXED_3000M, "3000m", NULL, 0, 3000000000UL, }, + { HI3519AV100_FIXED_1500M, "1500m", NULL, 0, 1500000000, }, + { HI3519AV100_FIXED_500M, "500m", NULL, 0, 500000000, }, + { HI3519AV100_FIXED_250M, "250m", NULL, 0, 250000000, }, + { HI3519AV100_FIXED_125M, "125m", NULL, 0, 125000000, }, + { HI3519AV100_FIXED_1000M, "1000m", NULL, 0, 1000000000, }, + { HI3519AV100_FIXED_600M, "600m", NULL, 0, 600000000, }, + { HI3519AV100_FIXED_750M, "750m", NULL, 0, 750000000, }, + { HI3519AV100_FIXED_150M, "150m", NULL, 0, 150000000, }, + { HI3519AV100_FIXED_75M, "75m", NULL, 0, 75000000, }, + { HI3519AV100_FIXED_300M, "300m", NULL, 0, 300000000, }, + { 
HI3519AV100_FIXED_60M, "60m", NULL, 0, 60000000, }, + { HI3519AV100_FIXED_214M, "214m", NULL, 0, 214000000, }, + { HI3519AV100_FIXED_107M, "107m", NULL, 0, 107000000, }, + { HI3519AV100_FIXED_100M, "100m", NULL, 0, 100000000, }, + { HI3519AV100_FIXED_50M, "50m", NULL, 0, 50000000, }, + { HI3519AV100_FIXED_25M, "25m", NULL, 0, 25000000, }, + { HI3519AV100_FIXED_24M, "24m", NULL, 0, 24000000, }, + { HI3519AV100_FIXED_3M, "3m", NULL, 0, 3000000, }, + { HI3519AV100_FIXED_100K, "100k", NULL, 0, 100000, }, + { HI3519AV100_FIXED_400K, "400k", NULL, 0, 400000, }, + { HI3519AV100_FIXED_49P5M, "49p5m", NULL, 0, 49500000, }, + { HI3519AV100_FIXED_99M, "99m", NULL, 0, 99000000, }, + { HI3519AV100_FIXED_187P5M, "187p5m", NULL, 0, 187500000, }, + { HI3519AV100_FIXED_198M, "198m", NULL, 0, 198000000, }, +}; + + +static const char *fmc_mux_p[] __initconst = { + "24m", "100m", "150m", "198m", "250m", "300m", "396m" +}; +static u32 fmc_mux_table[] = {0, 1, 2, 3, 4, 5, 6}; + +static const char *mmc_mux_p[] __initconst = { + "100k", "25m", "49p5m", "99m", "187p5m", "150m", "198m", "400k" +}; +static u32 mmc_mux_table[] = {0, 1, 2, 3, 4, 5, 6, 7}; + +static const char *sysapb_mux_p[] __initconst = { + "24m", "50m", +}; +static u32 sysapb_mux_table[] = {0, 1}; + +static const char *sysbus_mux_p[] __initconst = { + "24m", "300m" +}; +static u32 sysbus_mux_table[] = {0, 1}; + +static const char *uart_mux_p[] __initconst = {"50m", "24m", "3m"}; +static u32 uart_mux_table[] = {0, 1, 2}; + +static const char *a53_1_clksel_mux_p[] __initconst = { + "24m", "apll", "vpll", "792m" +}; +static u32 a53_1_clksel_mux_table[] = {0, 1, 2, 3}; + +static struct hisi_mux_clock hi3519av100_mux_clks[] __initdata = { + { + HI3519AV100_FMC_MUX, "fmc_mux", fmc_mux_p, ARRAY_SIZE(fmc_mux_p), + CLK_SET_RATE_PARENT, 0x170, 2, 3, 0, fmc_mux_table, + }, + + { + HI3519AV100_MMC0_MUX, "mmc0_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p), + CLK_SET_RATE_PARENT, 0x1a8, 24, 3, 0, mmc_mux_table, + }, + + { + HI3519AV100_MMC1_MUX, "mmc1_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p), + CLK_SET_RATE_PARENT, 0x1ec, 24, 3, 0, mmc_mux_table, + }, + + { + HI3519AV100_MMC2_MUX, "mmc2_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p), + CLK_SET_RATE_PARENT, 0x214, 24, 3, 0, mmc_mux_table, + }, + + { + HI3519AV100_SYSAPB_MUX, "sysapb_mux", sysapb_mux_p, ARRAY_SIZE(sysapb_mux_p), + CLK_SET_RATE_PARENT, 0xe8, 3, 1, 0, sysapb_mux_table + }, + + { + HI3519AV100_SYSBUS_MUX, "sysbus_mux", sysbus_mux_p, ARRAY_SIZE(sysbus_mux_p), + CLK_SET_RATE_PARENT, 0xe8, 0, 1, 1, sysbus_mux_table + }, + + { + HI3519AV100_UART0_MUX, "uart0_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1a4, 0, 2, 1, uart_mux_table + }, + + { + HI3519AV100_UART1_MUX, "uart1_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1a4, 2, 2, 1, uart_mux_table + }, + + { + HI3519AV100_UART2_MUX, "uart2_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1a4, 4, 2, 1, uart_mux_table + }, + + { + HI3519AV100_UART3_MUX, "uart3_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1a4, 6, 2, 1, uart_mux_table + }, + + { + HI3519AV100_UART4_MUX, "uart4_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1a4, 8, 2, 1, uart_mux_table
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/clk.c
Changed
@@ -82,6 +82,10 @@
 	of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data->clk_data);
 	return clk_data;
 err_data:
+	if (base) {
+		iounmap(base);
+		base = NULL;
+	}
 	kfree(clk_data);
 err:
 	return NULL;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/crg-hi3516cv300.c
Changed
@@ -170,7 +170,7 @@
 	return ERR_PTR(ret);
 }
 
-static void hi3516cv300_clk_unregister(struct platform_device *pdev)
+static void hi3516cv300_clk_unregister(const struct platform_device *pdev)
 {
 	struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
 
@@ -229,7 +229,7 @@
 	return ERR_PTR(ret);
 }
 
-static void hi3516cv300_sysctrl_clk_unregister(struct platform_device *pdev)
+static void hi3516cv300_sysctrl_clk_unregister(const struct platform_device *pdev)
 {
 	struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/crg-hi3798cv200.c
Changed
@@ -251,7 +251,7 @@
 	return ERR_PTR(ret);
 }
 
-static void hi3798cv200_clk_unregister(struct platform_device *pdev)
+static void hi3798cv200_clk_unregister(const struct platform_device *pdev)
 {
 	struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
 
@@ -316,7 +316,7 @@
 	return ERR_PTR(ret);
 }
 
-static void hi3798cv200_sysctrl_clk_unregister(struct platform_device *pdev)
+static void hi3798cv200_sysctrl_clk_unregister(const struct platform_device *pdev)
 {
 	struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/crg.h
Changed
@@ -13,7 +13,7 @@
 
 struct hisi_crg_funcs {
 	struct hisi_clock_data*	(*register_clks)(struct platform_device *pdev);
-	void (*unregister_clks)(struct platform_device *pdev);
+	void (*unregister_clks)(const struct platform_device *pdev);
 };
 
 struct hisi_crg_dev {
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/reset.c
Changed
@@ -87,6 +87,36 @@
 	.deassert	= hisi_reset_deassert,
 };
 
+#ifdef CONFIG_ARCH_HISI_BVT
+int __init hibvt_reset_init(struct device_node *np,
+			    int nr_rsts)
+{
+	struct hisi_reset_controller *rstc;
+
+	rstc = kzalloc(sizeof(*rstc), GFP_KERNEL);
+	if (!rstc)
+		return -ENOMEM;
+
+	rstc->membase = of_iomap(np, 0);
+	if (!rstc->membase) {
+		kfree(rstc);
+		return -EINVAL;
+	}
+
+	spin_lock_init(&rstc->lock);
+
+	rstc->rcdev.owner = THIS_MODULE;
+	rstc->rcdev.nr_resets = nr_rsts;
+	rstc->rcdev.ops = &hisi_reset_ops;
+	rstc->rcdev.of_node = np;
+	rstc->rcdev.of_reset_n_cells = 2;
+	rstc->rcdev.of_xlate = hisi_reset_of_xlate;
+
+	return reset_controller_register(&rstc->rcdev);
+}
+EXPORT_SYMBOL_GPL(hibvt_reset_init);
+#endif
+
 struct hisi_reset_controller *hisi_reset_init(struct platform_device *pdev)
 {
 	struct hisi_reset_controller *rstc;
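hibvt_reset_init() registers a reset controller straight from a device-tree node, for the BVT SoC clock drivers added in this revision that initialize via CLK_OF_DECLARE before any platform device exists. A hedged usage sketch; the compatible string, init name, and reset-line count below are made up for illustration:

    static void __init hypothetical_crg_init(struct device_node *np)
    {
    	/* register the CRG clocks first (not shown), then its resets */
    	if (hibvt_reset_init(np, 64))	/* 64: assumed number of reset lines */
    		pr_err("%pOF: failed to register reset controller\n", np);
    }
    CLK_OF_DECLARE(hypothetical_crg, "vendor,hypothetical-crg", hypothetical_crg_init);

The of_reset_n_cells = 2 setting matches the usual HiSilicon binding, where consumers reference resets as <register-offset bit> pairs decoded by hisi_reset_of_xlate().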
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/reset.h
Changed
@@ -11,6 +11,9 @@
 
 #ifdef CONFIG_RESET_CONTROLLER
 struct hisi_reset_controller *hisi_reset_init(struct platform_device *pdev);
+#ifdef CONFIG_ARCH_HISI_BVT
+int __init hibvt_reset_init(struct device_node *np, int nr_rsts);
+#endif
 void hisi_reset_exit(struct hisi_reset_controller *rstc);
 #else
 static inline
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/imx/clk.c
Changed
@@ -173,6 +173,8 @@
 	int i;
 
 	imx_uart_clocks = kcalloc(clk_count, sizeof(struct clk *), GFP_KERNEL);
+	if (!imx_uart_clocks)
+		return;
 
 	if (!of_stdout)
 		return;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/cpuidle/cpuidle-haltpoll.c
Changed
@@ -18,9 +18,17 @@
 #include <linux/kvm_para.h>
 #include <linux/cpuidle_haltpoll.h>
 
-static bool force __read_mostly;
-module_param(force, bool, 0444);
-MODULE_PARM_DESC(force, "Load unconditionally");
+static bool force;
+MODULE_PARM_DESC(force, "bool, enable haltpoll driver");
+static int enable_haltpoll_driver(const char *val, const struct kernel_param *kp);
+static int register_haltpoll_driver(void);
+static void unregister_haltpoll_driver(void);
+
+static const struct kernel_param_ops enable_haltpoll_ops = {
+	.set = enable_haltpoll_driver,
+	.get = param_get_bool,
+};
+module_param_cb(force, &enable_haltpoll_ops, &force, 0644);
 
 static struct cpuidle_device __percpu *haltpoll_cpuidle_devices;
 static enum cpuhp_state haltpoll_hp_state;
@@ -36,6 +44,42 @@
 	return index;
 }
 
+
+static int enable_haltpoll_driver(const char *val, const struct kernel_param *kp)
+{
+#ifdef CONFIG_ARM64
+	int ret;
+	bool do_enable;
+
+	if (!val)
+		return 0;
+
+	ret = strtobool(val, &do_enable);
+
+	if (ret || force == do_enable)
+		return ret;
+
+	if (do_enable) {
+		ret = register_haltpoll_driver();
+
+		if (!ret) {
+			pr_info("Enable haltpoll driver.\n");
+			force = 1;
+		} else {
+			pr_err("Fail to enable haltpoll driver.\n");
+		}
+	} else {
+		unregister_haltpoll_driver();
+		force = 0;
+		pr_info("Unregister haltpoll driver.\n");
+	}
+
+	return ret;
+#else
+	return -1;
+#endif
+}
+
 static struct cpuidle_driver haltpoll_driver = {
 	.name = "haltpoll",
 	.governor = "haltpoll",
@@ -84,22 +128,18 @@
 	return 0;
 }
 
-static void haltpoll_uninit(void)
-{
-	if (haltpoll_hp_state)
-		cpuhp_remove_state(haltpoll_hp_state);
-	cpuidle_unregister_driver(&haltpoll_driver);
-
-	free_percpu(haltpoll_cpuidle_devices);
-	haltpoll_cpuidle_devices = NULL;
-}
 
 static bool haltpoll_want(void)
 {
 	return kvm_para_has_hint(KVM_HINTS_REALTIME);
 }
 
-static int __init haltpoll_init(void)
+static void haltpoll_uninit(void)
+{
+	unregister_haltpoll_driver();
+}
+
+static int register_haltpoll_driver(void)
 {
 	int ret;
 	struct cpuidle_driver *drv = &haltpoll_driver;
@@ -112,9 +152,6 @@
 
 	cpuidle_poll_state_init(drv);
 
-	if (!force && (!kvm_para_available() || !haltpoll_want()))
-		return -ENODEV;
-
 	ret = cpuidle_register_driver(drv);
 	if (ret < 0)
 		return ret;
@@ -137,9 +174,35 @@
 	return ret;
 }
 
+static void unregister_haltpoll_driver(void)
+{
+	if (haltpoll_hp_state)
+		cpuhp_remove_state(haltpoll_hp_state);
+	cpuidle_unregister_driver(&haltpoll_driver);
+
+	free_percpu(haltpoll_cpuidle_devices);
+	haltpoll_cpuidle_devices = NULL;
+
+}
+
+static int __init haltpoll_init(void)
+{
+	int ret = 0;
+#ifdef CONFIG_X86
+	/* Do not load haltpoll if idle= is passed */
+	if (boot_option_idle_override != IDLE_NO_OVERRIDE)
+		return -ENODEV;
+#endif
+	if (force || (haltpoll_want() && kvm_para_available()))
+		ret = register_haltpoll_driver();
+
+	return ret;
+}
+
 static void __exit haltpoll_exit(void)
 {
-	haltpoll_uninit();
+	if (haltpoll_cpuidle_devices)
+		haltpoll_uninit();
 }
 
 module_init(haltpoll_init);
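The haltpoll rework above turns force from a boot-time-only bool into a module_param_cb() hook, so writing to /sys/module/cpuidle_haltpoll/parameters/force at runtime registers or unregisters the driver rather than just flipping a flag. A minimal self-contained sketch of that pattern (module and parameter names are illustrative):

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    static bool feature;

    static int feature_set(const char *val, const struct kernel_param *kp)
    {
    	bool want;
    	int ret = kstrtobool(val, &want);	/* newer spelling of strtobool() */

    	if (ret)
    		return ret;
    	if (want != feature) {
    		/* do the real enable/disable work here */
    		feature = want;
    	}
    	return 0;
    }

    static const struct kernel_param_ops feature_ops = {
    	.set = feature_set,
    	.get = param_get_bool,
    };
    module_param_cb(feature, &feature_ops, &feature, 0644);
    MODULE_LICENSE("GPL");

Note the 0644 permission: unlike the old 0444, the sysfs file is writable, which is the whole point of routing writes through the .set callback.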
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/cpuidle/governors/haltpoll.c
Changed
@@ -39,7 +39,7 @@
 static bool guest_halt_poll_allow_shrink __read_mostly = true;
 module_param(guest_halt_poll_allow_shrink, bool, 0644);
 
-static bool enable __read_mostly;
+static bool enable __read_mostly = true;
 module_param(enable, bool, 0444);
 MODULE_PARM_DESC(enable, "Load unconditionally");
 
@@ -144,7 +144,7 @@
 
 static int __init init_haltpoll(void)
 {
-	if (kvm_para_available() || enable)
+	if (enable)
 		return cpuidle_register_governor(&haltpoll_governor);
 
 	return 0;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/crypto/virtio/virtio_crypto_core.c
Changed
@@ -404,7 +404,7 @@
 free_engines:
 	virtcrypto_clear_crypto_engines(vcrypto);
 free_vqs:
-	vcrypto->vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	virtcrypto_del_vqs(vcrypto);
 free_dev:
 	virtcrypto_devmgr_rm_dev(vcrypto);
@@ -436,7 +436,7 @@
 	if (virtcrypto_dev_started(vcrypto))
 		virtcrypto_dev_stop(vcrypto);
 
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	virtcrypto_free_unused_reqs(vcrypto);
 	virtcrypto_clear_crypto_engines(vcrypto);
 	virtcrypto_del_vqs(vcrypto);
@@ -456,7 +456,7 @@
 {
 	struct virtio_crypto *vcrypto = vdev->priv;
 
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	virtcrypto_free_unused_reqs(vcrypto);
 	if (virtcrypto_dev_started(vcrypto))
 		virtcrypto_dev_stop(vcrypto);
@@ -492,7 +492,7 @@
 free_engines:
 	virtcrypto_clear_crypto_engines(vcrypto);
 free_vqs:
-	vcrypto->vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	virtcrypto_del_vqs(vcrypto);
 	return err;
 }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/firmware/efi/libstub/efi-stub.c
Changed
@@ -40,7 +40,7 @@
 
 #ifdef CONFIG_ARM64
 # define EFI_RT_VIRTUAL_LIMIT	DEFAULT_MAP_WINDOW_64
-#elif defined(CONFIG_RISCV) || defined(CONFIG_LOONGARCH)
+#elif defined(CONFIG_LOONGARCH)
 # define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE_MIN
 #else
 # define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
Changed
@@ -408,6 +408,9 @@
 		return -ENODEV;
 	/* same everything but the other direction */
 	props2 = kmemdup(props, sizeof(*props2), GFP_KERNEL);
+	if (!props2)
+		return -ENOMEM;
+
 	props2->node_from = id_to;
 	props2->node_to = id_from;
 	props2->kobj = NULL;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/gpu/drm/i915/gt/intel_gt.c
Changed
@@ -745,6 +745,10 @@
 		if (!i915_mmio_reg_offset(rb.reg))
 			continue;
 
+		if (INTEL_GEN(i915) == 12 && (engine->class == VIDEO_DECODE_CLASS ||
+		    engine->class == VIDEO_ENHANCEMENT_CLASS))
+			rb.bit = _MASKED_BIT_ENABLE(rb.bit);
+
 		intel_uncore_write_fw(uncore, rb.reg, rb.bit);
 	}
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/gpu/drm/virtio/virtgpu_kms.c
Changed
@@ -232,7 +232,7 @@
 	flush_work(&vgdev->ctrlq.dequeue_work);
 	flush_work(&vgdev->cursorq.dequeue_work);
 	flush_work(&vgdev->config_changed_work);
-	vgdev->vdev->config->reset(vgdev->vdev);
+	virtio_reset_device(vgdev->vdev);
 	vgdev->vdev->config->del_vqs(vgdev->vdev);
 }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/hwtracing/coresight/Kconfig
Changed
@@ -200,3 +200,7 @@
 	  called ultrasoc-smb.
 
 endif
+
+config ACPI_TRBE
+	depends on ARM64 && ACPI
+	def_bool y
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/hwtracing/coresight/coresight-etm4x-core.c
Changed
@@ -428,8 +428,10 @@
 	etm4x_relaxed_write32(csa, config->vipcssctlr, TRCVIPCSSCTLR);
 	for (i = 0; i < drvdata->nrseqstate - 1; i++)
 		etm4x_relaxed_write32(csa, config->seq_ctrl[i], TRCSEQEVRn(i));
-	etm4x_relaxed_write32(csa, config->seq_rst, TRCSEQRSTEVR);
-	etm4x_relaxed_write32(csa, config->seq_state, TRCSEQSTR);
+	if (drvdata->nrseqstate) {
+		etm4x_relaxed_write32(csa, config->seq_rst, TRCSEQRSTEVR);
+		etm4x_relaxed_write32(csa, config->seq_state, TRCSEQSTR);
+	}
 	etm4x_relaxed_write32(csa, config->ext_inp, TRCEXTINSELR);
 
 	for (i = 0; i < drvdata->nr_cntr; i++) {
@@ -1622,9 +1624,10 @@
 	for (i = 0; i < drvdata->nrseqstate - 1; i++)
 		state->trcseqevr[i] = etm4x_read32(csa, TRCSEQEVRn(i));
 
-	state->trcseqrstevr = etm4x_read32(csa, TRCSEQRSTEVR);
-
-	state->trcseqstr = etm4x_read32(csa, TRCSEQSTR);
+	if (drvdata->nrseqstate) {
+		state->trcseqrstevr = etm4x_read32(csa, TRCSEQRSTEVR);
+		state->trcseqstr = etm4x_read32(csa, TRCSEQSTR);
+	}
 	state->trcextinselr = etm4x_read32(csa, TRCEXTINSELR);
 
 	for (i = 0; i < drvdata->nr_cntr; i++) {
@@ -1751,8 +1754,10 @@
 	for (i = 0; i < drvdata->nrseqstate - 1; i++)
 		etm4x_relaxed_write32(csa, state->trcseqevr[i], TRCSEQEVRn(i));
 
-	etm4x_relaxed_write32(csa, state->trcseqrstevr, TRCSEQRSTEVR);
-	etm4x_relaxed_write32(csa, state->trcseqstr, TRCSEQSTR);
+	if (drvdata->nrseqstate) {
+		etm4x_relaxed_write32(csa, state->trcseqrstevr, TRCSEQRSTEVR);
+		etm4x_relaxed_write32(csa, state->trcseqstr, TRCSEQSTR);
+	}
 	etm4x_relaxed_write32(csa, state->trcextinselr, TRCEXTINSELR);
 
 	for (i = 0; i < drvdata->nr_cntr; i++) {
@@ -2135,12 +2140,19 @@
 	{}
 };
 
+static const struct acpi_device_id static_ete_ids[] = {
+	{"HISI0461", 0},
+	{}
+};
+MODULE_DEVICE_TABLE(acpi, static_ete_ids);
+
 static struct platform_driver etm4_platform_driver = {
 	.probe		= etm4_probe_platform_dev,
 	.remove		= etm4_remove_platform_dev,
 	.driver			= {
 		.name			= "coresight-etm4x",
 		.of_match_table		= etm4_sysreg_match,
+		.acpi_match_table	= static_ete_ids,
 		.suppress_bind_attrs	= true,
 	},
 };
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/hwtracing/coresight/coresight-platform.c
Changed
@@ -844,15 +844,15 @@
 	struct coresight_platform_data *pdata = NULL;
 	struct fwnode_handle *fwnode = dev_fwnode(dev);
 
-	if (IS_ERR_OR_NULL(fwnode))
-		goto error;
-
 	pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
 	if (!pdata) {
 		ret = -ENOMEM;
 		goto error;
 	}
 
+	if (IS_ERR_OR_NULL(fwnode))
+		return pdata;
+
 	if (is_of_node(fwnode))
 		ret = of_get_coresight_platform_data(dev, pdata);
 	else if (is_acpi_device_node(fwnode))
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/hwtracing/coresight/coresight-trbe.c
Changed
@@ -1094,6 +1094,7 @@
 
 static void arm_trbe_remove_cpuhp(struct trbe_drvdata *drvdata)
 {
+	cpuhp_state_remove_instance(drvdata->trbe_online, &drvdata->hotplug_node);
 	cpuhp_remove_multi_state(drvdata->trbe_online);
 }
 
@@ -1188,7 +1189,14 @@
 };
 MODULE_DEVICE_TABLE(of, arm_trbe_of_match);
 
+static const struct platform_device_id arm_trbe_match[] = {
+	{ ARMV9_TRBE_PDEV_NAME, 0},
+	{}
+};
+MODULE_DEVICE_TABLE(platform, arm_trbe_match);
+
 static struct platform_driver arm_trbe_driver = {
+	.id_table = arm_trbe_match,
 	.driver = {
 		.name = DRVNAME,
 		.of_match_table = of_match_ptr(arm_trbe_of_match),
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/i2c/busses/i2c-hix5hd2.c
Changed
@@ -360,7 +360,11 @@
 	pm_runtime_get_sync(priv->dev);
 
 	for (i = 0; i < num; i++, msgs++) {
-		stop = (i == num - 1);
+		if ((i == num - 1) || (msgs->flags & I2C_M_STOP))
+			stop = 1;
+		else
+			stop = 0;
+
 		ret = hix5hd2_i2c_xfer_msg(priv, msgs, stop);
 		if (ret < 0)
 			goto out;
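With this change a client can force a STOP condition between the messages of a single transfer by setting I2C_M_STOP, instead of only getting the STOP after the last message. An illustrative kernel-side caller follows; the device address and register are made up, and I2C_M_STOP is generally honored only by adapters advertising I2C_FUNC_PROTOCOL_MANGLING:

    u8 reg = 0x00, val;
    struct i2c_msg msgs[] = {
    	{
    		.addr  = 0x50,			/* hypothetical device */
    		.flags = I2C_M_STOP,		/* STOP here, fresh START next */
    		.len   = 1,
    		.buf   = &reg,
    	},
    	{
    		.addr  = 0x50,
    		.flags = I2C_M_RD,
    		.len   = 1,
    		.buf   = &val,
    	},
    };
    /* ret = i2c_transfer(adapter, msgs, ARRAY_SIZE(msgs)); */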
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/i2c/busses/i2c-ismt.c
Changed
@@ -507,6 +507,9 @@
 		if (read_write == I2C_SMBUS_WRITE) {
 			/* Block Write */
 			dev_dbg(dev, "I2C_SMBUS_BLOCK_DATA:  WRITE\n");
+			if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX)
+				return -EINVAL;
+
 			dma_size = data->block[0] + 1;
 			dma_direction = DMA_TO_DEVICE;
 			desc->wr_len_cmd = dma_size;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/ide/ide-cd_ioctl.c
Changed
@@ -298,7 +298,7 @@
 
 	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, 0);
 	ide_req(rq)->type = ATA_PRIV_MISC;
-	rq->rq_flags = RQF_QUIET;
+	rq->rq_flags |= RQF_QUIET;
 	blk_execute_rq(drive->queue, cd->disk, rq, 0);
 	ret = scsi_req(rq)->result ? -EIO : 0;
 	blk_put_request(rq);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/infiniband/hw/hns/hns_roce_bond.c
Changed
@@ -26,26 +26,43 @@ return hr_dev; } -bool hns_roce_bond_is_active(struct hns_roce_dev *hr_dev) +static struct hns_roce_bond_group *hns_roce_get_bond_grp(struct hns_roce_dev *hr_dev) { + struct hns_roce_bond_group *bond_grp = NULL; struct net_device *upper_dev; struct net_device *net_dev; - if (!netif_is_lag_port(hr_dev->iboe.netdevs[0])) - return false; - rcu_read_lock(); + upper_dev = netdev_master_upper_dev_get_rcu(hr_dev->iboe.netdevs[0]); + for_each_netdev_in_bond_rcu(upper_dev, net_dev) { hr_dev = hns_roce_get_hrdev_by_netdev(net_dev); - if (hr_dev && hr_dev->bond_grp && - hr_dev->bond_grp->bond_state == HNS_ROCE_BOND_IS_BONDED) { - rcu_read_unlock(); - return true; + if (hr_dev && hr_dev->bond_grp) { + bond_grp = hr_dev->bond_grp; + break; } } + rcu_read_unlock(); + return bond_grp; +} + +bool hns_roce_bond_is_active(struct hns_roce_dev *hr_dev) +{ + struct hns_roce_bond_group *bond_grp; + + if (!netif_is_lag_port(hr_dev->iboe.netdevs[0])) + return false; + + bond_grp = hns_roce_get_bond_grp(hr_dev); + + if (bond_grp && + (bond_grp->bond_state == HNS_ROCE_BOND_REGISTERING || + bond_grp->bond_state == HNS_ROCE_BOND_IS_BONDED)) + return true; + return false; } @@ -61,12 +78,15 @@ if (!netif_is_lag_port(hr_dev->iboe.netdevs[0])) return NULL; - if (!bond_grp) - return NULL; + if (!bond_grp) { + bond_grp = hns_roce_get_bond_grp(hr_dev); + if (!bond_grp) + return NULL; + } mutex_lock(&bond_grp->bond_mutex); - if (bond_grp->bond_state != HNS_ROCE_BOND_IS_BONDED) + if (bond_grp->bond_state == HNS_ROCE_BOND_NOT_BONDED) goto out; if (bond_grp->tx_type == NETDEV_LAG_TX_TYPE_ACTIVEBACKUP) { @@ -154,8 +174,8 @@ int ret; int i; - hns_roce_bond_get_active_slave(bond_grp); - /* bond_grp will be kfree during uninit_instance of main_hr_dev. + /* + * bond_grp will be kfree during uninit_instance of main_hr_dev. * Thus the main_hr_dev is switched before the uninit_instance * of the previous main_hr_dev. 
*/ @@ -165,7 +185,7 @@ hns_roce_bond_uninit_client(bond_grp, i); } - bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED; + bond_grp->bond_state = HNS_ROCE_BOND_REGISTERING; for (i = 0; i < ROCE_BOND_FUNC_MAX; i++) { net_dev = bond_grp->bond_func_info[i].net_dev; @@ -183,17 +203,18 @@ } } } - if (!hr_dev) return; hns_roce_bond_uninit_client(bond_grp, main_func_idx); + hns_roce_bond_get_active_slave(bond_grp); ret = hns_roce_cmd_bond(hr_dev, HNS_ROCE_SET_BOND); if (ret) { ibdev_err(&hr_dev->ib_dev, "failed to set RoCE bond!\n"); return; } + bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED; ibdev_info(&hr_dev->ib_dev, "RoCE set bond finished!\n"); } @@ -239,7 +260,6 @@ int ret; hns_roce_bond_get_active_slave(bond_grp); - bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED; ret = hns_roce_cmd_bond(bond_grp->main_hr_dev, HNS_ROCE_CHANGE_BOND); if (ret) { @@ -248,6 +268,7 @@ return; } + bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED; ibdev_info(&bond_grp->main_hr_dev->ib_dev, "RoCE slave changestate finished!\n"); } @@ -258,8 +279,6 @@ u8 inc_func_idx = 0; int ret; - hns_roce_bond_get_active_slave(bond_grp); - while (inc_slave_map > 0) { if (inc_slave_map & 1) hns_roce_bond_uninit_client(bond_grp, inc_func_idx); @@ -267,8 +286,7 @@ inc_func_idx++; } - bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED; - + hns_roce_bond_get_active_slave(bond_grp); ret = hns_roce_cmd_bond(bond_grp->main_hr_dev, HNS_ROCE_CHANGE_BOND); if (ret) { ibdev_err(&bond_grp->main_hr_dev->ib_dev, @@ -276,6 +294,7 @@ return; } + bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED; ibdev_info(&bond_grp->main_hr_dev->ib_dev, "RoCE slave increase finished!\n"); } @@ -290,8 +309,6 @@ int ret; int i; - hns_roce_bond_get_active_slave(bond_grp); - bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED; main_func_idx = PCI_FUNC(bond_grp->main_hr_dev->pci_dev->devfn); @@ -300,6 +317,7 @@ for (i = 0; i < ROCE_BOND_FUNC_MAX; i++) { net_dev = bond_grp->bond_func_info[i].net_dev; if (!(dec_slave_map & (1 << i)) && net_dev) { + bond_grp->bond_state = HNS_ROCE_BOND_REGISTERING; hr_dev = hns_roce_bond_init_client(bond_grp, i); if (hr_dev) { bond_grp->main_hr_dev = hr_dev; @@ -321,6 +339,7 @@ dec_func_idx++; } + hns_roce_bond_get_active_slave(bond_grp); if (bond_grp->slave_map_diff & (1 << main_func_idx)) ret = hns_roce_cmd_bond(hr_dev, HNS_ROCE_SET_BOND); else @@ -332,6 +351,7 @@ return; } + bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED; ibdev_info(&bond_grp->main_hr_dev->ib_dev, "RoCE slave decrease finished!\n"); } @@ -493,13 +513,13 @@ struct netdev_notifier_changeupper_info *info) { struct hns_roce_bond_group *bond_grp = hr_dev->bond_grp; + struct netdev_lag_upper_info *bond_upper_info = NULL; struct net_device *upper_dev = info->upper_dev; - struct netdev_lag_upper_info *bond_upper_info; - u32 pre_slave_map = bond_grp->slave_map; - u8 pre_slave_num = bond_grp->slave_num; bool changed = false; + u32 pre_slave_map; + u8 pre_slave_num;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/infiniband/hw/hns/hns_roce_bond.h
Changed
@@ -17,9 +17,20 @@
 	BOND_MODE_2_4,
 };
 
+enum bond_support_type {
+	BOND_NOT_SUPPORT,
+	/*
+	 * bond_grp already exists, but in the current
+	 * conditions it's no longer supported
+	 */
+	BOND_EXISTING_NOT_SUPPORT,
+	BOND_SUPPORT,
+};
+
 enum hns_roce_bond_state {
 	HNS_ROCE_BOND_NOT_BONDED,
 	HNS_ROCE_BOND_IS_BONDED,
+	HNS_ROCE_BOND_REGISTERING,
 	HNS_ROCE_BOND_SLAVE_INC,
 	HNS_ROCE_BOND_SLAVE_DEC,
 	HNS_ROCE_BOND_SLAVE_CHANGESTATE,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/infiniband/hw/hns/hns_roce_main.c
Changed
@@ -47,6 +47,30 @@
 #include "hns_roce_dca.h"
 #include "hns_roce_debugfs.h"
 
+static struct net_device *hns_roce_get_netdev(struct ib_device *ib_dev,
+					      u8 port_num)
+{
+	struct hns_roce_dev *hr_dev = to_hr_dev(ib_dev);
+	struct net_device *ndev;
+
+	if (port_num < 1 || port_num > hr_dev->caps.num_ports)
+		return NULL;
+
+	ndev = hr_dev->hw->get_bond_netdev(hr_dev);
+
+	rcu_read_lock();
+
+	if (!ndev)
+		ndev = hr_dev->iboe.netdevs[port_num - 1];
+
+	if (ndev)
+		dev_hold(ndev);
+
+	rcu_read_unlock();
+
+	return ndev;
+}
+
 static int hns_roce_set_mac(struct hns_roce_dev *hr_dev, u32 port,
 			    const u8 *addr)
 {
@@ -677,6 +701,7 @@
 	.disassociate_ucontext = hns_roce_disassociate_ucontext,
 	.get_dma_mr = hns_roce_get_dma_mr,
 	.get_link_layer = hns_roce_get_link_layer,
+	.get_netdev = hns_roce_get_netdev,
 	.get_port_immutable = hns_roce_port_immutable,
 	.mmap = hns_roce_mmap,
 	.mmap_free = hns_roce_free_mmap,
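The new get_netdev callback follows the common pattern for handing out a netdev from storage that can change underneath the caller: look the device up inside an RCU read-side section, take a reference with dev_hold() before leaving it, and make the caller responsible for dev_put(). A condensed sketch of the pattern (the __rcu-annotated slot is illustrative; hns_roce keeps plain pointers in iboe.netdevs):

    static struct net_device *get_ndev_ref(struct net_device __rcu **slot)
    {
    	struct net_device *ndev;

    	rcu_read_lock();
    	ndev = rcu_dereference(*slot);
    	if (ndev)
    		dev_hold(ndev);		/* pin it before the RCU section ends */
    	rcu_read_unlock();

    	return ndev;			/* caller must dev_put() */
    }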
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/infiniband/ulp/iser/iscsi_iser.c
Changed
@@ -979,22 +979,16 @@
 	.track_queue_depth	= 1,
 };
 
-static struct iscsi_transport_expand iscsi_iser_expand = {
-	.unbind_conn = iscsi_conn_unbind,
-};
-
 static struct iscsi_transport iscsi_iser_transport = {
 	.owner = THIS_MODULE,
 	.name = "iser",
-	.caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_TEXT_NEGO
-		| CAP_OPS_EXPAND,
+	.caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_TEXT_NEGO,
 	/* session management */
 	.create_session = iscsi_iser_session_create,
 	.destroy_session = iscsi_iser_session_destroy,
 	/* connection management */
 	.create_conn = iscsi_iser_conn_create,
 	.bind_conn = iscsi_iser_conn_bind,
-	.ops_expand = &iscsi_iser_expand,
 	.destroy_conn = iscsi_conn_teardown,
 	.attr_is_visible = iser_attr_is_visible,
 	.set_param = iscsi_iser_set_param,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
Changed
@@ -3662,29 +3662,6 @@
 	return smmu_domain->ssid ?: -EINVAL;
 }
 
-#ifdef CONFIG_SMMU_BYPASS_DEV
-static int arm_smmu_device_domain_type(struct device *dev)
-{
-	int i;
-	struct pci_dev *pdev;
-
-	if (!dev_is_pci(dev))
-		return 0;
-
-	pdev = to_pci_dev(dev);
-	for (i = 0; i < smmu_bypass_devices_num; i++) {
-		if ((smmu_bypass_devices[i].vendor == pdev->vendor) &&
-		    (smmu_bypass_devices[i].device == pdev->device)) {
-			dev_info(dev, "device 0x%hx:0x%hx uses identity mapping.",
-				 pdev->vendor, pdev->device);
-			return IOMMU_DOMAIN_IDENTITY;
-		}
-	}
-
-	return 0;
-}
-#endif
-
 static int arm_smmu_set_mpam(struct arm_smmu_device *smmu,
 			     int sid, int ssid, int partid, int pmg, int s1mpam)
 {
@@ -3933,16 +3910,40 @@
 #define IS_HISI_PTT_DEVICE(pdev)	((pdev)->vendor == PCI_VENDOR_ID_HUAWEI && \
 					 (pdev)->device == 0xa12e)
 
+#ifdef CONFIG_SMMU_BYPASS_DEV
+static int arm_smmu_bypass_dev_domain_type(struct device *dev)
+{
+	int i;
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	for (i = 0; i < smmu_bypass_devices_num; i++) {
+		if ((smmu_bypass_devices[i].vendor == pdev->vendor) &&
+		    (smmu_bypass_devices[i].device == pdev->device)) {
+			dev_info(dev, "device 0x%hx:0x%hx uses identity mapping.",
+				 pdev->vendor, pdev->device);
+			return IOMMU_DOMAIN_IDENTITY;
+		}
+	}
+
+	return 0;
+}
+#endif
+
 static int arm_smmu_def_domain_type(struct device *dev)
 {
+	int ret = 0;
+
 	if (dev_is_pci(dev)) {
 		struct pci_dev *pdev = to_pci_dev(dev);
 
 		if (IS_HISI_PTT_DEVICE(pdev))
 			return IOMMU_DOMAIN_IDENTITY;
+
+	#ifdef CONFIG_SMMU_BYPASS_DEV
+		ret = arm_smmu_bypass_dev_domain_type(dev);
+	#endif
 	}
 
-	return 0;
+	return ret;
 }
 
 static struct iommu_ops arm_smmu_ops = {
@@ -3979,9 +3980,6 @@
 	.aux_attach_dev		= arm_smmu_aux_attach_dev,
 	.aux_detach_dev		= arm_smmu_aux_detach_dev,
 	.aux_get_pasid		= arm_smmu_aux_get_pasid,
-#ifdef CONFIG_SMMU_BYPASS_DEV
-	.def_domain_type	= arm_smmu_device_domain_type,
-#endif
 	.dev_get_config		= arm_smmu_device_get_config,
 	.dev_set_config		= arm_smmu_device_set_config,
 	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
@@ -4167,7 +4165,7 @@
 	struct pci_dev *pdev;
 	struct arm_smmu_device *smmu = (struct arm_smmu_device *)data;
 
-	if (!arm_smmu_device_domain_type(dev))
+	if (!arm_smmu_def_domain_type(dev))
 		return 0;
 
 	pdev = to_pci_dev(dev);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/iommu/intel/iommu.c
Changed
@@ -4995,8 +4995,10 @@
 	}
 
 	iommu = device_to_iommu(dev, &bus, &devfn);
-	if (!iommu)
-		return -ENODEV;
+	if (!iommu) {
+		ret = -ENODEV;
+		goto unlock;
+	}
 	info = dmar_search_domain_by_dev_info(iommu->segment, bus, devfn);
 	if (!info) {
 		pn->dev->bus->iommu_ops = &intel_iommu_ops;
@@ -5011,8 +5013,10 @@
 	}
 	if (!pn_dev) {
 		iommu = device_to_iommu(dev, &bus, &devfn);
-		if (!iommu)
-			return -ENODEV;
+		if (!iommu) {
+			ret = -ENODEV;
+			goto unlock;
+		}
 		info = dmar_search_domain_by_dev_info(iommu->segment, bus, devfn);
 		if (!info) {
 			dev->bus->iommu_ops = &intel_iommu_ops;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/iommu/virtio-iommu.c
Changed
@@ -1115,7 +1115,7 @@
 	iommu_device_unregister(&viommu->iommu);
 
 	/* Stop all virtqueues */
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	vdev->config->del_vqs(vdev);
 
 	dev_info(&vdev->dev, "device removed\n");
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/md/dm-thin-metadata.c
Changed
@@ -701,6 +701,15 @@
 		goto bad_cleanup_data_sm;
 	}
 
+	/*
+	 * For pool metadata opening process, root setting is redundant
+	 * because it will be set again in __begin_transaction(). But dm
+	 * pool aborting process really needs to get last transaction's
+	 * root to avoid accessing broken btree.
+	 */
+	pmd->root = le64_to_cpu(disk_super->data_mapping_root);
+	pmd->details_root = le64_to_cpu(disk_super->device_details_root);
+
 	__setup_btree_details(pmd);
 	dm_bm_unlock(sblock);
 
@@ -753,13 +762,15 @@
 	return r;
 }
 
-static void __destroy_persistent_data_objects(struct dm_pool_metadata *pmd)
+static void __destroy_persistent_data_objects(struct dm_pool_metadata *pmd,
+					      bool destroy_bm)
 {
 	dm_sm_destroy(pmd->data_sm);
 	dm_sm_destroy(pmd->metadata_sm);
 	dm_tm_destroy(pmd->nb_tm);
 	dm_tm_destroy(pmd->tm);
-	dm_block_manager_destroy(pmd->bm);
+	if (destroy_bm)
+		dm_block_manager_destroy(pmd->bm);
 }
 
 static int __begin_transaction(struct dm_pool_metadata *pmd)
@@ -966,7 +977,7 @@
 	}
 	pmd_write_unlock(pmd);
 	if (!pmd->fail_io)
-		__destroy_persistent_data_objects(pmd);
+		__destroy_persistent_data_objects(pmd, true);
 
 	kfree(pmd);
 	return 0;
@@ -1873,19 +1884,52 @@
 int dm_pool_abort_metadata(struct dm_pool_metadata *pmd)
 {
 	int r = -EINVAL;
+	struct dm_block_manager *old_bm = NULL, *new_bm = NULL;
+
+	/* fail_io is double-checked with pmd->root_lock held below */
+	if (unlikely(pmd->fail_io))
+		return r;
+
+	/*
+	 * Replacement block manager (new_bm) is created and old_bm destroyed outside of
+	 * pmd root_lock to avoid ABBA deadlock that would result (due to life-cycle of
+	 * shrinker associated with the block manager's bufio client vs pmd root_lock).
+	 * - must take shrinker_rwsem without holding pmd->root_lock
+	 */
+	new_bm = dm_block_manager_create(pmd->bdev, THIN_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
+					 THIN_MAX_CONCURRENT_LOCKS);
 
 	pmd_write_lock(pmd);
-	if (pmd->fail_io)
+	if (pmd->fail_io) {
+		pmd_write_unlock(pmd);
 		goto out;
+	}
 
 	__set_abort_with_changes_flags(pmd);
-	__destroy_persistent_data_objects(pmd);
-	r = __create_persistent_data_objects(pmd, false);
+	__destroy_persistent_data_objects(pmd, false);
+	old_bm = pmd->bm;
+	if (IS_ERR(new_bm)) {
+		DMERR("could not create block manager during abort");
+		pmd->bm = NULL;
+		r = PTR_ERR(new_bm);
+		goto out_unlock;
+	}
+
+	pmd->bm = new_bm;
+	r = __open_or_format_metadata(pmd, false);
+	if (r) {
+		pmd->bm = NULL;
+		goto out_unlock;
+	}
+	new_bm = NULL;
+out_unlock:
 	if (r)
 		pmd->fail_io = true;
-
-out:
 	pmd_write_unlock(pmd);
+	dm_block_manager_destroy(old_bm);
+out:
+	if (new_bm && !IS_ERR(new_bm))
+		dm_block_manager_destroy(new_bm);
 
 	return r;
 }
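The rewritten abort path avoids an ABBA deadlock by never creating or destroying a block manager (whose bufio client registers a shrinker, and hence takes shrinker_rwsem) while holding pmd->root_lock: the replacement is built before the lock, swapped in under it, and the old one torn down after. A generic userspace analogue of that create-swap-destroy ordering:

    #include <pthread.h>
    #include <stdlib.h>

    struct mgr { int dummy; };		/* stands in for dm_block_manager */

    static pthread_mutex_t root_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct mgr *cur;

    static void swap_manager(void)
    {
    	struct mgr *new_m = calloc(1, sizeof(*new_m));	/* heavy work, lock not held */
    	struct mgr *old_m;

    	pthread_mutex_lock(&root_lock);
    	old_m = cur;
    	cur = new_m;					/* cheap swap under the lock */
    	pthread_mutex_unlock(&root_lock);

    	free(old_m);					/* heavy teardown, lock not held */
    }

Error handling in the real patch adds one wrinkle: if dm_block_manager_create() failed, the swap is abandoned under the lock and the half-built replacement is destroyed on the way out, again without root_lock held.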
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/media/dvb-core/dmxdev.c
Changed
@@ -800,6 +800,11 @@
    if (mutex_lock_interruptible(&dmxdev->mutex))
        return -ERESTARTSYS;

+   if (dmxdev->exit) {
+       mutex_unlock(&dmxdev->mutex);
+       return -ENODEV;
+   }
+
    for (i = 0; i < dmxdev->filternum; i++)
        if (dmxdev->filter[i].state == DMXDEV_STATE_FREE)
            break;
@@ -1458,7 +1463,10 @@

void dvb_dmxdev_release(struct dmxdev *dmxdev)
{
+   mutex_lock(&dmxdev->mutex);
    dmxdev->exit = 1;
+   mutex_unlock(&dmxdev->mutex);
+
    if (dmxdev->dvbdev->users > 1) {
        wait_event(dmxdev->dvbdev->wait_queue,
                   dmxdev->dvbdev->users == 1);
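The two hunks close a teardown race: dvb_dmxdev_release() now publishes exit = 1 under the same mutex the open path takes, and the open path re-checks the flag after acquiring it, so an open either observes the device going away or completes before release proceeds. The idiom reduced to a compilable pthread sketch (names invented):

#include <errno.h>
#include <pthread.h>

static pthread_mutex_t dev_mutex = PTHREAD_MUTEX_INITIALIZER;
static int dev_exit;

static int dev_open(void)
{
    pthread_mutex_lock(&dev_mutex);
    if (dev_exit) {                      /* torn down, or tearing down */
        pthread_mutex_unlock(&dev_mutex);
        return -ENODEV;
    }
    /* ... claim a free filter slot while still holding the mutex ... */
    pthread_mutex_unlock(&dev_mutex);
    return 0;
}

static void dev_release(void)
{
    pthread_mutex_lock(&dev_mutex);
    dev_exit = 1;                        /* published under the mutex */
    pthread_mutex_unlock(&dev_mutex);
    /* ... wait for the remaining users to drain ... */
}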
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/media/rc/mceusb.c
Changed
@@ -1416,42 +1416,37 @@
{
    int ret;
    struct device *dev = ir->dev;
-   char *data;
-
-   data = kzalloc(USB_CTRL_MSG_SZ, GFP_KERNEL);
-   if (!data) {
-       dev_err(dev, "%s: memory allocation failed!", __func__);
-       return;
-   }
+   char data[USB_CTRL_MSG_SZ];

    /*
     * This is a strange one. Windows issues a set address to the device
     * on the receive control pipe and expect a certain value pair back
     */
-   ret = usb_control_msg(ir->usbdev, usb_rcvctrlpipe(ir->usbdev, 0),
-                         USB_REQ_SET_ADDRESS, USB_TYPE_VENDOR, 0, 0,
-                         data, USB_CTRL_MSG_SZ, 3000);
+   ret = usb_control_msg_recv(ir->usbdev, 0, USB_REQ_SET_ADDRESS,
+                              USB_DIR_IN | USB_TYPE_VENDOR,
+                              0, 0, data, USB_CTRL_MSG_SZ, 3000,
+                              GFP_KERNEL);
    dev_dbg(dev, "set address - ret = %d", ret);
    dev_dbg(dev, "set address - data[0] = %d, data[1] = %d",
        data[0], data[1]);

    /* set feature: bit rate 38400 bps */
-   ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
-                         USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
-                         0xc04e, 0x0000, NULL, 0, 3000);
+   ret = usb_control_msg_send(ir->usbdev, 0,
+                              USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
+                              0xc04e, 0x0000, NULL, 0, 3000, GFP_KERNEL);
    dev_dbg(dev, "set feature - ret = %d", ret);

    /* bRequest 4: set char length to 8 bits */
-   ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
-                         4, USB_TYPE_VENDOR,
-                         0x0808, 0x0000, NULL, 0, 3000);
+   ret = usb_control_msg_send(ir->usbdev, 0,
+                              4, USB_TYPE_VENDOR,
+                              0x0808, 0x0000, NULL, 0, 3000, GFP_KERNEL);
    dev_dbg(dev, "set char length - retB = %d", ret);

    /* bRequest 2: set handshaking to use DTR/DSR */
-   ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
-                         2, USB_TYPE_VENDOR,
-                         0x0000, 0x0100, NULL, 0, 3000);
+   ret = usb_control_msg_send(ir->usbdev, 0,
+                              2, USB_TYPE_VENDOR,
+                              0x0000, 0x0100, NULL, 0, 3000, GFP_KERNEL);
    dev_dbg(dev, "set handshake - retC = %d", ret);

    /* device resume */
@@ -1459,8 +1454,6 @@

    /* get hw/sw revision? */
    mce_command_out(ir, GET_REVISION, sizeof(GET_REVISION));
-
-   kfree(data);
}

static void mceusb_gen2_init(struct mceusb_dev *ir)
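The heap buffer disappears because usb_control_msg_recv() and usb_control_msg_send() (part of the USB core since v5.10) bounce the caller's data through an internally allocated DMA-safe buffer, which is what makes the on-stack data[] above legal; they also take GFP flags for that allocation and return 0 or a negative error instead of a byte count, treating short transfers as failures. The recv helper's shape, reproduced from the v5.10 API as a reference declaration (verify against include/linux/usb.h before relying on it):

/* Returns 0 on success; a short read is reported as an error, so callers
 * no longer need to check the transferred length themselves. */
int usb_control_msg_recv(struct usb_device *dev, __u8 endpoint,
                         __u8 request, __u8 requesttype,
                         __u16 value, __u16 index,
                         void *driver_data, __u16 size,
                         int timeout, gfp_t memflags);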
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/misc/sgi-gru/grufault.c
Changed
@@ -648,6 +648,7 @@
    if ((cb & (GRU_HANDLE_STRIDE - 1)) || ucbnum >= GRU_NUM_CB)
        return -EINVAL;

+again:
    gts = gru_find_lock_gts(cb);
    if (!gts)
        return -EINVAL;
@@ -656,7 +657,11 @@
    if (ucbnum >= gts->ts_cbr_au_count * GRU_CBR_AU_SIZE)
        goto exit;

-   gru_check_context_placement(gts);
+   if (gru_check_context_placement(gts)) {
+       gru_unlock_gts(gts);
+       gru_unload_context(gts, 1);
+       goto again;
+   }

    /*
     * CCH may contain stale data if ts_force_cch_reload is set.
@@ -874,7 +879,11 @@
    } else {
        gts->ts_user_blade_id = req.val1;
        gts->ts_user_chiplet_id = req.val0;
-       gru_check_context_placement(gts);
+       if (gru_check_context_placement(gts)) {
+           gru_unlock_gts(gts);
+           gru_unload_context(gts, 1);
+           return ret;
+       }
    }
    break;
case sco_gseg_owner:
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/misc/sgi-gru/grumain.c
Changed
@@ -716,9 +716,10 @@
 * chiplet. Misassignment can occur if the process migrates to a different
 * blade or if the user changes the selected blade/chiplet.
 */
-void gru_check_context_placement(struct gru_thread_state *gts)
+int gru_check_context_placement(struct gru_thread_state *gts)
{
    struct gru_state *gru;
+   int ret = 0;

    /*
     * If the current task is the context owner, verify that the
@@ -726,15 +727,23 @@
     * references. Pthread apps use non-owner references to the CBRs.
     */
    gru = gts->ts_gru;
+   /*
+    * If gru or gts->ts_tgid_owner isn't initialized properly, return
+    * success to indicate that the caller does not need to unload the
+    * gru context.The caller is responsible for their inspection and
+    * reinitialization if needed.
+    */
    if (!gru || gts->ts_tgid_owner != current->tgid)
-       return;
+       return ret;

    if (!gru_check_chiplet_assignment(gru, gts)) {
        STAT(check_context_unload);
-       gru_unload_context(gts, 1);
+       ret = -EINVAL;
    } else if (gru_retarget_intr(gts)) {
        STAT(check_context_retarget_intr);
    }
+
+   return ret;
}


@@ -934,7 +943,12 @@

    mutex_lock(&gts->ts_ctxlock);
    preempt_disable();
-   gru_check_context_placement(gts);
+   if (gru_check_context_placement(gts)) {
+       preempt_enable();
+       mutex_unlock(&gts->ts_ctxlock);
+       gru_unload_context(gts, 1);
+       return VM_FAULT_NOPAGE;
+   }

    if (!gts->ts_gru) {
        STAT(load_user_context);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/misc/sgi-gru/grutables.h
Changed
@@ -637,7 +637,7 @@
extern int gru_user_unload_context(unsigned long arg);
extern int gru_get_exception_detail(unsigned long arg);
extern int gru_set_context_option(unsigned long address);
-extern void gru_check_context_placement(struct gru_thread_state *gts);
+extern int gru_check_context_placement(struct gru_thread_state *gts);
extern int gru_cpu_fault_map_id(void);
extern struct vm_area_struct *gru_find_vma(unsigned long vaddr);
extern void gru_flush_all_tlb(struct gru_state *gru);
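Taken together, the three sgi-gru hunks change gru_check_context_placement() from unloading a misplaced context itself to merely reporting it via the new int return, so that callers can drop the gts lock before calling gru_unload_context() and then retry; this looks like the backport of the upstream fix for the use-after-free in gru_set_context_option(), gru_fault() and gru_handle_user_call_os() (CVE-2022-3424). The caller-side shape, condensed from grufault.c above:

/* Simplified pattern, not a complete function: locks are dropped before
 * the unload, then the lookup is redone from scratch. */
again:
    gts = gru_find_lock_gts(cb);
    if (!gts)
        return -EINVAL;
    if (gru_check_context_placement(gts)) {
        gru_unlock_gts(gts);        /* never unload while holding gts */
        gru_unload_context(gts, 1);
        goto again;
    }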
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/mtd/maps/physmap-core.c
Changed
@@ -307,6 +307,9 @@
    const char *probe_type;

    match = of_match_device(of_flash_match, &dev->dev);
+   if (!match)
+       return NULL;
+
    probe_type = match->data;
    if (probe_type)
        return probe_type;
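of_match_device() returns NULL when the device was not instantiated through the OF match table (a legacy platform device, for instance), and the old code dereferenced the result unconditionally. The essence of the fix:

/* Check the match before touching ->data; returning NULL here lets the
 * caller fall back to its non-DT probe path. */
const struct of_device_id *match = of_match_device(of_flash_match, &dev->dev);
if (!match)
    return NULL;
probe_type = match->data;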
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/caif/caif_virtio.c
Changed
@@ -764,7 +764,7 @@
    debugfs_remove_recursive(cfv->debugfs);

    vringh_kiov_cleanup(&cfv->ctx.riov);
-   vdev->config->reset(vdev);
+   virtio_reset_device(vdev);
    vdev->vringh_config->del_vrhs(cfv->vdev);
    cfv->vr_rx = NULL;
    vdev->config->del_vqs(cfv->vdev);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hnae3.h
Changed
@@ -845,7 +845,6 @@
    const struct hnae3_dcb_ops *dcb_ops;

    u16 int_rl_setting;
-   enum pkt_hash_types rss_type;

    void __iomem *io_base;
};
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h
Changed
@@ -294,8 +294,8 @@
    HCLGE_PPP_CMD0_INT_CMD = 0x2100,
    HCLGE_PPP_CMD1_INT_CMD = 0x2101,
    HCLGE_MAC_ETHERTYPE_IDX_RD = 0x2105,
-   HCLGE_OPC_WOL_CFG = 0x2200,
    HCLGE_OPC_WOL_GET_SUPPORTED_MODE = 0x2201,
+   HCLGE_OPC_WOL_CFG = 0x2202,
    HCLGE_NCSI_INT_EN = 0x2401,

    /* ROH MAC commands */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_rss.c
Changed
@@ -191,23 +191,6 @@
    return HCLGE_COMM_RSS_KEY_SIZE;
}

-void hclge_comm_get_rss_type(struct hnae3_handle *nic,
-                             struct hclge_comm_rss_tuple_cfg *rss_tuple_sets)
-{
-   if (rss_tuple_sets->ipv4_tcp_en ||
-       rss_tuple_sets->ipv4_udp_en ||
-       rss_tuple_sets->ipv4_sctp_en ||
-       rss_tuple_sets->ipv6_tcp_en ||
-       rss_tuple_sets->ipv6_udp_en ||
-       rss_tuple_sets->ipv6_sctp_en)
-       nic->kinfo.rss_type = PKT_HASH_TYPE_L4;
-   else if (rss_tuple_sets->ipv4_fragment_en ||
-            rss_tuple_sets->ipv6_fragment_en)
-       nic->kinfo.rss_type = PKT_HASH_TYPE_L3;
-   else
-       nic->kinfo.rss_type = PKT_HASH_TYPE_NONE;
-}
-
int hclge_comm_parse_rss_hfunc(struct hclge_comm_rss_cfg *rss_cfg,
                               const u8 hfunc, u8 *hash_algo)
{
@@ -344,9 +327,6 @@
    req->ipv6_sctp_en = rss_cfg->rss_tuple_sets.ipv6_sctp_en;
    req->ipv6_fragment_en = rss_cfg->rss_tuple_sets.ipv6_fragment_en;

-   if (is_pf)
-       hclge_comm_get_rss_type(nic, &rss_cfg->rss_tuple_sets);
-
    ret = hclge_comm_cmd_send(hw, &desc, 1);
    if (ret)
        dev_err(&hw->cmq.csq.pdev->dev,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_rss.h
Changed
@@ -95,8 +95,6 @@
};

u32 hclge_comm_get_rss_key_size(struct hnae3_handle *handle);
-void hclge_comm_get_rss_type(struct hnae3_handle *nic,
-                             struct hclge_comm_rss_tuple_cfg *rss_tuple_sets);
void hclge_comm_rss_indir_init_cfg(struct hnae3_ae_dev *ae_dev,
                                   struct hclge_comm_rss_cfg *rss_cfg);
int hclge_comm_get_rss_tuple(struct hclge_comm_rss_cfg *rss_cfg, int flow_type,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
Changed
@@ -110,26 +110,28 @@ }; MODULE_DEVICE_TABLE(pci, hns3_pci_tbl); -#define HNS3_RX_PTYPE_ENTRY(ptype, l, s, t) \ +#define HNS3_RX_PTYPE_ENTRY(ptype, l, s, t, h) \ { ptype, \ l, \ CHECKSUM_##s, \ HNS3_L3_TYPE_##t, \ - 1 } + 1, \ + h} #define HNS3_RX_PTYPE_UNUSED_ENTRY(ptype) \ - { ptype, 0, CHECKSUM_NONE, HNS3_L3_TYPE_PARSE_FAIL, 0 } + { ptype, 0, CHECKSUM_NONE, HNS3_L3_TYPE_PARSE_FAIL, 0, \ + PKT_HASH_TYPE_NONE } static const struct hns3_rx_ptype hns3_rx_ptype_tbl[] = { HNS3_RX_PTYPE_UNUSED_ENTRY(0), - HNS3_RX_PTYPE_ENTRY(1, 0, COMPLETE, ARP), - HNS3_RX_PTYPE_ENTRY(2, 0, COMPLETE, RARP), - HNS3_RX_PTYPE_ENTRY(3, 0, COMPLETE, LLDP), - HNS3_RX_PTYPE_ENTRY(4, 0, COMPLETE, PARSE_FAIL), - HNS3_RX_PTYPE_ENTRY(5, 0, COMPLETE, PARSE_FAIL), - HNS3_RX_PTYPE_ENTRY(6, 0, COMPLETE, PARSE_FAIL), - HNS3_RX_PTYPE_ENTRY(7, 0, COMPLETE, CNM), - HNS3_RX_PTYPE_ENTRY(8, 0, NONE, PARSE_FAIL), + HNS3_RX_PTYPE_ENTRY(1, 0, COMPLETE, ARP, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(2, 0, COMPLETE, RARP, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(3, 0, COMPLETE, LLDP, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(4, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(5, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(6, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(7, 0, COMPLETE, CNM, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(8, 0, NONE, PARSE_FAIL, PKT_HASH_TYPE_NONE), HNS3_RX_PTYPE_UNUSED_ENTRY(9), HNS3_RX_PTYPE_UNUSED_ENTRY(10), HNS3_RX_PTYPE_UNUSED_ENTRY(11), @@ -137,36 +139,36 @@ HNS3_RX_PTYPE_UNUSED_ENTRY(13), HNS3_RX_PTYPE_UNUSED_ENTRY(14), HNS3_RX_PTYPE_UNUSED_ENTRY(15), - HNS3_RX_PTYPE_ENTRY(16, 0, COMPLETE, PARSE_FAIL), - HNS3_RX_PTYPE_ENTRY(17, 0, COMPLETE, IPV4), - HNS3_RX_PTYPE_ENTRY(18, 0, COMPLETE, IPV4), - HNS3_RX_PTYPE_ENTRY(19, 0, UNNECESSARY, IPV4), - HNS3_RX_PTYPE_ENTRY(20, 0, UNNECESSARY, IPV4), - HNS3_RX_PTYPE_ENTRY(21, 0, NONE, IPV4), - HNS3_RX_PTYPE_ENTRY(22, 0, UNNECESSARY, IPV4), - HNS3_RX_PTYPE_ENTRY(23, 0, NONE, IPV4), - HNS3_RX_PTYPE_ENTRY(24, 0, NONE, IPV4), - HNS3_RX_PTYPE_ENTRY(25, 0, UNNECESSARY, IPV4), + HNS3_RX_PTYPE_ENTRY(16, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(17, 0, COMPLETE, IPV4, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(18, 0, COMPLETE, IPV4, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(19, 0, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(20, 0, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(21, 0, NONE, IPV4, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(22, 0, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(23, 0, NONE, IPV4, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(24, 0, NONE, IPV4, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(25, 0, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), HNS3_RX_PTYPE_UNUSED_ENTRY(26), HNS3_RX_PTYPE_UNUSED_ENTRY(27), HNS3_RX_PTYPE_UNUSED_ENTRY(28), - HNS3_RX_PTYPE_ENTRY(29, 0, COMPLETE, PARSE_FAIL), - HNS3_RX_PTYPE_ENTRY(30, 0, COMPLETE, PARSE_FAIL), - HNS3_RX_PTYPE_ENTRY(31, 0, COMPLETE, IPV4), - HNS3_RX_PTYPE_ENTRY(32, 0, COMPLETE, IPV4), - HNS3_RX_PTYPE_ENTRY(33, 1, UNNECESSARY, IPV4), - HNS3_RX_PTYPE_ENTRY(34, 1, UNNECESSARY, IPV4), - HNS3_RX_PTYPE_ENTRY(35, 1, UNNECESSARY, IPV4), - HNS3_RX_PTYPE_ENTRY(36, 0, COMPLETE, IPV4), - HNS3_RX_PTYPE_ENTRY(37, 0, COMPLETE, IPV4), + HNS3_RX_PTYPE_ENTRY(29, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(30, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(31, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(32, 0, COMPLETE, IPV4, 
PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(33, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(34, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(35, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(36, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(37, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), HNS3_RX_PTYPE_UNUSED_ENTRY(38), - HNS3_RX_PTYPE_ENTRY(39, 0, COMPLETE, IPV6), - HNS3_RX_PTYPE_ENTRY(40, 0, COMPLETE, IPV6), - HNS3_RX_PTYPE_ENTRY(41, 1, UNNECESSARY, IPV6), - HNS3_RX_PTYPE_ENTRY(42, 1, UNNECESSARY, IPV6), - HNS3_RX_PTYPE_ENTRY(43, 1, UNNECESSARY, IPV6), - HNS3_RX_PTYPE_ENTRY(44, 0, COMPLETE, IPV6), - HNS3_RX_PTYPE_ENTRY(45, 0, COMPLETE, IPV6), + HNS3_RX_PTYPE_ENTRY(39, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(40, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(41, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(42, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(43, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(44, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(45, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), HNS3_RX_PTYPE_UNUSED_ENTRY(46), HNS3_RX_PTYPE_UNUSED_ENTRY(47), HNS3_RX_PTYPE_UNUSED_ENTRY(48), @@ -232,35 +234,35 @@ HNS3_RX_PTYPE_UNUSED_ENTRY(108), HNS3_RX_PTYPE_UNUSED_ENTRY(109), HNS3_RX_PTYPE_UNUSED_ENTRY(110), - HNS3_RX_PTYPE_ENTRY(111, 0, COMPLETE, IPV6), - HNS3_RX_PTYPE_ENTRY(112, 0, COMPLETE, IPV6), - HNS3_RX_PTYPE_ENTRY(113, 0, UNNECESSARY, IPV6), - HNS3_RX_PTYPE_ENTRY(114, 0, UNNECESSARY, IPV6), - HNS3_RX_PTYPE_ENTRY(115, 0, NONE, IPV6), - HNS3_RX_PTYPE_ENTRY(116, 0, UNNECESSARY, IPV6), - HNS3_RX_PTYPE_ENTRY(117, 0, NONE, IPV6), - HNS3_RX_PTYPE_ENTRY(118, 0, NONE, IPV6), - HNS3_RX_PTYPE_ENTRY(119, 0, UNNECESSARY, IPV6), + HNS3_RX_PTYPE_ENTRY(111, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(112, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(113, 0, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(114, 0, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(115, 0, NONE, IPV6, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(116, 0, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(117, 0, NONE, IPV6, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(118, 0, NONE, IPV6, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(119, 0, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), HNS3_RX_PTYPE_UNUSED_ENTRY(120), HNS3_RX_PTYPE_UNUSED_ENTRY(121), HNS3_RX_PTYPE_UNUSED_ENTRY(122), - HNS3_RX_PTYPE_ENTRY(123, 0, COMPLETE, PARSE_FAIL), - HNS3_RX_PTYPE_ENTRY(124, 0, COMPLETE, PARSE_FAIL), - HNS3_RX_PTYPE_ENTRY(125, 0, COMPLETE, IPV4), - HNS3_RX_PTYPE_ENTRY(126, 0, COMPLETE, IPV4), - HNS3_RX_PTYPE_ENTRY(127, 1, UNNECESSARY, IPV4), - HNS3_RX_PTYPE_ENTRY(128, 1, UNNECESSARY, IPV4), - HNS3_RX_PTYPE_ENTRY(129, 1, UNNECESSARY, IPV4), - HNS3_RX_PTYPE_ENTRY(130, 0, COMPLETE, IPV4), - HNS3_RX_PTYPE_ENTRY(131, 0, COMPLETE, IPV4), + HNS3_RX_PTYPE_ENTRY(123, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(124, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE), + HNS3_RX_PTYPE_ENTRY(125, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(126, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(127, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(128, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(129, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(130, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(131, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3), 
HNS3_RX_PTYPE_UNUSED_ENTRY(132), - HNS3_RX_PTYPE_ENTRY(133, 0, COMPLETE, IPV6), - HNS3_RX_PTYPE_ENTRY(134, 0, COMPLETE, IPV6), - HNS3_RX_PTYPE_ENTRY(135, 1, UNNECESSARY, IPV6), - HNS3_RX_PTYPE_ENTRY(136, 1, UNNECESSARY, IPV6), - HNS3_RX_PTYPE_ENTRY(137, 1, UNNECESSARY, IPV6), - HNS3_RX_PTYPE_ENTRY(138, 0, COMPLETE, IPV6), - HNS3_RX_PTYPE_ENTRY(139, 0, COMPLETE, IPV6), + HNS3_RX_PTYPE_ENTRY(133, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(134, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(135, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(136, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(137, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4), + HNS3_RX_PTYPE_ENTRY(138, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), + HNS3_RX_PTYPE_ENTRY(139, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3), HNS3_RX_PTYPE_UNUSED_ENTRY(140), HNS3_RX_PTYPE_UNUSED_ENTRY(141), HNS3_RX_PTYPE_UNUSED_ENTRY(142), @@ -3856,8 +3858,8 @@ desc_cb->reuse_flag = 1; } else if (frag_size <= ring->rx_copybreak) { ret = hns3_handle_rx_copybreak(skb, i, ring, pull_len, desc_cb); - if (ret) - goto out; + if (!ret) + return; } out: @@ -4251,15 +4253,35 @@ } static void hns3_set_rx_skb_rss_type(struct hns3_enet_ring *ring, - struct sk_buff *skb, u32 rss_hash) + struct sk_buff *skb, u32 rss_hash, + u32 l234info, u32 ol_info) { - struct hnae3_handle *handle = ring->tqp->handle; - enum pkt_hash_types rss_type; + enum pkt_hash_types rss_type = PKT_HASH_TYPE_NONE; + struct net_device *netdev = ring_to_netdev(ring); + struct hns3_nic_priv *priv = netdev_priv(netdev); - if (rss_hash) - rss_type = handle->kinfo.rss_type; - else - rss_type = PKT_HASH_TYPE_NONE; + if (test_bit(HNS3_NIC_STATE_RXD_ADV_LAYOUT_ENABLE, &priv->state)) { + u32 ptype = hnae3_get_field(ol_info, HNS3_RXD_PTYPE_M, + HNS3_RXD_PTYPE_S); + + rss_type = hns3_rx_ptype_tbl[ptype].hash_type; + } else { + int l3_type = hnae3_get_field(l234info, HNS3_RXD_L3ID_M,
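Together with the hnae3.h and hclge_comm_rss.c hunks above, this removes the single device-wide kinfo.rss_type in favour of a per-packet hash type: each ptype table entry now records whether the hardware hashed over L3 or L4 headers, and the new hns3_set_rx_skb_rss_type() (its tail is cut off by the diff viewer) picks the type per descriptor. Roughly, assuming the function ends with the usual skb_set_hash() call:

/* Condensed sketch of the per-packet selection; field and bit names as in
 * the driver, control flow reconstructed around the truncated hunk. */
if (test_bit(HNS3_NIC_STATE_RXD_ADV_LAYOUT_ENABLE, &priv->state)) {
    u32 ptype = hnae3_get_field(ol_info, HNS3_RXD_PTYPE_M,
                                HNS3_RXD_PTYPE_S);

    rss_type = hns3_rx_ptype_tbl[ptype].hash_type;  /* table lookup */
} else {
    /* legacy descriptor layout: classify from the L3/L4 id fields */
}
skb_set_hash(skb, rss_hash, rss_type);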
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
Changed
@@ -414,6 +414,7 @@
    u32 ip_summed : 2;
    u32 l3_type : 4;
    u32 valid : 1;
+   u32 hash_type: 3;
};

struct ring_stats {
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
Changed
@@ -3331,6 +3331,7 @@
    hdev->hw.mac.autoneg = cmd.base.autoneg;
    hdev->hw.mac.speed = cmd.base.speed;
    hdev->hw.mac.duplex = cmd.base.duplex;
+   linkmode_copy(hdev->hw.mac.advertising, cmd.link_modes.advertising);

    return 0;
}
@@ -4974,7 +4975,6 @@
        return ret;
    }

-   hclge_comm_get_rss_type(&vport->nic, &hdev->rss_cfg.rss_tuple_sets);
    return 0;
}

@@ -12199,12 +12199,10 @@
    struct hclge_vport *vport = hclge_get_vport(handle);
    struct hclge_dev *hdev = vport->back;
    struct hclge_wol_info *wol_info = &hdev->hw.mac.wol;
-   u32 wol_supported;
    u32 wol_mode;

-   wol_supported = hclge_wol_mode_from_ethtool(wol->supported);
    wol_mode = hclge_wol_mode_from_ethtool(wol->wolopts);
-   if (wol_mode & ~wol_supported)
+   if (wol_mode & ~wol_info->wol_support_mode)
        return -EINVAL;

    wol_info->wol_current_mode = wol_mode;
@@ -12305,9 +12303,12 @@
    if (ret)
        goto err_msi_irq_uninit;

-   if (hdev->hw.mac.media_type == HNAE3_MEDIA_TYPE_COPPER &&
-       !hnae3_dev_phy_imp_supported(hdev)) {
-       ret = hclge_mac_mdio_config(hdev);
+   if (hdev->hw.mac.media_type == HNAE3_MEDIA_TYPE_COPPER) {
+       if (hnae3_dev_phy_imp_supported(hdev))
+           ret = hclge_update_tp_port_info(hdev);
+       else
+           ret = hclge_mac_mdio_config(hdev);
+
        if (ret)
            goto err_msi_irq_uninit;
    }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/Kconfig
Changed
@@ -16,7 +16,6 @@

if NET_VENDOR_NETSWIFT

-source "drivers/net/ethernet/netswift/txgbe/Kconfig"
config NGBE
	tristate "Netswift PCI-Express Gigabit Ethernet support"
	depends on PCI
@@ -73,4 +72,58 @@

	  If unsure, say N.

+config TXGBE
+	tristate "Netswift PCI-Express 10Gigabit Ethernet support"
+	depends on PCI
+	imply PTP_1588_CLOCK
+	help
+	  This driver supports Netswift 10gigabit ethernet adapters.
+	  For more information on how to identify your adapter, go
+	  to <http://www.net-swift.com>
+
+	  To compile this driver as a module, choose M here. The module
+	  will be called txgbe.
+
+config TXGBE_HWMON
+	bool "Netswift PCI-Express 10Gigabit adapters HWMON support"
+	default n
+	depends on TXGBE && HWMON && !(TXGBE=y && HWMON=m)
+	help
+	  Say Y if you want to expose thermal sensor data on these devices.
+	  For more information on how to use your adapter, go
+	  to <http://www.net-swift.com>
+
+	  If unsure, say N.
+
+config TXGBE_DEBUG_FS
+	bool "Netswift PCI-Express 10Gigabit adapters debugfs support"
+	default n
+	depends on TXGBE
+	help
+	  Say Y if you want to setup debugfs for these devices.
+	  For more information on how to use your adapter, go
+	  to <http://www.net-swift.com>
+
+	  If unsure, say N.
+
+config TXGBE_POLL_LINK_STATUS
+	bool "Netswift PCI-Express 10Gigabit adapters poll mode support"
+	default n
+	depends on TXGBE
+	help
+	  Say Y if you want to turn these devices to poll mode instead of interrupt-trigged TX/RX.
+	  For more information on how to use your adapter, go
+	  to <http://www.net-swift.com>
+
+	  If unsure, say N.
+config TXGBE_SYSFS
+	bool "Netswift PCI-Express 10Gigabit adapters sysfs support"
+	default n
+	depends on TXGBE
+	help
+	  Say Y if you want to setup sysfs for these devices.
+	  For more information on how to use your adapter, go
+	  to <http://www.net-swift.com>
+
+	  If unsure, say N.

endif # NET_VENDOR_NETSWIFT
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/Kconfig
Deleted
@@ -1,13 +0,0 @@
-#
-# Netswift driver configuration
-#
-
-config TXGBE
-	tristate "Netswift 10G Network Interface Card"
-	default n
-	depends on PCI_MSI && NUMA && PCI_IOV && DCB
-	help
-	  This driver supports Netswift 10G Ethernet cards.
-	  To compile this driver as part of the kernel, choose Y here.
-	  If unsure, choose N.
-	  The default is N.
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/Makefile
Changed
@@ -9,3 +9,7 @@
txgbe-objs := txgbe_main.o txgbe_ethtool.o \
              txgbe_hw.o txgbe_phy.o txgbe_bp.o \
              txgbe_mbx.o txgbe_mtd.o txgbe_param.o txgbe_lib.o txgbe_ptp.o
+
+txgbe-$(CONFIG_TXGBE_HWMON) += txgbe_sysfs.o
+txgbe-$(CONFIG_TXGBE_DEBUG_FS) += txgbe_debugfs.o
+txgbe-$(CONFIG_TXGBE_SYSFS) += txgbe_sysfs.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe.h
Changed
@@ -67,6 +67,10 @@ #define CL72_KRTR_PRBS_MODE_EN 0x2fff /*deepinsw : 512 default to 256*/ #endif +#ifndef TXGBE_STATIC_ITR +#define TXGBE_STATIC_ITR 1 /* static itr configure */ +#endif + #ifndef SFI_SET #define SFI_SET 0 #define SFI_MAIN 24 @@ -95,7 +99,6 @@ #define KX_POST 16 #endif - #ifndef KX4_TXRX_PIN #define KX4_TXRX_PIN 0 /*rx : 0xf tx : 0xf0 */ #endif @@ -118,8 +121,8 @@ #define KR_CL72_TRAINING 1 #endif -#ifndef KR_REINITED -#define KR_REINITED 1 +#ifndef KR_NOREINITED +#define KR_NOREINITED 0 #endif #ifndef KR_AN73_PRESET @@ -140,17 +143,17 @@ #define TXGBE_DEFAULT_TX_WORK DEFAULT_TX_WORK #else #define TXGBE_DEFAULT_TXD 512 -#define TXGBE_DEFAULT_TX_WORK 256 +#define TXGBE_DEFAULT_TX_WORK 256 #endif #define TXGBE_MAX_TXD 8192 #define TXGBE_MIN_TXD 128 #if (PAGE_SIZE < 8192) #define TXGBE_DEFAULT_RXD 512 -#define TXGBE_DEFAULT_RX_WORK 256 +#define TXGBE_DEFAULT_RX_WORK 256 #else #define TXGBE_DEFAULT_RXD 256 -#define TXGBE_DEFAULT_RX_WORK 128 +#define TXGBE_DEFAULT_RX_WORK 128 #endif #define TXGBE_MAX_RXD 8192 @@ -474,6 +477,7 @@ #define MAX_RX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1) #define MAX_TX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1) +#define MAX_XDP_QUEUES (TXGBE_MAX_FDIR_INDICES + 1) #define TXGBE_MAX_L2A_QUEUES 4 #define TXGBE_BAD_L2A_QUEUE 3 @@ -552,6 +556,26 @@ struct txgbe_ring ring[0] ____cacheline_internodealigned_in_smp; }; +#ifdef CONFIG_TXGBE_HWMON + +#define TXGBE_HWMON_TYPE_TEMP 0 +#define TXGBE_HWMON_TYPE_ALARMTHRESH 1 +#define TXGBE_HWMON_TYPE_DALARMTHRESH 2 + +struct hwmon_attr { + struct device_attribute dev_attr; + struct txgbe_hw *hw; + struct txgbe_thermal_diode_data *sensor; + char name[19]; +}; + +struct hwmon_buff { + struct device *device; + struct hwmon_attr *hwmon_list; + unsigned int n_hwmon; +}; +#endif /* CONFIG_TXGBE_HWMON */ + /* * microsecond values for various ITR rates shifted by 2 to fit itr register * with the first 3 bits reserved 0 @@ -603,6 +627,13 @@ #define TXGBE_MAC_STATE_MODIFIED 0x2 #define TXGBE_MAC_STATE_IN_USE 0x4 +#ifdef CONFIG_TXGBE_PROCFS +struct txgbe_therm_proc_data { + struct txgbe_hw *hw; + struct txgbe_thermal_diode_data *sensor_data; +}; +#endif + /* * Only for array allocations in our adapter struct. 
* we can actually assign 64 queue vectors based on our extended-extended @@ -718,16 +749,17 @@ */ u32 flags; u32 flags2; - u32 vf_mode; - u32 backplane_an; - u32 an73; - u32 an37; - u32 ffe_main; - u32 ffe_pre; - u32 ffe_post; - u32 ffe_set; - u32 backplane_mode; - u32 backplane_auto; + u8 an73_mode; + u8 vf_mode; + u8 backplane_an; + u8 an73; + u8 an37; + u16 ffe_main; + u16 ffe_pre; + u16 ffe_post; + u8 ffe_set; + u8 backplane_mode; + u8 backplane_auto; bool cloud_mode; @@ -744,6 +776,10 @@ unsigned int num_vmdqs; /* does not include pools assigned to VFs */ unsigned int queues_per_pool; + /* XDP */ + int num_xdp_queues; + struct txgbe_ring *xdp_ring[MAX_XDP_QUEUES]; + /* TX */ struct txgbe_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp; @@ -798,6 +834,9 @@ struct timer_list service_timer; struct work_struct service_task; +#ifdef CONFIG_TXGBE_POLL_LINK_STATUS + struct timer_list link_check_timer; +#endif struct hlist_head fdir_filter_list; unsigned long fdir_overflow; /* number of times ATR was backed off */ union txgbe_atr_input fdir_mask; @@ -845,6 +884,23 @@ __le16 vxlan_port; __le16 geneve_port; +#ifdef CONFIG_TXGBE_SYSFS +#ifdef CONFIG_TXGBE_HWMON + struct hwmon_buff txgbe_hwmon_buff; +#endif /* CONFIG_TXGBE_HWMON */ +#else /* CONFIG_TXGBE_SYSFS */ +#ifdef CONFIG_TXGBE_PROCFS + struct proc_dir_entry *eth_dir; + struct proc_dir_entry *info_dir; + u64 old_lsc; + struct proc_dir_entry *therm_dir; + struct txgbe_therm_proc_data therm_data; +#endif /* CONFIG_TXGBE_PROCFS */ +#endif /* CONFIG_TXGBE_SYSFS */ + +#ifdef CONFIG_TXGBE_DEBUG_FS + struct dentry *txgbe_dbg_adapter; +#endif /*CONFIG_TXGBE_DEBUG_FS*/ u8 default_up; unsigned long fwd_bitmask; /* bitmask indicating in use pools */ @@ -914,6 +970,10 @@ /* ESX txgbe CIM IOCTL definition */ +#ifdef CONFIG_TXGBE_SYSFS +void txgbe_sysfs_exit(struct txgbe_adapter *adapter); +int txgbe_sysfs_init(struct txgbe_adapter *adapter); +#endif /* CONFIG_TXGBE_SYSFS */ extern struct dcbnl_rtnl_ops dcbnl_ops; int txgbe_copy_dcb_cfg(struct txgbe_adapter *adapter, int tc_max); @@ -974,6 +1034,37 @@ void txgbe_vlan_strip_enable(struct txgbe_adapter *adapter); void txgbe_vlan_strip_disable(struct txgbe_adapter *adapter); +#if IS_ENABLED(CONFIG_FCOE) +void txgbe_configure_fcoe(struct txgbe_adapter *adapter); +int txgbe_fso(struct txgbe_ring *tx_ring, + struct txgbe_tx_buffer *first, + u8 *hdr_len); +int txgbe_fcoe_ddp(struct txgbe_adapter *adapter, + union txgbe_rx_desc *rx_desc, + struct sk_buff *skb); +int txgbe_fcoe_ddp_get(struct net_device *netdev, u16 xid, + struct scatterlist *sgl, unsigned int sgc); +int txgbe_fcoe_ddp_target(struct net_device *netdev, u16 xid, + struct scatterlist *sgl, unsigned int sgc);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_bp.c
Changed
@@ -18,23 +18,23 @@ #include "txgbe_bp.h" -int Handle_bkp_an73_flow(unsigned char byLinkMode, struct txgbe_adapter *adapter); -int WaitBkpAn73XnpDone(struct txgbe_adapter *adapter); -int GetBkpAn73Ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner, - struct txgbe_adapter *adapter); -int Get_bkp_an73_ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner, - struct txgbe_adapter *adapter); -int ClearBkpAn73Interrupt(unsigned int intIndex, unsigned int intIndexHi, struct txgbe_adapter *adapter); -int CheckBkpAn73Interrupt(unsigned int intIndex, struct txgbe_adapter *adapter); -int Check_bkp_an73_ability(bkpan73ability tBkpAn73Ability, bkpan73ability tLpBkpAn73Ability, - struct txgbe_adapter *adapter); +int handle_bkp_an73_flow(unsigned char bp_link_mode, struct txgbe_adapter *adapter); +int wait_bkp_an73_xnp_done(struct txgbe_adapter *adapter); +int get_bkp_an73_ability(bkpan73ability *pt_bkp_an73_ability, + unsigned char byLinkPartner, struct txgbe_adapter *adapter); +int clr_bkp_an73_int(unsigned int intIndex, unsigned int intIndexHi, + struct txgbe_adapter *adapter); +int chk_bkp_an73_Int(unsigned int intIndex, struct txgbe_adapter *adapter); +int chk_bkp_an73_ability(bkpan73ability tBkpAn73Ability, + bkpan73ability tLpBkpAn73Ability, + struct txgbe_adapter *adapter); void txgbe_bp_close_protect(struct txgbe_adapter *adapter) { adapter->flags2 |= TXGBE_FLAG2_KR_PRO_DOWN; - if (adapter->flags2 & TXGBE_FLAG2_KR_PRO_REINIT) { + while (adapter->flags2 & TXGBE_FLAG2_KR_PRO_REINIT) { msleep(100); - printk("wait to reinited ok..%x\n", adapter->flags2); + e_dev_info("wait to reinited ok..%x\n", adapter->flags2); } } @@ -49,16 +49,12 @@ if (adapter->backplane_mode == TXGBE_BP_M_KR) { hw->subsystem_device_id = TXGBE_ID_WX1820_KR_KX_KX4; - hw->subsystem_id = TXGBE_ID_WX1820_KR_KX_KX4; } else if (adapter->backplane_mode == TXGBE_BP_M_KX4) { hw->subsystem_device_id = TXGBE_ID_WX1820_MAC_XAUI; - hw->subsystem_id = TXGBE_ID_WX1820_MAC_XAUI; } else if (adapter->backplane_mode == TXGBE_BP_M_KX) { hw->subsystem_device_id = TXGBE_ID_WX1820_MAC_SGMII; - hw->subsystem_id = TXGBE_ID_WX1820_MAC_SGMII; } else if (adapter->backplane_mode == TXGBE_BP_M_SFI) { hw->subsystem_device_id = TXGBE_ID_WX1820_SFP; - hw->subsystem_id = TXGBE_ID_WX1820_SFP; } if (adapter->backplane_auto == TXGBE_BP_M_AUTO) { @@ -99,7 +95,7 @@ static int txgbe_kr_subtask(struct txgbe_adapter *adapter) { - Handle_bkp_an73_flow(0, adapter); + handle_bkp_an73_flow(0, adapter); return 0; } @@ -127,32 +123,26 @@ void txgbe_bp_down_event(struct txgbe_adapter *adapter) { struct txgbe_hw *hw = &adapter->hw; + if (adapter->backplane_an == 1) { if (KR_NORESET == 1) { - txgbe_wr32_epcs(hw, 0x78003, 0x0000); - txgbe_wr32_epcs(hw, 0x70000, 0x0000); + txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, 0x0000); + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000); txgbe_wr32_epcs(hw, 0x78001, 0x0000); - msleep(1050); + msleep(1000); txgbe_set_link_to_kr(hw, 1); - } else if (KR_REINITED == 1) { - txgbe_wr32_epcs(hw, 0x78003, 0x0000); - txgbe_wr32_epcs(hw, 0x70000, 0x0000); + } else if (KR_NOREINITED == 1) { + txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, 0x0000); + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000); txgbe_wr32_epcs(hw, 0x78001, 0x0000); - txgbe_wr32_epcs(hw, 0x18035, 0x00FF); - txgbe_wr32_epcs(hw, 0x18055, 0x00FF); msleep(1050); - txgbe_wr32_epcs(hw, 0x78003, 0x0001); - txgbe_wr32_epcs(hw, 0x70000, 0x3200); + txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, 0x0001); + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x3200); 
txgbe_wr32_epcs(hw, 0x78001, 0x0007); - txgbe_wr32_epcs(hw, 0x18035, 0x00FC); - txgbe_wr32_epcs(hw, 0x18055, 0x00FC); } else { - msleep(1000); - if (!(adapter->flags2&TXGBE_FLAG2_KR_PRO_DOWN)) { - adapter->flags2 |= TXGBE_FLAG2_KR_PRO_REINIT; + msleep(200); + if (!(adapter->flags2&TXGBE_FLAG2_KR_PRO_DOWN)) txgbe_reinit_locked(adapter); - adapter->flags2 &= ~TXGBE_FLAG2_KR_PRO_REINIT; - } } } } @@ -170,18 +160,18 @@ /*1. Get the local AN73 Base Page Ability*/ if (KR_MODE) e_dev_info("<1>. Get the local AN73 Base Page Ability ...\n"); - GetBkpAn73Ability(&tBkpAn73Ability, 0, adapter); + get_bkp_an73_ability(&tBkpAn73Ability, 0, adapter); /*2. Check the AN73 Interrupt Status*/ if (KR_MODE) e_dev_info("<2>. Check the AN73 Interrupt Status ...\n"); /*3.Clear the AN_PG_RCV interrupt*/ - ClearBkpAn73Interrupt(2, 0x0, adapter); + clr_bkp_an73_int(2, 0x0, adapter); /*3.1. Get the link partner AN73 Base Page Ability*/ if (KR_MODE) e_dev_info("<3.1>. Get the link partner AN73 Base Page Ability ...\n"); - Get_bkp_an73_ability(&tLpBkpAn73Ability, 1, adapter); + get_bkp_an73_ability(&tLpBkpAn73Ability, 1, adapter); /*3.2. Check the AN73 Link Ability with Link Partner*/ if (KR_MODE) { @@ -189,7 +179,7 @@ e_dev_info(" Local Link Ability: 0x%x\n", tBkpAn73Ability.linkAbility); e_dev_info(" Link Partner Link Ability: 0x%x\n", tLpBkpAn73Ability.linkAbility); } - Check_bkp_an73_ability(tBkpAn73Ability, tLpBkpAn73Ability, adapter); + chk_bkp_an73_ability(tBkpAn73Ability, tLpBkpAn73Ability, adapter); return 0; } @@ -200,7 +190,7 @@ ** 0 : current link mode matched, wait AN73 to be completed ** 1 : current link mode not matched, set to matched link mode, re-start AN73 external */ -int Check_bkp_an73_ability(bkpan73ability tBkpAn73Ability, bkpan73ability tLpBkpAn73Ability, +int chk_bkp_an73_ability(bkpan73ability tBkpAn73Ability, bkpan73ability tLpBkpAn73Ability, struct txgbe_adapter *adapter) { unsigned int comLinkAbility; @@ -215,8 +205,10 @@ comLinkAbility = tBkpAn73Ability.linkAbility & tLpBkpAn73Ability.linkAbility; if (KR_MODE) e_dev_info("comLinkAbility= 0x%x, linkAbility= 0x%x, lpLinkAbility= 0x%x\n", - comLinkAbility, tBkpAn73Ability.linkAbility, tLpBkpAn73Ability.linkAbility); + comLinkAbility, tBkpAn73Ability.linkAbility, + tLpBkpAn73Ability.linkAbility); + /*only support kr*/ if (comLinkAbility == 0) { if (KR_MODE) e_dev_info("WARNING: The Link Partner does not support any compatible speed mode!!!\n\n"); @@ -234,33 +226,8 @@ txgbe_set_link_to_kr(hw, 1); return 1; } - } else if (comLinkAbility & 0x40) { - if (tBkpAn73Ability.currentLinkMode == 0x10) { - if (KR_MODE) - e_dev_info("Link mode is matched with Link Partner: [LINK_KX4].\n"); - return 0; - } else { - if (KR_MODE) { - e_dev_info("Link mode is not matched with Link Partner: [LINK_KX4].\n"); - e_dev_info("Set the local link mode to [LINK_KX4] ...\n"); - } - txgbe_set_link_to_kx4(hw, 1); - return 1; - } - } else if (comLinkAbility & 0x20) { - if (tBkpAn73Ability.currentLinkMode == 0x1) { - if (KR_MODE) - e_dev_info("Link mode is matched with Link Partner: [LINK_KX].\n"); - return 0; - } else { - if (KR_MODE) { - e_dev_info("Link mode is not matched with Link Partner: [LINK_KX].\n"); - e_dev_info("Set the local link mode to [LINK_KX] ...\n"); - } - txgbe_set_link_to_kx(hw, 1, 1); - return 1; - } } + return 0; } @@ -271,7 +238,7 @@ **- 2: Get Link Partner Next Page (only get NXP Ability Register 1 at the moment) **- 0: Get Local Device Base Page */ -int Get_bkp_an73_ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner, +int 
get_bkp_an73_ability(bkpan73ability *pt_bkp_an73_ability, unsigned char byLinkPartner, struct txgbe_adapter *adapter)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_debugfs.c
Added
@@ -0,0 +1,724 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd. */ + +#include "txgbe.h" +#include <linux/debugfs.h> +#include <linux/module.h> + +static struct dentry *txgbe_dbg_root; +static int txgbe_data_mode; + +#define TXGBE_DATA_FUNC(dm) ((dm) & ~0xFFFF) +#define TXGBE_DATA_ARGS(dm) ((dm) & 0xFFFF) +enum txgbe_data_func { + TXGBE_FUNC_NONE = (0 << 16), + TXGBE_FUNC_DUMP_BAR = (1 << 16), + TXGBE_FUNC_DUMP_RDESC = (2 << 16), + TXGBE_FUNC_DUMP_TDESC = (3 << 16), + TXGBE_FUNC_FLASH_READ = (4 << 16), + TXGBE_FUNC_FLASH_WRITE = (5 << 16), +}; + +/** + * data operation + **/ +ssize_t +txgbe_simple_read_from_pcibar(struct txgbe_adapter *adapter, int res, + void __user *buf, size_t size, loff_t *ppos) +{ + loff_t pos = *ppos; + u32 miss, len, limit = pci_resource_len(adapter->pdev, res); + + if (pos < 0) + return 0; + + limit = (pos + size <= limit ? pos + size : limit); + for (miss = 0; pos < limit && !miss; buf += len, pos += len) { + u32 val = 0, reg = round_down(pos, 4); + u32 off = pos - reg; + + len = (reg + 4 <= limit ? 4 - off : 4 - off - (limit - reg - 4)); + val = txgbe_rd32(adapter->io_addr + reg); + miss = copy_to_user(buf, &val + off, len); + } + + size = pos - *ppos - miss; + *ppos += size; + + return size; +} + +ssize_t +txgbe_simple_read_from_flash(struct txgbe_adapter *adapter, + void __user *buf, size_t size, loff_t *ppos) +{ + struct txgbe_hw *hw = &adapter->hw; + loff_t pos = *ppos; + size_t ret = 0; + loff_t rpos, rtail; + void __user *to = buf; + size_t available = adapter->hw.flash.dword_size << 2; + + if (pos < 0) + return -EINVAL; + if (pos >= available || !size) + return 0; + if (size > available - pos) + size = available - pos; + + rpos = round_up(pos, 4); + rtail = round_down(pos + size, 4); + if (rtail < rpos) + return 0; + + to += rpos - pos; + while (rpos <= rtail) { + u32 value = txgbe_rd32(adapter->io_addr + rpos); + + if (TCALL(hw, flash.ops.write_buffer, rpos>>2, 1, &value)) { + ret = size; + break; + } + if (copy_to_user(to, &value, 4) == 4) { + ret = size; + break; + } + to += 4; + rpos += 4; + } + + if (ret == size) + return -EFAULT; + size -= ret; + *ppos = pos + size; + return size; +} + +ssize_t +txgbe_simple_write_to_flash(struct txgbe_adapter *adapter, + const void __user *from, size_t size, loff_t *ppos, size_t available) +{ + return size; +} + +static ssize_t +txgbe_dbg_data_ops_read(struct file *filp, char __user *buffer, + size_t size, loff_t *ppos) +{ + struct txgbe_adapter *adapter = filp->private_data; + u32 func = TXGBE_DATA_FUNC(txgbe_data_mode); + + /* Ensure all reads are done */ + rmb(); + + switch (func) { + case TXGBE_FUNC_DUMP_BAR: { + u32 bar = TXGBE_DATA_ARGS(txgbe_data_mode); + + return txgbe_simple_read_from_pcibar(adapter, bar, buffer, size, + ppos); + } + case TXGBE_FUNC_FLASH_READ: { + return txgbe_simple_read_from_flash(adapter, buffer, size, ppos); + } + case TXGBE_FUNC_DUMP_RDESC: { + struct txgbe_ring *ring; + u32 queue = TXGBE_DATA_ARGS(txgbe_data_mode); + + if (queue >= adapter->num_rx_queues) + return 0; + queue += VMDQ_P(0) * adapter->queues_per_pool; + ring = adapter->rx_ring[queue]; + + return simple_read_from_buffer(buffer, size, ppos, + ring->desc, ring->size); + } + case TXGBE_FUNC_DUMP_TDESC: { + struct txgbe_ring *ring; + u32 queue = TXGBE_DATA_ARGS(txgbe_data_mode); + + if (queue >= adapter->num_tx_queues) + return 0; + queue += VMDQ_P(0) * adapter->queues_per_pool; + ring = adapter->tx_ring[queue]; + + return simple_read_from_buffer(buffer, size, ppos, + 
ring->desc, ring->size); + } + default: + break; + } + + return 0; +} + +static ssize_t +txgbe_dbg_data_ops_write(struct file *filp, + const char __user *buffer, + size_t size, loff_t *ppos) +{ + struct txgbe_adapter *adapter = filp->private_data; + u32 func = TXGBE_DATA_FUNC(txgbe_data_mode); + + /* Ensure all reads are done */ + rmb(); + + switch (func) { + case TXGBE_FUNC_FLASH_WRITE: { + u32 size = TXGBE_DATA_ARGS(txgbe_data_mode); + + if (size > adapter->hw.flash.dword_size << 2) + size = adapter->hw.flash.dword_size << 2; + + return txgbe_simple_write_to_flash(adapter, buffer, size, ppos, size); + } + default: + break; + } + + return size; +} + +static const struct file_operations txgbe_dbg_data_ops_fops = { + .owner = THIS_MODULE, + .open = simple_open, + .read = txgbe_dbg_data_ops_read, + .write = txgbe_dbg_data_ops_write, +}; + +/** + * reg_ops operation + **/ +static char txgbe_dbg_reg_ops_buf[256] = ""; +static ssize_t +txgbe_dbg_reg_ops_read(struct file *filp, char __user *buffer, + size_t count, loff_t *ppos) +{ + struct txgbe_adapter *adapter = filp->private_data; + char *buf; + int len;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c
Changed
@@ -212,6 +212,7 @@ /* set the advertised speeds */ if (hw->phy.autoneg_advertised) { + advertising = 0; if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100_FULL) advertising |= ADVERTISED_100baseT_Full; if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL) @@ -2194,6 +2195,7 @@ { struct txgbe_adapter *adapter = netdev_priv(netdev); struct txgbe_hw *hw = &adapter->hw; + u16 value = 0; switch (state) { case ETHTOOL_ID_ACTIVE: @@ -2201,17 +2203,58 @@ return 2; case ETHTOOL_ID_ON: - TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_UP); + if (hw->oem_ssid == 0x0075 && hw->oem_svid == 0x1bd4) { + if (adapter->link_up) { + switch (adapter->link_speed) { + case TXGBE_LINK_SPEED_10GB_FULL: + TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_10G); + break; + case TXGBE_LINK_SPEED_1GB_FULL: + TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_1G); + break; + case TXGBE_LINK_SPEED_100_FULL: + TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_100M); + break; + default: + break; + } + } else + TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_10G); + } else + TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_UP); break; case ETHTOOL_ID_OFF: - TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_UP); + if (hw->oem_ssid == 0x0075 && hw->oem_svid == 0x1bd4) { + if (adapter->link_up) { + switch (adapter->link_speed) { + case TXGBE_LINK_SPEED_10GB_FULL: + TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_10G); + break; + case TXGBE_LINK_SPEED_1GB_FULL: + TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_1G); + break; + case TXGBE_LINK_SPEED_100_FULL: + TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_100M); + break; + default: + break; + } + } else + TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_10G); + } else + TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_UP); break; case ETHTOOL_ID_INACTIVE: /* Restore LED settings */ wr32(&adapter->hw, TXGBE_CFG_LED_CTL, adapter->led_reg); + if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI) { + txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, &value); + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, + (value & 0xFFFC) | 0x0); + } break; } @@ -3319,16 +3362,21 @@ if (ret < 0) return ret; - if (txgbe_mng_present(&adapter->hw)) { + if (ef->region == 0) { + ret = txgbe_upgrade_flash(&adapter->hw, ef->region, + fw->data, fw->size); + } else { + if (txgbe_mng_present(&adapter->hw)) ret = txgbe_upgrade_flash_hostif(&adapter->hw, ef->region, fw->data, fw->size); - } else - ret = -EOPNOTSUPP; + else + ret = -EOPNOTSUPP; + } release_firmware(fw); if (!ret) dev_info(&netdev->dev, - "loaded firmware %s, reload txgbe driver\n", ef->data); + "loaded firmware %s, reboot to make firmware work\n", ef->data); return ret; }
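The ETHTOOL_ID_ON/OFF branches now light a speed-specific LED on the 0x1bd4:0x0075 OEM board instead of the generic link LED. For context, .set_phys_id is driven by the ethtool core as a small state machine: a positive return from ETHTOOL_ID_ACTIVE (2 here) is the blink frequency the driver requests, and the core then alternates ON and OFF until the user stops, finishing with INACTIVE. A simplified model of that core loop, paraphrased from the ethtool ioctl code rather than taken from this driver:

/* Pseudo-driver of .set_phys_id; "stopped" stands for the user's
 * ethtool -p run ending or being interrupted. */
int rc = ops->set_phys_id(dev, ETHTOOL_ID_ACTIVE);  /* rc = 2 -> 2 Hz */

while (rc > 0 && !stopped) {
    ops->set_phys_id(dev, ETHTOOL_ID_ON);
    msleep(1000 / (rc * 2));
    ops->set_phys_id(dev, ETHTOOL_ID_OFF);
    msleep(1000 / (rc * 2));
}
ops->set_phys_id(dev, ETHTOOL_ID_INACTIVE);         /* restore LED state */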
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_hw.c
Changed
@@ -123,8 +123,6 @@ u16 max_msix_count; u32 pos; - DEBUGFUNC("\n"); - max_msix_count = TXGBE_MAX_MSIX_VECTORS_SAPPHIRE; pos = pci_find_capability(((struct txgbe_adapter *)hw->back)->pdev, PCI_CAP_ID_MSIX); if (!pos) @@ -159,8 +157,6 @@ { s32 status; - DEBUGFUNC("\n"); - /* Reset the hardware */ status = TCALL(hw, mac.ops.reset_hw); @@ -184,8 +180,6 @@ { u16 i = 0; - DEBUGFUNC("\n"); - rd32(hw, TXGBE_RX_CRC_ERROR_FRAMES_LOW); for (i = 0; i < 8; i++) rd32(hw, TXGBE_RDB_MPCNT(i)); @@ -239,9 +233,7 @@ bool supported = false; u32 speed; bool link_up; - u8 device_type = hw->subsystem_id & 0xF0; - - DEBUGFUNC("\n"); + u8 device_type = hw->subsystem_device_id & 0xF0; switch (hw->phy.media_type) { case txgbe_media_type_fiber: @@ -284,8 +276,6 @@ u32 value = 0; u32 pcap_backplane = 0; - DEBUGFUNC("\n"); - /* Validate the requested mode */ if (hw->fc.strict_ieee && hw->fc.requested_mode == txgbe_fc_rx_pause) { ERROR_REPORT1(TXGBE_ERROR_UNSUPPORTED, @@ -399,8 +389,6 @@ u16 offset; u16 length; - DEBUGFUNC("\n"); - if (pba_num == NULL) { DEBUGOUT("PBA string buffer was null\n"); return TXGBE_ERR_INVALID_ARGUMENT; @@ -512,8 +500,6 @@ u32 rar_low; u16 i; - DEBUGFUNC("\n"); - wr32(hw, TXGBE_PSR_MAC_SWC_IDX, 0); rar_high = rd32(hw, TXGBE_PSR_MAC_SWC_AD_H); rar_low = rd32(hw, TXGBE_PSR_MAC_SWC_AD_L); @@ -585,8 +571,6 @@ { u16 link_status; - DEBUGFUNC("\n"); - /* Get the negotiated link width and speed from PCI config space */ link_status = txgbe_read_pci_cfg_word(hw, TXGBE_PCI_LINK_STATUS); @@ -607,8 +591,6 @@ struct txgbe_bus_info *bus = &hw->bus; u32 reg; - DEBUGFUNC("\n"); - reg = rd32(hw, TXGBE_CFG_PORT_ST); bus->lan_id = TXGBE_CFG_PORT_ST_LAN_ID(reg); @@ -633,8 +615,6 @@ { u16 i; - DEBUGFUNC("\n"); - /* * Set the adapter_stopped flag so other driver functions stop touching * the hardware @@ -683,15 +663,10 @@ { u32 led_reg = rd32(hw, TXGBE_CFG_LED_CTL); u16 value = 0; - DEBUGFUNC("\n"); if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI) { txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, &value); - txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, value | 0x3); - txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, &value); - txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, value | 0x3); - txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, &value); - txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, value | 0x3); + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, (value & 0xFFFC) | 0x0); } /* To turn on the LED, set mode to ON. */ led_reg |= index | (index << TXGBE_CFG_LED_CTL_LINK_OD_SHIFT); @@ -710,15 +685,10 @@ { u32 led_reg = rd32(hw, TXGBE_CFG_LED_CTL); u16 value = 0; - DEBUGFUNC("\n"); if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI) { txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, &value); - txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, value & 0xFFFC); - txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, &value); - txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, value & 0xFFFC); - txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, &value); - txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, value & 0xFFFC); + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, (value & 0xFFFC) | 0x1); } /* To turn off the LED, set mode to OFF. 
*/ @@ -843,8 +813,6 @@ { s32 status = 0; - DEBUGFUNC("\n"); - /* Make sure it is not a multicast address */ if (TXGBE_IS_MULTICAST(mac_addr)) { DEBUGOUT("MAC address is multicast\n"); @@ -878,8 +846,6 @@ u32 rar_low, rar_high; u32 rar_entries = hw->mac.num_rar_entries; - DEBUGFUNC("\n"); - /* Make sure we are using a valid rar index range */ if (index >= rar_entries) { ERROR_REPORT2(TXGBE_ERROR_ARGUMENT, @@ -932,8 +898,6 @@ { u32 rar_entries = hw->mac.num_rar_entries; - DEBUGFUNC("\n"); - /* Make sure we are using a valid rar index range */ if (index >= rar_entries) { ERROR_REPORT2(TXGBE_ERROR_ARGUMENT, @@ -975,8 +939,6 @@ u32 rar_entries = hw->mac.num_rar_entries; u32 psrctl; - DEBUGFUNC("\n"); - /* * If the current mac address is valid, assume it is a software override * to the permanent address. @@ -1044,13 +1006,7 @@ u32 rar_entries = hw->mac.num_rar_entries; u32 rar; - DEBUGFUNC("\n"); - - DEBUGOUT6(" UC Addr = %.2X %.2X %.2X %.2X %.2X %.2X\n", - addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]); - - /* - * Place this address in the RAR if there is room, + /* Place this address in the RAR if there is room, * else put the controller into promiscuous mode */ if (hw->addr_ctrl.rar_used_count < rar_entries) { @@ -1089,8 +1045,6 @@ u32 uc_addr_in_use; u32 vmdq; - DEBUGFUNC("\n"); - /* * Clear accounting of old secondary address list, * don't count RAR[0] @@ -1150,8 +1104,6 @@ { u32 vector = 0; - DEBUGFUNC("\n"); - switch (hw->mac.mc_filter_type) { case 0: /* use bits [47:36] of the address */ vector = ((mc_addr[4] >> 4) | (((u16)mc_addr[5]) << 4)); @@ -1189,8 +1141,6 @@ u32 vector_bit; u32 vector_reg; - DEBUGFUNC("\n");
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_hw.h
Changed
@@ -27,6 +27,43 @@ #define TXGBE_EMC_DIODE3_DATA 0x2A #define TXGBE_EMC_DIODE3_THERM_LIMIT 0x30 +#define SPI_CLK_DIV 2 + +#define SPI_CMD_ERASE_CHIP 4 // SPI erase chip command +#define SPI_CMD_ERASE_SECTOR 3 // SPI erase sector command +#define SPI_CMD_WRITE_DWORD 0 // SPI write a dword command +#define SPI_CMD_READ_DWORD 1 // SPI read a dword command +#define SPI_CMD_USER_CMD 5 // SPI user command + +#define SPI_CLK_CMD_OFFSET 28 // SPI command field offset in Command register +#define SPI_CLK_DIV_OFFSET 25 // SPI clock divide field offset in Command register + +#define SPI_TIME_OUT_VALUE 10000 +#define SPI_SECTOR_SIZE (4 * 1024) // FLASH sector size is 64KB +#define SPI_H_CMD_REG_ADDR 0x10104 // SPI Command register address +#define SPI_H_DAT_REG_ADDR 0x10108 // SPI Data register address +#define SPI_H_STA_REG_ADDR 0x1010c // SPI Status register address +#define SPI_H_USR_CMD_REG_ADDR 0x10110 // SPI User Command register address +#define SPI_CMD_CFG1_ADDR 0x10118 // Flash command configuration register 1 +#define MISC_RST_REG_ADDR 0x1000c // Misc reset register address +#define MGR_FLASH_RELOAD_REG_ADDR 0x101a0 // MGR reload flash read + +#define MAC_ADDR0_WORD0_OFFSET_1G 0x006000c // MAC Address for LAN0, stored in external FLASH +#define MAC_ADDR0_WORD1_OFFSET_1G 0x0060014 +#define MAC_ADDR1_WORD0_OFFSET_1G 0x007000c // MAC Address for LAN1, stored in external FLASH +#define MAC_ADDR1_WORD1_OFFSET_1G 0x0070014 +/* Product Serial Number, stored in external FLASH last sector */ +#define PRODUCT_SERIAL_NUM_OFFSET_1G 0x00f0000 + +struct txgbe_hic_read_cab { + union txgbe_hic_hdr2 hdr; + union { + u8 d8[252]; + u16 d16[126]; + u32 d32[63]; + } dbuf; +}; + /** * Packet Type decoding **/ @@ -238,6 +275,9 @@ s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val); s32 txgbe_update_flash(struct txgbe_hw *hw); +int txgbe_upgrade_flash(struct txgbe_hw *hw, u32 region, + const u8 *data, u32 size); + s32 txgbe_write_ee_hostif_buffer(struct txgbe_hw *hw, u16 offset, u16 words, u16 *data); s32 txgbe_write_ee_hostif(struct txgbe_hw *hw, u16 offset, @@ -250,9 +290,13 @@ void txgbe_wr32_ephy(struct txgbe_hw *hw, u32 addr, u32 data); u32 rd32_ephy(struct txgbe_hw *hw, u32 addr); +u32 txgbe_flash_read_dword(struct txgbe_hw *hw, u32 addr); s32 txgbe_upgrade_flash_hostif(struct txgbe_hw *hw, u32 region, const u8 *data, u32 size); +s32 txgbe_close_notify(struct txgbe_hw *hw); +s32 txgbe_open_notify(struct txgbe_hw *hw); + s32 txgbe_set_link_to_kr(struct txgbe_hw *hw, bool autoneg); s32 txgbe_set_link_to_kx4(struct txgbe_hw *hw, bool autoneg);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_lib.c
Changed
@@ -472,6 +472,7 @@
    adapter->num_rx_queues = 1;
    adapter->num_tx_queues = 1;
    adapter->queues_per_pool = 1;
+   adapter->num_xdp_queues = 0;

    if (txgbe_set_dcb_vmdq_queues(adapter))
        return;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_main.c
Changed
@@ -40,7 +40,14 @@ #include <linux/if_macvlan.h> #include <linux/ethtool.h> #include <linux/if_bridge.h> +#include <linux/bpf.h> +#include <linux/bpf_trace.h> +#include <linux/atomic.h> #include <net/vxlan.h> +#include <net/udp_tunnel.h> +#include <net/pkt_cls.h> +#include <net/tc_act/tc_gact.h> +#include <net/tc_act/tc_mirred.h> #include "txgbe.h" #include "txgbe_hw.h" @@ -61,7 +68,7 @@ #define RELEASE_TAG -#define DRV_VERSION __stringify(1.1.17oe) +#define DRV_VERSION __stringify(1.3.2oe) const char txgbe_driver_version[32] = DRV_VERSION; static const char txgbe_copyright[] = @@ -472,8 +479,8 @@ wr32(&adapter->hw, TXGBE_PX_IMC(1), value3); } + ERROR_REPORT1(TXGBE_ERROR_POLLING, "tx timeout. do pcie recovery.\n"); if (adapter->hw.bus.lan_id == 0) { - ERROR_REPORT1(TXGBE_ERROR_POLLING, "tx timeout. do pcie recovery.\n"); adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; txgbe_service_event_schedule(adapter); } else @@ -617,8 +624,11 @@ /* schedule immediate reset if we believe we hung */ e_info(hw, "real tx hang. do pcie recovery.\n"); - adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; - txgbe_service_event_schedule(adapter); + if (adapter->hw.bus.lan_id == 0) { + adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; + txgbe_service_event_schedule(adapter); + } else + wr32(&adapter->hw, TXGBE_MIS_PF_SM, 1); /* the adapter is about to reset, no point in enabling stuff */ return true; @@ -1800,7 +1810,7 @@ { u32 mask = 0; struct txgbe_hw *hw = &adapter->hw; - u8 device_type = hw->subsystem_id & 0xF0; + u8 device_type = hw->subsystem_device_id & 0xF0; /* enable gpio interrupt */ if (device_type != TXGBE_ID_MAC_XAUI && @@ -2097,27 +2107,32 @@ struct txgbe_adapter *adapter = data; struct txgbe_q_vector *q_vector = adapter->q_vector[0]; struct txgbe_hw *hw = &adapter->hw; - u32 eicr; u32 eicr_misc; u32 value ; + u16 pci_value; - eicr = txgbe_misc_isb(adapter, TXGBE_ISB_VEC0); - if (!eicr) { - /* - * shared interrupt alert! - * the interrupt that we masked before the EICR read. 
- */ - if (!test_bit(__TXGBE_DOWN, &adapter->state)) - txgbe_irq_enable(adapter, true, true); - return IRQ_NONE; /* Not our interrupt */ - } - adapter->isb_mem[TXGBE_ISB_VEC0] = 0; - if (!(adapter->flags & TXGBE_FLAG_MSI_ENABLED)) + if (!(adapter->flags & TXGBE_FLAG_MSI_ENABLED)) { + pci_read_config_word(adapter->pdev, PCI_STATUS, &pci_value); + if (!(pci_value & PCI_STATUS_INTERRUPT)) + return IRQ_HANDLED; /* Not our interrupt */ wr32(&(adapter->hw), TXGBE_PX_INTA, 1); + } eicr_misc = txgbe_misc_isb(adapter, TXGBE_ISB_MISC); - if (eicr_misc & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN)) - txgbe_check_lsc(adapter); + if (BOND_CHECK_LINK_MODE == 1) { + if (eicr_misc & (TXGBE_PX_MISC_IC_ETH_LKDN)) { + value = rd32(hw, 0x14404); + value = value & 0x1; + if (value == 0) { + adapter->link_up = false; + adapter->flags2 |= TXGBE_FLAG2_LINK_DOWN; + txgbe_service_event_schedule(adapter); + } + } + } else { + if (eicr_misc & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN)) + txgbe_check_lsc(adapter); + } if (eicr_misc & TXGBE_PX_MISC_IC_ETH_AN) { if (adapter->backplane_an == 1 && (KR_POLLING == 0)) { @@ -3215,12 +3230,23 @@ for (i = 0; i < hw->mac.num_rar_entries; i++) { if (adapter->mac_table[i].state & TXGBE_MAC_STATE_IN_USE) { + if (ether_addr_equal(addr, adapter->mac_table[i].addr)) { + if (adapter->mac_table[i].pools != (1ULL << pool)) { + memcpy(adapter->mac_table[i].addr, addr, ETH_ALEN); + adapter->mac_table[i].pools |= (1ULL << pool); + txgbe_sync_mac_table(adapter); + return i; + } + } + } + + if (adapter->mac_table[i].state & TXGBE_MAC_STATE_IN_USE) { continue; } adapter->mac_table[i].state |= (TXGBE_MAC_STATE_MODIFIED | TXGBE_MAC_STATE_IN_USE); memcpy(adapter->mac_table[i].addr, addr, ETH_ALEN); - adapter->mac_table[i].pools = (1ULL << pool); + adapter->mac_table[i].pools |= (1ULL << pool); txgbe_sync_mac_table(adapter); return i; } @@ -3251,16 +3277,29 @@ return -EINVAL; for (i = 0; i < hw->mac.num_rar_entries; i++) { - if (ether_addr_equal(addr, adapter->mac_table[i].addr) && - adapter->mac_table[i].pools | (1ULL << pool)) { - adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED; - adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE; - memset(adapter->mac_table[i].addr, 0, ETH_ALEN); - adapter->mac_table[i].pools = 0; - txgbe_sync_mac_table(adapter); + if (ether_addr_equal(addr, adapter->mac_table[i].addr)) { + if (adapter->mac_table[i].pools & (1ULL << pool)) { + adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED; + adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE; + adapter->mac_table[i].pools &= ~(1ULL << pool); + txgbe_sync_mac_table(adapter); + } return 0; } + + if (adapter->mac_table[i].pools != (1 << pool)) + continue; + if (!ether_addr_equal(addr, adapter->mac_table[i].addr)) + continue; + + adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED; + adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE; + memset(adapter->mac_table[i].addr, 0, ETH_ALEN); + adapter->mac_table[i].pools = 0; + txgbe_sync_mac_table(adapter); + return 0; } + return -ENOMEM; } @@ -3666,7 +3705,11 @@ wr32(hw, TXGBE_PX_ISB_ADDR_L, adapter->isb_dma & DMA_BIT_MASK(32)); +#ifdef CONFIG_64BIT wr32(hw, TXGBE_PX_ISB_ADDR_H, adapter->isb_dma >> 32); +#else + wr32(hw, TXGBE_PX_ISB_ADDR_H, 0); +#endif } void txgbe_configure_port(struct txgbe_adapter *adapter) @@ -3817,7 +3860,7 @@ if (link_up) return 0; - if ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI) { + if ((hw->subsystem_device_id & 0xF0) != TXGBE_ID_SFI_XAUI) { /* setup external PHY Mac Interface */ 
mtdSetMacInterfaceControl(&hw->phy_dev, hw->phy.addr, MTD_MAC_TYPE_XAUI, MTD_FALSE, MTD_MAC_SNOOP_OFF, @@ -3836,7 +3879,7 @@ autoneg = false; } - ret = TCALL(hw, mac.ops.setup_link, speed, autoneg); + ret = TCALL(hw, mac.ops.setup_link, speed, false); link_cfg_out: return ret; @@ -3956,10 +3999,12 @@ rd32(hw, TXGBE_PX_IC(0)); rd32(hw, TXGBE_PX_IC(1));
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.c
Changed
@@ -21,7 +21,7 @@
  * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
  */
 
-
+#include "txgbe_type.h"
 #include "txgbe.h"
 #include "txgbe_mbx.h"
 
@@ -182,6 +182,304 @@
 	return countdown ? 0 : TXGBE_ERR_MBX;
 }
 
+/**
+ * txgbe_read_posted_mbx - Wait for message notification and receive message
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: id of mailbox to read
+ *
+ * returns SUCCESS if it successfully received a message notification and
+ * copied it into the receive buffer.
+ **/
+int txgbe_read_posted_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id)
+{
+	struct txgbe_mbx_info *mbx = &hw->mbx;
+	int err = TXGBE_ERR_MBX;
+
+	if (!mbx->ops.read)
+		goto out;
+
+	err = txgbe_poll_for_msg(hw, mbx_id);
+
+	/* if ack received read message, otherwise we timed out */
+	if (!err)
+		err = TCALL(hw, mbx.ops.read, msg, size, mbx_id);
+out:
+	return err;
+}
+
+/**
+ * txgbe_write_posted_mbx - Write a message to the mailbox, wait for ack
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: id of mailbox to write
+ *
+ * returns SUCCESS if it successfully copied message into the buffer and
+ * received an ack to that message within delay * timeout period
+ **/
+int txgbe_write_posted_mbx(struct txgbe_hw *hw, u32 *msg, u16 size,
+			   u16 mbx_id)
+{
+	struct txgbe_mbx_info *mbx = &hw->mbx;
+	int err;
+
+	/* exit if either we can't write or there isn't a defined timeout */
+	if (!mbx->timeout)
+		return TXGBE_ERR_MBX;
+
+	/* send msg */
+	err = TCALL(hw, mbx.ops.write, msg, size, mbx_id);
+
+	/* if msg sent wait until we receive an ack */
+	if (!err)
+		err = txgbe_poll_for_ack(hw, mbx_id);
+
+	return err;
+}
+
+/**
+ * txgbe_init_mbx_ops - Initialize MB function pointers
+ * @hw: pointer to the HW structure
+ *
+ * Sets up the mailbox read and write message function pointers
+ **/
+void txgbe_init_mbx_ops(struct txgbe_hw *hw)
+{
+	struct txgbe_mbx_info *mbx = &hw->mbx;
+
+	mbx->ops.read_posted = txgbe_read_posted_mbx;
+	mbx->ops.write_posted = txgbe_write_posted_mbx;
+}
+
+/**
+ * txgbe_read_v2p_mailbox - read v2p mailbox
+ * @hw: pointer to the HW structure
+ *
+ * This function is used to read the v2p mailbox without losing the read to
+ * clear status bits.
+ **/
+u32 txgbe_read_v2p_mailbox(struct txgbe_hw *hw)
+{
+	u32 v2p_mailbox = rd32(hw, TXGBE_VXMAILBOX);
+
+	v2p_mailbox |= hw->mbx.v2p_mailbox;
+	/* read and clear mirrored mailbox flags */
+	v2p_mailbox |= rd32a(hw, TXGBE_VXMBMEM, TXGBE_VXMAILBOX_SIZE);
+	wr32a(hw, TXGBE_VXMBMEM, TXGBE_VXMAILBOX_SIZE, 0);
+	hw->mbx.v2p_mailbox |= v2p_mailbox & TXGBE_VXMAILBOX_R2C_BITS;
+
+	return v2p_mailbox;
+}
+
+/**
+ * txgbe_check_for_bit_vf - Determine if a status bit was set
+ * @hw: pointer to the HW structure
+ * @mask: bitmask for bits to be tested and cleared
+ *
+ * This function is used to check for the read to clear bits within
+ * the V2P mailbox.
+ **/
+int txgbe_check_for_bit_vf(struct txgbe_hw *hw, u32 mask)
+{
+	u32 mailbox = txgbe_read_v2p_mailbox(hw);
+
+	hw->mbx.v2p_mailbox &= ~mask;
+
+	return (mailbox & mask ? 0 : TXGBE_ERR_MBX);
+}
+
+/**
+ * txgbe_check_for_msg_vf - checks to see if the PF has sent mail
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to check
+ *
+ * returns SUCCESS if the PF has set the Status bit or else ERR_MBX
+ **/
+int txgbe_check_for_msg_vf(struct txgbe_hw *hw, u16 __always_unused mbx_id)
+{
+	int err = TXGBE_ERR_MBX;
+
+	/* read clear the pf sts bit */
+	if (!txgbe_check_for_bit_vf(hw, TXGBE_VXMAILBOX_PFSTS)) {
+		err = 0;
+		hw->mbx.stats.reqs++;
+	}
+
+	return err;
+}
+
+/**
+ * txgbe_check_for_ack_vf - checks to see if the PF has ACK'd
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to check
+ *
+ * returns SUCCESS if the PF has set the ACK bit or else ERR_MBX
+ **/
+int txgbe_check_for_ack_vf(struct txgbe_hw *hw, u16 __always_unused mbx_id)
+{
+	int err = TXGBE_ERR_MBX;
+
+	/* read clear the pf ack bit */
+	if (!txgbe_check_for_bit_vf(hw, TXGBE_VXMAILBOX_PFACK)) {
+		err = 0;
+		hw->mbx.stats.acks++;
+	}
+
+	return err;
+}
+
+/**
+ * txgbe_check_for_rst_vf - checks to see if the PF has reset
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to check
+ *
+ * returns SUCCESS if the PF has set the reset done bit or else ERR_MBX
+ **/
+int txgbe_check_for_rst_vf(struct txgbe_hw *hw, u16 __always_unused mbx_id)
+{
+	int err = TXGBE_ERR_MBX;
+
+	if (!txgbe_check_for_bit_vf(hw, (TXGBE_VXMAILBOX_RSTD |
+					 TXGBE_VXMAILBOX_RSTI))) {
+		err = 0;
+		hw->mbx.stats.rsts++;
+	}
+
+	return err;
+}
+
+/**
+ * txgbe_obtain_mbx_lock_vf - obtain mailbox lock
+ * @hw: pointer to the HW structure
+ *
+ * return SUCCESS if we obtained the mailbox lock
+ **/
+int txgbe_obtain_mbx_lock_vf(struct txgbe_hw *hw)
+{
+	int err = TXGBE_ERR_MBX;
+	u32 mailbox;
+
+	/* Take ownership of the buffer */
+	wr32(hw, TXGBE_VXMAILBOX, TXGBE_VXMAILBOX_VFU);
+
+	/* reserve mailbox for vf use */
+	mailbox = txgbe_read_v2p_mailbox(hw);
+	if (mailbox & TXGBE_VXMAILBOX_VFU)
+		err = 0;
+	else
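The posted-mailbox pair added above implements a simple request/ack handshake: write the message and poll for the PF's ack, or poll for a message notification and then read it out. A minimal sketch of how a VF-side caller might drive it (the opcode and two-word message layout are made up for illustration; only the call pattern mirrors the driver):

    /* Hypothetical VF-side request/response exchange built on the
     * posted-mailbox helpers above. Message contents are invented;
     * the write-then-read-posted sequence is the real pattern.
     */
    static int example_vf_query(struct txgbe_hw *hw)
    {
    	u32 msg[2] = { 0x01 /* imaginary opcode */, 0 };
    	int err;
    
    	/* send the request, then wait (delay * timeout) for the PF ack */
    	err = txgbe_write_posted_mbx(hw, msg, ARRAY_SIZE(msg), 0);
    	if (err)
    		return err;
    
    	/* wait for the PF's reply notification, then copy it out */
    	return txgbe_read_posted_mbx(hw, msg, ARRAY_SIZE(msg), 0);
    }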
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c
Changed
@@ -765,12 +765,101 @@
 				return MTD_FAIL;
 				break;
 			}
+		}
+	}
+
+	return MTD_OK;
+}
+/****************************************************************************/
+MTD_STATUS mtdIsBaseTUp(
+	IN MTD_DEV_PTR devPtr,
+	IN MTD_U16 port,
+	OUT MTD_U16 *speed,
+	OUT MTD_BOOL *linkUp)
+{
+	MTD_BOOL speedIsForced;
+	MTD_U16 forcedSpeed, cuSpeed, cuLinkStatus;
+
+	*linkUp = MTD_FALSE;
+	*speed = MTD_ADV_NONE;
+
+	/* first check if speed is forced to one of the speeds not requiring AN to train */
+	ATTEMPT(mtdGetForcedSpeed(devPtr, port, &speedIsForced, &forcedSpeed));
+
+	if (speedIsForced) {
+		/* check if the link is up at the speed it's forced to */
+		ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 3, 0x8008, 14, 2, &cuSpeed));
+		ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 3, 0x8008, 10, 1, &cuLinkStatus));
+
+		switch (forcedSpeed) {
+		case MTD_SPEED_10M_HD_AN_DIS:
+		case MTD_SPEED_10M_FD_AN_DIS:
+			/* might want to add checking the duplex to make sure there
+			 * is no duplex mismatch
+			 */
+			if (cuSpeed == MTD_CU_SPEED_10_MBPS)
+				*speed = forcedSpeed;
+			else
+				*speed = MTD_SPEED_MISMATCH;
+
+			if (cuLinkStatus)
+				*linkUp = MTD_TRUE;
+
+			break;
+
+		case MTD_SPEED_100M_HD_AN_DIS:
+		case MTD_SPEED_100M_FD_AN_DIS:
+			/* might want to add checking the duplex to make sure there
+			 * is no duplex mismatch
+			 */
+			if (cuSpeed == MTD_CU_SPEED_100_MBPS)
+				*speed = forcedSpeed;
+			else
+				*speed = MTD_SPEED_MISMATCH;
+
+			if (cuLinkStatus)
+				*linkUp = MTD_TRUE;
+
+			break;
+
+		default:
+			return MTD_FAIL;
 		}
+	} else {
+		/* must be going through AN */
+		ATTEMPT(mtdGetAutonegSpeedDuplexResolution(devPtr, port, speed));
+
+		if (*speed != MTD_ADV_NONE) {
+			/* check if the link is up at the speed it's AN to */
+			ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 3, 0x8008, 10, 1, &cuLinkStatus));
+
+			switch (*speed) {
+			case MTD_SPEED_10M_HD:
+			case MTD_SPEED_10M_FD:
+			case MTD_SPEED_100M_HD:
+			case MTD_SPEED_100M_FD:
+			case MTD_SPEED_1GIG_HD:
+			case MTD_SPEED_1GIG_FD:
+			case MTD_SPEED_10GIG_FD:
+			case MTD_SPEED_2P5GIG_FD:
+			case MTD_SPEED_5GIG_FD:
+				if (cuLinkStatus)
+					*linkUp = MTD_TRUE;
+				break;
+			default:
+				return MTD_FAIL;
+			}
+		}
+		/* else link is down, and AN is in progress, */
 	}
 
-	return MTD_OK;
+	if (*speed == MTD_SPEED_MISMATCH) {
+		return MTD_FAIL;
+	} else {
+		return MTD_OK;
+	}
 }
 
 MTD_STATUS mtdSetPauseAdvertisement(
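mtdIsBaseTUp folds both link paths into one query: a forced speed is checked directly against the copper speed/status register bits, while an autonegotiated speed is first resolved and then checked for link. A sketch of a bounded caller-side poll, whose 3 s budget (300 x 10 ms) matches the loop this revision adds to txgbe_phy.c further below:

    /* Bounded link poll built on the new helper; devPtr and port are
     * whatever the caller already holds for this PHY.
     */
    MTD_U16 speed = MTD_ADV_NONE;
    MTD_BOOL up = MTD_FALSE;
    int i;
    
    for (i = 0; i < 300 && !up; i++) {
    	if (mtdIsBaseTUp(devPtr, port, &speed, &up) != MTD_OK)
    		break;		/* MTD_SPEED_MISMATCH also returns MTD_FAIL */
    	mdelay(10);
    }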
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h
Changed
@@ -1109,6 +1109,13 @@
 	OUT MTD_U16 *speedResolution
 );
 
+MTD_STATUS mtdIsBaseTUp(
+	IN MTD_DEV_PTR devPtr,
+	IN MTD_U16 port,
+	OUT MTD_U16 *speed,
+	OUT MTD_BOOL *linkUp
+);
+
 MTD_STATUS mtdAutonegIsSpeedDuplexResolutionDone
 (
 	IN MTD_DEV_PTR devPtr,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_param.c
Changed
@@ -46,6 +46,9 @@
 	module_param_array(X, int, &num_##X, 0); \
 	MODULE_PARM_DESC(X, desc);
 
+TXGBE_PARAM(an73_train_mode, "an73_train_mode to different switch(0 to centc, 1 to other)");
+#define TXGBE_DEFAULT_FFE_AN73_TRAIN_MODE 0
+
 /* ffe_main (KR/KX4/KX/SFI)
  *
  * Valid Range: 0-60
@@ -213,7 +216,9 @@
  *
  * Default Value: 1
  */
-#define DEFAULT_ITR 1
+#define DEFAULT_ITR ((TXGBE_STATIC_ITR == 0) || \
+	(TXGBE_STATIC_ITR == 1) ? TXGBE_STATIC_ITR : (u16)((1000000/TXGBE_STATIC_ITR) << 2))
+
 TXGBE_PARAM(InterruptThrottleRate, "Maximum interrupts per second, per vector, "
 	    "(0,1,980-500000), default 1");
 
@@ -477,6 +482,30 @@
 			"Warning: no configuration for board #%d\n", bd);
 		txgbe_notice("Using defaults for all values\n");
 	}
+
+	{ /* an73_mode */
+		u32 an73_mode;
+		static struct txgbe_option opt = {
+			.type = range_option,
+			.name = "an73_train_mode",
+			.err =
+			  "using default of "__MODULE_STRING(TXGBE_DEFAULT_FFE_AN73_TRAIN_MODE),
+			.def = TXGBE_DEFAULT_FFE_AN73_TRAIN_MODE,
+			.arg = { .r = { .min = 0,
+					.max = 1} }
+		};
+
+		if (num_an73_train_mode > bd) {
+			an73_mode = an73_train_mode[bd];
+			if (an73_mode == OPTION_UNSET)
+				an73_mode = an73_train_mode[bd];
+			txgbe_validate_option(&an73_mode, &opt);
+			adapter->an73_mode = an73_mode;
+		} else {
+			adapter->an73_mode = 0;
+		}
+	}
+
 	{ /* MAIN */
 		u32 ffe_main;
 		static struct txgbe_option opt = {
@@ -760,6 +789,7 @@
 		} else if (opt.def == 0) {
 			rss = min_t(int, txgbe_max_rss_indices(adapter),
 				    num_online_cpus());
+			feature[RING_F_FDIR].limit = (u16)rss;
 			feature[RING_F_RSS].limit = rss;
 		}
 		/* Check Interoperability */
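The reworked DEFAULT_ITR macro converts a static interrupts-per-second target into the throttle-register encoding: 0 and 1 stay as-is (off and dynamic), anything else becomes the per-interrupt interval in microseconds shifted into the register's interval field. The same conversion as a plain function (the field position is taken from the macro itself, not from a datasheet):

    /* Illustration of the DEFAULT_ITR arithmetic above. Values 0 and
     * 1 are pass-through special cases; the shift by 2 is assumed to
     * place the usec interval where the hardware expects it.
     */
    static inline u16 itr_reg_from_rate(u32 rate)
    {
    	if (rate == 0 || rate == 1)
    		return (u16)rate;
    	return (u16)((1000000 / rate) << 2);	/* usecs per irq, shifted */
    }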
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_phy.c
Changed
@@ -37,8 +37,6 @@
 {
 	u32 mmngc;
 
-	DEBUGFUNC("\n");
-
 	mmngc = rd32(hw, TXGBE_MIS_ST);
 	if (mmngc & TXGBE_MIS_ST_MNG_VETO) {
 		ERROR_REPORT1(TXGBE_ERROR_SOFTWARE,
@@ -61,7 +59,6 @@
 	u16 phy_id_high = 0;
 	u16 phy_id_low = 0;
 	u8 numport, thisport;
-	DEBUGFUNC("\n");
 
 	status = mtdHwXmdioRead(&hw->phy_dev, hw->phy.addr,
 				TXGBE_MDIO_PMA_PMD_DEV_TYPE,
@@ -96,8 +93,6 @@
 	enum txgbe_phy_type phy_type;
 	u16 ext_ability = 0;
 
-	DEBUGFUNC("\n");
-
 	switch (hw->phy.id) {
 	case TN1010_PHY_ID:
 		phy_type = txgbe_phy_tn;
@@ -134,9 +129,6 @@
 {
 	s32 status = 0;
 
-	DEBUGFUNC("\n");
-
-
 	if (status != 0 || hw->phy.type == txgbe_phy_none)
 		goto out;
 
@@ -208,8 +200,6 @@
 	s32 status;
 	u32 gssr = hw->phy.phy_semaphore_mask;
 
-	DEBUGFUNC("\n");
-
 	if (0 == TCALL(hw, mac.ops.acquire_swfw_sync, gssr)) {
 		status = txgbe_read_phy_reg_mdi(hw, reg_addr, device_type,
 						phy_data);
@@ -272,8 +262,6 @@
 	s32 status;
 	u32 gssr = hw->phy.phy_semaphore_mask;
 
-	DEBUGFUNC("\n");
-
 	if (TCALL(hw, mac.ops.acquire_swfw_sync, gssr) == 0) {
 		status = txgbe_write_phy_reg_mdi(hw, reg_addr, device_type,
 						 phy_data);
@@ -325,31 +313,40 @@
 {
 	u16 speed = MTD_ADV_NONE;
 	MTD_DEV_PTR devptr = &hw->phy_dev;
-	MTD_BOOL anDone = MTD_FALSE;
 	u16 port = hw->phy.addr;
-
-	DEBUGFUNC("\n");
-
+	int i = 0;
+	MTD_BOOL linkUp = MTD_FALSE;
+	u16 linkSpeed = MTD_ADV_NONE;
+
+	if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL)
+		speed |= MTD_SPEED_10GIG_FD;
+	if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_1GB_FULL)
+		speed |= MTD_SPEED_1GIG_FD;
+	if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100_FULL)
+		speed |= MTD_SPEED_100M_FD;
+	if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10_FULL)
+		speed |= MTD_SPEED_10M_FD;
 	if (!autoneg_wait_to_complete) {
-		mtdAutonegIsSpeedDuplexResolutionDone(devptr, port, &anDone);
-		if (anDone) {
-			mtdGetAutonegSpeedDuplexResolution(devptr, port, &speed);
+		mtdGetAutonegSpeedDuplexResolution(devptr, port, &linkSpeed);
+		if (linkSpeed & speed) {
+			speed = linkSpeed;
+			goto out;
 		}
-	} else {
-		if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL)
-			speed |= MTD_SPEED_10GIG_FD;
-		if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_1GB_FULL)
-			speed |= MTD_SPEED_1GIG_FD;
-		if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100_FULL)
-			speed |= MTD_SPEED_100M_FD;
-		if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10_FULL)
-			speed |= MTD_SPEED_10M_FD;
-		mtdEnableSpeeds(devptr, port, speed, MTD_TRUE);
-
-		/* wait autoneg to be done */
-		speed = MTD_ADV_NONE;
 	}
+	mtdEnableSpeeds(devptr, port, speed, MTD_TRUE);
+	mdelay(10);
+
+	/* wait autoneg to be done */
+	speed = MTD_ADV_NONE;
+	for (i = 0; i < 300; i++) {
+		mtdIsBaseTUp(devptr, port, &speed, &linkUp);
+		if (linkUp)
+			break;
+		mdelay(10);
+	}
+
+out:
 	switch (speed) {
 	case MTD_SPEED_10GIG_FD:
 		return TXGBE_LINK_SPEED_10GB_FULL;
@@ -374,9 +371,6 @@
 			    u32 speed,
 			    bool autoneg_wait_to_complete)
 {
-
-	DEBUGFUNC("\n");
-
 	/*
 	 * Clear autoneg_advertised and set new values based on input link
 	 * speed.
@@ -414,9 +408,6 @@
 {
 	s32 status;
 	u16 speed_ability;
-
-	DEBUGFUNC("\n");
-
 	*speed = 0;
 	*autoneg = true;
 
@@ -439,6 +430,24 @@
 }
 
 /**
+ * txgbe_get_phy_firmware_version - Gets the PHY Firmware Version
+ * @hw: pointer to hardware structure
+ * @firmware_version: pointer to the PHY Firmware Version
+ **/
+s32 txgbe_get_phy_firmware_version(struct txgbe_hw *hw,
+				   u16 *firmware_version)
+{
+	s32 status;
+	u8 major, minor, inc, test;
+
+	status = mtdGetFirmwareVersion(&hw->phy_dev, hw->phy.addr,
+				       &major, &minor, &inc, &test);
+	if (status == 0)
+		*firmware_version = (major << 8) | minor;
+	return status;
+}
+
+/**
  * txgbe_identify_module - Identifies module type
  * @hw: pointer to hardware structure
 *
@@ -448,8 +457,6 @@
 {
 	s32 status = TXGBE_ERR_SFP_NOT_PRESENT;
 
-	DEBUGFUNC("\n");
-
 	switch (TCALL(hw, mac.ops.get_media_type)) {
 	case txgbe_media_type_fiber:
 		status = txgbe_identify_sfp_module(hw);
@@ -482,8 +489,6 @@
 	u8 cable_tech = 0;
 	u8 cable_spec = 0;
 
-	DEBUGFUNC("\n");
-
 	if (TCALL(hw, mac.ops.get_media_type) != txgbe_media_type_fiber) {
 		hw->phy.sfp_type = txgbe_sfp_type_not_present;
 		status = TXGBE_ERR_SFP_NOT_PRESENT;
@@ -754,8 +759,6 @@
 s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
 			  u8 *eeprom_data)
 {
-	DEBUGFUNC("\n");
-
 	return TCALL(hw, phy.ops.read_i2c_byte, byte_offset,
 		     TXGBE_I2C_EEPROM_DEV_ADDR,
 		     eeprom_data);
@@ -788,8 +791,6 @@
 s32 txgbe_write_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
 			   u8 eeprom_data)
 {
-	DEBUGFUNC("\n");
-
 	return TCALL(hw, phy.ops.write_i2c_byte, byte_offset,
 		     TXGBE_I2C_EEPROM_DEV_ADDR,
 		     eeprom_data);
@@ -944,8 +945,6 @@
 	s32 status = 0;
 	u32 ts_state;
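The rewritten autoneg path first builds the MTD advertised-speed mask from the hw flags, a block of four repeated if/or-assign pairs. The same translation, redone table-driven as a design alternative (a sketch only, using exactly the flag names that appear in the hunk):

    /* Table-driven version of the advertised-speed translation above. */
    static const struct {
    	u32 link_speed;	/* TXGBE_LINK_SPEED_* */
    	u16 mtd_speed;	/* MTD_SPEED_* */
    } speed_map[] = {
    	{ TXGBE_LINK_SPEED_10GB_FULL, MTD_SPEED_10GIG_FD },
    	{ TXGBE_LINK_SPEED_1GB_FULL,  MTD_SPEED_1GIG_FD },
    	{ TXGBE_LINK_SPEED_100_FULL,  MTD_SPEED_100M_FD },
    	{ TXGBE_LINK_SPEED_10_FULL,   MTD_SPEED_10M_FD },
    };
    
    static u16 mtd_speeds_from_advertised(u32 advertised)
    {
    	u16 speed = MTD_ADV_NONE;
    	size_t i;
    
    	for (i = 0; i < ARRAY_SIZE(speed_map); i++)
    		if (advertised & speed_map[i].link_speed)
    			speed |= speed_map[i].mtd_speed;
    	return speed;
    }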
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_phy.h
Changed
@@ -150,6 +150,8 @@
 			       bool *autoneg);
 
 s32 txgbe_check_reset_blocked(struct txgbe_hw *hw);
+s32 txgbe_get_phy_firmware_version(struct txgbe_hw *hw,
+				   u16 *firmware_version);
 s32 txgbe_identify_module(struct txgbe_hw *hw);
 s32 txgbe_identify_sfp_module(struct txgbe_hw *hw);
 s32 txgbe_tn_check_overtemp(struct txgbe_hw *hw);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_ptp.c
Changed
@@ -670,8 +670,8 @@
 		*incval = TXGBE_INCVAL_100;
 		break;
 	case TXGBE_LINK_SPEED_1GB_FULL:
-		*shift = TXGBE_INCVAL_SHIFT_FPGA;
-		*incval = TXGBE_INCVAL_FPGA;
+		*shift = TXGBE_INCVAL_SHIFT_1GB;
+		*incval = TXGBE_INCVAL_1GB;
 		break;
 	case TXGBE_LINK_SPEED_10GB_FULL:
 	default: /* TXGBE_LINK_SPEED_10GB_FULL */
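The 1G case had been left with FPGA bring-up constants; the fix selects the proper silicon values. Assuming the usual kernel cyclecounter convention (where the incval plays the role of struct cyclecounter's mult), the pair determines how raw PTP clock cycles convert to nanoseconds:

    /* How shift/incval enter timekeeping under the standard kernel
     * cyclecounter convention; whether txgbe's hardware registers use
     * exactly this formula is an assumption, not read from the diff.
     */
    static inline u64 cycles_to_ns(u64 cycles, u32 incval, u32 shift)
    {
    	return (cycles * incval) >> shift;
    }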
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_sysfs.c
Added
@@ -0,0 +1,205 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd. */
+
+#include "txgbe.h"
+#include "txgbe_hw.h"
+#include "txgbe_type.h"
+
+#ifdef CONFIG_TXGBE_SYSFS
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/sysfs.h>
+#include <linux/kobject.h>
+#include <linux/device.h>
+#include <linux/netdevice.h>
+#include <linux/time.h>
+#ifdef CONFIG_TXGBE_HWMON
+#include <linux/hwmon.h>
+#endif
+
+#ifdef CONFIG_TXGBE_HWMON
+/* hwmon callback functions */
+static ssize_t txgbe_hwmon_show_temp(struct device __always_unused *dev,
+				     struct device_attribute *attr,
+				     char *buf)
+{
+	struct hwmon_attr *txgbe_attr = container_of(attr, struct hwmon_attr,
+						     dev_attr);
+	unsigned int value;
+
+	/* reset the temp field */
+	TCALL(txgbe_attr->hw, mac.ops.get_thermal_sensor_data);
+
+	value = txgbe_attr->sensor->temp;
+
+	/* display millidegree */
+	value *= 1000;
+
+	return sprintf(buf, "%u\n", value);
+}
+
+static ssize_t txgbe_hwmon_show_alarmthresh(struct device __always_unused *dev,
+					    struct device_attribute *attr,
+					    char *buf)
+{
+	struct hwmon_attr *txgbe_attr = container_of(attr, struct hwmon_attr,
+						     dev_attr);
+	unsigned int value = txgbe_attr->sensor->alarm_thresh;
+
+	/* display millidegree */
+	value *= 1000;
+
+	return sprintf(buf, "%u\n", value);
+}
+
+static ssize_t txgbe_hwmon_show_dalarmthresh(struct device __always_unused *dev,
+					     struct device_attribute *attr,
+					     char *buf)
+{
+	struct hwmon_attr *txgbe_attr = container_of(attr, struct hwmon_attr,
+						     dev_attr);
+	unsigned int value = txgbe_attr->sensor->dalarm_thresh;
+
+	/* display millidegree */
+	value *= 1000;
+
+	return sprintf(buf, "%u\n", value);
+}
+
+/**
+ * txgbe_add_hwmon_attr - Create hwmon attr table for a hwmon sysfs file.
+ * @adapter: pointer to the adapter structure
+ * @type: type of sensor data to display
+ *
+ * For each file we want in hwmon's sysfs interface we need a device_attribute
+ * This is included in our hwmon_attr struct that contains the references to
+ * the data structures we need to get the data to display.
+ */
+static int txgbe_add_hwmon_attr(struct txgbe_adapter *adapter, int type)
+{
+	int rc;
+	unsigned int n_attr;
+	struct hwmon_attr *txgbe_attr;
+
+	n_attr = adapter->txgbe_hwmon_buff.n_hwmon;
+	txgbe_attr = &adapter->txgbe_hwmon_buff.hwmon_list[n_attr];
+
+	switch (type) {
+	case TXGBE_HWMON_TYPE_TEMP:
+		txgbe_attr->dev_attr.show = txgbe_hwmon_show_temp;
+		snprintf(txgbe_attr->name, sizeof(txgbe_attr->name),
+			 "temp%u_input", 0);
+		break;
+	case TXGBE_HWMON_TYPE_ALARMTHRESH:
+		txgbe_attr->dev_attr.show = txgbe_hwmon_show_alarmthresh;
+		snprintf(txgbe_attr->name, sizeof(txgbe_attr->name),
+			 "temp%u_alarmthresh", 0);
+		break;
+	case TXGBE_HWMON_TYPE_DALARMTHRESH:
+		txgbe_attr->dev_attr.show = txgbe_hwmon_show_dalarmthresh;
+		snprintf(txgbe_attr->name, sizeof(txgbe_attr->name),
+			 "temp%u_dalarmthresh", 0);
+		break;
+	default:
+		rc = -EPERM;
+		return rc;
+	}
+
+	/* These are always the same regardless of type */
+	txgbe_attr->sensor =
+		&adapter->hw.mac.thermal_sensor_data.sensor;
+	txgbe_attr->hw = &adapter->hw;
+	txgbe_attr->dev_attr.store = NULL;
+	txgbe_attr->dev_attr.attr.mode = 0444;
+	txgbe_attr->dev_attr.attr.name = txgbe_attr->name;
+
+	rc = device_create_file(pci_dev_to_dev(adapter->pdev),
+				&txgbe_attr->dev_attr);
+
+	if (rc == 0)
+		++adapter->txgbe_hwmon_buff.n_hwmon;
+
+	return rc;
+}
+#endif /* CONFIG_TXGBE_HWMON */
+
+static void txgbe_sysfs_del_adapter(struct txgbe_adapter __maybe_unused *adapter)
+{
+#ifdef CONFIG_TXGBE_HWMON
+	int i;
+
+	if (!adapter)
+		return;
+
+	for (i = 0; i < adapter->txgbe_hwmon_buff.n_hwmon; i++) {
+		device_remove_file(pci_dev_to_dev(adapter->pdev),
+				   &adapter->txgbe_hwmon_buff.hwmon_list[i].dev_attr);
+	}
+
+	kfree(adapter->txgbe_hwmon_buff.hwmon_list);
+
+	if (adapter->txgbe_hwmon_buff.device)
+		hwmon_device_unregister(adapter->txgbe_hwmon_buff.device);
+#endif /* CONFIG_TXGBE_HWMON */
+}
+
+/* called from txgbe_main.c */
+void txgbe_sysfs_exit(struct txgbe_adapter *adapter)
+{
+	txgbe_sysfs_del_adapter(adapter);
+}
+
+/* called from txgbe_main.c */
+int txgbe_sysfs_init(struct txgbe_adapter *adapter)
+{
+	int rc = 0;
+#ifdef CONFIG_TXGBE_HWMON
+	struct hwmon_buff *txgbe_hwmon = &adapter->txgbe_hwmon_buff;
+	int n_attrs;
+
+#endif /* CONFIG_TXGBE_HWMON */
+	if (!adapter)
+		goto err;
+
+#ifdef CONFIG_TXGBE_HWMON
+
+	/* Don't create thermal hwmon interface if no sensors present */
+	if (TCALL(&adapter->hw, mac.ops.init_thermal_sensor_thresh))
+		goto no_thermal;
+
+	/* Allocate space for max attributes:
+	 * max num sensors * values (temp, alarmthresh, dalarmthresh)
+	 */
+	n_attrs = 3;
+	txgbe_hwmon->hwmon_list = kcalloc(n_attrs, sizeof(struct hwmon_attr),
+					  GFP_KERNEL);
+	if (!txgbe_hwmon->hwmon_list) {
+		rc = -ENOMEM;
+		goto err;
+	}
+
+	txgbe_hwmon->device =
+		hwmon_device_register(pci_dev_to_dev(adapter->pdev));
+	if (IS_ERR(txgbe_hwmon->device)) {
+		rc = PTR_ERR(txgbe_hwmon->device);
+		goto err;
+	}
+
+	/* Bail if any hwmon attr struct fails to initialize */
+	rc = txgbe_add_hwmon_attr(adapter, TXGBE_HWMON_TYPE_TEMP);
+	rc |= txgbe_add_hwmon_attr(adapter, TXGBE_HWMON_TYPE_ALARMTHRESH);
+	rc |= txgbe_add_hwmon_attr(adapter, TXGBE_HWMON_TYPE_DALARMTHRESH);
+	if (rc)
+		goto err;
+
+no_thermal:
+#endif /* CONFIG_TXGBE_HWMON */
+	goto exit;
+
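Per hwmon convention the show hooks return millidegrees Celsius. A small userspace sketch reading one of these attributes (the sysfs path is a placeholder; on a real system it depends on where the PCI device sits, since the files are created on the PCI device, not on the hwmon class device):

    /* Userspace sketch: read a millidegree temperature attribute and
     * print degrees C. Path is illustrative, not a stable ABI.
     */
    #include <stdio.h>
    
    int main(void)
    {
    	const char *path = "/sys/bus/pci/devices/0000:01:00.0/temp0_input";
    	FILE *f = fopen(path, "r");
    	long mdeg;
    
    	if (!f) {
    		perror(path);
    		return 1;
    	}
    	if (fscanf(f, "%ld", &mdeg) != 1) {
    		fclose(f);
    		fprintf(stderr, "parse error\n");
    		return 1;
    	}
    	fclose(f);
    	printf("%.3f C\n", mdeg / 1000.0);
    	return 0;
    }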
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_type.h
Changed
@@ -128,6 +128,8 @@
 #define TXGBE_WOL_SUP 0x4000
 #define TXGBE_WOL_MASK 0x4000
 
+#define TXGBE_DEV_MASK 0xf0
+
 /* Combined interface*/
 #define TXGBE_ID_SFI_XAUI 0x50
 
@@ -355,6 +357,7 @@
 #define TXGBE_MIS_PWR 0x10000
 #define TXGBE_MIS_CTL 0x10004
 #define TXGBE_MIS_PF_SM 0x10008
+#define TXGBE_MIS_PRB_CTL 0x10010
 #define TXGBE_MIS_ST 0x10028
 #define TXGBE_MIS_SWSM 0x1002C
 #define TXGBE_MIS_RST_ST 0x10030
@@ -392,6 +395,8 @@
 #define TXGBE_MIS_RST_ST_RST_INI_SHIFT 8
 #define TXGBE_MIS_RST_ST_RST_TIM 0x000000FFU
 #define TXGBE_MIS_PF_SM_SM 1
+#define TXGBE_MIS_PRB_CTL_LAN0_UP 0x2
+#define TXGBE_MIS_PRB_CTL_LAN1_UP 0x1
 
 /* Sensors for PVT(Process Voltage Temperature) */
 #define TXGBE_TS_CTL 0x10300
@@ -2262,6 +2267,12 @@
 #define FW_FLASH_UPGRADE_WRITE_CMD 0xE4
 #define FW_FLASH_UPGRADE_VERIFY_CMD 0xE5
 #define FW_FLASH_UPGRADE_VERIFY_LEN 0x4
+#define FW_DW_OPEN_NOTIFY 0xE9
+#define FW_DW_CLOSE_NOTIFY 0xEA
+
+#define TXGBE_CHECKSUM_CAP_ST_PASS 0x80658383
+#define TXGBE_CHECKSUM_CAP_ST_FAIL 0x70657376
+
 /* Host Interface Command Structures */
 struct txgbe_hic_hdr {
@@ -3028,8 +3039,9 @@
 #endif
 	MTD_DEV phy_dev;
 	enum txgbe_link_status link_status;
-	u16 subsystem_id;
 	u16 tpid[8];
+	u16 oem_ssid;
+	u16 oem_svid;
 };
 
 #define TCALL(hw, func, args...) (((hw)->func != NULL) \
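TCALL, whose definition is cut off at the end of this hunk, is the driver's null-guarded indirect call used throughout these files: expand to the ops function when the pointer is populated, otherwise yield an error value. The visible fragment is consistent with the usual shape of such a macro; the fallback value below is an assumption, since the macro body is truncated in the diff:

    /* Null-guarded ops-call pattern behind TCALL (sketch; the real
     * fallback constant is not visible in this hunk).
     */
    #define EXAMPLE_CALL(hw, func, args...)		\
    	(((hw)->func != NULL)			\
    		? (hw)->func(hw, ##args)	\
    		: TXGBE_ERR_PARAM /* assumed */)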
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ipvlan/ipvlan_main.c
Changed
@@ -14,7 +14,7 @@
 int sysctl_ipvlan_loop_delay = 10;
 static int ipvlan_default_mode = IPVLAN_MODE_L3;
 module_param(ipvlan_default_mode, int, 0400);
-MODULE_PARM_DESC(ipvlan_default_mode, "set ipvlan default mode: 0 for l2, 1 for l3, 2 for l2e, 3 for l3s, others invalid now");
+MODULE_PARM_DESC(ipvlan_default_mode, "set ipvlan default mode: 0 for l2, 1 for l3, 2 for l3s, 3 for l2e, others invalid now");
 
 static struct ctl_table_header *ipvlan_table_hrd;
 static struct ctl_table ipvlan_table[] = {
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/virtio_net.c
Changed
@@ -3255,7 +3255,7 @@
 	return 0;
 
 free_unregister_netdev:
-	vi->vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 
 	unregister_netdev(dev);
 free_failover:
@@ -3271,7 +3271,7 @@
 
 static void remove_vq_common(struct virtnet_info *vi)
 {
-	vi->vdev->config->reset(vi->vdev);
+	virtio_reset_device(vi->vdev);
 
 	/* Free unused buffers in both send and recv, if any. */
 	free_unused_bufs(vi);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/wireless/mac80211_hwsim.c
Changed
@@ -4318,7 +4318,7 @@
 {
 	int i;
 
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 
 	for (i = 0; i < ARRAY_SIZE(hwsim_vqs); i++) {
 		struct virtqueue *vq = hwsim_vqs[i];
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/wireless/microchip/wilc1000/cfg80211.c
Changed
@@ -939,30 +939,52 @@
 		return;
 
 	while (index + sizeof(*e) <= len) {
+		u16 attr_size;
+
 		e = (struct wilc_attr_entry *)&buf[index];
-		if (e->attr_type == IEEE80211_P2P_ATTR_CHANNEL_LIST)
+		attr_size = le16_to_cpu(e->attr_len);
+
+		if (index + sizeof(*e) + attr_size > len)
+			return;
+
+		if (e->attr_type == IEEE80211_P2P_ATTR_CHANNEL_LIST &&
+		    attr_size >= (sizeof(struct wilc_attr_ch_list) - sizeof(*e)))
 			ch_list_idx = index;
-		else if (e->attr_type == IEEE80211_P2P_ATTR_OPER_CHANNEL)
+		else if (e->attr_type == IEEE80211_P2P_ATTR_OPER_CHANNEL &&
+			 attr_size == (sizeof(struct wilc_attr_oper_ch) - sizeof(*e)))
 			op_ch_idx = index;
+
 		if (ch_list_idx && op_ch_idx)
 			break;
-		index += le16_to_cpu(e->attr_len) + sizeof(*e);
+
+		index += sizeof(*e) + attr_size;
 	}
 
 	if (ch_list_idx) {
-		u16 attr_size;
-		struct wilc_ch_list_elem *e;
-		int i;
+		unsigned int i;
+		u16 elem_size;
 
 		ch_list = (struct wilc_attr_ch_list *)&buf[ch_list_idx];
-		attr_size = le16_to_cpu(ch_list->attr_len);
-		for (i = 0; i < attr_size;) {
+		/* the number of bytes following the final 'elem' member */
+		elem_size = le16_to_cpu(ch_list->attr_len) -
+			(sizeof(*ch_list) - sizeof(struct wilc_attr_entry));
+		for (i = 0; i < elem_size;) {
+			struct wilc_ch_list_elem *e;
+
 			e = (struct wilc_ch_list_elem *)(ch_list->elem + i);
+
+			i += sizeof(*e);
+			if (i > elem_size)
+				break;
+
+			i += e->no_of_channels;
+			if (i > elem_size)
+				break;
+
 			if (e->op_class == WILC_WLAN_OPERATING_CLASS_2_4GHZ) {
 				memset(e->ch_list, sta_ch, e->no_of_channels);
 				break;
 			}
-			i += e->no_of_channels;
 		}
 	}
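The rewritten loop is a standard bounds-checked TLV walk: verify the fixed header fits in the buffer, then verify the declared length fits, before dereferencing either. The same pattern in isolation, with generic types rather than the wilc structures:

    /* Generic bounds-checked TLV iteration, the pattern the fix above
     * applies to wilc_attr_entry. Types are illustrative.
     */
    struct tlv { unsigned char type; unsigned short len; unsigned char val[]; };
    
    static void walk_tlvs(const unsigned char *buf, unsigned long len)
    {
    	unsigned long off = 0;
    
    	while (off + sizeof(struct tlv) <= len) {
    		const struct tlv *t = (const struct tlv *)&buf[off];
    		unsigned long vlen = t->len;	/* le16_to_cpu() in real code */
    
    		if (off + sizeof(*t) + vlen > len)
    			return;		/* truncated attribute: stop */
    		/* ... use t->val[0..vlen) safely here ... */
    		off += sizeof(*t) + vlen;
    	}
    }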
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/wireless/microchip/wilc1000/hif.c
Changed
@@ -467,14 +467,25 @@
 
 	rsn_ie = cfg80211_find_ie(WLAN_EID_RSN, ies->data, ies->len);
 	if (rsn_ie) {
+		int rsn_ie_len = sizeof(struct element) + rsn_ie[1];
 		int offset = 8;
 
-		param->mode_802_11i = 2;
-		param->rsn_found = true;
 		/* extract RSN capabilities */
-		offset += (rsn_ie[offset] * 4) + 2;
-		offset += (rsn_ie[offset] * 4) + 2;
-		memcpy(param->rsn_cap, &rsn_ie[offset], 2);
+		if (offset < rsn_ie_len) {
+			/* skip over pairwise suites */
+			offset += (rsn_ie[offset] * 4) + 2;
+
+			if (offset < rsn_ie_len) {
+				/* skip over authentication suites */
+				offset += (rsn_ie[offset] * 4) + 2;
+
+				if (offset + 1 < rsn_ie_len) {
+					param->mode_802_11i = 2;
+					param->rsn_found = true;
+					memcpy(param->rsn_cap, &rsn_ie[offset], 2);
+				}
+			}
+		}
 	}
 
 	if (param->rsn_found) {
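The magic offsets come from the RSN element layout in IEEE 802.11: 2 bytes element header plus 2 bytes version plus 4 bytes group cipher suite put the first suite counter at offset 8; each counter is 2 bytes followed by count x 4-byte suites, and the RSN capabilities field follows the AKM suites. The same walk standalone, with the arithmetic spelled out (like the driver, it reads only the low byte of each little-endian 2-byte count):

    #include <stdbool.h>
    #include <string.h>
    
    /* rsn[0]=EID, rsn[1]=body length, ie_len = 2 + rsn[1]. */
    static bool rsn_caps(const unsigned char *rsn, int ie_len,
    		     unsigned char caps[2])
    {
    	int offset = 8;			/* hdr(2) + version(2) + group(4) */
    
    	if (offset >= ie_len)
    		return false;
    	offset += rsn[offset] * 4 + 2;	/* skip pairwise cipher suites */
    	if (offset >= ie_len)
    		return false;
    	offset += rsn[offset] * 4 + 2;	/* skip AKM suites */
    	if (offset + 1 >= ie_len)
    		return false;
    	memcpy(caps, &rsn[offset], 2);	/* RSN capabilities */
    	return true;
    }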
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/xen-netback/common.h
Changed
@@ -395,7 +395,7 @@
 bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread);
 void xenvif_rx_action(struct xenvif_queue *queue);
-void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb);
+bool xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb);
 
 void xenvif_carrier_on(struct xenvif *vif);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/xen-netback/interface.c
Changed
@@ -269,14 +269,16 @@
 	if (vif->hash.alg == XEN_NETIF_CTRL_HASH_ALGORITHM_NONE)
 		skb_clear_hash(skb);
 
-	xenvif_rx_queue_tail(queue, skb);
+	if (!xenvif_rx_queue_tail(queue, skb))
+		goto drop;
+
 	xenvif_kick_thread(queue);
 
 	return NETDEV_TX_OK;
 
 drop:
 	vif->dev->stats.tx_dropped++;
-	dev_kfree_skb(skb);
+	dev_kfree_skb_any(skb);
 	return NETDEV_TX_OK;
 }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/xen-netback/netback.c
Changed
@@ -330,10 +330,13 @@
 
 struct xenvif_tx_cb {
-	u16 pending_idx;
+	u16 copy_pending_idx[XEN_NETBK_LEGACY_SLOTS_MAX + 1];
+	u8 copy_count;
 };
 
 #define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb)
+#define copy_pending_idx(skb, i) (XENVIF_TX_CB(skb)->copy_pending_idx[i])
+#define copy_count(skb) (XENVIF_TX_CB(skb)->copy_count)
 
 static inline void xenvif_tx_create_map_op(struct xenvif_queue *queue,
 					   u16 pending_idx,
@@ -368,31 +371,93 @@
 	return skb;
 }
 
-static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif_queue *queue,
-						       struct sk_buff *skb,
-						       struct xen_netif_tx_request *txp,
-						       struct gnttab_map_grant_ref *gop,
-						       unsigned int frag_overflow,
-						       struct sk_buff *nskb)
+static void xenvif_get_requests(struct xenvif_queue *queue,
+				struct sk_buff *skb,
+				struct xen_netif_tx_request *first,
+				struct xen_netif_tx_request *txfrags,
+				unsigned *copy_ops,
+				unsigned *map_ops,
+				unsigned int frag_overflow,
+				struct sk_buff *nskb,
+				unsigned int extra_count,
+				unsigned int data_len)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
-	u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx;
-	int start;
+	u16 pending_idx;
 	pending_ring_idx_t index;
 	unsigned int nr_slots;
+	struct gnttab_copy *cop = queue->tx_copy_ops + *copy_ops;
+	struct gnttab_map_grant_ref *gop = queue->tx_map_ops + *map_ops;
+	struct xen_netif_tx_request *txp = first;
+
+	nr_slots = shinfo->nr_frags + 1;
+
+	copy_count(skb) = 0;
+
+	/* Create copy ops for exactly data_len bytes into the skb head. */
+	__skb_put(skb, data_len);
+	while (data_len > 0) {
+		int amount = data_len > txp->size ? txp->size : data_len;
+
+		cop->source.u.ref = txp->gref;
+		cop->source.domid = queue->vif->domid;
+		cop->source.offset = txp->offset;
+
+		cop->dest.domid = DOMID_SELF;
+		cop->dest.offset = (offset_in_page(skb->data +
+						   skb_headlen(skb) -
+						   data_len)) & ~XEN_PAGE_MASK;
+		cop->dest.u.gmfn = virt_to_gfn(skb->data + skb_headlen(skb)
+					       - data_len);
+
+		cop->len = amount;
+		cop->flags = GNTCOPY_source_gref;
 
-	nr_slots = shinfo->nr_frags;
+		index = pending_index(queue->pending_cons);
+		pending_idx = queue->pending_ring[index];
+		callback_param(queue, pending_idx).ctx = NULL;
+		copy_pending_idx(skb, copy_count(skb)) = pending_idx;
+		copy_count(skb)++;
+
+		cop++;
+		data_len -= amount;
 
-	/* Skip first skb fragment if it is on same page as header fragment. */
-	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
+		if (amount == txp->size) {
+			/* The copy op covered the full tx_request */
+
+			memcpy(&queue->pending_tx_info[pending_idx].req,
+			       txp, sizeof(*txp));
+			queue->pending_tx_info[pending_idx].extra_count =
+				(txp == first) ? extra_count : 0;
+
+			if (txp == first)
+				txp = txfrags;
+			else
+				txp++;
+			queue->pending_cons++;
+			nr_slots--;
+		} else {
+			/* The copy op partially covered the tx_request.
+			 * The remainder will be mapped.
+			 */
+			txp->offset += amount;
+			txp->size -= amount;
+		}
+	}
 
-	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
-	     shinfo->nr_frags++, txp++, gop++) {
+	for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots;
+	     shinfo->nr_frags++, gop++) {
 		index = pending_index(queue->pending_cons++);
 		pending_idx = queue->pending_ring[index];
-		xenvif_tx_create_map_op(queue, pending_idx, txp, 0, gop);
+		xenvif_tx_create_map_op(queue, pending_idx, txp,
+					txp == first ? extra_count : 0, gop);
 		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
+
+		if (txp == first)
+			txp = txfrags;
+		else
+			txp++;
 	}
 
 	if (frag_overflow) {
@@ -413,7 +478,8 @@
 		skb_shinfo(skb)->frag_list = nskb;
 	}
 
-	return gop;
+	(*copy_ops) = cop - queue->tx_copy_ops;
+	(*map_ops) = gop - queue->tx_map_ops;
 }
 
 static inline void xenvif_grant_handle_set(struct xenvif_queue *queue,
@@ -449,7 +515,7 @@
 			       struct gnttab_copy **gopp_copy)
 {
 	struct gnttab_map_grant_ref *gop_map = *gopp_map;
-	u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx;
+	u16 pending_idx;
 	/* This always points to the shinfo of the skb being checked, which
 	 * could be either the first or the one on the frag_list
 	 */
@@ -460,24 +526,37 @@
 	struct skb_shared_info *first_shinfo = NULL;
 	int nr_frags = shinfo->nr_frags;
 	const bool sharedslot = nr_frags &&
-				frag_get_pending_idx(&shinfo->frags[0]) == pending_idx;
-	int i, err;
+				frag_get_pending_idx(&shinfo->frags[0]) ==
+				 copy_pending_idx(skb, copy_count(skb) - 1);
+	int i, err = 0;
 
-	/* Check status of header. */
-	err = (*gopp_copy)->status;
-	if (unlikely(err)) {
-		if (net_ratelimit())
-			netdev_dbg(queue->vif->dev,
-				   "Grant copy of header failed! status: %d pending_idx: %u ref: %u\n",
-				   (*gopp_copy)->status,
-				   pending_idx,
-				   (*gopp_copy)->source.u.ref);
-		/* The first frag might still have this slot mapped */
-		if (!sharedslot)
-			xenvif_idx_release(queue, pending_idx,
-					   XEN_NETIF_RSP_ERROR);
+	for (i = 0; i < copy_count(skb); i++) {
+		int newerr;
+
+		/* Check status of header. */
+		pending_idx = copy_pending_idx(skb, i);
+
+		newerr = (*gopp_copy)->status;
+		if (likely(!newerr)) {
+			/* The first frag might still have this slot mapped */
+			if (i < copy_count(skb) - 1 || !sharedslot)
+				xenvif_idx_release(queue, pending_idx,
+						   XEN_NETIF_RSP_OKAY);
+		} else {
+			err = newerr;
+			if (net_ratelimit())
+				netdev_dbg(queue->vif->dev,
+					   "Grant copy of header failed! status: %d pending_idx: %u ref: %u\n",
+					   (*gopp_copy)->status,
+					   pending_idx,
+					   (*gopp_copy)->source.u.ref);
+			/* The first frag might still have this slot mapped */
+			if (i < copy_count(skb) - 1 || !sharedslot)
+				xenvif_idx_release(queue, pending_idx,
+						   XEN_NETIF_RSP_ERROR);
+		}
+		(*gopp_copy)++;
 	}
-	(*gopp_copy)++;
 
 check_frags:
 	for (i = 0; i < nr_frags; i++, gop_map++) {
@@ -524,14 +603,6 @@
 		if (err)
 			continue;
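The reworked xenvif_get_requests splits each packet in two: exactly data_len bytes are grant-copied into the skb head, possibly spanning several tx slots, while the remainder of a partially consumed slot plus all further slots are grant-mapped as frags. The slot-splitting arithmetic in isolation, as a standalone sketch:

    /* The head-copy split from xenvif_get_requests: consume slots
     * until data_len bytes are covered; a slot consumed only in part
     * is shrunk and left for the mapping phase. Types simplified.
     */
    struct slot { unsigned int offset, size; };
    
    static unsigned int copy_head(struct slot *txp, unsigned int nslots,
    			      unsigned int data_len)
    {
    	unsigned int used = 0;
    
    	while (data_len > 0 && used < nslots) {
    		unsigned int amount =
    			data_len > txp[used].size ? txp[used].size : data_len;
    
    		/* a grant-copy op for 'amount' bytes would be emitted here */
    		data_len -= amount;
    		if (amount == txp[used].size) {
    			used++;				/* slot fully copied */
    		} else {
    			txp[used].offset += amount;	/* remainder mapped */
    			txp[used].size -= amount;
    		}
    	}
    	return used;	/* slots fully consumed by the copy phase */
    }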
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/xen-netback/rx.c
Changed
@@ -82,9 +82,10 @@
 	return false;
 }
 
-void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
+bool xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	unsigned long flags;
+	bool ret = true;
 
 	spin_lock_irqsave(&queue->rx_queue.lock, flags);
 
@@ -92,8 +93,7 @@
 		struct net_device *dev = queue->vif->dev;
 
 		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
-		kfree_skb(skb);
-		queue->vif->dev->stats.rx_dropped++;
+		ret = false;
 	} else {
 		if (skb_queue_empty(&queue->rx_queue))
 			xenvif_update_needed_slots(queue, skb);
@@ -104,6 +104,8 @@
 	}
 
 	spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
+
+	return ret;
 }
 
 static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/nvdimm/virtio_pmem.c
Changed
@@ -105,7 +105,7 @@
 	nvdimm_bus_unregister(nvdimm_bus);
 	vdev->config->del_vqs(vdev);
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 }
 
 static struct virtio_driver virtio_pmem_driver = {
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/perf/hisilicon/hisi_uncore_pmu.c
Changed
@@ -437,7 +437,6 @@
 	if (mt) {
 		switch (read_cpuid_part_number()) {
 		case HISI_CPU_PART_TSV110:
-		case HISI_CPU_PART_TSV200:
 		case ARM_CPU_PART_CORTEX_A55:
 			sccl = aff2 >> 3;
 			ccl = aff2 & 0x7;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/Kconfig
Changed
@@ -16,6 +16,8 @@
 
 if X86_PLATFORM_DEVICES
 
+source "drivers/platform/x86/intel/ifs/Kconfig"
+
 config ACPI_WMI
 	tristate "WMI"
 	depends on ACPI
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/Makefile
Changed
@@ -83,6 +83,7 @@
 obj-$(CONFIG_INTEL_MENLOW)	+= intel_menlow.o
 obj-$(CONFIG_INTEL_OAKTRAIL)	+= intel_oaktrail.o
 obj-$(CONFIG_INTEL_VBTN)	+= intel-vbtn.o
+obj-$(CONFIG_INTEL_IFS)		+= intel/ifs/
 
 # Microsoft
 obj-$(CONFIG_SURFACE3_WMI)	+= surface3-wmi.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel
Added
+(directory)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs
Added
+(directory)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/Kconfig
Added
@@ -0,0 +1,13 @@
+config INTEL_IFS
+	tristate "Intel In Field Scan"
+	depends on X86 && CPU_SUP_INTEL && 64BIT && SMP
+	select INTEL_IFS_DEVICE
+	help
+	  Enable support for the In Field Scan capability in select
+	  CPUs. The capability allows for running low level tests via
+	  a scan image distributed by Intel via Github to validate CPU
+	  operation beyond baseline RAS capabilities. To compile this
+	  support as a module, choose M here. The module will be called
+	  intel_ifs.
+
+	  If unsure, say N.
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/Makefile
Added
@@ -0,0 +1,3 @@
+obj-$(CONFIG_INTEL_IFS)	+= intel_ifs.o
+
+intel_ifs-objs		:= core.o load.o runtest.o sysfs.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/core.c
Added
@@ -0,0 +1,73 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2022 Intel Corporation. */
+
+#include <linux/module.h>
+#include <linux/kdev_t.h>
+#include <linux/semaphore.h>
+
+#include <asm/cpu_device_id.h>
+
+#include "ifs.h"
+
+#define X86_MATCH(model)				\
+	X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6,	\
+		INTEL_FAM6_##model, X86_FEATURE_CORE_CAPABILITIES, NULL)
+
+static const struct x86_cpu_id ifs_cpu_ids[] __initconst = {
+	X86_MATCH(SAPPHIRERAPIDS_X),
+	{}
+};
+MODULE_DEVICE_TABLE(x86cpu, ifs_cpu_ids);
+
+static struct ifs_device ifs_device = {
+	.data = {
+		.integrity_cap_bit = MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT,
+	},
+	.misc = {
+		.name = "intel_ifs_0",
+		.nodename = "intel_ifs/0",
+		.minor = MISC_DYNAMIC_MINOR,
+	},
+};
+
+static int __init ifs_init(void)
+{
+	const struct x86_cpu_id *m;
+	u64 msrval;
+
+	m = x86_match_cpu(ifs_cpu_ids);
+	if (!m)
+		return -ENODEV;
+
+	if (rdmsrl_safe(MSR_IA32_CORE_CAPS, &msrval))
+		return -ENODEV;
+
+	if (!(msrval & MSR_IA32_CORE_CAPS_INTEGRITY_CAPS))
+		return -ENODEV;
+
+	if (rdmsrl_safe(MSR_INTEGRITY_CAPS, &msrval))
+		return -ENODEV;
+
+	ifs_device.misc.groups = ifs_get_groups();
+
+	if ((msrval & BIT(ifs_device.data.integrity_cap_bit)) &&
+	    !misc_register(&ifs_device.misc)) {
+		down(&ifs_sem);
+		ifs_load_firmware(ifs_device.misc.this_device);
+		up(&ifs_sem);
+		return 0;
+	}
+
+	return -ENODEV;
+}
+
+static void __exit ifs_exit(void)
+{
+	misc_deregister(&ifs_device.misc);
+}
+
+module_init(ifs_init);
+module_exit(ifs_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Intel In Field Scan (IFS) device");
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/ifs.h
Added
@@ -0,0 +1,234 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2022 Intel Corporation. */
+
+#ifndef _IFS_H_
+#define _IFS_H_
+
+/**
+ * DOC: In-Field Scan
+ *
+ * =============
+ * In-Field Scan
+ * =============
+ *
+ * Introduction
+ * ------------
+ *
+ * In Field Scan (IFS) is a hardware feature to run circuit level tests on
+ * a CPU core to detect problems that are not caught by parity or ECC checks.
+ * Future CPUs will support more than one type of test which will show up
+ * with a new platform-device instance-id, for now only .0 is exposed.
+ *
+ *
+ * IFS Image
+ * ---------
+ *
+ * Intel provides a firmware file containing the scan tests via
+ * github [#f1]_. Similar to microcode there is a separate file for each
+ * family-model-stepping.
+ *
+ * IFS Image Loading
+ * -----------------
+ *
+ * The driver loads the tests into BIOS-reserved memory local to each CPU
+ * socket in a two step process, using writes to MSRs to first load the
+ * SHA hashes for the tests, then the tests themselves. Status MSRs provide
+ * feedback on the success/failure of these steps. When a new test file
+ * is installed it can be loaded by writing to the driver reload file::
+ *
+ *   # echo 1 > /sys/devices/virtual/misc/intel_ifs_0/reload
+ *
+ * Similar to microcode, the current version of the scan tests is stored
+ * in a fixed location: /lib/firmware/intel/ifs.0/family-model-stepping.scan
+ *
+ * Running tests
+ * -------------
+ *
+ * Tests are run by the driver synchronizing execution of all threads on a
+ * core and then writing to the ACTIVATE_SCAN MSR on all threads. Instruction
+ * execution continues when:
+ *
+ * 1) All tests have completed.
+ * 2) Execution was interrupted.
+ * 3) A test detected a problem.
+ *
+ * Note that ALL THREADS ON THE CORE ARE EFFECTIVELY OFFLINE FOR THE
+ * DURATION OF THE TEST. This can be up to 200 milliseconds. If the system
+ * is running latency sensitive applications that cannot tolerate an
+ * interruption of this magnitude, the system administrator must arrange
+ * to migrate those applications to other cores before running a core test.
+ * It may also be necessary to redirect interrupts to other CPUs.
+ *
+ * In all cases reading the SCAN_STATUS MSR provides details on what
+ * happened. The driver makes the value of this MSR visible to applications
+ * via the "details" file (see below). Interrupted tests may be restarted.
+ *
+ * The IFS driver provides sysfs interfaces via /sys/devices/virtual/misc/intel_ifs_0/
+ * to control execution:
+ *
+ * Test a specific core::
+ *
+ *   # echo <cpu#> > /sys/devices/virtual/misc/intel_ifs_0/run_test
+ *
+ * when HT is enabled any of the sibling cpu# can be specified to test
+ * its corresponding physical core. Since the tests are per physical core,
+ * the result of testing any thread is the same. All siblings must be online
+ * to run a core test. It is only necessary to test one thread.
+ *
+ * For example, to test the core corresponding to cpu5
+ *
+ *   # echo 5 > /sys/devices/virtual/misc/intel_ifs_0/run_test
+ *
+ * Results of the last test are provided in /sys::
+ *
+ *   $ cat /sys/devices/virtual/misc/intel_ifs_0/status
+ *   pass
+ *
+ * Status can be one of pass, fail, untested
+ *
+ * Additional details of the last test are provided by the details file::
+ *
+ *   $ cat /sys/devices/virtual/misc/intel_ifs_0/details
+ *   0x8081
+ *
+ * The details file reports the hex value of the SCAN_STATUS MSR.
+ * Hardware defined error codes are documented in volume 4 of the Intel
+ * Software Developer's Manual but the error_code field may contain one of
+ * the following driver defined software codes:
+ *
+ * +------+--------------------+
+ * | 0xFD | Software timeout   |
+ * +------+--------------------+
+ * | 0xFE | Partial completion |
+ * +------+--------------------+
+ *
+ * Driver design choices
+ * ---------------------
+ *
+ * 1) The ACTIVATE_SCAN MSR allows for running any consecutive subrange of
+ * available tests. But the driver always tries to run all tests and only
+ * uses the subrange feature to restart an interrupted test.
+ *
+ * 2) Hardware allows for some number of cores to be tested in parallel.
+ * The driver does not make use of this, it only tests one core at a time.
+ *
+ * .. [#f1] https://github.com/intel/TBD
+ */
+#include <linux/device.h>
+#include <linux/miscdevice.h>
+
+#define MSR_COPY_SCAN_HASHES			0x000002c2
+#define MSR_SCAN_HASHES_STATUS			0x000002c3
+#define MSR_AUTHENTICATE_AND_COPY_CHUNK		0x000002c4
+#define MSR_CHUNKS_AUTHENTICATION_STATUS	0x000002c5
+#define MSR_ACTIVATE_SCAN			0x000002c6
+#define MSR_SCAN_STATUS				0x000002c7
+#define SCAN_NOT_TESTED				0
+#define SCAN_TEST_PASS				1
+#define SCAN_TEST_FAIL				2
+
+/* MSR_SCAN_HASHES_STATUS bit fields */
+union ifs_scan_hashes_status {
+	u64	data;
+	struct {
+		u32	chunk_size	:16;
+		u32	num_chunks	:8;
+		u32	rsvd1		:8;
+		u32	error_code	:8;
+		u32	rsvd2		:11;
+		u32	max_core_limit	:12;
+		u32	valid		:1;
+	};
+};
+
+/* MSR_CHUNKS_AUTH_STATUS bit fields */
+union ifs_chunks_auth_status {
+	u64	data;
+	struct {
+		u32	valid_chunks	:8;
+		u32	total_chunks	:8;
+		u32	rsvd1		:16;
+		u32	error_code	:8;
+		u32	rsvd2		:24;
+	};
+};
+
+/* MSR_ACTIVATE_SCAN bit fields */
+union ifs_scan {
+	u64	data;
+	struct {
+		u32	start	:8;
+		u32	stop	:8;
+		u32	rsvd	:16;
+		u32	delay	:31;
+		u32	sigmce	:1;
+	};
+};
+
+/* MSR_SCAN_STATUS bit fields */
+union ifs_status {
+	u64	data;
+	struct {
+		u32	chunk_num		:8;
+		u32	chunk_stop_index	:8;
+		u32	rsvd1			:16;
+		u32	error_code		:8;
+		u32	rsvd2			:22;
+		u32	control_error		:1;
+		u32	signature_error		:1;
+	};
+};
+
+/*
+ * Driver populated error-codes
+ * 0xFD: Test timed out before completing all the chunks.
+ * 0xFE: not all scan chunks were executed. Maximum forward progress retries exceeded.
+ */
+#define IFS_SW_TIMEOUT			0xFD
+#define IFS_SW_PARTIAL_COMPLETION	0xFE
+
+/**
+ * struct ifs_data - attributes related to intel IFS driver
+ * @integrity_cap_bit: MSR_INTEGRITY_CAPS bit enumerating this test
+ * @loaded_version: stores the currently loaded ifs image version.
+ * @loaded: If a valid test binary has been loaded into the memory
+ * @loading_error: Error occurred on another CPU while loading image
+ * @valid_chunks: number of chunks which could be validated.
+ * @status: it holds simple status pass/fail/untested
+ * @scan_details: opaque scan status code from h/w
+ */
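The unions above give named access to the MSR bitfields, so decoding a raw SCAN_STATUS value is a single assignment. A minimal sketch, using only the union defined in this header (0x8081 echoes the sample value from the DOC block):

    /* Decode a raw SCAN_STATUS value via union ifs_status. */
    static void decode_scan_status(u64 raw)
    {
    	union ifs_status status;
    
    	status.data = raw;	/* e.g. 0x8081 from the details file */
    	pr_info("chunk %u, stop idx %u, error 0x%x, ctl_err=%u sig_err=%u\n",
    		status.chunk_num, status.chunk_stop_index,
    		status.error_code, status.control_error,
    		status.signature_error);
    }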
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/load.c
Added
@@ -0,0 +1,267 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2022 Intel Corporation. */
+
+#include <linux/firmware.h>
+#include <asm/cpu.h>
+#include <linux/slab.h>
+#include <asm/microcode_intel.h>
+
+#include "ifs.h"
+
+struct ifs_header {
+	u32 header_ver;
+	u32 blob_revision;
+	u32 date;
+	u32 processor_sig;
+	u32 check_sum;
+	u32 loader_rev;
+	u32 processor_flags;
+	u32 metadata_size;
+	u32 total_size;
+	u32 fusa_info;
+	u64 reserved;
+};
+
+#define IFS_HEADER_SIZE	(sizeof(struct ifs_header))
+static struct ifs_header *ifs_header_ptr;	/* pointer to the ifs image header */
+static u64 ifs_hash_ptr;			/* Address of ifs metadata (hash) */
+static u64 ifs_test_image_ptr;			/* 256B aligned address of test pattern */
+static DECLARE_COMPLETION(ifs_done);
+
+static const char * const scan_hash_status[] = {
+	[0] = "No error reported",
+	[1] = "Attempt to copy scan hashes when copy already in progress",
+	[2] = "Secure Memory not set up correctly",
+	[3] = "FuSaInfo.ProgramID does not match or ff-mm-ss does not match",
+	[4] = "Reserved",
+	[5] = "Integrity check failed",
+	[6] = "Scan reload or test is in progress"
+};
+
+static const char * const scan_authentication_status[] = {
+	[0] = "No error reported",
+	[1] = "Attempt to authenticate a chunk which is already marked as authentic",
+	[2] = "Chunk authentication error. The hash of chunk did not match expected value"
+};
+
+/*
+ * To copy scan hashes and authenticate test chunks, the initiating cpu must
+ * point EDX:EAX to the test image in linear address space. Run
+ * wrmsr(MSR_COPY_SCAN_HASHES) for the scan hash copy and
+ * wrmsr(MSR_AUTHENTICATE_AND_COPY_CHUNK) for test chunk authentication.
+ */
+static void copy_hashes_authenticate_chunks(struct work_struct *work)
+{
+	struct ifs_work *local_work = container_of(work, struct ifs_work, w);
+	union ifs_scan_hashes_status hashes_status;
+	union ifs_chunks_auth_status chunk_status;
+	struct device *dev = local_work->dev;
+	int i, num_chunks, chunk_size;
+	struct ifs_data *ifsd;
+	u64 linear_addr, base;
+	u32 err_code;
+
+	ifsd = ifs_get_data(dev);
+	/* run scan hash copy */
+	wrmsrl(MSR_COPY_SCAN_HASHES, ifs_hash_ptr);
+	rdmsrl(MSR_SCAN_HASHES_STATUS, hashes_status.data);
+
+	/* enumerate the scan image information */
+	num_chunks = hashes_status.num_chunks;
+	chunk_size = hashes_status.chunk_size * 1024;
+	err_code = hashes_status.error_code;
+
+	if (!hashes_status.valid) {
+		ifsd->loading_error = true;
+		if (err_code >= ARRAY_SIZE(scan_hash_status)) {
+			dev_err(dev, "invalid error code 0x%x for hash copy\n", err_code);
+			goto done;
+		}
+		dev_err(dev, "Hash copy error : %s", scan_hash_status[err_code]);
+		goto done;
+	}
+
+	/* base linear address to the scan data */
+	base = ifs_test_image_ptr;
+
+	/* scan data authentication and copy chunks to secured memory */
+	for (i = 0; i < num_chunks; i++) {
+		linear_addr = base + i * chunk_size;
+		linear_addr |= i;
+
+		wrmsrl(MSR_AUTHENTICATE_AND_COPY_CHUNK, linear_addr);
+		rdmsrl(MSR_CHUNKS_AUTHENTICATION_STATUS, chunk_status.data);
+
+		ifsd->valid_chunks = chunk_status.valid_chunks;
+		err_code = chunk_status.error_code;
+
+		if (err_code) {
+			ifsd->loading_error = true;
+			if (err_code >= ARRAY_SIZE(scan_authentication_status)) {
+				dev_err(dev,
+					"invalid error code 0x%x for authentication\n", err_code);
+				goto done;
+			}
+			dev_err(dev, "Chunk authentication error %s\n",
+				scan_authentication_status[err_code]);
+			goto done;
+		}
+	}
+done:
+	complete(&ifs_done);
+}
+
+/*
+ * IFS requires scan chunks to be authenticated on each socket in the
+ * platform. Once a test chunk is authenticated, it is automatically
+ * copied to secured memory and authentication proceeds with the next
+ * chunk.
+ */
+static int scan_chunks_sanity_check(struct device *dev)
+{
+	int metadata_size, curr_pkg, cpu, ret = -ENOMEM;
+	struct ifs_data *ifsd = ifs_get_data(dev);
+	bool *package_authenticated;
+	struct ifs_work local_work;
+	char *test_ptr;
+
+	package_authenticated = kcalloc(topology_max_packages(), sizeof(bool), GFP_KERNEL);
+	if (!package_authenticated)
+		return ret;
+
+	metadata_size = ifs_header_ptr->metadata_size;
+
+	/* Spec says that if the Meta Data Size = 0 then it should be treated as 2000 */
+	if (metadata_size == 0)
+		metadata_size = 2000;
+
+	/* Scan chunk start must be 256 byte aligned */
+	if ((metadata_size + IFS_HEADER_SIZE) % 256) {
+		dev_err(dev, "Scan pattern offset within the binary is not 256 byte aligned\n");
+		return -EINVAL;
+	}
+
+	test_ptr = (char *)ifs_header_ptr + IFS_HEADER_SIZE + metadata_size;
+	ifsd->loading_error = false;
+
+	ifs_test_image_ptr = (u64)test_ptr;
+	ifsd->loaded_version = ifs_header_ptr->blob_revision;
+
+	/* copy the scan hash and authenticate per package */
+	cpus_read_lock();
+	for_each_online_cpu(cpu) {
+		curr_pkg = topology_physical_package_id(cpu);
+		if (package_authenticated[curr_pkg])
+			continue;
+		reinit_completion(&ifs_done);
+		local_work.dev = dev;
+		INIT_WORK(&local_work.w, copy_hashes_authenticate_chunks);
+		schedule_work_on(cpu, &local_work.w);
+		wait_for_completion(&ifs_done);
+		if (ifsd->loading_error)
+			goto out;
+		package_authenticated[curr_pkg] = 1;
+	}
+	ret = 0;
+out:
+	cpus_read_unlock();
+	kfree(package_authenticated);
+
+	return ret;
+}
+
+static int ifs_sanity_check(struct device *dev,
+			    const struct microcode_header_intel *mc_header)
+{
+	unsigned long total_size, data_size;
+	u32 sum, *mc;
+	int i;
+
+	total_size = get_totalsize(mc_header);
+	data_size = get_datasize(mc_header);
+
+	if ((data_size + MC_HEADER_SIZE > total_size) || (total_size % sizeof(u32))) {
+		dev_err(dev, "bad ifs data file size.\n");
+		return -EINVAL;
+	}
+
+	if (mc_header->ldrver != 1 || mc_header->hdrver != 1) {
+		dev_err(dev, "invalid/unknown ifs update format.\n");
+		return -EINVAL;
+	}
+
+	mc = (u32 *)mc_header;
+	sum = 0;
+	for (i = 0; i < total_size / sizeof(u32); i++)
+		sum += mc[i];
+
+	if (sum) {
+		dev_err(dev, "bad ifs data checksum, aborting.\n");
+		return -EINVAL;
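scan_chunks_sanity_check fans the MSR writes out so they run exactly once per package: walk the online CPUs, skip packages already authenticated, and otherwise schedule the work item on that CPU and wait for its completion. The pattern reduced to a skeleton (names other than the topology/workqueue helpers are illustrative, and ifs_done/ifs_work come from the file above):

    /* Once-per-package execution skeleton, as used above. */
    static int for_each_package(struct device *dev, work_func_t fn)
    {
    	struct ifs_work w = { .dev = dev };
    	bool *done;
    	int cpu, pkg;
    
    	done = kcalloc(topology_max_packages(), sizeof(*done), GFP_KERNEL);
    	if (!done)
    		return -ENOMEM;
    
    	cpus_read_lock();
    	for_each_online_cpu(cpu) {
    		pkg = topology_physical_package_id(cpu);
    		if (done[pkg])
    			continue;
    		reinit_completion(&ifs_done);
    		INIT_WORK(&w.w, fn);
    		schedule_work_on(cpu, &w.w);	/* run on this package's CPU */
    		wait_for_completion(&ifs_done);
    		done[pkg] = true;
    	}
    	cpus_read_unlock();
    	kfree(done);
    	return 0;
    }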
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/runtest.c
Added
@@ -0,0 +1,252 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2022 Intel Corporation. */
+
+#include <linux/cpu.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/nmi.h>
+#include <linux/slab.h>
+#include <linux/stop_machine.h>
+
+#include "ifs.h"
+
+/*
+ * Note all code and data in this file is protected by
+ * ifs_sem. On HT systems all threads on a core will
+ * execute together, but only the first thread on the
+ * core will update results of the test.
+ */
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/intel_ifs.h>
+
+/* Max retries on the same chunk */
+#define MAX_IFS_RETRIES	5
+
+/*
+ * Number of TSC cycles that a logical CPU will wait for the other
+ * logical CPU on the core in the WRMSR(ACTIVATE_SCAN).
+ */
+#define IFS_THREAD_WAIT 100000
+
+enum ifs_status_err_code {
+	IFS_NO_ERROR				= 0,
+	IFS_OTHER_THREAD_COULD_NOT_JOIN		= 1,
+	IFS_INTERRUPTED_BEFORE_RENDEZVOUS	= 2,
+	IFS_POWER_MGMT_INADEQUATE_FOR_SCAN	= 3,
+	IFS_INVALID_CHUNK_RANGE			= 4,
+	IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS	= 5,
+	IFS_CORE_NOT_CAPABLE_CURRENTLY		= 6,
+	IFS_UNASSIGNED_ERROR_CODE		= 7,
+	IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT	= 8,
+	IFS_INTERRUPTED_DURING_EXECUTION	= 9,
+};
+
+static const char * const scan_test_status[] = {
+	[IFS_NO_ERROR] = "SCAN no error",
+	[IFS_OTHER_THREAD_COULD_NOT_JOIN] = "Other thread could not join.",
+	[IFS_INTERRUPTED_BEFORE_RENDEZVOUS] = "Interrupt occurred prior to SCAN coordination.",
+	[IFS_POWER_MGMT_INADEQUATE_FOR_SCAN] =
+	"Core Abort SCAN Response due to power management condition.",
+	[IFS_INVALID_CHUNK_RANGE] = "Non valid chunks in the range",
+	[IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS] = "Mismatch in arguments between threads T0/T1.",
+	[IFS_CORE_NOT_CAPABLE_CURRENTLY] = "Core not capable of performing SCAN currently",
+	[IFS_UNASSIGNED_ERROR_CODE] = "Unassigned error code 0x7",
+	[IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT] =
+	"Exceeded number of Logical Processors (LP) allowed to run Scan-At-Field concurrently",
+	[IFS_INTERRUPTED_DURING_EXECUTION] = "Interrupt occurred prior to SCAN start",
+};
+
+static void message_not_tested(struct device *dev, int cpu, union ifs_status status)
+{
+	if (status.error_code < ARRAY_SIZE(scan_test_status)) {
+		dev_info(dev, "CPU(s) %*pbl: SCAN operation did not start. %s\n",
+			 cpumask_pr_args(cpu_smt_mask(cpu)),
+			 scan_test_status[status.error_code]);
+	} else if (status.error_code == IFS_SW_TIMEOUT) {
+		dev_info(dev, "CPU(s) %*pbl: software timeout during scan\n",
+			 cpumask_pr_args(cpu_smt_mask(cpu)));
+	} else if (status.error_code == IFS_SW_PARTIAL_COMPLETION) {
+		dev_info(dev, "CPU(s) %*pbl: %s\n",
+			 cpumask_pr_args(cpu_smt_mask(cpu)),
+			 "Not all scan chunks were executed. Maximum forward progress retries exceeded");
+	} else {
+		dev_info(dev, "CPU(s) %*pbl: SCAN unknown status %llx\n",
+			 cpumask_pr_args(cpu_smt_mask(cpu)), status.data);
+	}
+}
+
+static void message_fail(struct device *dev, int cpu, union ifs_status status)
+{
+	/*
+	 * control_error is set when the microcode runs into a problem
+	 * loading the image from the reserved BIOS memory, or it has
+	 * been corrupted. Reloading the image may fix this issue.
+	 */
+	if (status.control_error) {
+		dev_err(dev, "CPU(s) %*pbl: could not execute from loaded scan image\n",
+			cpumask_pr_args(cpu_smt_mask(cpu)));
+	}
+
+	/*
+	 * signature_error is set when the output from the scan chains does not
+	 * match the expected signature. This might be a transient problem (e.g.
+	 * due to a bit flip from an alpha particle or neutron). If the problem
+	 * repeats on a subsequent test, then it indicates an actual problem in
+	 * the core being tested.
+	 */
+	if (status.signature_error) {
+		dev_err(dev, "CPU(s) %*pbl: test signature incorrect.\n",
+			cpumask_pr_args(cpu_smt_mask(cpu)));
+	}
+}
+
+static bool can_restart(union ifs_status status)
+{
+	enum ifs_status_err_code err_code = status.error_code;
+
+	/* Signature for chunk is bad, or scan test failed */
+	if (status.signature_error || status.control_error)
+		return false;
+
+	switch (err_code) {
+	case IFS_NO_ERROR:
+	case IFS_OTHER_THREAD_COULD_NOT_JOIN:
+	case IFS_INTERRUPTED_BEFORE_RENDEZVOUS:
+	case IFS_POWER_MGMT_INADEQUATE_FOR_SCAN:
+	case IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT:
+	case IFS_INTERRUPTED_DURING_EXECUTION:
+		return true;
+	case IFS_INVALID_CHUNK_RANGE:
+	case IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS:
+	case IFS_CORE_NOT_CAPABLE_CURRENTLY:
+	case IFS_UNASSIGNED_ERROR_CODE:
+		break;
+	}
+	return false;
+}
+
+/*
+ * Execute the scan. Called "simultaneously" on all threads of a core
+ * at high priority using the stop_cpus mechanism.
+ */
+static int doscan(void *data)
+{
+	int cpu = smp_processor_id();
+	u64 *msrs = data;
+	int first;
+
+	/* Only the first logical CPU on a core reports result */
+	first = cpumask_first(cpu_smt_mask(cpu));
+
+	/*
+	 * This WRMSR will wait for other HT threads to also write
+	 * to this MSR (at most for activate.delay cycles). Then it
+	 * starts scan of each requested chunk. The core scan happens
+	 * during the "execution" of the WRMSR. This instruction can
+	 * take up to 200 milliseconds (in the case where all chunks
+	 * are processed in a single pass) before it retires.
+	 */
+	wrmsrl(MSR_ACTIVATE_SCAN, msrs[0]);
+
+	if (cpu == first) {
+		/* Pass back the result of the scan */
+		rdmsrl(MSR_SCAN_STATUS, msrs[1]);
+	}
+
+	return 0;
+}
+
+/*
+ * Use stop_core_cpuslocked() to synchronize writing to MSR_ACTIVATE_SCAN
+ * on all threads of the core to be tested. Loop if necessary to complete
+ * run of all chunks. Include some defensive tests to make sure forward
+ * progress is made, and that the whole test completes in a reasonable time.
+ */
+static void ifs_test_core(int cpu, struct device *dev)
+{
+	union ifs_scan activate;
+	union ifs_status status;
+	unsigned long timeout;
+	struct ifs_data *ifsd;
+	u64 msrvals[2];
+	int retries;
+
+	ifsd = ifs_get_data(dev);
+
+	activate.rsvd = 0;
+	activate.delay = IFS_THREAD_WAIT;
+	activate.sigmce = 0;
+	activate.start = 0;
+	activate.stop = ifsd->valid_chunks - 1;
+
+	timeout = jiffies + HZ / 2;
+	retries = MAX_IFS_RETRIES;
+
+	while (activate.start <= activate.stop) {
+		if (time_after(jiffies, timeout)) {
+			status.error_code = IFS_SW_TIMEOUT;
+			break;
+		}
+
+		msrvals[0] = activate.data;
+		stop_core_cpuslocked(cpu, doscan, msrvals);
+
+		status.data = msrvals[1];
+
+		trace_ifs_status(cpu, activate, status);
+
+		/* Some cases can be retried, give up for others */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/sysfs.c
Added
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2022 Intel Corporation. */
+
+#include <linux/cpu.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/semaphore.h>
+#include <linux/slab.h>
+
+#include "ifs.h"
+
+/*
+ * Protects against simultaneous tests on multiple cores, or
+ * reloading the scan file while a test is in progress
+ */
+DEFINE_SEMAPHORE(ifs_sem);
+
+/*
+ * The sysfs interface to check additional details of last test
+ * cat /sys/devices/system/platform/ifs/details
+ */
+static ssize_t details_show(struct device *dev,
+			    struct device_attribute *attr,
+			    char *buf)
+{
+	struct ifs_data *ifsd = ifs_get_data(dev);
+
+	return sysfs_emit(buf, "%#llx\n", ifsd->scan_details);
+}
+
+static DEVICE_ATTR_RO(details);
+
+static const char * const status_msg[] = {
+	[SCAN_NOT_TESTED] = "untested",
+	[SCAN_TEST_PASS] = "pass",
+	[SCAN_TEST_FAIL] = "fail"
+};
+
+/*
+ * The sysfs interface to check the test status:
+ * To check the status of last test
+ * cat /sys/devices/platform/ifs/status
+ */
+static ssize_t status_show(struct device *dev,
+			   struct device_attribute *attr,
+			   char *buf)
+{
+	struct ifs_data *ifsd = ifs_get_data(dev);
+
+	return sysfs_emit(buf, "%s\n", status_msg[ifsd->status]);
+}
+
+static DEVICE_ATTR_RO(status);
+
+/*
+ * The sysfs interface for single core testing
+ * To start test, for example, cpu5
+ * echo 5 > /sys/devices/platform/ifs/run_test
+ * To check the result:
+ * cat /sys/devices/platform/ifs/result
+ * The sibling core gets tested at the same time.
+ */
+static ssize_t run_test_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t count)
+{
+	struct ifs_data *ifsd = ifs_get_data(dev);
+	unsigned int cpu;
+	int rc;
+
+	rc = kstrtouint(buf, 0, &cpu);
+	if (rc < 0 || cpu >= nr_cpu_ids)
+		return -EINVAL;
+
+	if (down_interruptible(&ifs_sem))
+		return -EINTR;
+
+	if (!ifsd->loaded)
+		rc = -EPERM;
+	else
+		rc = do_core_test(cpu, dev);
+
+	up(&ifs_sem);
+
+	return rc ? rc : count;
+}
+
+static DEVICE_ATTR_WO(run_test);
+
+/*
+ * Reload the IFS image. When user wants to install new IFS image
+ */
+static ssize_t reload_store(struct device *dev,
+			    struct device_attribute *attr,
+			    const char *buf, size_t count)
+{
+	struct ifs_data *ifsd = ifs_get_data(dev);
+	bool res;
+
+	if (kstrtobool(buf, &res))
+		return -EINVAL;
+	if (!res)
+		return count;
+
+	if (down_interruptible(&ifs_sem))
+		return -EINTR;
+
+	ifs_load_firmware(dev);
+
+	up(&ifs_sem);
+
+	return ifsd->loaded ? count : -ENODEV;
+}
+
+static DEVICE_ATTR_WO(reload);
+
+/*
+ * Display currently loaded IFS image version.
+ */
+static ssize_t image_version_show(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	struct ifs_data *ifsd = ifs_get_data(dev);
+
+	if (!ifsd->loaded)
+		return sysfs_emit(buf, "%s\n", "none");
+	else
+		return sysfs_emit(buf, "%#x\n", ifsd->loaded_version);
+}
+
+static DEVICE_ATTR_RO(image_version);
+
+/* global scan sysfs attributes */
+static struct attribute *plat_ifs_attrs[] = {
+	&dev_attr_details.attr,
+	&dev_attr_status.attr,
+	&dev_attr_run_test.attr,
+	&dev_attr_reload.attr,
+	&dev_attr_image_version.attr,
+	NULL
+};
+
+ATTRIBUTE_GROUPS(plat_ifs);
+
+const struct attribute_group **ifs_get_groups(void)
+{
+	return plat_ifs_groups;
+}
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/rpmsg/virtio_rpmsg_bus.c
Changed
@@ -1012,7 +1012,7 @@
 	size_t total_buf_space = vrp->num_bufs * vrp->buf_size;
 	int ret;
 
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 
 	ret = device_for_each_child(&vdev->dev, NULL, rpmsg_remove_device);
 	if (ret)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/be2iscsi/be_main.c
Changed
@@ -5801,21 +5801,15 @@
 	.resume = beiscsi_eeh_resume,
 };
 
-struct iscsi_transport_expand beiscsi_iscsi_expand = {
-	.unbind_conn = iscsi_conn_unbind,
-};
-
 struct iscsi_transport beiscsi_iscsi_transport = {
 	.owner = THIS_MODULE,
 	.name = DRV_NAME,
 	.caps = CAP_RECOVERY_L0 | CAP_HDRDGST | CAP_TEXT_NEGO |
-		CAP_MULTI_R2T | CAP_DATADGST | CAP_DATA_PATH_OFFLOAD |
-		CAP_OPS_EXPAND,
+		CAP_MULTI_R2T | CAP_DATADGST | CAP_DATA_PATH_OFFLOAD,
 	.create_session = beiscsi_session_create,
 	.destroy_session = beiscsi_session_destroy,
 	.create_conn = beiscsi_conn_create,
 	.bind_conn = beiscsi_conn_bind,
-	.ops_expand = &beiscsi_iscsi_expand,
 	.destroy_conn = iscsi_conn_teardown,
 	.attr_is_visible = beiscsi_attr_is_visible,
 	.set_iface_param = beiscsi_iface_set_param,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/bnx2i/bnx2i_iscsi.c
Changed
@@ -2274,23 +2274,17 @@
 	.track_queue_depth = 1,
 };
 
-
-static struct iscsi_transport_expand bnx2i_iscsi_expand = {
-	.unbind_conn = iscsi_conn_unbind,
-};
-
 struct iscsi_transport bnx2i_iscsi_transport = {
 	.owner = THIS_MODULE,
 	.name = "bnx2i",
 	.caps = CAP_RECOVERY_L0 | CAP_HDRDGST |
 		CAP_MULTI_R2T | CAP_DATADGST |
 		CAP_DATA_PATH_OFFLOAD |
-		CAP_TEXT_NEGO | CAP_OPS_EXPAND,
+		CAP_TEXT_NEGO,
 	.create_session = bnx2i_session_create,
 	.destroy_session = bnx2i_session_destroy,
 	.create_conn = bnx2i_conn_create,
 	.bind_conn = bnx2i_conn_bind,
-	.ops_expand = &bnx2i_iscsi_expand,
 	.destroy_conn = bnx2i_conn_destroy,
 	.attr_is_visible = bnx2i_attr_is_visible,
 	.set_param = iscsi_set_param,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
Changed
@@ -100,18 +100,13 @@
 	.track_queue_depth = 1,
 };
 
-static struct iscsi_transport_expand cxgb3i_iscsi_expand = {
-	.unbind_conn = iscsi_conn_unbind,
-};
-
 static struct iscsi_transport cxgb3i_iscsi_transport = {
 	.owner = THIS_MODULE,
 	.name = DRV_MODULE_NAME,
 	/* owner and name should be set already */
 	.caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_HDRDGST |
 		CAP_DATADGST | CAP_DIGEST_OFFLOAD |
-		CAP_PADDING_OFFLOAD | CAP_TEXT_NEGO |
-		CAP_OPS_EXPAND,
+		CAP_PADDING_OFFLOAD | CAP_TEXT_NEGO,
 	.attr_is_visible = cxgbi_attr_is_visible,
 	.get_host_param = cxgbi_get_host_param,
 	.set_host_param = cxgbi_set_host_param,
@@ -122,7 +117,6 @@
 	/* connection management */
 	.create_conn = cxgbi_create_conn,
 	.bind_conn = cxgbi_bind_conn,
-	.ops_expand = &cxgb3i_iscsi_expand,
 	.destroy_conn = iscsi_tcp_conn_teardown,
 	.start_conn = iscsi_conn_start,
 	.stop_conn = iscsi_conn_stop,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
Changed
@@ -118,17 +118,12 @@
 	.track_queue_depth = 1,
 };
 
-static struct iscsi_transport_expand cxgb4i_iscsi_expand = {
-	.unbind_conn = iscsi_conn_unbind,
-};
-
 static struct iscsi_transport cxgb4i_iscsi_transport = {
 	.owner = THIS_MODULE,
 	.name = DRV_MODULE_NAME,
 	.caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_HDRDGST |
 		CAP_DATADGST | CAP_DIGEST_OFFLOAD |
-		CAP_PADDING_OFFLOAD | CAP_TEXT_NEGO |
-		CAP_OPS_EXPAND,
+		CAP_PADDING_OFFLOAD | CAP_TEXT_NEGO,
 	.attr_is_visible = cxgbi_attr_is_visible,
 	.get_host_param = cxgbi_get_host_param,
 	.set_host_param = cxgbi_set_host_param,
@@ -139,7 +134,6 @@
 	/* connection management */
 	.create_conn = cxgbi_create_conn,
 	.bind_conn = cxgbi_bind_conn,
-	.ops_expand = &cxgb4i_iscsi_expand,
 	.destroy_conn = iscsi_tcp_conn_teardown,
 	.start_conn = iscsi_conn_start,
 	.stop_conn = iscsi_conn_stop,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/qedi/qedi_iscsi.c
Changed
@@ -1429,20 +1429,15 @@
 	cmd->scsi_cmd = NULL;
 }
 
-static struct iscsi_transport_expand qedi_iscsi_expand = {
-	.unbind_conn = iscsi_conn_unbind,
-};
-
 struct iscsi_transport qedi_iscsi_transport = {
 	.owner = THIS_MODULE,
 	.name = QEDI_MODULE_NAME,
 	.caps = CAP_RECOVERY_L0 | CAP_HDRDGST | CAP_MULTI_R2T | CAP_DATADGST |
-		CAP_DATA_PATH_OFFLOAD | CAP_TEXT_NEGO | CAP_OPS_EXPAND,
+		CAP_DATA_PATH_OFFLOAD | CAP_TEXT_NEGO,
 	.create_session = qedi_session_create,
 	.destroy_session = qedi_session_destroy,
 	.create_conn = qedi_conn_create,
 	.bind_conn = qedi_conn_bind,
-	.ops_expand = &qedi_iscsi_expand,
 	.start_conn = qedi_conn_start,
 	.stop_conn = iscsi_conn_stop,
 	.destroy_conn = qedi_conn_destroy,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/qla4xxx/ql4_os.c
Changed
@@ -246,24 +246,19 @@
 	.vendor_id = SCSI_NL_VID_TYPE_PCI | PCI_VENDOR_ID_QLOGIC,
 };
 
-static struct iscsi_transport_expand qla4xxx_iscsi_expand = {
-	.unbind_conn = iscsi_conn_unbind,
-};
-
 static struct iscsi_transport qla4xxx_iscsi_transport = {
 	.owner = THIS_MODULE,
 	.name = DRIVER_NAME,
 	.caps = CAP_TEXT_NEGO |
 		CAP_DATA_PATH_OFFLOAD | CAP_HDRDGST |
 		CAP_DATADGST | CAP_LOGIN_OFFLOAD |
-		CAP_MULTI_R2T | CAP_OPS_EXPAND,
+		CAP_MULTI_R2T,
 	.attr_is_visible = qla4_attr_is_visible,
 	.create_session = qla4xxx_session_create,
 	.destroy_session = qla4xxx_session_destroy,
 	.start_conn = qla4xxx_conn_start,
 	.create_conn = qla4xxx_conn_create,
 	.bind_conn = qla4xxx_conn_bind,
-	.ops_expand = &qla4xxx_iscsi_expand,
 	.stop_conn = iscsi_conn_stop,
 	.destroy_conn = qla4xxx_conn_destroy,
 	.set_param = iscsi_set_param,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/scsi_error.c
Changed
@@ -2359,7 +2359,7 @@
 		return -EIO;
 
 	error = -EIO;
-	rq = kzalloc(sizeof(struct request_wrapper) + sizeof(struct scsi_cmnd) +
+	rq = kzalloc(sizeof(struct request) + sizeof(struct scsi_cmnd) +
 			shost->hostt->cmd_size, GFP_KERNEL);
 	if (!rq)
 		goto out_put_autopm_host;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/scsi_transport_iscsi.c
Changed
@@ -2257,11 +2257,6 @@ ep = conn->ep; conn->ep = NULL; - if (session->transport->caps & CAP_OPS_EXPAND && - session->transport->ops_expand && - session->transport->ops_expand->unbind_conn) - session->transport->ops_expand->unbind_conn(conn, is_active); - session->transport->ep_disconnect(ep); ISCSI_DBG_TRANS_CONN(conn, "disconnect ep done.\n"); } @@ -3228,19 +3223,10 @@ struct Scsi_Host *shost; struct sockaddr *dst_addr; int err; - int (*tgt_dscvr)(struct Scsi_Host *shost, enum iscsi_tgt_dscvr type, - uint32_t enable, struct sockaddr *dst_addr); - if (transport->caps & CAP_OPS_EXPAND) { - if (!transport->ops_expand || !transport->ops_expand->tgt_dscvr) - return -EINVAL; - tgt_dscvr = transport->ops_expand->tgt_dscvr; - } else { - if (!transport->ops_expand) - return -EINVAL; - tgt_dscvr = (int (*)(struct Scsi_Host *, enum iscsi_tgt_dscvr, uint32_t, - struct sockaddr *))(transport->ops_expand); - } + if (!transport->tgt_dscvr) + return -EINVAL; + shost = scsi_host_lookup(ev->u.tgt_dscvr.host_no); if (!shost) { printk(KERN_ERR "target discovery could not find host no %u\n", @@ -3250,8 +3236,8 @@ dst_addr = (struct sockaddr *)((char*)ev + sizeof(*ev)); - err = tgt_dscvr(shost, ev->u.tgt_dscvr.type, - ev->u.tgt_dscvr.enable, dst_addr); + err = transport->tgt_dscvr(shost, ev->u.tgt_dscvr.type, + ev->u.tgt_dscvr.enable, dst_addr); scsi_host_put(shost); return err; } @@ -4904,10 +4890,7 @@ int err; BUG_ON(!tt); - if (tt->caps & CAP_OPS_EXPAND) { - BUG_ON(!tt->ops_expand); - WARN_ON(tt->ep_disconnect && !tt->ops_expand->unbind_conn); - } + priv = iscsi_if_transport_lookup(tt); if (priv) return NULL;
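This hunk, together with the transport-driver hunks above, backs out a compatibility indirection: callbacks that had been parked in struct iscsi_transport_expand and reached through the CAP_OPS_EXPAND capability plus an ops_expand pointer become plain members of struct iscsi_transport again. A hypothetical sketch of the two dispatch shapes; the names below are invented for illustration, the real structures are the iscsi_transport variants in the hunks above:

#include <stddef.h>

struct ops_expand {
	int (*tgt_dscvr)(int host_no);
};

struct transport {
	unsigned int caps;
#define CAP_OPS_EXPAND (1u << 0)
	struct ops_expand *ops_expand;	/* old scheme: side table */
	int (*tgt_dscvr)(int host_no);	/* new scheme: direct member */
};

static int dispatch_tgt_dscvr(const struct transport *t, int host_no)
{
	/* Old shape: capability test plus two pointer chases per call. */
	if ((t->caps & CAP_OPS_EXPAND) && t->ops_expand &&
	    t->ops_expand->tgt_dscvr)
		return t->ops_expand->tgt_dscvr(host_no);

	/* New shape after this revision: a single NULL check. */
	if (t->tgt_dscvr)
		return t->tgt_dscvr(host_no);

	return -1;	/* stands in for -EINVAL */
}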
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/virtio_scsi.c
Changed
@@ -780,7 +780,7 @@
 static void virtscsi_remove_vqs(struct virtio_device *vdev)
 {
 	/* Stop all the virtqueues. */
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	vdev->config->del_vqs(vdev);
 }
 
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/spi/spi-hisi-sfc-v3xx.c
Changed
@@ -5,13 +5,13 @@ // Copyright (c) 2019 HiSilicon Technologies Co., Ltd. // Author: John Garry <john.garry@huawei.com> -#include <linux/acpi.h> #include <linux/bitops.h> #include <linux/completion.h> #include <linux/dmi.h> #include <linux/interrupt.h> #include <linux/iopoll.h> #include <linux/module.h> +#include <linux/mod_devicetable.h> #include <linux/platform_device.h> #include <linux/slab.h> #include <linux/spi/spi.h> @@ -19,6 +19,8 @@ #define HISI_SFC_V3XX_VERSION (0x1f8) +#define HISI_SFC_V3XX_GLB_CFG (0x100) +#define HISI_SFC_V3XX_GLB_CFG_CS0_ADDR_MODE BIT(2) #define HISI_SFC_V3XX_RAW_INT_STAT (0x120) #define HISI_SFC_V3XX_INT_STAT (0x124) #define HISI_SFC_V3XX_INT_MASK (0x128) @@ -75,6 +77,7 @@ void __iomem *regbase; int max_cmd_dword; struct completion *completion; + u8 address_mode; int irq; }; @@ -168,10 +171,18 @@ static bool hisi_sfc_v3xx_supports_op(struct spi_mem *mem, const struct spi_mem_op *op) { + struct spi_device *spi = mem->spi; + struct hisi_sfc_v3xx_host *host; + + host = spi_controller_get_devdata(spi->master); + if (op->data.buswidth > 4 || op->dummy.buswidth > 4 || op->addr.buswidth > 4 || op->cmd.buswidth > 4) return false; + if (op->addr.nbytes != host->address_mode && op->addr.nbytes) + return false; + return spi_mem_default_supports_op(mem, op); } @@ -331,6 +342,7 @@ ret = 0; hisi_sfc_v3xx_disable_int(host); + synchronize_irq(host->irq); host->completion = NULL; } else { ret = hisi_sfc_v3xx_wait_cmd_idle(host); @@ -416,7 +428,7 @@ struct device *dev = &pdev->dev; struct hisi_sfc_v3xx_host *host; struct spi_controller *ctlr; - u32 version; + u32 version, glb_config; int ret; ctlr = spi_alloc_master(&pdev->dev, sizeof(*host)); @@ -463,16 +475,24 @@ ctlr->num_chipselect = 1; ctlr->mem_ops = &hisi_sfc_v3xx_mem_ops; + /* + * The address mode of the controller is either 3 or 4, + * which is indicated by the address mode bit in + * the global config register. The register is read only + * for the OS driver. + */ + glb_config = readl(host->regbase + HISI_SFC_V3XX_GLB_CFG); + if (glb_config & HISI_SFC_V3XX_GLB_CFG_CS0_ADDR_MODE) + host->address_mode = 4; + else + host->address_mode = 3; + version = readl(host->regbase + HISI_SFC_V3XX_VERSION); - switch (version) { - case 0x351: + if (version >= 0x351) host->max_cmd_dword = 64; - break; - default: + else host->max_cmd_dword = 16; - break; - } ret = devm_spi_register_controller(dev, ctlr); if (ret) @@ -488,18 +508,16 @@ return ret; } -#if IS_ENABLED(CONFIG_ACPI) static const struct acpi_device_id hisi_sfc_v3xx_acpi_ids[] = { {"HISI0341", 0}, {} }; MODULE_DEVICE_TABLE(acpi, hisi_sfc_v3xx_acpi_ids); -#endif static struct platform_driver hisi_sfc_v3xx_spi_driver = { .driver = { .name = "hisi-sfc-v3xx", - .acpi_match_table = ACPI_PTR(hisi_sfc_v3xx_acpi_ids), + .acpi_match_table = hisi_sfc_v3xx_acpi_ids, }, .probe = hisi_sfc_v3xx_probe, };
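With address_mode probed from the global config register, the new supports_op() check rejects any operation whose address width disagrees with the controller's fixed 3- or 4-byte mode. A sketch of what that means for a caller, using the standard spi-mem op macros; the 0x03 read opcode and the helper below are illustrative, not part of this driver:

#include <linux/errno.h>
#include <linux/spi/spi-mem.h>

/* Illustrative: a classic 3-byte-address read. On a controller probed
 * into 4-byte mode, hisi_sfc_v3xx_supports_op() above now fails this
 * op because op->addr.nbytes (3) != host->address_mode (4). */
static int example_read(struct spi_mem *mem, void *buf, size_t len)
{
	struct spi_mem_op op =
		SPI_MEM_OP(SPI_MEM_OP_CMD(0x03, 1),
			   SPI_MEM_OP_ADDR(3, 0x0, 1),
			   SPI_MEM_OP_NO_DUMMY,
			   SPI_MEM_OP_DATA_IN(len, buf, 1));

	if (!spi_mem_supports_op(mem, &op))
		return -EOPNOTSUPP;

	return spi_mem_exec_op(mem, &op);
}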
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/usb/core/hub.c
Changed
@@ -5967,6 +5967,11 @@
  * the reset is over (using their post_reset method).
  *
  * Return: The same as for usb_reset_and_verify_device().
+ * However, if a reset is already in progress (for instance, if a
+ * driver doesn't have pre_reset() or post_reset() callbacks, and while
+ * being unbound or re-bound during the ongoing reset its disconnect()
+ * or probe() routine tries to perform a second, nested reset), the
+ * routine returns -EINPROGRESS.
  *
  * Note:
  * The caller must own the device lock. For example, it's safe to use
@@ -6000,6 +6005,10 @@
 		return -EISDIR;
 	}
 
+	if (udev->reset_in_progress)
+		return -EINPROGRESS;
+	udev->reset_in_progress = 1;
+
 	port_dev = hub->ports[udev->portnum - 1];
 
 	/*
@@ -6064,6 +6073,7 @@
 	usb_autosuspend_device(udev);
 	memalloc_noio_restore(noio_flag);
 
+	udev->reset_in_progress = 0;
 	return ret;
 }
 EXPORT_SYMBOL_GPL(usb_reset_device);
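The new flag is a straightforward reentrancy guard: a nested usb_reset_device() call, for example from a disconnect() or probe() running inside the ongoing reset, sees the flag and returns -EINPROGRESS instead of recursing. A generic sketch of the pattern with hypothetical names; a bare int suffices here only because, as the hunk's comment says, callers must hold the device lock:

#include <errno.h>

struct dev {
	int reset_in_progress;
};

static int reset_device(struct dev *dev)
{
	if (dev->reset_in_progress)
		return -EINPROGRESS;	/* nested call: refuse, don't recurse */
	dev->reset_in_progress = 1;

	/*
	 * ... perform the reset; drivers may be unbound and re-bound
	 * here, and may call back into reset_device(), hitting the
	 * guard above instead of starting a second, nested reset ...
	 */

	dev->reset_in_progress = 0;
	return 0;
}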
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/usb/serial/option.c
Changed
@@ -97,6 +97,10 @@
 #define YISO_VENDOR_ID			0x0EAB
 #define YISO_PRODUCT_U893		0xC893
 
+/* MEIG PRODUCTS */
+#define MEIG_VENDOR_ID			0x2DEE
+#define MEIG_PRODUCT_SLM790		0x4D20
+
 /*
  * NOVATEL WIRELESS PRODUCTS
  *
@@ -593,6 +597,7 @@
 
 static const struct usb_device_id option_ids[] = {
+	{ USB_DEVICE(MEIG_VENDOR_ID, MEIG_PRODUCT_SLM790) },
 	{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) },
 	{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_RICOLA) },
 	{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_RICOLA_LIGHT) },
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/ifcvf/ifcvf_main.c
Changed
@@ -167,7 +167,7 @@ return &adapter->vf; } -static u64 ifcvf_vdpa_get_features(struct vdpa_device *vdpa_dev) +static u64 ifcvf_vdpa_get_device_features(struct vdpa_device *vdpa_dev) { struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev); u64 features; @@ -177,7 +177,7 @@ return features; } -static int ifcvf_vdpa_set_features(struct vdpa_device *vdpa_dev, u64 features) +static int ifcvf_vdpa_set_driver_features(struct vdpa_device *vdpa_dev, u64 features) { struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev); @@ -186,6 +186,13 @@ return 0; } +static u64 ifcvf_vdpa_get_driver_features(struct vdpa_device *vdpa_dev) +{ + struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev); + + return vf->req_features; +} + static u8 ifcvf_vdpa_get_status(struct vdpa_device *vdpa_dev) { struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev); @@ -391,8 +398,9 @@ * implemented set_map()/dma_map()/dma_unmap() */ static const struct vdpa_config_ops ifc_vdpa_ops = { - .get_features = ifcvf_vdpa_get_features, - .set_features = ifcvf_vdpa_set_features, + .get_device_features = ifcvf_vdpa_get_device_features, + .set_driver_features = ifcvf_vdpa_set_driver_features, + .get_driver_features = ifcvf_vdpa_get_driver_features, .get_status = ifcvf_vdpa_get_status, .set_status = ifcvf_vdpa_set_status, .reset = ifcvf_vdpa_reset, @@ -457,7 +465,7 @@ } adapter = vdpa_alloc_device(struct ifcvf_adapter, vdpa, - dev, &ifc_vdpa_ops, NULL); + dev, &ifc_vdpa_ops, 1, 1, NULL, false); if (adapter == NULL) { IFCVF_ERR(pdev, "Failed to allocate vDPA structure"); return -ENOMEM;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/mlx5/net/mlx5_vnet.c
Changed
@@ -1467,7 +1467,7 @@ return result; } -static u64 mlx5_vdpa_get_features(struct vdpa_device *vdev) +static u64 mlx5_vdpa_get_device_features(struct vdpa_device *vdev) { struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev); struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev); @@ -1550,7 +1550,7 @@ return __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev), val); } -static int mlx5_vdpa_set_features(struct vdpa_device *vdev, u64 features) +static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features) { struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev); struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev); @@ -1843,7 +1843,8 @@ return mvdev->generation; } -static int mlx5_vdpa_set_map(struct vdpa_device *vdev, struct vhost_iotlb *iotlb) +static int mlx5_vdpa_set_map(struct vdpa_device *vdev, unsigned int asid, + struct vhost_iotlb *iotlb) { struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev); struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev); @@ -1891,6 +1892,13 @@ return -EOPNOTSUPP; } +static u64 mlx5_vdpa_get_driver_features(struct vdpa_device *vdev) +{ + struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev); + + return mvdev->actual_features; +} + static const struct vdpa_config_ops mlx5_vdpa_ops = { .set_vq_address = mlx5_vdpa_set_vq_address, .set_vq_num = mlx5_vdpa_set_vq_num, @@ -1903,8 +1911,9 @@ .get_vq_notification = mlx5_get_vq_notification, .get_vq_irq = mlx5_get_vq_irq, .get_vq_align = mlx5_vdpa_get_vq_align, - .get_features = mlx5_vdpa_get_features, - .set_features = mlx5_vdpa_set_features, + .get_device_features = mlx5_vdpa_get_device_features, + .set_driver_features = mlx5_vdpa_set_driver_features, + .get_driver_features = mlx5_vdpa_get_driver_features, .set_config_cb = mlx5_vdpa_set_config_cb, .get_vq_num_max = mlx5_vdpa_get_vq_num_max, .get_device_id = mlx5_vdpa_get_device_id, @@ -2006,7 +2015,7 @@ max_vqs = min_t(u32, max_vqs, MLX5_MAX_SUPPORTED_VQS); ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mlx5_vdpa_ops, - NULL); + 1, 1, NULL, false); if (IS_ERR(ndev)) return ndev;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/vdpa.c
Changed
@@ -14,20 +14,37 @@ #include <uapi/linux/vdpa.h> #include <net/genetlink.h> #include <linux/mod_devicetable.h> +#include <linux/virtio_ids.h> static LIST_HEAD(mdev_head); /* A global mutex that protects vdpa management device and device level operations. */ -static DEFINE_MUTEX(vdpa_dev_mutex); +static DECLARE_RWSEM(vdpa_dev_lock); static DEFINE_IDA(vdpa_index_ida); +void vdpa_set_status(struct vdpa_device *vdev, u8 status) +{ + down_write(&vdev->cf_lock); + vdev->config->set_status(vdev, status); + up_write(&vdev->cf_lock); +} +EXPORT_SYMBOL(vdpa_set_status); + static struct genl_family vdpa_nl_family; static int vdpa_dev_probe(struct device *d) { struct vdpa_device *vdev = dev_to_vdpa(d); struct vdpa_driver *drv = drv_to_vdpa(vdev->dev.driver); + const struct vdpa_config_ops *ops = vdev->config; + u32 max_num, min_num = 1; int ret = 0; + max_num = ops->get_vq_num_max(vdev); + if (ops->get_vq_num_min) + min_num = ops->get_vq_num_min(vdev); + if (max_num < min_num) + return -EINVAL; + if (drv && drv->probe) ret = drv->probe(vdev); @@ -47,14 +64,14 @@ static int vdpa_dev_match(struct device *dev, struct device_driver *drv) { - struct vdpa_device *vdev = dev_to_vdpa(dev); + struct vdpa_device *vdev = dev_to_vdpa(dev); - /* Check override first, and if set, only use the named driver */ - if (vdev->driver_override) - return strcmp(vdev->driver_override, drv->name) == 0; + /* Check override first, and if set, only use the named driver */ + if (vdev->driver_override) + return strcmp(vdev->driver_override, drv->name) == 0; - /* Currently devices must be supported by all vDPA bus drivers */ - return 1; + /* Currently devices must be supported by all vDPA bus drivers */ + return 1; } static ssize_t driver_override_store(struct device *dev, @@ -95,30 +112,30 @@ static ssize_t driver_override_show(struct device *dev, struct device_attribute *attr, char *buf) { - struct vdpa_device *vdev = dev_to_vdpa(dev); - ssize_t len; + struct vdpa_device *vdev = dev_to_vdpa(dev); + ssize_t len; - device_lock(dev); - len = snprintf(buf, PAGE_SIZE, "%s\n", vdev->driver_override); - device_unlock(dev); + device_lock(dev); + len = snprintf(buf, PAGE_SIZE, "%s\n", vdev->driver_override); + device_unlock(dev); - return len; + return len; } static DEVICE_ATTR_RW(driver_override); static struct attribute *vdpa_dev_attrs[] = { - &dev_attr_driver_override.attr, - NULL, + &dev_attr_driver_override.attr, + NULL, }; static const struct attribute_group vdpa_dev_group = { - .attrs = vdpa_dev_attrs, + .attrs = vdpa_dev_attrs, }; __ATTRIBUTE_GROUPS(vdpa_dev); static struct bus_type vdpa_bus = { - .name = "vdpa", + .name = "vdpa", .dev_groups = vdpa_dev_groups, .match = vdpa_dev_match, .probe = vdpa_dev_probe, @@ -144,18 +161,23 @@ * initialized but before registered. * @parent: the parent device * @config: the bus operations that is supported by this device + * @ngroups: number of groups supported by this device + * @nas: number of address spaces supported by this device * @size: size of the parent structure that contains private data * @name: name of the vdpa device; optional. + * @use_va: indicate whether virtual address must be used by this device * * Driver should use vdpa_alloc_device() wrapper macro instead of * using this directly. * - * Returns an error when parent/config/dma_dev is not set or fail to get - * ida. + * Return: Returns an error when parent/config/dma_dev is not set or fail to get + * ida. 
*/ struct vdpa_device *__vdpa_alloc_device(struct device *parent, const struct vdpa_config_ops *config, - size_t size, const char *name) + unsigned int ngroups, unsigned int nas, + size_t size, const char *name, + bool use_va) { struct vdpa_device *vdev; int err = -EINVAL; @@ -166,12 +188,16 @@ if (!!config->dma_map != !!config->dma_unmap) goto err; + /* It should only work for the device that use on-chip IOMMU */ + if (use_va && !(config->dma_map || config->set_map)) + goto err; + err = -ENOMEM; vdev = kzalloc(size, GFP_KERNEL); if (!vdev) goto err; - err = ida_simple_get(&vdpa_index_ida, 0, 0, GFP_KERNEL); + err = ida_alloc(&vdpa_index_ida, GFP_KERNEL); if (err < 0) goto err_ida; @@ -181,6 +207,9 @@ vdev->index = err; vdev->config = config; vdev->features_valid = false; + vdev->use_va = use_va; + vdev->ngroups = ngroups; + vdev->nas = nas; if (name) err = dev_set_name(&vdev->dev, "%s", name); @@ -189,6 +218,7 @@ if (err) goto err_name; + init_rwsem(&vdev->cf_lock); device_initialize(&vdev->dev); return vdev; @@ -209,13 +239,13 @@ return (strcmp(dev_name(&vdev->dev), data) == 0); } -static int __vdpa_register_device(struct vdpa_device *vdev, int nvqs) +static int __vdpa_register_device(struct vdpa_device *vdev, u32 nvqs) { struct device *dev; vdev->nvqs = nvqs; - lockdep_assert_held(&vdpa_dev_mutex); + lockdep_assert_held(&vdpa_dev_lock); dev = bus_find_device(&vdpa_bus, NULL, dev_name(&vdev->dev), vdpa_name_match); if (dev) { put_device(dev); @@ -232,9 +262,9 @@ * @vdev: the vdpa device to be registered to vDPA bus * @nvqs: number of virtqueues supported by this device * - * Returns an error when fail to add device to vDPA bus + * Return: Returns an error when fail to add device to vDPA bus */ -int _vdpa_register_device(struct vdpa_device *vdev, int nvqs) +int _vdpa_register_device(struct vdpa_device *vdev, u32 nvqs) { if (!vdev->mdev) return -EINVAL; @@ -249,15 +279,15 @@ * @vdev: the vdpa device to be registered to vDPA bus * @nvqs: number of virtqueues supported by this device * - * Returns an error when fail to add to vDPA bus + * Return: Returns an error when fail to add to vDPA bus */ -int vdpa_register_device(struct vdpa_device *vdev, int nvqs)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/vdpa_sim/vdpa_sim.c
Changed
@@ -52,6 +52,17 @@ return vdpa_to_sim(vdpa); } +static void vdpasim_vq_notify(struct vringh *vring) +{ + struct vdpasim_virtqueue *vq = + container_of(vring, struct vdpasim_virtqueue, vring); + + if (!vq->cb) + return; + + vq->cb(vq->private); +} + static void vdpasim_queue_ready(struct vdpasim *vdpasim, unsigned int idx) { struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx]; @@ -63,6 +74,8 @@ (uintptr_t)vq->driver_addr, (struct vring_used *) (uintptr_t)vq->device_addr); + + vq->vring.notify = vdpasim_vq_notify; } static void vdpasim_vq_reset(struct vdpasim *vdpasim, @@ -76,6 +89,8 @@ vq->private = NULL; vringh_init_iotlb(&vq->vring, vdpasim->dev_attr.supported_features, VDPASIM_QUEUE_MAX, false, NULL, NULL, NULL); + + vq->vring.notify = NULL; } static void vdpasim_do_reset(struct vdpasim *vdpasim) @@ -221,8 +236,8 @@ else ops = &vdpasim_net_config_ops; - vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops, - dev_attr->name); + vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops, 1, + 1, dev_attr->name, false); if (!vdpasim) goto err_alloc; @@ -363,14 +378,19 @@ return VDPASIM_QUEUE_ALIGN; } -static u64 vdpasim_get_features(struct vdpa_device *vdpa) +static u32 vdpasim_get_vq_group(struct vdpa_device *vdpa, u16 idx) +{ + return 0; +} + +static u64 vdpasim_get_device_features(struct vdpa_device *vdpa) { struct vdpasim *vdpasim = vdpa_to_sim(vdpa); return vdpasim->dev_attr.supported_features; } -static int vdpasim_set_features(struct vdpa_device *vdpa, u64 features) +static int vdpasim_set_driver_features(struct vdpa_device *vdpa, u64 features) { struct vdpasim *vdpasim = vdpa_to_sim(vdpa); @@ -383,6 +403,13 @@ return 0; } +static u64 vdpasim_get_driver_features(struct vdpa_device *vdpa) +{ + struct vdpasim *vdpasim = vdpa_to_sim(vdpa); + + return vdpasim->features; +} + static void vdpasim_set_config_cb(struct vdpa_device *vdpa, struct vdpa_callback *cb) { @@ -491,7 +518,7 @@ return range; } -static int vdpasim_set_map(struct vdpa_device *vdpa, +static int vdpasim_set_map(struct vdpa_device *vdpa, unsigned int asid, struct vhost_iotlb *iotlb) { struct vdpasim *vdpasim = vdpa_to_sim(vdpa); @@ -518,21 +545,23 @@ return ret; } -static int vdpasim_dma_map(struct vdpa_device *vdpa, u64 iova, u64 size, - u64 pa, u32 perm) +static int vdpasim_dma_map(struct vdpa_device *vdpa, unsigned int asid, + u64 iova, u64 size, + u64 pa, u32 perm, void *opaque) { struct vdpasim *vdpasim = vdpa_to_sim(vdpa); int ret; spin_lock(&vdpasim->iommu_lock); - ret = vhost_iotlb_add_range(vdpasim->iommu, iova, iova + size - 1, pa, - perm); + ret = vhost_iotlb_add_range_ctx(vdpasim->iommu, iova, iova + size - 1, + pa, perm, opaque); spin_unlock(&vdpasim->iommu_lock); return ret; } -static int vdpasim_dma_unmap(struct vdpa_device *vdpa, u64 iova, u64 size) +static int vdpasim_dma_unmap(struct vdpa_device *vdpa, unsigned int asid, + u64 iova, u64 size) { struct vdpasim *vdpasim = vdpa_to_sim(vdpa); @@ -565,8 +594,10 @@ .set_vq_state = vdpasim_set_vq_state, .get_vq_state = vdpasim_get_vq_state, .get_vq_align = vdpasim_get_vq_align, - .get_features = vdpasim_get_features, - .set_features = vdpasim_set_features, + .get_vq_group = vdpasim_get_vq_group, + .get_device_features = vdpasim_get_device_features, + .set_driver_features = vdpasim_set_driver_features, + .get_driver_features = vdpasim_get_driver_features, .set_config_cb = vdpasim_set_config_cb, .get_vq_num_max = vdpasim_get_vq_num_max, .get_device_id = vdpasim_get_device_id, @@ -594,8 +625,10 @@ .set_vq_state = vdpasim_set_vq_state, .get_vq_state = 
vdpasim_get_vq_state, .get_vq_align = vdpasim_get_vq_align, - .get_features = vdpasim_get_features, - .set_features = vdpasim_set_features, + .get_vq_group = vdpasim_get_vq_group, + .get_device_features = vdpasim_get_device_features, + .set_driver_features = vdpasim_set_driver_features, + .get_driver_features = vdpasim_get_driver_features, .set_config_cb = vdpasim_set_config_cb, .get_vq_num_max = vdpasim_get_vq_num_max, .get_device_id = vdpasim_get_device_id,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/vdpa_sim/vdpa_sim.h
Changed
@@ -61,6 +61,7 @@
 	u32 status;
 	u32 generation;
 	u64 features;
+	u32 groups;
 	/* spinlock to synchronize iommu table */
 	spinlock_t iommu_lock;
 };
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
Changed
@@ -105,7 +105,8 @@
 	.release = vdpasim_blk_mgmtdev_release,
 };
 
-static int vdpasim_blk_dev_add(struct vdpa_mgmt_dev *mdev, const char *name)
+static int vdpasim_blk_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
+			       const struct vdpa_dev_set_config *config)
 {
 	struct vdpasim_dev_attr dev_attr = {};
 	struct vdpasim *simdev;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
Changed
@@ -127,7 +127,8 @@
 	.release = vdpasim_net_mgmtdev_release,
 };
 
-static int vdpasim_net_dev_add(struct vdpa_mgmt_dev *mdev, const char *name)
+static int vdpasim_net_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
+			       const struct vdpa_dev_set_config *config)
 {
 	struct vdpasim_dev_attr dev_attr = {};
 	struct vdpasim *simdev;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/virtio_pci/vp_vdpa.c
Changed
@@ -32,7 +32,7 @@ struct vp_vdpa { struct vdpa_device vdpa; - struct virtio_pci_modern_device mdev; + struct virtio_pci_modern_device *mdev; struct vp_vring *vring; struct vdpa_callback config_cb; char msix_name[VP_VDPA_NAME_SIZE]; @@ -41,6 +41,12 @@ int vectors; }; +struct vp_vdpa_mgmtdev { + struct vdpa_mgmt_dev mgtdev; + struct virtio_pci_modern_device *mdev; + struct vp_vdpa *vp_vdpa; +}; + static struct vp_vdpa *vdpa_to_vp(struct vdpa_device *vdpa) { return container_of(vdpa, struct vp_vdpa, vdpa); @@ -50,17 +56,22 @@ { struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa); - return &vp_vdpa->mdev; + return vp_vdpa->mdev; +} + +static struct virtio_pci_modern_device *vp_vdpa_to_mdev(struct vp_vdpa *vp_vdpa) +{ + return vp_vdpa->mdev; } -static u64 vp_vdpa_get_features(struct vdpa_device *vdpa) +static u64 vp_vdpa_get_device_features(struct vdpa_device *vdpa) { struct virtio_pci_modern_device *mdev = vdpa_to_mdev(vdpa); return vp_modern_get_features(mdev); } -static int vp_vdpa_set_features(struct vdpa_device *vdpa, u64 features) +static int vp_vdpa_set_driver_features(struct vdpa_device *vdpa, u64 features) { struct virtio_pci_modern_device *mdev = vdpa_to_mdev(vdpa); @@ -69,6 +80,13 @@ return 0; } +static u64 vp_vdpa_get_driver_features(struct vdpa_device *vdpa) +{ + struct virtio_pci_modern_device *mdev = vdpa_to_mdev(vdpa); + + return vp_modern_get_driver_features(mdev); +} + static u8 vp_vdpa_get_status(struct vdpa_device *vdpa) { struct virtio_pci_modern_device *mdev = vdpa_to_mdev(vdpa); @@ -89,7 +107,7 @@ static void vp_vdpa_free_irq(struct vp_vdpa *vp_vdpa) { - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); struct pci_dev *pdev = mdev->pci_dev; int i; @@ -136,7 +154,7 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa) { - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); struct pci_dev *pdev = mdev->pci_dev; int i, ret, irq; int queues = vp_vdpa->queues; @@ -191,7 +209,7 @@ static void vp_vdpa_set_status(struct vdpa_device *vdpa, u8 status) { struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa); - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); u8 s = vp_vdpa_get_status(vdpa); if (status & VIRTIO_CONFIG_S_DRIVER_OK && @@ -205,7 +223,7 @@ static int vp_vdpa_reset(struct vdpa_device *vdpa) { struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa); - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); u8 s = vp_vdpa_get_status(vdpa); vp_modern_set_status(mdev, 0); @@ -365,7 +383,7 @@ void *buf, unsigned int len) { struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa); - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); u8 old, new; u8 *p; int i; @@ -385,7 +403,7 @@ unsigned int len) { struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa); - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); const u8 *p = buf; int i; @@ -405,7 +423,7 @@ vp_vdpa_get_vq_notification(struct vdpa_device *vdpa, u16 qid) { struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa); - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); struct vdpa_notification_area notify; notify.addr = vp_vdpa->vring[qid].notify_pa; @@ -415,8 +433,9 @@ } static const struct 
vdpa_config_ops vp_vdpa_ops = { - .get_features = vp_vdpa_get_features, - .set_features = vp_vdpa_set_features, + .get_device_features = vp_vdpa_get_device_features, + .set_driver_features = vp_vdpa_set_driver_features, + .get_driver_features = vp_vdpa_get_driver_features, .get_status = vp_vdpa_get_status, .set_status = vp_vdpa_set_status, .reset = vp_vdpa_reset, @@ -446,38 +465,31 @@ pci_free_irq_vectors(data); } -static int vp_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id) +static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name, + const struct vdpa_dev_set_config *add_config) { - struct virtio_pci_modern_device *mdev; + struct vp_vdpa_mgmtdev *vp_vdpa_mgtdev = + container_of(v_mdev, struct vp_vdpa_mgmtdev, mgtdev); + + struct virtio_pci_modern_device *mdev = vp_vdpa_mgtdev->mdev; + struct pci_dev *pdev = mdev->pci_dev; struct device *dev = &pdev->dev; - struct vp_vdpa *vp_vdpa; + struct vp_vdpa *vp_vdpa = NULL; int ret, i; - ret = pcim_enable_device(pdev); - if (ret) - return ret; - vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa, - dev, &vp_vdpa_ops, NULL); + dev, &vp_vdpa_ops, 1, 1, name, false); + if (IS_ERR(vp_vdpa)) { dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n"); return PTR_ERR(vp_vdpa); } - mdev = &vp_vdpa->mdev; - mdev->pci_dev = pdev; - - ret = vp_modern_probe(mdev); - if (ret) { - dev_err(&pdev->dev, "Failed to probe modern PCI device\n"); - goto err; - } - - pci_set_master(pdev); - pci_set_drvdata(pdev, vp_vdpa); + vp_vdpa_mgtdev->vp_vdpa = vp_vdpa; vp_vdpa->vdpa.dma_dev = &pdev->dev; vp_vdpa->queues = vp_modern_get_num_queues(mdev); + vp_vdpa->mdev = mdev; ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev); if (ret) { @@ -501,13 +513,15 @@ vp_modern_map_vq_notify(mdev, i, &vp_vdpa->vring[i].notify_pa); if (!vp_vdpa->vring[i].notify) { + ret = -EINVAL; dev_warn(&pdev->dev, "Fail to map vq notify %d\n", i); goto err; } } vp_vdpa->config_irq = VIRTIO_MSI_NO_VECTOR; - ret = vdpa_register_device(&vp_vdpa->vdpa, vp_vdpa->queues);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vhost/iotlb.c
Changed
@@ -36,25 +36,42 @@ EXPORT_SYMBOL_GPL(vhost_iotlb_map_free); /** - * vhost_iotlb_add_range - add a new range to vhost IOTLB + * vhost_iotlb_add_range_ctx - add a new range to vhost IOTLB * @iotlb: the IOTLB * @start: start of the IOVA range * @last: last of IOVA range * @addr: the address that is mapped to @start * @perm: access permission of this range + * @opaque: the opaque pointer for the new mapping * * Returns an error last is smaller than start or memory allocation * fails */ -int vhost_iotlb_add_range(struct vhost_iotlb *iotlb, - u64 start, u64 last, - u64 addr, unsigned int perm) +int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb, + u64 start, u64 last, + u64 addr, unsigned int perm, + void *opaque) { struct vhost_iotlb_map *map; if (last < start) return -EFAULT; + /* If the range being mapped is [0, ULONG_MAX], split it into two entries + * otherwise its size would overflow u64. + */ + if (start == 0 && last == ULONG_MAX) { + u64 mid = last / 2; + int err = vhost_iotlb_add_range_ctx(iotlb, start, mid, addr, + perm, opaque); + + if (err) + return err; + + addr += mid + 1; + start = mid + 1; + } + if (iotlb->limit && iotlb->nmaps == iotlb->limit && iotlb->flags & VHOST_IOTLB_FLAG_RETIRE) { @@ -71,6 +88,7 @@ map->last = last; map->addr = addr; map->perm = perm; + map->opaque = opaque; iotlb->nmaps++; vhost_iotlb_itree_insert(map, &iotlb->root); @@ -80,6 +98,15 @@ return 0; } +EXPORT_SYMBOL_GPL(vhost_iotlb_add_range_ctx); + +int vhost_iotlb_add_range(struct vhost_iotlb *iotlb, + u64 start, u64 last, + u64 addr, unsigned int perm) +{ + return vhost_iotlb_add_range_ctx(iotlb, start, last, + addr, perm, NULL); +} EXPORT_SYMBOL_GPL(vhost_iotlb_add_range); /** @@ -99,6 +126,23 @@ EXPORT_SYMBOL_GPL(vhost_iotlb_del_range); /** + * vhost_iotlb_init - initialize a vhost IOTLB + * @iotlb: the IOTLB that needs to be initialized + * @limit: maximum number of IOTLB entries + * @flags: VHOST_IOTLB_FLAG_XXX + */ +void vhost_iotlb_init(struct vhost_iotlb *iotlb, unsigned int limit, + unsigned int flags) +{ + iotlb->root = RB_ROOT_CACHED; + iotlb->limit = limit; + iotlb->nmaps = 0; + iotlb->flags = flags; + INIT_LIST_HEAD(&iotlb->list); +} +EXPORT_SYMBOL_GPL(vhost_iotlb_init); + +/** * vhost_iotlb_alloc - add a new vhost IOTLB * @limit: maximum number of IOTLB entries * @flags: VHOST_IOTLB_FLAG_XXX @@ -112,11 +156,7 @@ if (!iotlb) return NULL; - iotlb->root = RB_ROOT_CACHED; - iotlb->limit = limit; - iotlb->nmaps = 0; - iotlb->flags = flags; - INIT_LIST_HEAD(&iotlb->list); + vhost_iotlb_init(iotlb, limit, flags); return iotlb; }
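The new special case at the top of vhost_iotlb_add_range_ctx() is pure arithmetic: a map's size is last - start + 1, and for the full range [0, ULONG_MAX] that expression wraps to 0 in 64-bit math. Splitting at the midpoint keeps both halves representable. A standalone sketch of the overflow, assuming 64-bit IOVAs:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t start = 0, last = UINT64_MAX;
	uint64_t mid = last / 2;

	/* Size of [0, UINT64_MAX] wraps: 2^64 does not fit in a u64. */
	printf("naive size:  %llu\n",
	       (unsigned long long)(last - start + 1));		/* 0 */

	/* After the split: [0, mid] and [mid + 1, UINT64_MAX]. */
	printf("first half:  %llu\n",
	       (unsigned long long)(mid - start + 1));		/* 2^63 */
	printf("second half: %llu\n",
	       (unsigned long long)(last - (mid + 1) + 1));	/* 2^63 */
	return 0;
}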
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vhost/vdpa.c
Changed
@@ -16,44 +16,131 @@ #include <linux/cdev.h> #include <linux/device.h> #include <linux/mm.h> +#include <linux/slab.h> #include <linux/iommu.h> #include <linux/uuid.h> #include <linux/vdpa.h> #include <linux/nospec.h> #include <linux/vhost.h> -#include <linux/virtio_net.h> #include "vhost.h" enum { VHOST_VDPA_BACKEND_FEATURES = (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2) | - (1ULL << VHOST_BACKEND_F_IOTLB_BATCH), + (1ULL << VHOST_BACKEND_F_IOTLB_BATCH) | + (1ULL << VHOST_BACKEND_F_IOTLB_ASID), }; #define VHOST_VDPA_DEV_MAX (1U << MINORBITS) +#define VHOST_VDPA_IOTLB_BUCKETS 16 + +struct vhost_vdpa_as { + struct hlist_node hash_link; + struct vhost_iotlb iotlb; + u32 id; +}; + struct vhost_vdpa { struct vhost_dev vdev; struct iommu_domain *domain; struct vhost_virtqueue *vqs; struct completion completion; struct vdpa_device *vdpa; + struct hlist_head as[VHOST_VDPA_IOTLB_BUCKETS]; struct device dev; struct cdev cdev; atomic_t opened; - int nvqs; + u32 nvqs; int virtio_id; int minor; struct eventfd_ctx *config_ctx; int in_batch; struct vdpa_iova_range range; + u32 batch_asid; }; static DEFINE_IDA(vhost_vdpa_ida); static dev_t vhost_vdpa_major; +static inline u32 iotlb_to_asid(struct vhost_iotlb *iotlb) +{ + struct vhost_vdpa_as *as = container_of(iotlb, struct + vhost_vdpa_as, iotlb); + return as->id; +} + +static struct vhost_vdpa_as *asid_to_as(struct vhost_vdpa *v, u32 asid) +{ + struct hlist_head *head = &v->as[asid % VHOST_VDPA_IOTLB_BUCKETS]; + struct vhost_vdpa_as *as; + + hlist_for_each_entry(as, head, hash_link) + if (as->id == asid) + return as; + + return NULL; +} + +static struct vhost_iotlb *asid_to_iotlb(struct vhost_vdpa *v, u32 asid) +{ + struct vhost_vdpa_as *as = asid_to_as(v, asid); + + if (!as) + return NULL; + + return &as->iotlb; +} + +static struct vhost_vdpa_as *vhost_vdpa_alloc_as(struct vhost_vdpa *v, u32 asid) +{ + struct hlist_head *head = &v->as[asid % VHOST_VDPA_IOTLB_BUCKETS]; + struct vhost_vdpa_as *as; + + if (asid_to_as(v, asid)) + return NULL; + + if (asid >= v->vdpa->nas) + return NULL; + + as = kmalloc(sizeof(*as), GFP_KERNEL); + if (!as) + return NULL; + + vhost_iotlb_init(&as->iotlb, 0, 0); + as->id = asid; + hlist_add_head(&as->hash_link, head); + + return as; +} + +static struct vhost_vdpa_as *vhost_vdpa_find_alloc_as(struct vhost_vdpa *v, + u32 asid) +{ + struct vhost_vdpa_as *as = asid_to_as(v, asid); + + if (as) + return as; + + return vhost_vdpa_alloc_as(v, asid); +} + +static int vhost_vdpa_remove_as(struct vhost_vdpa *v, u32 asid) +{ + struct vhost_vdpa_as *as = asid_to_as(v, asid); + + if (!as) + return -EINVAL; + + hlist_del(&as->hash_link); + vhost_iotlb_reset(&as->iotlb); + kfree(as); + + return 0; +} + static void handle_vq_kick(struct vhost_work *work) { struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, @@ -119,12 +206,13 @@ irq_bypass_unregister_producer(&vq->call_ctx.producer); } -static void vhost_vdpa_reset(struct vhost_vdpa *v) +static int vhost_vdpa_reset(struct vhost_vdpa *v) { struct vdpa_device *vdpa = v->vdpa; - vdpa_reset(vdpa); v->in_batch = 0; + + return vdpa_reset(vdpa); } static long vhost_vdpa_get_device_id(struct vhost_vdpa *v, u8 __user *argp) @@ -160,7 +248,8 @@ struct vdpa_device *vdpa = v->vdpa; const struct vdpa_config_ops *ops = vdpa->config; u8 status, status_old; - int ret, nvqs = v->nvqs; + u32 nvqs = v->nvqs; + int ret; u16 i; if (copy_from_user(&status, statusp, sizeof(status))) @@ -172,24 +261,24 @@ * Userspace shouldn't remove status bits unless reset the * status to 0. 
*/ - if (status != 0 && (ops->get_status(vdpa) & ~status) != 0) + if (status != 0 && (status_old & ~status) != 0) return -EINVAL; + if ((status_old & VIRTIO_CONFIG_S_DRIVER_OK) && !(status & VIRTIO_CONFIG_S_DRIVER_OK)) + for (i = 0; i < nvqs; i++) + vhost_vdpa_unsetup_vq_irq(v, i); + if (status == 0) { - ret = ops->reset(vdpa); + ret = vdpa_reset(vdpa); if (ret) return ret; } else - ops->set_status(vdpa, status); + vdpa_set_status(vdpa, status); if ((status & VIRTIO_CONFIG_S_DRIVER_OK) && !(status_old & VIRTIO_CONFIG_S_DRIVER_OK)) for (i = 0; i < nvqs; i++) vhost_vdpa_setup_vq_irq(v, i); - if ((status_old & VIRTIO_CONFIG_S_DRIVER_OK) && !(status & VIRTIO_CONFIG_S_DRIVER_OK)) - for (i = 0; i < nvqs; i++) - vhost_vdpa_unsetup_vq_irq(v, i); - return 0; } @@ -239,7 +328,6 @@ struct vhost_vdpa_config __user *c) { struct vdpa_device *vdpa = v->vdpa; - const struct vdpa_config_ops *ops = vdpa->config; struct vhost_vdpa_config config; unsigned long size = offsetof(struct vhost_vdpa_config, buf);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vhost/vhost.c
Changed
@@ -468,7 +468,7 @@ struct vhost_virtqueue **vqs, int nvqs, int iov_limit, int weight, int byte_weight, bool use_worker, - int (*msg_handler)(struct vhost_dev *dev, + int (*msg_handler)(struct vhost_dev *dev, u32 asid, struct vhost_iotlb_msg *msg)) { struct vhost_virtqueue *vq; @@ -1090,11 +1090,14 @@ return true; } -static int vhost_process_iotlb_msg(struct vhost_dev *dev, +static int vhost_process_iotlb_msg(struct vhost_dev *dev, u32 asid, struct vhost_iotlb_msg *msg) { int ret = 0; + if (asid != 0) + return -EINVAL; + mutex_lock(&dev->mutex); vhost_dev_lock_vqs(dev); switch (msg->type) { @@ -1141,6 +1144,7 @@ struct vhost_iotlb_msg msg; size_t offset; int type, ret; + u32 asid = 0; ret = copy_from_iter(&type, sizeof(type), from); if (ret != sizeof(type)) { @@ -1156,7 +1160,16 @@ offset = offsetof(struct vhost_msg, iotlb) - sizeof(int); break; case VHOST_IOTLB_MSG_V2: - offset = sizeof(__u32); + if (vhost_backend_has_feature(dev->vqs[0], + VHOST_BACKEND_F_IOTLB_ASID)) { + ret = copy_from_iter(&asid, sizeof(asid), from); + if (ret != sizeof(asid)) { + ret = -EINVAL; + goto done; + } + offset = 0; + } else + offset = sizeof(__u32); break; default: ret = -EINVAL; @@ -1170,10 +1183,17 @@ goto done; } + if ((msg.type == VHOST_IOTLB_UPDATE || + msg.type == VHOST_IOTLB_INVALIDATE) && + msg.size == 0) { + ret = -EINVAL; + goto done; + } + if (dev->msg_handler) - ret = dev->msg_handler(dev, &msg); + ret = dev->msg_handler(dev, asid, &msg); else - ret = vhost_process_iotlb_msg(dev, &msg); + ret = vhost_process_iotlb_msg(dev, asid, &msg); if (ret) { ret = -EFAULT; goto done;
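With VHOST_BACKEND_F_IOTLB_ASID negotiated, a V2 IOTLB message carries a 32-bit ASID between the type word and the payload; without the feature, the same 32 bits are skipped as reserved padding (offset = sizeof(__u32)). Two illustrative layouts matching the copy_from_iter() sequence above; this is a sketch, not the UAPI declarations:

#include <stddef.h>
#include <linux/types.h>
#include <linux/vhost_types.h>

struct msg_v2_no_asid {
	__u32 type;			/* VHOST_IOTLB_MSG_V2 */
	__u32 reserved;			/* skipped via offset = sizeof(__u32) */
	struct vhost_iotlb_msg iotlb;
};

struct msg_v2_with_asid {
	__u32 type;			/* VHOST_IOTLB_MSG_V2 */
	__u32 asid;			/* consumed when the ASID feature is set */
	struct vhost_iotlb_msg iotlb;	/* offset = 0: payload follows directly */
};

_Static_assert(offsetof(struct msg_v2_with_asid, iotlb) ==
	       offsetof(struct msg_v2_no_asid, iotlb),
	       "both layouts place the payload after two 32-bit words");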
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vhost/vhost.h
Changed
@@ -162,7 +162,7 @@
 	int byte_weight;
 	u64 kcov_handle;
 	bool use_worker;
-	int (*msg_handler)(struct vhost_dev *dev,
+	int (*msg_handler)(struct vhost_dev *dev, u32 asid,
 			   struct vhost_iotlb_msg *msg);
 };
 
@@ -170,7 +170,7 @@
 void vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs,
 		    int nvqs, int iov_limit, int weight, int byte_weight,
 		    bool use_worker,
-		    int (*msg_handler)(struct vhost_dev *dev,
+		    int (*msg_handler)(struct vhost_dev *dev, u32 asid,
 				       struct vhost_iotlb_msg *msg));
 long vhost_dev_set_owner(struct vhost_dev *dev);
 bool vhost_dev_has_owner(struct vhost_dev *dev);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio.c
Changed
@@ -203,6 +203,28 @@
 	return 0;
 }
 
+/**
+ * virtio_reset_device - quiesce device for removal
+ * @dev: the device to reset
+ *
+ * Prevents device from sending interrupts and accessing memory.
+ *
+ * Generally used for cleanup during driver / device removal.
+ *
+ * Once this has been invoked, caller must ensure that
+ * virtqueue_notify / virtqueue_kick are not in progress.
+ *
+ * Note: this guarantees that vq callbacks are not in progress, however caller
+ * is responsible for preventing access from other contexts, such as a system
+ * call/workqueue/bh. Invoking virtio_break_device then flushing any such
+ * contexts is one way to handle that.
+ * */
+void virtio_reset_device(struct virtio_device *dev)
+{
+	dev->config->reset(dev);
+}
+EXPORT_SYMBOL_GPL(virtio_reset_device);
+
 static int virtio_dev_probe(struct device *_d)
 {
 	int err, i;
@@ -362,7 +384,7 @@
 
 	/* We always start by resetting the device, in case a previous
 	 * driver messed it up. This also tests that code path a little. */
-	dev->config->reset(dev);
+	virtio_reset_device(dev);
 
 	/* Acknowledge that we've seen the device. */
 	virtio_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE);
@@ -422,7 +444,7 @@
 
 	/* We always start by resetting the device, in case a previous
 	 * driver messed it up. */
-	dev->config->reset(dev);
+	virtio_reset_device(dev);
 
 	/* Acknowledge that we've seen the device. */
 	virtio_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE);
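This wrapper is the other half of the vdev->config->reset() conversions scattered through this revision (rpmsg, virtio_scsi, balloon, input, mem, virtio_fs). A condensed sketch of the teardown order its docstring prescribes, with a hypothetical driver; only virtio_reset_device() and del_vqs() are real calls here:

#include <linux/virtio.h>
#include <linux/virtio_config.h>
#include <linux/slab.h>

/* Hypothetical remove() following the documented order: quiesce the
 * device first, free its virtqueues only afterwards. */
static void example_remove(struct virtio_device *vdev)
{
	void *priv = vdev->priv;	/* hypothetical per-device state */

	virtio_reset_device(vdev);	/* stop interrupts and DMA */
	vdev->config->del_vqs(vdev);	/* now safe to free the rings */
	kfree(priv);
}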
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio_balloon.c
Changed
@@ -1039,7 +1039,7 @@
 	return_free_pages_to_mm(vb, ULONG_MAX);
 
 	/* Now we reset the device so we can clean up the queues. */
-	vb->vdev->config->reset(vb->vdev);
+	virtio_reset_device(vb->vdev);
 	vb->vdev->config->del_vqs(vb->vdev);
 }
 
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio_input.c
Changed
@@ -323,7 +323,7 @@
 	spin_unlock_irqrestore(&vi->lock, flags);
 
 	input_unregister_device(vi->idev);
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	while ((buf = virtqueue_detach_unused_buf(vi->sts)) != NULL)
 		kfree(buf);
 	vdev->config->del_vqs(vdev);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio_mem.c
Changed
@@ -1889,7 +1889,7 @@
 	vfree(vm->sb_bitmap);
 
 	/* reset the device and cleanup the queues */
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	vdev->config->del_vqs(vdev);
 
 	kfree(vm);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio_pci_modern.c
Changed
@@ -176,6 +176,29 @@ vp_synchronize_vectors(vdev); } +static int vp_active_vq(struct virtqueue *vq, u16 msix_vec) +{ + struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev); + struct virtio_pci_modern_device *mdev = &vp_dev->mdev; + unsigned long index; + + index = vq->index; + + /* activate the queue */ + vp_modern_set_queue_size(mdev, index, virtqueue_get_vring_size(vq)); + vp_modern_queue_address(mdev, index, virtqueue_get_desc_addr(vq), + virtqueue_get_avail_addr(vq), + virtqueue_get_used_addr(vq)); + + if (msix_vec != VIRTIO_MSI_NO_VECTOR) { + msix_vec = vp_modern_queue_vector(mdev, index, msix_vec); + if (msix_vec == VIRTIO_MSI_NO_VECTOR) + return -EBUSY; + } + + return 0; +} + static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector) { return vp_modern_config_vector(&vp_dev->mdev, vector); @@ -218,32 +241,19 @@ if (!vq) return ERR_PTR(-ENOMEM); - /* activate the queue */ - vp_modern_set_queue_size(mdev, index, virtqueue_get_vring_size(vq)); - vp_modern_queue_address(mdev, index, virtqueue_get_desc_addr(vq), - virtqueue_get_avail_addr(vq), - virtqueue_get_used_addr(vq)); + err = vp_active_vq(vq, msix_vec); + if (err) + goto err; vq->priv = (void __force *)vp_modern_map_vq_notify(mdev, index, NULL); if (!vq->priv) { err = -ENOMEM; - goto err_map_notify; - } - - if (msix_vec != VIRTIO_MSI_NO_VECTOR) { - msix_vec = vp_modern_queue_vector(mdev, index, msix_vec); - if (msix_vec == VIRTIO_MSI_NO_VECTOR) { - err = -EBUSY; - goto err_assign_vector; - } + goto err; } return vq; -err_assign_vector: - if (!mdev->notify_base) - pci_iounmap(mdev->pci_dev, (void __iomem __force *)vq->priv); -err_map_notify: +err: vring_del_virtqueue(vq); return ERR_PTR(err); }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio_pci_modern_dev.c
Changed
@@ -3,6 +3,7 @@ #include <linux/virtio_pci_modern.h> #include <linux/module.h> #include <linux/pci.h> +#include <linux/delay.h> /* * vp_modern_map_capability - map a part of virtio pci capability @@ -17,11 +18,10 @@ * * Returns the io address of for the part of the capability */ -void __iomem *vp_modern_map_capability(struct virtio_pci_modern_device *mdev, int off, - size_t minlen, - u32 align, - u32 start, u32 size, - size_t *len, resource_size_t *pa) +static void __iomem* +vp_modern_map_capability(struct virtio_pci_modern_device *mdev, int off, + size_t minlen, u32 align, u32 start, u32 size, + size_t *len, resource_size_t *pa) { struct pci_dev *dev = mdev->pci_dev; u8 bar; @@ -95,7 +95,6 @@ return p; } -EXPORT_SYMBOL_GPL(vp_modern_map_capability); /** * virtio_pci_find_capability - walk capabilities to find device info. @@ -467,6 +466,44 @@ EXPORT_SYMBOL_GPL(vp_modern_set_status); /* + * vp_modern_get_queue_reset - get the queue reset status + * @mdev: the modern virtio-pci device + * @index: queue index + */ +int vp_modern_get_queue_reset(struct virtio_pci_modern_device *mdev, u16 index) +{ + struct virtio_pci_modern_common_cfg __iomem *cfg; + + cfg = (struct virtio_pci_modern_common_cfg __iomem *)mdev->common; + + vp_iowrite16(index, &cfg->cfg.queue_select); + return vp_ioread16(&cfg->queue_reset); +} +EXPORT_SYMBOL_GPL(vp_modern_get_queue_reset); + +/* + * vp_modern_set_queue_reset - reset the queue + * @mdev: the modern virtio-pci device + * @index: queue index + */ +void vp_modern_set_queue_reset(struct virtio_pci_modern_device *mdev, u16 index) +{ + struct virtio_pci_modern_common_cfg __iomem *cfg; + + cfg = (struct virtio_pci_modern_common_cfg __iomem *)mdev->common; + + vp_iowrite16(index, &cfg->cfg.queue_select); + vp_iowrite16(1, &cfg->queue_reset); + + while (vp_ioread16(&cfg->queue_reset)) + msleep(1); + + while (vp_ioread16(&cfg->cfg.queue_enable)) + msleep(1); +} +EXPORT_SYMBOL_GPL(vp_modern_set_queue_reset); + +/* * vp_modern_queue_vector - set the MSIX vector for a specific virtqueue * @mdev: the modern virtio-pci device * @index: queue index @@ -612,14 +649,13 @@ * * Returns the notification offset for a virtqueue */ -u16 vp_modern_get_queue_notify_off(struct virtio_pci_modern_device *mdev, - u16 index) +static u16 vp_modern_get_queue_notify_off(struct virtio_pci_modern_device *mdev, + u16 index) { vp_iowrite16(index, &mdev->common->queue_select); return vp_ioread16(&mdev->common->queue_notify_off); } -EXPORT_SYMBOL_GPL(vp_modern_get_queue_notify_off); /* * vp_modern_map_vq_notify - map notification area for a
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio_vdpa.c
Changed
@@ -65,9 +65,8 @@ const void *buf, unsigned len) { struct vdpa_device *vdpa = vd_get_vdpa(vdev); - const struct vdpa_config_ops *ops = vdpa->config; - ops->set_config(vdpa, offset, buf, len); + vdpa_set_config(vdpa, offset, buf, len); } static u32 virtio_vdpa_generation(struct virtio_device *vdev) @@ -92,9 +91,8 @@ static void virtio_vdpa_set_status(struct virtio_device *vdev, u8 status) { struct vdpa_device *vdpa = vd_get_vdpa(vdev); - const struct vdpa_config_ops *ops = vdpa->config; - return ops->set_status(vdpa, status); + return vdpa_set_status(vdpa, status); } static void virtio_vdpa_reset(struct virtio_device *vdev) @@ -142,8 +140,11 @@ struct vdpa_callback cb; struct virtqueue *vq; u64 desc_addr, driver_addr, device_addr; + /* Assume split virtqueue, switch to packed if necessary */ + struct vdpa_vq_state state = {0}; unsigned long flags; - u32 align, num; + u32 align, max_num, min_num = 1; + bool may_reduce_num = true; int err; if (!name) @@ -161,16 +162,21 @@ if (!info) return ERR_PTR(-ENOMEM); - num = ops->get_vq_num_max(vdpa); - if (num == 0) { + max_num = ops->get_vq_num_max(vdpa); + if (max_num == 0) { err = -ENOENT; goto error_new_virtqueue; } + if (ops->get_vq_num_min) + min_num = ops->get_vq_num_min(vdpa); + + may_reduce_num = (max_num == min_num) ? false : true; + /* Create the vring */ align = ops->get_vq_align(vdpa); - vq = vring_create_virtqueue(index, num, align, vdev, - true, true, ctx, + vq = vring_create_virtqueue(index, max_num, align, vdev, + true, may_reduce_num, ctx, virtio_vdpa_notify, callback, name); if (!vq) { err = -ENOMEM; @@ -178,7 +184,7 @@ } /* Setup virtqueue callback */ - cb.callback = virtio_vdpa_virtqueue_cb; + cb.callback = callback ? virtio_vdpa_virtqueue_cb : NULL; cb.private = info; ops->set_vq_cb(vdpa, index, &cb); ops->set_vq_num(vdpa, index, virtqueue_get_vring_size(vq)); @@ -194,6 +200,19 @@ goto err_vq; } + /* reset virtqueue state index */ + if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED)) { + struct vdpa_vq_state_packed *s = &state.packed; + + s->last_avail_counter = 1; + s->last_avail_idx = 0; + s->last_used_counter = 1; + s->last_used_idx = 0; + } + err = ops->set_vq_state(vdpa, index, &state); + if (err) + goto err_vq; + ops->set_vq_ready(vdpa, index, 1); vq->priv = info; @@ -228,9 +247,8 @@ list_del(&info->node); spin_unlock_irqrestore(&vd_dev->lock, flags); - /* Select and deactivate the queue */ + /* Select and deactivate the queue (best effort) */ ops->set_vq_ready(vdpa, index, 0); - WARN_ON(ops->get_vq_ready(vdpa, index)); vring_del_virtqueue(vq); @@ -289,7 +307,7 @@ struct vdpa_device *vdpa = vd_get_vdpa(vdev); const struct vdpa_config_ops *ops = vdpa->config; - return ops->get_features(vdpa); + return ops->get_device_features(vdpa); } static int virtio_vdpa_finalize_features(struct virtio_device *vdev)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/buffer.c
Changed
@@ -562,7 +562,7 @@ struct buffer_head *bh = __find_get_block(bdev, bblock + 1, blocksize); if (bh) { if (buffer_dirty(bh)) - ll_rw_block(REQ_OP_WRITE, 0, 1, &bh); + write_dirty_buffer(bh, 0); put_bh(bh); } } @@ -1363,7 +1363,7 @@ { struct buffer_head *bh = __getblk(bdev, block, size); if (likely(bh)) { - ll_rw_block(REQ_OP_READ, REQ_RAHEAD, 1, &bh); + bh_readahead(bh, REQ_RAHEAD); brelse(bh); } } @@ -2038,7 +2038,7 @@ if (!buffer_uptodate(bh) && !buffer_delay(bh) && !buffer_unwritten(bh) && (block_start < from || block_end > to)) { - ll_rw_block(REQ_OP_READ, 0, 1, &bh); + bh_read_nowait(bh, 0); *wait_bh++=bh; } } @@ -2927,11 +2927,9 @@ set_buffer_uptodate(bh); if (!buffer_uptodate(bh) && !buffer_delay(bh) && !buffer_unwritten(bh)) { - err = -EIO; - ll_rw_block(REQ_OP_READ, 0, 1, &bh); - wait_on_buffer(bh); + err = bh_read(bh, 0); /* Uhhuh. Read error. Complain and punt. */ - if (!buffer_uptodate(bh)) + if (err < 0) goto unlock; } @@ -3392,6 +3390,71 @@ EXPORT_SYMBOL(bh_uptodate_or_lock); /** + * __bh_read - Submit read for a locked buffer + * @bh: struct buffer_head + * @op_flags: appending REQ_OP_* flags besides REQ_OP_READ + * @wait: wait until reading finish + * + * Returns zero on success or don't wait, and -EIO on error. + */ +int __bh_read(struct buffer_head *bh, unsigned int op_flags, bool wait) +{ + int ret = 0; + + BUG_ON(!buffer_locked(bh)); + + get_bh(bh); + bh->b_end_io = end_buffer_read_sync; + submit_bh(REQ_OP_READ, op_flags, bh); + if (wait) { + wait_on_buffer(bh); + if (!buffer_uptodate(bh)) + ret = -EIO; + } + return ret; +} +EXPORT_SYMBOL(__bh_read); + +/** + * __bh_read_batch - Submit read for a batch of unlocked buffers + * @nr: entry number of the buffer batch + * @bhs: a batch of struct buffer_head + * @op_flags: appending REQ_OP_* flags besides REQ_OP_READ + * @force_lock: force to get a lock on the buffer if set, otherwise drops any + * buffer that cannot lock. + * + * Returns zero on success or don't wait, and -EIO on error. + */ +void __bh_read_batch(int nr, struct buffer_head *bhs[], + unsigned int op_flags, bool force_lock) +{ + int i; + + for (i = 0; i < nr; i++) { + struct buffer_head *bh = bhs[i]; + + if (buffer_uptodate(bh)) + continue; + + if (force_lock) + lock_buffer(bh); + else + if (!trylock_buffer(bh)) + continue; + + if (buffer_uptodate(bh)) { + unlock_buffer(bh); + continue; + } + + bh->b_end_io = end_buffer_read_sync; + get_bh(bh); + submit_bh(REQ_OP_READ, op_flags, bh); + } +} +EXPORT_SYMBOL(__bh_read_batch); + +/** * bh_submit_read - Submit a locked buffer for reading * @bh: struct buffer_head *
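__bh_read() and __bh_read_batch() are the workers behind the helpers the filesystem hunks below migrate to (bh_read(), bh_read_nowait(), bh_readahead(), bh_read_batch(), bh_readahead_batch()), replacing open-coded ll_rw_block() plus wait_on_buffer() sequences. A sketch of the caller-side conversion, assuming bh_read() keeps the return convention visible in the ext2 hunk below (1 if the buffer was already uptodate, 0 after a successful read, negative on I/O error):

#include <linux/buffer_head.h>

/* Illustrative converted caller; mirrors the fs/ hunks that follow. */
static int read_one_block(struct buffer_head *bh)
{
	int ret;

	/* Replaces:
	 *	ll_rw_block(REQ_OP_READ, 0, 1, &bh);
	 *	wait_on_buffer(bh);
	 *	if (!buffer_uptodate(bh))
	 *		return -EIO;
	 */
	ret = bh_read(bh, 0);
	if (ret < 0)
		return ret;	/* I/O error */

	/* ret == 1 means already uptodate: no I/O was issued. */
	return 0;
}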
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ext2/balloc.c
Changed
@@ -126,6 +126,7 @@
 	struct ext2_group_desc * desc;
 	struct buffer_head * bh = NULL;
 	ext2_fsblk_t bitmap_blk;
+	int ret;
 
 	desc = ext2_get_group_desc(sb, block_group, NULL);
 	if (!desc)
@@ -139,10 +140,10 @@
 			    block_group, le32_to_cpu(desc->bg_block_bitmap));
 		return NULL;
 	}
-	if (likely(bh_uptodate_or_lock(bh)))
+	ret = bh_read(bh, 0);
+	if (ret > 0)
 		return bh;
-
-	if (bh_submit_read(bh) < 0) {
+	if (ret < 0) {
 		brelse(bh);
 		ext2_error(sb, __func__,
 			   "Cannot read block bitmap - "
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ext4/resize.c
Changed
@@ -1440,8 +1440,6 @@
 	 * active. */
 	ext4_r_blocks_count_set(es, ext4_r_blocks_count(es) +
 				reserved_blocks);
-	ext4_superblock_csum_set(sb);
-	unlock_buffer(sbi->s_sbh);
 
 	/* Update the free space counts */
 	percpu_counter_add(&sbi->s_freeclusters_counter,
@@ -1469,6 +1467,8 @@
 	ext4_calculate_overhead(sb);
 	es->s_overhead_clusters = cpu_to_le32(sbi->s_overhead);
+	ext4_superblock_csum_set(sb);
+	unlock_buffer(sbi->s_sbh);
 
 	if (test_opt(sb, DEBUG))
 		printk(KERN_DEBUG "EXT4-fs: added group %u:"
 		       "%llu blocks(%llu free %llu reserved)\n", flex_gd->count,
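This hunk is an ordering fix rather than a feature: s_overhead_clusters was being updated after the superblock checksum had been computed and the buffer unlocked, so the checksum written to disk did not cover the final field value. Moving ext4_superblock_csum_set() and unlock_buffer() to the end makes the checksum the last write under the buffer lock. A generic sketch of the invariant with hypothetical names:

#include <linux/buffer_head.h>
#include <linux/types.h>

/* Hypothetical block image with a checksum over its other fields. */
struct sb_image {
	u64 blocks;
	u32 overhead;
	u32 csum;
};

static u32 compute_csum(const struct sb_image *sb)
{
	return (u32)sb->blocks ^ sb->overhead;	/* stand-in for crc32c */
}

static void update_sb(struct buffer_head *bh, struct sb_image *sb,
		      u64 blocks, u32 overhead)
{
	lock_buffer(bh);
	sb->blocks = blocks;
	sb->overhead = overhead;	/* previously landed after the csum */
	sb->csum = compute_csum(sb);	/* last write while still locked */
	unlock_buffer(bh);
	mark_buffer_dirty(bh);
}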
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/fuse/virtio_fs.c
Changed
@@ -894,7 +894,7 @@
 	return 0;
 
 out_vqs:
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	virtio_fs_cleanup_vqs(vdev, fs);
 	kfree(fs->vqs);
 
@@ -926,7 +926,7 @@
 	list_del_init(&fs->list);
 	virtio_fs_stop_all_queues(fs);
 	virtio_fs_drain_all_queues_locked(fs);
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	virtio_fs_cleanup_vqs(vdev, fs);
 
 	vdev->priv = NULL;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/gfs2/meta_io.c
Changed
@@ -521,8 +521,7 @@
 	if (buffer_uptodate(first_bh))
 		goto out;
 
-	if (!buffer_locked(first_bh))
-		ll_rw_block(REQ_OP_READ, REQ_META | REQ_PRIO, 1, &first_bh);
+	bh_read_nowait(first_bh, REQ_META | REQ_PRIO);
 
 	dblock++;
 	extlen--;
@@ -530,10 +529,7 @@
 	while (extlen) {
 		bh = gfs2_getbuf(gl, dblock, CREATE);
 
-		if (!buffer_uptodate(bh) && !buffer_locked(bh))
-			ll_rw_block(REQ_OP_READ,
-				    REQ_RAHEAD | REQ_META | REQ_PRIO,
-				    1, &bh);
+		bh_readahead(bh, REQ_RAHEAD | REQ_META | REQ_PRIO);
 		brelse(bh);
 		dblock++;
 		extlen--;
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/gfs2/quota.c
Changed
@@ -741,12 +741,8 @@ } if (PageUptodate(page)) set_buffer_uptodate(bh); - if (!buffer_uptodate(bh)) { - ll_rw_block(REQ_OP_READ, REQ_META | REQ_PRIO, 1, &bh); - wait_on_buffer(bh); - if (!buffer_uptodate(bh)) - goto unlock_out; - } + if (bh_read(bh, REQ_META | REQ_PRIO) < 0) + goto unlock_out; if (gfs2_is_jdata(ip)) gfs2_trans_add_data(ip->i_gl, bh); else
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/io_uring.c
Changed
@@ -935,7 +935,7 @@ .needs_file = 1, .hash_reg_file = 1, .unbound_nonreg_file = 1, - .work_flags = IO_WQ_WORK_BLKCG, + .work_flags = IO_WQ_WORK_BLKCG | IO_WQ_WORK_FILES, }, [IORING_OP_PROVIDE_BUFFERS] = {}, [IORING_OP_REMOVE_BUFFERS] = {}, @@ -5233,6 +5233,11 @@ struct io_ring_ctx *ctx = req->ctx; bool cancel = false; + if (req->file->f_op->may_pollfree) { + spin_lock_irq(&ctx->completion_lock); + return -EOPNOTSUPP; + } + INIT_HLIST_NODE(&req->hash_node); io_init_poll_iocb(poll, mask, wake_func); poll->file = req->file; @@ -9076,7 +9081,7 @@ if (unlikely(ctx->sqo_dead)) { ret = -EOWNERDEAD; - goto out; + break; } if (!io_sqring_full(ctx)) @@ -9086,7 +9091,6 @@ } while (!signal_pending(current)); finish_wait(&ctx->sqo_sq_wait, &wait); -out: return ret; }
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/iomap/buffered-io.c
Changed
@@ -1456,13 +1456,6 @@ goto redirty; /* - * Given that we do not allow direct reclaim to call us, we should - * never be called in a recursive filesystem reclaim context. - */ - if (WARN_ON_ONCE(current->flags & PF_MEMALLOC_NOFS)) - goto redirty; - - /* * Is this page beyond the end of the file? * * The page index is less than the end_index, adjust the end_offset
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/isofs/compress.c
Changed
@@ -82,7 +82,7 @@ return 0; } haveblocks = isofs_get_blocks(inode, blocknum, bhs, needblocks); - ll_rw_block(REQ_OP_READ, 0, haveblocks, bhs); + bh_read_batch(haveblocks, bhs); curbh = 0; curpage = 0;
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/jbd2/journal.c
Changed
@@ -1793,19 +1793,16 @@ { struct buffer_head *bh; journal_superblock_t *sb; - int err = -EIO; + int err; bh = journal->j_sb_buffer; J_ASSERT(bh != NULL); - if (!buffer_uptodate(bh)) { - ll_rw_block(REQ_OP_READ, 0, 1, &bh); - wait_on_buffer(bh); - if (!buffer_uptodate(bh)) { - printk(KERN_ERR - "JBD2: IO error reading journal superblock\n"); - goto out; - } + err = bh_read(bh, 0); + if (err < 0) { + printk(KERN_ERR + "JBD2: IO error reading journal superblock\n"); + goto out; } if (buffer_verified(bh))
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/jbd2/recovery.c
Changed
@@ -100,7 +100,7 @@ if (!buffer_uptodate(bh) && !buffer_locked(bh)) { bufs[nbufs++] = bh; if (nbufs == MAXBUF) { - ll_rw_block(REQ_OP_READ, 0, nbufs, bufs); + bh_readahead_batch(nbufs, bufs, 0); journal_brelse_array(bufs, nbufs); nbufs = 0; } @@ -109,7 +109,7 @@ } if (nbufs) - ll_rw_block(REQ_OP_READ, 0, nbufs, bufs); + bh_readahead_batch(nbufs, bufs, 0); err = 0; failed: @@ -152,9 +152,14 @@ return -ENOMEM; if (!buffer_uptodate(bh)) { - /* If this is a brand new buffer, start readahead. - Otherwise, we assume we are already reading it. */ - if (!buffer_req(bh)) + /* + * If this is a brand new buffer, start readahead. + * Otherwise, we assume we are already reading it. + */ + bool need_readahead = !buffer_req(bh); + + bh_read_nowait(bh, 0); + if (need_readahead) do_readahead(journal, offset); wait_on_buffer(bh); } @@ -687,7 +692,6 @@ mark_buffer_dirty(nbh); BUFFER_TRACE(nbh, "marking uptodate"); ++info->nr_replays; - /* ll_rw_block(WRITE, 1, &nbh); */ unlock_buffer(nbh); brelse(obh); brelse(nbh);
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ksmbd/auth.c
Changed
@@ -321,7 +321,8 @@ dn_off = le32_to_cpu(authblob->DomainName.BufferOffset); dn_len = le16_to_cpu(authblob->DomainName.Length); - if (blob_len < (u64)dn_off + dn_len || blob_len < (u64)nt_off + nt_len) + if (blob_len < (u64)dn_off + dn_len || blob_len < (u64)nt_off + nt_len || + nt_len < CIFS_ENCPWD_SIZE) return -EINVAL; /* TODO : use domain name that imported from configuration file */
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ksmbd/smb2misc.c
Changed
@@ -132,8 +132,11 @@ *len = le16_to_cpu(((struct smb2_read_req *)hdr)->ReadChannelInfoLength); break; case SMB2_WRITE: - if (((struct smb2_write_req *)hdr)->DataOffset) { - *off = le16_to_cpu(((struct smb2_write_req *)hdr)->DataOffset); + if (((struct smb2_write_req *)hdr)->DataOffset || + ((struct smb2_write_req *)hdr)->Length) { + *off = max_t(unsigned int, + le16_to_cpu(((struct smb2_write_req *)hdr)->DataOffset), + offsetof(struct smb2_write_req, Buffer)); *len = le32_to_cpu(((struct smb2_write_req *)hdr)->Length); break; }
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ksmbd/smb2pdu.c
Changed
@@ -539,9 +539,10 @@ struct smb2_query_info_req *req; req = smb2_get_msg(work->request_buf); - if (req->InfoType == SMB2_O_INFO_FILE && - (req->FileInfoClass == FILE_FULL_EA_INFORMATION || - req->FileInfoClass == FILE_ALL_INFORMATION)) + if ((req->InfoType == SMB2_O_INFO_FILE && + (req->FileInfoClass == FILE_FULL_EA_INFORMATION || + req->FileInfoClass == FILE_ALL_INFORMATION)) || + req->InfoType == SMB2_O_INFO_SECURITY) sz = large_sz; } @@ -2972,7 +2973,7 @@ if (!pntsd) goto err_out; - rc = build_sec_desc(pntsd, NULL, + rc = build_sec_desc(pntsd, NULL, 0, OWNER_SECINFO | GROUP_SECINFO | DACL_SECINFO, @@ -3807,6 +3808,15 @@ return 0; } +static int smb2_resp_buf_len(struct ksmbd_work *work, unsigned short hdr2_len) +{ + int free_len; + + free_len = (int)(work->response_sz - + (get_rfc1002_len(work->response_buf) + 4)) - hdr2_len; + return free_len; +} + static int smb2_calc_max_out_buf_len(struct ksmbd_work *work, unsigned short hdr2_len, unsigned int out_buf_len) @@ -3816,9 +3826,7 @@ if (out_buf_len > work->conn->vals->max_trans_size) return -EINVAL; - free_len = (int)(work->response_sz - - (get_rfc1002_len(work->response_buf) + 4)) - - hdr2_len; + free_len = smb2_resp_buf_len(work, hdr2_len); if (free_len < 0) return -EINVAL; @@ -5074,10 +5082,10 @@ struct smb_ntsd *pntsd = (struct smb_ntsd *)rsp->Buffer, *ppntsd = NULL; struct smb_fattr fattr = {{0}}; struct inode *inode; - __u32 secdesclen; + __u32 secdesclen = 0; unsigned int id = KSMBD_NO_FID, pid = KSMBD_NO_FID; int addition_info = le32_to_cpu(req->AdditionalInformation); - int rc; + int rc = 0, ppntsd_size = 0; if (addition_info & ~(OWNER_SECINFO | GROUP_SECINFO | DACL_SECINFO | PROTECTED_DACL_SECINFO | @@ -5122,9 +5130,14 @@ if (test_share_config_flag(work->tcon->share_conf, KSMBD_SHARE_FLAG_ACL_XATTR)) - ksmbd_vfs_get_sd_xattr(work->conn, fp->filp->f_path.dentry, &ppntsd); - - rc = build_sec_desc(pntsd, ppntsd, addition_info, &secdesclen, &fattr); + ppntsd_size = ksmbd_vfs_get_sd_xattr(work->conn, + fp->filp->f_path.dentry, + &ppntsd); + + /* Check if sd buffer size exceeds response buffer size */ + if (smb2_resp_buf_len(work, 8) > ppntsd_size) + rc = build_sec_desc(pntsd, ppntsd, ppntsd_size, + addition_info, &secdesclen, &fattr); posix_acl_release(fattr.cf_acls); posix_acl_release(fattr.cf_dacls); kfree(ppntsd); @@ -6315,23 +6328,18 @@ length = le32_to_cpu(req->Length); id = le64_to_cpu(req->VolatileFileId); - if (le16_to_cpu(req->DataOffset) == - offsetof(struct smb2_write_req, Buffer)) { - data_buf = (char *)&req->Buffer[0]; - } else { - if ((u64)le16_to_cpu(req->DataOffset) + length > - get_rfc1002_len(work->request_buf)) { - pr_err("invalid write data offset %u, smb_len %u\n", - le16_to_cpu(req->DataOffset), - get_rfc1002_len(work->request_buf)); - err = -EINVAL; - goto out; - } - - data_buf = (char *)(((char *)&req->hdr.ProtocolId) + - le16_to_cpu(req->DataOffset)); + if ((u64)le16_to_cpu(req->DataOffset) + length > + get_rfc1002_len(work->request_buf)) { + pr_err("invalid write data offset %u, smb_len %u\n", + le16_to_cpu(req->DataOffset), + get_rfc1002_len(work->request_buf)); + err = -EINVAL; + goto out; } + data_buf = (char *)(((char *)&req->hdr.ProtocolId) + + le16_to_cpu(req->DataOffset)); + rpc_resp = ksmbd_rpc_write(work->sess, id, data_buf, length); if (rpc_resp) { if (rpc_resp->flags == KSMBD_RPC_ENOTIMPLEMENTED) { @@ -6477,23 +6485,15 @@ if (req->Channel != SMB2_CHANNEL_RDMA_V1 && req->Channel != SMB2_CHANNEL_RDMA_V1_INVALIDATE) { - if (le16_to_cpu(req->DataOffset) == + if (le16_to_cpu(req->DataOffset) < 
offsetof(struct smb2_write_req, Buffer)) { - data_buf = (char *)&req->Buffer[0]; - } else { - if ((u64)le16_to_cpu(req->DataOffset) + length > - get_rfc1002_len(work->request_buf)) { - pr_err("invalid write data offset %u, smb_len %u\n", - le16_to_cpu(req->DataOffset), - get_rfc1002_len(work->request_buf)); - err = -EINVAL; - goto out; - } - - data_buf = (char *)(((char *)&req->hdr.ProtocolId) + - le16_to_cpu(req->DataOffset)); + err = -EINVAL; + goto out; } + data_buf = (char *)(((char *)&req->hdr.ProtocolId) + + le16_to_cpu(req->DataOffset)); + ksmbd_debug(SMB, "flags %u\n", le32_to_cpu(req->Flags)); if (le32_to_cpu(req->Flags) & SMB2_WRITEFLAG_WRITE_THROUGH) writethrough = true;
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ksmbd/smbacl.c
Changed
@@ -688,6 +688,7 @@ } static void set_ntacl_dacl(struct smb_acl *pndacl, struct smb_acl *nt_dacl, + unsigned int aces_size, const struct smb_sid *pownersid, const struct smb_sid *pgrpsid, struct smb_fattr *fattr) @@ -701,9 +702,19 @@ if (nt_num_aces) { ntace = (struct smb_ace *)((char *)nt_dacl + sizeof(struct smb_acl)); for (i = 0; i < nt_num_aces; i++) { - memcpy((char *)pndace + size, ntace, le16_to_cpu(ntace->size)); - size += le16_to_cpu(ntace->size); - ntace = (struct smb_ace *)((char *)ntace + le16_to_cpu(ntace->size)); + unsigned short nt_ace_size; + + if (offsetof(struct smb_ace, access_req) > aces_size) + break; + + nt_ace_size = le16_to_cpu(ntace->size); + if (nt_ace_size > aces_size) + break; + + memcpy((char *)pndace + size, ntace, nt_ace_size); + size += nt_ace_size; + aces_size -= nt_ace_size; + ntace = (struct smb_ace *)((char *)ntace + nt_ace_size); num_aces++; } } @@ -872,7 +883,7 @@ /* Convert permission bits from mode to equivalent CIFS ACL */ int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *ppntsd, - int addition_info, __u32 *secdesclen, + int ppntsd_size, int addition_info, __u32 *secdesclen, struct smb_fattr *fattr) { int rc = 0; @@ -932,15 +943,25 @@ if (!ppntsd) { set_mode_dacl(dacl_ptr, fattr); - } else if (!ppntsd->dacloffset) { - goto out; } else { struct smb_acl *ppdacl_ptr; - - ppdacl_ptr = (struct smb_acl *)((char *)ppntsd + - le32_to_cpu(ppntsd->dacloffset)); - set_ntacl_dacl(dacl_ptr, ppdacl_ptr, nowner_sid_ptr, - ngroup_sid_ptr, fattr); + unsigned int dacl_offset = le32_to_cpu(ppntsd->dacloffset); + int ppdacl_size, ntacl_size = ppntsd_size - dacl_offset; + + if (!dacl_offset || + (dacl_offset + sizeof(struct smb_acl) > ppntsd_size)) + goto out; + + ppdacl_ptr = (struct smb_acl *)((char *)ppntsd + dacl_offset); + ppdacl_size = le16_to_cpu(ppdacl_ptr->size); + if (ppdacl_size > ntacl_size || + ppdacl_size < sizeof(struct smb_acl)) + goto out; + + set_ntacl_dacl(dacl_ptr, ppdacl_ptr, + ntacl_size - sizeof(struct smb_acl), + nowner_sid_ptr, ngroup_sid_ptr, + fattr); } pntsd->dacloffset = cpu_to_le32(offset); offset += le16_to_cpu(dacl_ptr->size); @@ -973,23 +994,31 @@ struct smb_ntsd *parent_pntsd = NULL; struct smb_sid owner_sid, group_sid; struct dentry *parent = path->dentry->d_parent; - int inherited_flags = 0, flags = 0, i, ace_cnt = 0, nt_size = 0; - int rc = 0, num_aces, dacloffset, pntsd_type, acl_len; + int inherited_flags = 0, flags = 0, i, ace_cnt = 0, nt_size = 0, pdacl_size; + int rc = 0, num_aces, dacloffset, pntsd_type, pntsd_size, acl_len, aces_size; char *aces_base; bool is_dir = S_ISDIR(d_inode(path->dentry)->i_mode); - acl_len = ksmbd_vfs_get_sd_xattr(conn, parent, &parent_pntsd); - if (acl_len <= 0) + pntsd_size = ksmbd_vfs_get_sd_xattr(conn, + parent, &parent_pntsd); + if (pntsd_size <= 0) return -ENOENT; dacloffset = le32_to_cpu(parent_pntsd->dacloffset); - if (!dacloffset) { + if (!dacloffset || (dacloffset + sizeof(struct smb_acl) > pntsd_size)) { rc = -EINVAL; goto free_parent_pntsd; } parent_pdacl = (struct smb_acl *)((char *)parent_pntsd + dacloffset); + acl_len = pntsd_size - dacloffset; num_aces = le32_to_cpu(parent_pdacl->num_aces); pntsd_type = le16_to_cpu(parent_pntsd->type); + pdacl_size = le16_to_cpu(parent_pdacl->size); + + if (pdacl_size > acl_len || pdacl_size < sizeof(struct smb_acl)) { + rc = -EINVAL; + goto free_parent_pntsd; + } aces_base = kmalloc(sizeof(struct smb_ace) * num_aces * 2, GFP_KERNEL); if (!aces_base) { @@ -1000,11 +1029,23 @@ aces = (struct smb_ace *)aces_base; parent_aces = (struct smb_ace 
*)((char *)parent_pdacl + sizeof(struct smb_acl)); + aces_size = acl_len - sizeof(struct smb_acl); if (pntsd_type & DACL_AUTO_INHERITED) inherited_flags = INHERITED_ACE; for (i = 0; i < num_aces; i++) { + int pace_size; + + if (offsetof(struct smb_ace, access_req) > aces_size) + break; + + pace_size = le16_to_cpu(parent_aces->size); + if (pace_size > aces_size) + break; + + aces_size -= pace_size; + flags = parent_aces->flags; if (!smb_inherit_flags(flags, is_dir)) goto pass; @@ -1049,8 +1090,7 @@ aces = (struct smb_ace *)((char *)aces + le16_to_cpu(aces->size)); ace_cnt++; pass: - parent_aces = - (struct smb_ace *)((char *)parent_aces + le16_to_cpu(parent_aces->size)); + parent_aces = (struct smb_ace *)((char *)parent_aces + pace_size); } if (nt_size > 0) { @@ -1143,7 +1183,7 @@ struct smb_ntsd *pntsd = NULL; struct smb_acl *pdacl; struct posix_acl *posix_acls; - int rc = 0, acl_size; + int rc = 0, pntsd_size, acl_size, aces_size, pdacl_size, dacl_offset; struct smb_sid sid; int granted = le32_to_cpu(*pdaccess & ~FILE_MAXIMAL_ACCESS_LE); struct smb_ace *ace; @@ -1152,36 +1192,33 @@ struct smb_ace *others_ace = NULL; struct posix_acl_entry *pa_entry; unsigned int sid_type = SIDOWNER; - char *end_of_acl; + unsigned short ace_size; ksmbd_debug(SMB, "check permission using windows acl\n"); - acl_size = ksmbd_vfs_get_sd_xattr(conn, path->dentry, &pntsd); - if (acl_size <= 0 || !pntsd || !pntsd->dacloffset) { - kfree(pntsd); - return 0; - } + pntsd_size = ksmbd_vfs_get_sd_xattr(conn, + path->dentry, &pntsd); + if (pntsd_size <= 0 || !pntsd) + goto err_out; + + dacl_offset = le32_to_cpu(pntsd->dacloffset); + if (!dacl_offset || + (dacl_offset + sizeof(struct smb_acl) > pntsd_size)) + goto err_out; pdacl = (struct smb_acl *)((char *)pntsd + le32_to_cpu(pntsd->dacloffset)); - end_of_acl = ((char *)pntsd) + acl_size; - if (end_of_acl <= (char *)pdacl) { - kfree(pntsd); - return 0; - } + acl_size = pntsd_size - dacl_offset; + pdacl_size = le16_to_cpu(pdacl->size); - if (end_of_acl < (char *)pdacl + le16_to_cpu(pdacl->size) || - le16_to_cpu(pdacl->size) < sizeof(struct smb_acl)) { - kfree(pntsd); - return 0; - } + if (pdacl_size > acl_size || pdacl_size < sizeof(struct smb_acl)) + goto err_out; if (!pdacl->num_aces) { - if (!(le16_to_cpu(pdacl->size) - sizeof(struct smb_acl)) && + if (!(pdacl_size - sizeof(struct smb_acl)) && *pdaccess & ~(FILE_READ_CONTROL_LE | FILE_WRITE_DAC_LE)) { rc = -EACCES; goto err_out; }
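The recurring idiom in these smbacl.c hunks validates each ACE twice before trusting it: the fixed-size head must fit in the remaining buffer before the ACE's size field is read, and the self-described size must also fit before the walk advances. Condensed into one hedged sketch (walk_aces() is an illustrative name, not a function from the patch):

static void walk_aces(struct smb_ace *ace, int num_aces, int aces_size)
{
	int i;

	for (i = 0; i < num_aces; i++) {
		unsigned short ace_size;

		/* the head must fit before ace->size can be read */
		if (offsetof(struct smb_ace, access_req) > aces_size)
			break;

		ace_size = le16_to_cpu(ace->size);
		/* the self-described length must fit as well */
		if (ace_size > aces_size)
			break;

		/* ... consume the ACE ... */

		aces_size -= ace_size;
		ace = (struct smb_ace *)((char *)ace + ace_size);
	}
}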
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ksmbd/smbacl.h
Changed
@@ -192,7 +192,7 @@ int parse_sec_desc(struct smb_ntsd *pntsd, int acl_len, struct smb_fattr *fattr); int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *ppntsd, - int addition_info, __u32 *secdesclen, + int ppntsd_size, int addition_info, __u32 *secdesclen, struct smb_fattr *fattr); int init_acl_state(struct posix_acl_state *state, int cnt); void free_acl_state(struct posix_acl_state *state);
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ksmbd/vfs.c
Changed
@@ -1495,6 +1495,11 @@ } *pntsd = acl.sd_buf; + if (acl.sd_size < sizeof(struct smb_ntsd)) { + pr_err("sd size is invalid\n"); + goto out_free; + } + (*pntsd)->osidoffset = cpu_to_le32(le32_to_cpu((*pntsd)->osidoffset) - NDR_NTSD_OFFSETOF); (*pntsd)->gsidoffset =
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ntfs3/attrib.c
Changed
@@ -1949,7 +1949,7 @@ return -ENOENT; if (!attr_b->non_res) { - u32 data_size = le32_to_cpu(attr->res.data_size); + u32 data_size = le32_to_cpu(attr_b->res.data_size); u32 from, to; if (vbo > data_size)
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ntfs3/inode.c
Changed
@@ -629,12 +629,9 @@ bh->b_size = block_size; off = vbo & (PAGE_SIZE - 1); set_bh_page(bh, page, off); - ll_rw_block(REQ_OP_READ, 0, 1, &bh); - wait_on_buffer(bh); - if (!buffer_uptodate(bh)) { - err = -EIO; + err = bh_read(bh, 0); + if (err < 0) goto out; - } zero_user_segment(page, off + voff, off + block_size); } }
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ocfs2/aops.c
Changed
@@ -640,7 +640,7 @@ !buffer_new(bh) && ocfs2_should_read_blk(inode, page, block_start) && (block_start < from || block_end > to)) { - ll_rw_block(REQ_OP_READ, 0, 1, &bh); + bh_read_nowait(bh, 0); *wait_bh++=bh; }
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ocfs2/super.c
Changed
@@ -1772,9 +1772,7 @@ if (!buffer_dirty(*bh)) clear_buffer_uptodate(*bh); unlock_buffer(*bh); - ll_rw_block(REQ_OP_READ, 0, 1, bh); - wait_on_buffer(*bh); - if (!buffer_uptodate(*bh)) { + if (bh_read(*bh, 0) < 0) { mlog_errno(-EIO); brelse(*bh); *bh = NULL;
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/reiserfs/journal.c
Changed
@@ -870,7 +870,7 @@ */ if (buffer_dirty(bh) && unlikely(bh->b_page->mapping == NULL)) { spin_unlock(lock); - ll_rw_block(REQ_OP_WRITE, 0, 1, &bh); + write_dirty_buffer(bh, 0); spin_lock(lock); } put_bh(bh); @@ -1054,7 +1054,7 @@ if (tbh) { if (buffer_dirty(tbh)) { depth = reiserfs_write_unlock_nested(s); - ll_rw_block(REQ_OP_WRITE, 0, 1, &tbh); + write_dirty_buffer(tbh, 0); reiserfs_write_lock_nested(s, depth); } put_bh(tbh) ; @@ -2239,7 +2239,7 @@ } } /* read in the log blocks, memcpy to the corresponding real block */ - ll_rw_block(REQ_OP_READ, 0, get_desc_trans_len(desc), log_blocks); + bh_read_batch(get_desc_trans_len(desc), log_blocks); for (i = 0; i < get_desc_trans_len(desc); i++) { wait_on_buffer(log_blocks[i]); @@ -2341,10 +2341,11 @@ } else bhlist[j++] = bh; } - ll_rw_block(REQ_OP_READ, 0, j, bhlist); + bh = bhlist[0]; + bh_read_nowait(bh, 0); + bh_readahead_batch(j - 1, &bhlist[1], 0); for (i = 1; i < j; i++) brelse(bhlist[i]); - bh = bhlist[0]; wait_on_buffer(bh); if (buffer_uptodate(bh)) return bh;
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/reiserfs/stree.c
Changed
@@ -579,7 +579,7 @@ if (!buffer_uptodate(bh[j])) { if (depth == -1) depth = reiserfs_write_unlock_nested(s); - ll_rw_block(REQ_OP_READ, REQ_RAHEAD, 1, bh + j); + bh_readahead(bh[j], REQ_RAHEAD); } brelse(bh[j]); } @@ -685,7 +685,7 @@ if (!buffer_uptodate(bh) && depth == -1) depth = reiserfs_write_unlock_nested(sb); - ll_rw_block(REQ_OP_READ, 0, 1, &bh); + bh_read_nowait(bh, 0); wait_on_buffer(bh); if (depth != -1)
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/reiserfs/super.c
Changed
@@ -1708,9 +1708,7 @@ /* after journal replay, reread all bitmap and super blocks */ static int reread_meta_blocks(struct super_block *s) { - ll_rw_block(REQ_OP_READ, 0, 1, &SB_BUFFER_WITH_SB(s)); - wait_on_buffer(SB_BUFFER_WITH_SB(s)); - if (!buffer_uptodate(SB_BUFFER_WITH_SB(s))) { + if (bh_read(SB_BUFFER_WITH_SB(s), 0) < 0) { reiserfs_warning(s, "reiserfs-2504", "error reading the super"); return 1; }
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/resctrlfs.c
Changed
@@ -211,8 +211,7 @@ if (atomic_dec_and_test(&rdtgrp->waitcount) && (rdtgrp->flags & RDT_DELETED)) { kernfs_unbreak_active_protection(kn); - kernfs_put(rdtgrp->kn); - kfree(rdtgrp); + rdtgroup_remove(rdtgrp); } else { kernfs_unbreak_active_protection(kn); } @@ -272,12 +271,6 @@ if (dest_kn) *dest_kn = kn; - /* - * This extra ref will be put in kernfs_remove() and guarantees - * that @rdtgrp->kn is always accessible. - */ - kernfs_get(kn); - ret = resctrl_group_kn_set_ugid(kn); if (ret) goto out_destroy; @@ -399,8 +392,6 @@ if (ret) goto out_info; - kernfs_get(kn_mongrp); - ret = mkdir_mondata_all_prepare(&resctrl_group_default); if (ret < 0) goto out_mongrp; @@ -410,7 +401,6 @@ if (ret) goto out_mongrp; - kernfs_get(kn_mondata); resctrl_group_default.mon.mon_data_kn = kn_mondata; } @@ -495,7 +485,7 @@ /* rmid may not be used */ rmid_free(sentry->mon.rmid); list_del(&sentry->mon.crdtgrp_list); - kfree(sentry); + rdtgroup_remove(sentry); } } @@ -529,7 +519,7 @@ kernfs_remove(rdtgrp->kn); list_del(&rdtgrp->resctrl_group_list); - kfree(rdtgrp); + rdtgroup_remove(rdtgrp); } /* Notify online CPUs to update per cpu storage and PQR_ASSOC MSR */ update_closid_rmid(cpu_online_mask, &resctrl_group_default); @@ -622,12 +612,11 @@ case Opt_caPrio: ctx->enable_caPrio = true; return 0; + default: + break; + } - return 0; -} - -return -EINVAL; - + return -EINVAL; } static void resctrl_fs_context_free(struct fs_context *fc) @@ -776,7 +765,7 @@ * kernfs_remove() will drop the reference count on "kn" which * will free it. But we still need it to stick around for the * resctrl_group_kn_unlock(kn} call below. Take one extra reference - * here, which will be dropped inside resctrl_group_kn_unlock(). + * here, which will be dropped inside rdtgroup_remove(). */ kernfs_get(kn); @@ -816,6 +805,7 @@ out_prepare_clean: mkdir_mondata_all_prepare_clean(rdtgrp); out_destroy: + kernfs_put(rdtgrp->kn); kernfs_remove(rdtgrp->kn); out_free_rmid: rmid_free(rdtgrp->mon.rmid); @@ -832,7 +822,7 @@ static void mkdir_resctrl_prepare_clean(struct resctrl_group *rgrp) { kernfs_remove(rgrp->kn); - kfree(rgrp); + rdtgroup_remove(rgrp); } /* @@ -997,11 +987,6 @@ { resctrl_group_rm_mon(rdtgrp, tmpmask); - /* - * one extra hold on this, will drop when we kfree(rdtgrp) - * in resctrl_group_kn_unlock() - */ - kernfs_get(kn); kernfs_remove(rdtgrp->kn); return 0; @@ -1050,11 +1035,6 @@ { resctrl_group_rm_ctrl(rdtgrp, tmpmask); - /* - * one extra hold on this, will drop when we kfree(rdtgrp) - * in resctrl_group_kn_unlock() - */ - kernfs_get(kn); kernfs_remove(rdtgrp->kn); return 0;
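The scattered kernfs_get()/kernfs_put()/kfree() pairs are folded into one teardown helper whose body is not part of the shown hunks. Assuming the backport follows the upstream x86 resctrl helper of the same name, it amounts to:

/* sketch, assuming parity with the upstream rdtgroup_remove() */
static void rdtgroup_remove(struct resctrl_group *rdtgrp)
{
	kernfs_put(rdtgrp->kn);
	kfree(rdtgrp);
}

Dropping the kn reference in the same place the group is freed is what makes the extra kernfs_get() calls, and their hard-to-pair puts, removable above.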
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/signalfd.c
Changed
@@ -248,6 +248,7 @@ .poll = signalfd_poll, .read = signalfd_read, .llseek = noop_llseek, + .may_pollfree = true, }; static int do_signalfd4(int ufd, sigset_t *mask, int flags)
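This one-line opt-in works together with the io_uring hunk earlier in this revision, which refuses to arm poll on such files, and with the new bool placed in a file_operations KABI slot in the include/linux/fs.h hunk below. A hypothetical driver with the same hazard (a wait queue that can be freed while a poll request is still armed) would opt in the same way; example_poll and example_fops are illustrative names:

static __poll_t example_poll(struct file *file, poll_table *wait);

static const struct file_operations example_fops = {
	.poll		= example_poll,
	.llseek		= noop_llseek,
	.may_pollfree	= true,	/* io_uring returns -EOPNOTSUPP instead of arming poll */
};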
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/udf/dir.c
Changed
@@ -131,7 +131,7 @@ brelse(tmp); } if (num) { - ll_rw_block(REQ_OP_READ, REQ_RAHEAD, num, bha); + bh_readahead_batch(num, bha, REQ_RAHEAD); for (i = 0; i < num; i++) brelse(bha[i]); }
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/udf/directory.c
Changed
@@ -89,7 +89,7 @@ brelse(tmp); } if (num) { - ll_rw_block(REQ_OP_READ, REQ_RAHEAD, num, bha); + bh_readahead_batch(num, bha, REQ_RAHEAD); for (i = 0; i < num; i++) brelse(bha[i]); }
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/udf/inode.c
Changed
@@ -1210,13 +1210,7 @@ if (!bh) return NULL; - if (buffer_uptodate(bh)) - return bh; - - ll_rw_block(REQ_OP_READ, 0, 1, &bh); - - wait_on_buffer(bh); - if (buffer_uptodate(bh)) + if (bh_read(bh, 0) >= 0) return bh; brelse(bh);
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ufs/balloc.c
Changed
@@ -295,14 +295,10 @@ if (!buffer_mapped(bh)) map_bh(bh, inode->i_sb, oldb + pos); - if (!buffer_uptodate(bh)) { - ll_rw_block(REQ_OP_READ, 0, 1, &bh); - wait_on_buffer(bh); - if (!buffer_uptodate(bh)) { - ufs_error(inode->i_sb, __func__, - "read of block failed\n"); - break; - } + if (bh_read(bh, 0) < 0) { + ufs_error(inode->i_sb, __func__, + "read of block failed\n"); + break; } UFSD(" change from %llu to %llu, pos %u\n",
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/libxfs/xfs_btree.c
Changed
@@ -2811,7 +2811,7 @@ struct xfs_btree_split_args *args = container_of(work, struct xfs_btree_split_args, work); unsigned long pflags; - unsigned long new_pflags = PF_MEMALLOC_NOFS; + unsigned long new_pflags = 0; /* * we are in a transaction context here, but may also be doing work @@ -2823,12 +2823,20 @@ new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD; current_set_flags_nested(&pflags, new_pflags); + xfs_trans_set_context(args->cur->bc_tp); args->result = __xfs_btree_split(args->cur, args->level, args->ptrp, args->key, args->curp, args->stat); - complete(args->done); + xfs_trans_clear_context(args->cur->bc_tp); current_restore_flags_nested(&pflags, new_pflags); + + /* + * Do not access args after complete() has run here. We don't own args + * and the owner may run and free args before we return here. + */ + complete(args->done); + } /*
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/libxfs/xfs_btree.h
Changed
@@ -523,7 +523,6 @@ struct xfs_buf *bp; block = xfs_btree_get_block(cur, level, &bp); - ASSERT(block && xfs_btree_check_block(cur, block, level, bp) == 0); if (cur->bc_flags & XFS_BTREE_LONG_PTRS) return block->bb_u.l.bb_rightsib == cpu_to_be64(NULLFSBLOCK);
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_aops.c
Changed
@@ -98,7 +98,7 @@ * thus we need to mark ourselves as being in a transaction manually. * Similarly for freeze protection. */ - current_set_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); + xfs_trans_set_context(tp); __sb_writers_acquired(VFS_I(ip)->i_sb, SB_FREEZE_FS); /* we abort the update if there was an IO error */ @@ -538,6 +538,12 @@ { struct xfs_writepage_ctx wpc = { }; + if (WARN_ON_ONCE(current->journal_info)) { + redirty_page_for_writepage(wbc, page); + unlock_page(page); + return 0; + } + return iomap_writepage(page, wbc, &wpc.ctx, &xfs_writeback_ops); } @@ -548,6 +554,13 @@ { struct xfs_writepage_ctx wpc = { }; + /* + * Writing back data in a transaction context can result in recursive + * transactions. This is bad, so issue a warning and get out of here. + */ + if (WARN_ON_ONCE(current->journal_info)) + return 0; + xfs_iflags_clear(XFS_I(mapping->host), XFS_ITRUNCATED); return iomap_writepages(mapping, wbc, &wpc.ctx, &xfs_writeback_ops); }
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_attr_inactive.c
Changed
@@ -158,6 +158,7 @@ } child_fsb = be32_to_cpu(ichdr.btree[0].before); xfs_trans_brelse(*trans, bp); /* no locks for later trans */ + bp = NULL; /* * If this is the node level just above the leaves, simply loop @@ -211,12 +212,8 @@ &child_bp); if (error) return error; - error = bp->b_error; - if (error) { - xfs_trans_brelse(*trans, child_bp); - return error; - } xfs_trans_binval(*trans, child_bp); + child_bp = NULL; /* * If we're not done, re-read the parent to get the next @@ -233,6 +230,7 @@ bp->b_addr); child_fsb = be32_to_cpu(phdr.btree[i + 1].before); xfs_trans_brelse(*trans, bp); + bp = NULL; } /* * Atomically commit the whole invalidate stuff.
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_buf_item.c
Changed
@@ -936,6 +936,8 @@ trace_xfs_buf_item_relse(bp, _RET_IP_); ASSERT(!test_bit(XFS_LI_IN_AIL, &bip->bli_item.li_flags)); + if (atomic_read(&bip->bli_refcount)) + return; bp->b_log_item = NULL; xfs_buf_rele(bp); xfs_buf_item_free(bip);
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_fsops.c
Changed
@@ -374,46 +374,36 @@ * If the request is larger than the current reservation, reserve the * blocks before we update the reserve counters. Sample m_fdblocks and * perform a partial reservation if the request exceeds free space. + * + * The code below estimates how many blocks it can request from + * fdblocks to stash in the reserve pool. This is a classic TOCTOU + * race since fdblocks updates are not always coordinated via + * m_sb_lock. Set the reserve size even if there's not enough free + * space to fill it because mod_fdblocks will refill an undersized + * reserve when it can. */ - error = -ENOSPC; - do { - free = percpu_counter_sum(&mp->m_fdblocks) - - mp->m_alloc_set_aside; - if (free <= 0) - break; - - delta = request - mp->m_resblks; - lcounter = free - delta; - if (lcounter < 0) - /* We can't satisfy the request, just get what we can */ - fdblks_delta = free; - else - fdblks_delta = delta; - + free = percpu_counter_sum(&mp->m_fdblocks) - + xfs_fdblocks_unavailable(mp); + delta = request - mp->m_resblks; + mp->m_resblks = request; + if (delta > 0 && free > 0) { /* * We'll either succeed in getting space from the free block - * count or we'll get an ENOSPC. If we get a ENOSPC, it means - * things changed while we were calculating fdblks_delta and so - * we should try again to see if there is anything left to - * reserve. + * count or we'll get an ENOSPC. Don't set the reserved flag + * here - we don't want to reserve the extra reserve blocks + * from the reserve. * - * Don't set the reserved flag here - we don't want to reserve - * the extra reserve blocks from the reserve..... + * The desired reserve size can change after we drop the lock. + * Use mod_fdblocks to put the space into the reserve or into + * fdblocks as appropriate. */ + fdblks_delta = min(free, delta); spin_unlock(&mp->m_sb_lock); error = xfs_mod_fdblocks(mp, -fdblks_delta, 0); + if (!error) + xfs_mod_fdblocks(mp, fdblks_delta, 0); spin_lock(&mp->m_sb_lock); - } while (error == -ENOSPC); - - /* - * Update the reserve counters if blocks have been successfully - * allocated. - */ - if (!error && fdblks_delta) { - mp->m_resblks += fdblks_delta; - mp->m_resblks_avail += fdblks_delta; } - out: if (outval) { outval->resblks = mp->m_resblks;
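The take-then-return pair is the subtle part of the new logic: because mp->m_resblks has already been raised, the second xfs_mod_fdblocks() call runs against an undersized reserve, so the counter code steers the returned blocks into the reserve pool rather than back into general free space. A worked example with assumed numbers: request = 8192, old m_resblks = 1024, free = 5000. Then delta = 7168 and fdblks_delta = min(5000, 7168) = 5000; the reserve size is recorded as 8192 immediately, 5000 blocks migrate from fdblocks into the reserve, and the remaining 2168 are topped up by later xfs_mod_fdblocks() calls as space is freed, exactly as the new comment describes.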
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_log.c
Changed
@@ -848,6 +848,24 @@ } /* + * Cycle all the iclogbuf locks to make sure all log IO completion + * is done before we tear down these buffers. + */ +static void +xlog_wait_iclog_completion(struct xlog *log) +{ + int i; + struct xlog_in_core *iclog = log->l_iclog; + + for (i = 0; i < log->l_iclog_bufs; i++) { + down(&iclog->ic_sema); + up(&iclog->ic_sema); + iclog = iclog->ic_next; + } +} + + +/* * Wait for the iclog and all prior iclogs to be written disk as required by the * log force state machine. Waiting on ic_force_wait ensures iclog completions * have been ordered and callbacks run before we are woken here, hence @@ -1034,6 +1052,13 @@ struct xfs_mount *mp) { xfs_log_quiesce(mp); + /* + * If shutdown has come from iclog IO context, the log + * cleaning will have been skipped and so we need to wait + * for the iclog to complete shutdown processing before we + * tear anything down. + */ + xlog_wait_iclog_completion(mp->m_log); xfs_trans_ail_destroy(mp); @@ -1942,17 +1967,6 @@ int i; /* - * Cycle all the iclogbuf locks to make sure all log IO completion - * is done before we tear down these buffers. - */ - iclog = log->l_iclog; - for (i = 0; i < log->l_iclog_bufs; i++) { - down(&iclog->ic_sema); - up(&iclog->ic_sema); - iclog = iclog->ic_next; - } - - /* * Destroy the CIL after waiting for iclog IO completion because an * iclog EIO error will try to shut down the log, which accesses the * CIL to wake up the waiters.
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_mount.h
Changed
@@ -467,6 +467,14 @@ */ #define XFS_FDBLOCKS_BATCH 1024 +/* Accessor added for 5.10.y backport */ +static inline uint64_t +xfs_fdblocks_unavailable( + struct xfs_mount *mp) +{ + return mp->m_alloc_set_aside; +} + extern int xfs_mod_fdblocks(struct xfs_mount *mp, int64_t delta, bool reserved); extern int xfs_mod_frextents(struct xfs_mount *mp, int64_t delta);
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_trans.c
Changed
@@ -72,6 +72,7 @@ xfs_extent_busy_clear(tp->t_mountp, &tp->t_busy, false); trace_xfs_trans_free(tp, _RET_IP_); + xfs_trans_clear_context(tp); if (!(tp->t_flags & XFS_TRANS_NO_WRITECOUNT)) sb_end_intwrite(tp->t_mountp->m_super); xfs_trans_free_dqinfo(tp); @@ -123,7 +124,8 @@ ntp->t_rtx_res = tp->t_rtx_res - tp->t_rtx_res_used; tp->t_rtx_res = tp->t_rtx_res_used; - ntp->t_pflags = tp->t_pflags; + + xfs_trans_switch_context(tp, ntp); /* move deferred ops over to the new tp */ xfs_defer_move(ntp, tp); @@ -157,9 +159,6 @@ int error = 0; bool rsvd = (tp->t_flags & XFS_TRANS_RESERVE) != 0; - /* Mark this thread as being in a transaction */ - current_set_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); - /* * Attempt to reserve the needed disk blocks by decrementing * the number needed from the number available. This will @@ -167,10 +166,8 @@ */ if (blocks > 0) { error = xfs_mod_fdblocks(mp, -((int64_t)blocks), rsvd); - if (error != 0) { - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); + if (error != 0) return -ENOSPC; - } tp->t_blk_res += blocks; } @@ -244,9 +241,6 @@ xfs_mod_fdblocks(mp, (int64_t)blocks, rsvd); tp->t_blk_res = 0; } - - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); - return error; } @@ -272,6 +266,7 @@ tp = kmem_cache_zalloc(xfs_trans_zone, GFP_KERNEL | __GFP_NOFAIL); if (!(flags & XFS_TRANS_NO_WRITECOUNT)) sb_start_intwrite(mp->m_super); + xfs_trans_set_context(tp); /* * Zero-reservation ("empty") transactions can't modify anything, so @@ -893,7 +888,6 @@ xlog_cil_commit(mp->m_log, tp, &commit_seq, regrant); - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); xfs_trans_free(tp); /* @@ -925,7 +919,6 @@ xfs_log_ticket_ungrant(mp->m_log, tp->t_ticket); tp->t_ticket = NULL; } - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); xfs_trans_free_items(tp, !!error); xfs_trans_free(tp); @@ -985,9 +978,6 @@ tp->t_ticket = NULL; } - /* mark this thread as no longer being in a transaction */ - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); - xfs_trans_free_items(tp, dirty); xfs_trans_free(tp); }
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_trans.h
Changed
@@ -266,4 +266,34 @@ struct xfs_dquot *gdqp, struct xfs_dquot *pdqp, bool force, struct xfs_trans **tpp); +static inline void +xfs_trans_set_context( + struct xfs_trans *tp) +{ + ASSERT(current->journal_info == NULL); + tp->t_pflags = memalloc_nofs_save(); + current->journal_info = tp; +} + +static inline void +xfs_trans_clear_context( + struct xfs_trans *tp) +{ + if (current->journal_info == tp) { + memalloc_nofs_restore(tp->t_pflags); + current->journal_info = NULL; + } +} + +static inline void +xfs_trans_switch_context( + struct xfs_trans *old_tp, + struct xfs_trans *new_tp) +{ + ASSERT(current->journal_info == old_tp); + new_tp->t_pflags = old_tp->t_pflags; + old_tp->t_pflags = 0; + current->journal_info = new_tp; +} + #endif /* __XFS_TRANS_H__ */
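Read together with the xfs_trans.c hunks above, the lifecycle is: xfs_trans_alloc() sets the context, xfs_trans_dup() switches it to the duplicate during a roll, and xfs_trans_free() clears it. Because current->journal_info now identifies the live transaction, the writeback paths patched in fs/xfs/xfs_aops.c can detect recursion directly instead of relying on PF_MEMALLOC_NOFS. A caller-side sketch (example_update() is illustrative; the alloc/commit calls are the stock 5.10 XFS API):

static int example_update(struct xfs_mount *mp)
{
	struct xfs_trans *tp;
	int error;

	/* xfs_trans_alloc() now calls xfs_trans_set_context(tp) */
	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_ichange, 0, 0, 0, &tp);
	if (error)
		return error;

	/* current->journal_info == tp here, and NOFS allocation is implied */

	/* commit frees tp, which clears the context again */
	return xfs_trans_commit(tp);
}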
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/acpi/actbl2.h
Changed
@@ -647,7 +647,7 @@ u8 reserved[3]; /* reserved - must be zero */ }; -/* 11: Generic interrupt - GICC (ACPI 5.0 + ACPI 6.0 + ACPI 6.3 changes) */ +/* 11: Generic interrupt - GICC (ACPI 5.0 + ACPI 6.0 + ACPI 6.3 + ACPI 6.5 changes) */ struct acpi_madt_generic_interrupt { struct acpi_subtable_header header; @@ -667,6 +667,7 @@ u8 efficiency_class; u8 reserved2[1]; u16 spe_interrupt; /* ACPI 6.3 */ + u16 trbe_interrupt; /* ACPI 6.5 */ }; /* Masks for Flags field above */
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/dt-bindings/clock/hi3516dv300-clock.h
Added
@@ -0,0 +1,101 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2016-2017 HiSilicon Technologies Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; either version 2 of the License, or (at your + * option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + * + */ + +#ifndef __DTS_HI3516DV300_CLOCK_H +#define __DTS_HI3516DV300_CLOCK_H + +/* clk in Hi3516CV500 CRG */ +/* fixed rate clocks */ +#define HI3516DV300_FIXED_3M 1 +#define HI3516DV300_FIXED_6M 2 +#define HI3516DV300_FIXED_12M 3 +#define HI3516DV300_FIXED_24M 4 +#define HI3516DV300_FIXED_50M 5 +#define HI3516DV300_FIXED_83P3M 6 +#define HI3516DV300_FIXED_100M 7 +#define HI3516DV300_FIXED_125M 8 +#define HI3516DV300_FIXED_148P5M 9 +#define HI3516DV300_FIXED_150M 10 +#define HI3516DV300_FIXED_200M 11 +#define HI3516DV300_FIXED_250M 12 +#define HI3516DV300_FIXED_300M 13 +#define HI3516DV300_FIXED_324M 14 +#define HI3516DV300_FIXED_342M 15 +#define HI3516DV300_FIXED_375M 16 +#define HI3516DV300_FIXED_400M 17 +#define HI3516DV300_FIXED_448M 18 +#define HI3516DV300_FIXED_500M 19 +#define HI3516DV300_FIXED_540M 20 +#define HI3516DV300_FIXED_600M 21 +#define HI3516DV300_FIXED_750M 22 +#define HI3516DV300_FIXED_1000M 23 +#define HI3516DV300_FIXED_1500M 24 +#define HI3516DV300_FIXED_54M 25 +#define HI3516DV300_FIXED_25M 26 +#define HI3516DV300_FIXED_163M 27 +#define HI3516DV300_FIXED_257M 28 +#define HI3516DV300_FIXED_396M 29 + +/* mux clocks */ +#define HI3516DV300_SYSAXI_CLK 30 +#define HI3516DV300_SYSAPB_CLK 31 +#define HI3516DV300_FMC_MUX 32 +#define HI3516DV300_UART_MUX 33 +#define HI3516DV300_MMC0_MUX 34 +#define HI3516DV300_MMC1_MUX 35 +#define HI3516DV300_MMC2_MUX 36 +#define HI3516DV300_UART1_MUX 33 +#define HI3516DV300_UART2_MUX 37 +#define HI3516DV300_UART4_MUX 38 +#define HI3516DV300_ETH_MUX 39 + +/* gate clocks */ +#define HI3516DV300_UART0_CLK 40 +#define HI3516DV300_UART1_CLK 41 +#define HI3516DV300_UART2_CLK 42 +#define HI3516DV300_FMC_CLK 43 +#define HI3516DV300_ETH0_CLK 44 +#define HI3516DV300_USB2_BUS_CLK 45 +#define HI3516DV300_USB2_CLK 46 +#define HI3516DV300_DMAC_CLK 47 +#define HI3516DV300_SPI0_CLK 48 +#define HI3516DV300_SPI1_CLK 49 +#define HI3516DV300_MMC0_CLK 50 +#define HI3516DV300_MMC1_CLK 51 +#define HI3516DV300_MMC2_CLK 52 +#define HI3516DV300_UART4_CLK 53 +#define HI3516DV300_SPI2_CLK 54 +#define HI3516DV300_I2C0_CLK 55 +#define HI3516DV300_I2C1_CLK 56 +#define HI3516DV300_I2C2_CLK 57 +#define HI3516DV300_I2C3_CLK 58 +#define HI3516DV300_I2C4_CLK 59 +#define HI3516DV300_I2C5_CLK 60 +#define HI3516DV300_I2C6_CLK 61 +#define HI3516DV300_I2C7_CLK 62 +#define HI3516DV300_UART3_MUX 63 +#define HI3516DV300_UART3_CLK 64 +#define HI3516DV300_DMAC_AXICLK 70 +#define HI3516DV300_PWM_CLK 71 +#define HI3516DV300_PWM_MUX 72 + +#define HI3516DV300_NR_CLKS 256 +#define HI3516DV300_NR_RSTS 256 + +#endif /* __DTS_HI3516DV300_CLOCK_H */
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/blk-mq.h
Changed
@@ -303,15 +303,6 @@ KABI_RESERVE(1) }; -struct request_wrapper { - struct request rq; - - /* Time that I/O was counted in part_get_stat_info(). */ - u64 stat_time_ns; -}; - -#define request_to_wrapper(_rq) container_of(_rq, struct request_wrapper, rq) - typedef bool (busy_iter_fn)(struct blk_mq_hw_ctx *, struct request *, void *, bool); typedef bool (busy_tag_iter_fn)(struct request *, void *, bool); @@ -606,7 +597,7 @@ */ static inline struct request *blk_mq_rq_from_pdu(void *pdu) { - return pdu - sizeof(struct request_wrapper); + return pdu - sizeof(struct request); } /** @@ -620,7 +611,7 @@ */ static inline void *blk_mq_rq_to_pdu(struct request *rq) { - return request_to_wrapper(rq) + 1; + return rq + 1; } static inline struct blk_mq_hw_ctx *queue_hctx(struct request_queue *q, int id)
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/blk_types.h
Changed
@@ -239,9 +239,6 @@ */ struct blkcg_gq *bi_blkg; struct bio_issue bi_issue; -#ifdef CONFIG_BLK_CGROUP_IOCOST - u64 bi_iocost_cost; -#endif #endif #ifdef CONFIG_BLK_INLINE_ENCRYPTION @@ -268,7 +265,11 @@ struct bio_set *bi_pool; +#ifdef CONFIG_BLK_CGROUP_IOCOST + KABI_USE(1, u64 bi_iocost_cost) +#else KABI_RESERVE(1) +#endif KABI_RESERVE(2) KABI_RESERVE(3) KABI_RESERVE(4)
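This is the first of several hunks in this revision (file_operations and swap_info_struct below use the same trick) that trade a KABI_RESERVE() padding slot for a real member. Assuming the usual openEuler kabi.h semantics, KABI_RESERVE(n) emits an unused placeholder field and KABI_USE(n, member) overlays the new member on that placeholder, so the structure size and the layout seen by out-of-tree modules stay unchanged. The shape in a structure is:

struct example {
	long a;
#ifdef CONFIG_NEW_FEATURE
	KABI_USE(1, u64 new_member)	/* repurposes reserved slot 1 */
#else
	KABI_RESERVE(1)			/* slot remains padding */
#endif
	KABI_RESERVE(2)
};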
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/blkdev.h
Changed
@@ -115,6 +115,8 @@ #define RQF_MQ_POLL_SLEPT ((__force req_flags_t)(1 << 20)) /* ->timeout has been called, don't expire again */ #define RQF_TIMED_OUT ((__force req_flags_t)(1 << 21)) +/* The rq is allocated from block layer */ +#define RQF_FROM_BLOCK ((__force req_flags_t)(1 << 22)) /* flags that prevent us from merging requests: */ #define RQF_NOMERGE_FLAGS \ @@ -200,10 +202,6 @@ struct gendisk *rq_disk; struct hd_struct *part; -#ifdef CONFIG_BLK_RQ_ALLOC_TIME - /* Time that the first bio started allocating this request. */ - u64 alloc_time_ns; -#endif /* Time that this request was allocated for this IO. */ u64 start_time_ns; /* Time that I/O was submitted to the device. */
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/buffer_head.h
Changed
@@ -117,6 +117,7 @@
  * of the form "mark_buffer_foo()". These are higher-level functions which
  * do something in addition to setting a b_state bit.
  */
+BUFFER_FNS(Uptodate, uptodate)
 BUFFER_FNS(Dirty, dirty)
 TAS_BUFFER_FNS(Dirty, dirty)
 BUFFER_FNS(Lock, locked)
@@ -134,30 +135,6 @@
 BUFFER_FNS(Prio, prio)
 BUFFER_FNS(Defer_Completion, defer_completion)
 
-static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
-{
-	/*
-	 * make it consistent with folio_mark_uptodate
-	 * pairs with smp_load_acquire in buffer_uptodate
-	 */
-	smp_mb__before_atomic();
-	set_bit(BH_Uptodate, &bh->b_state);
-}
-
-static __always_inline void clear_buffer_uptodate(struct buffer_head *bh)
-{
-	clear_bit(BH_Uptodate, &bh->b_state);
-}
-
-static __always_inline int buffer_uptodate(const struct buffer_head *bh)
-{
-	/*
-	 * make it consistent with folio_test_uptodate
-	 * pairs with smp_mb__before_atomic in set_buffer_uptodate
-	 */
-	return (smp_load_acquire(&bh->b_state) & (1UL << BH_Uptodate)) != 0;
-}
-
 #define bh_offset(bh)		((unsigned long)(bh)->b_data & ~PAGE_MASK)
 
 /* If we *know* page->private refers to buffer_heads */
@@ -230,6 +207,9 @@
 		sector_t bblock, unsigned blocksize);
 int bh_uptodate_or_lock(struct buffer_head *bh);
 int bh_submit_read(struct buffer_head *bh);
+int __bh_read(struct buffer_head *bh, unsigned int op_flags, bool wait);
+void __bh_read_batch(int nr, struct buffer_head *bhs[],
+		     unsigned int op_flags, bool force_lock);
 
 extern int buffer_heads_over_limit;
 
@@ -403,6 +383,41 @@
 	return __getblk_gfp(bdev, block, size, __GFP_MOVABLE);
 }
 
+static inline void bh_readahead(struct buffer_head *bh, unsigned int op_flags)
+{
+	if (!buffer_uptodate(bh) && trylock_buffer(bh)) {
+		if (!buffer_uptodate(bh))
+			__bh_read(bh, op_flags, false);
+		else
+			unlock_buffer(bh);
+	}
+}
+
+static inline void bh_read_nowait(struct buffer_head *bh, unsigned int op_flags)
+{
+	if (!bh_uptodate_or_lock(bh))
+		__bh_read(bh, op_flags, false);
+}
+
+/* Returns 1 if the buffer was already uptodate, 0 on success, and -EIO on error. */
+static inline int bh_read(struct buffer_head *bh, unsigned int op_flags)
+{
+	if (bh_uptodate_or_lock(bh))
+		return 1;
+	return __bh_read(bh, op_flags, true);
+}
+
+static inline void bh_read_batch(int nr, struct buffer_head *bhs[])
+{
+	__bh_read_batch(nr, bhs, 0, true);
+}
+
+static inline void bh_readahead_batch(int nr, struct buffer_head *bhs[],
+				      unsigned int op_flags)
+{
+	__bh_read_batch(nr, bhs, op_flags, false);
+}
+
 /**
  * __bread() - reads a specified block and returns the bh
  * @bdev: the block_device to read from
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/coresight.h
Changed
@@ -33,6 +33,8 @@ #define CORESIGHT_UNLOCK 0xc5acce55 +#define ARMV9_TRBE_PDEV_NAME "arm,trbe-v1" + extern struct bus_type coresight_bustype; enum coresight_dev_type {
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/fault-inject.h
Changed
@@ -20,7 +20,6 @@ atomic_t space; unsigned long verbose; bool task_filter; - bool no_warn; unsigned long stacktrace_depth; unsigned long require_start; unsigned long require_end; @@ -32,6 +31,10 @@ struct dentry *dname; }; +enum fault_flags { + FAULT_NOWARN = 1 << 0, +}; + #define FAULT_ATTR_INITIALIZER { \ .interval = 1, \ .times = ATOMIC_INIT(1), \ @@ -40,11 +43,11 @@ .ratelimit_state = RATELIMIT_STATE_INIT_DISABLED, \ .verbose = 2, \ .dname = NULL, \ - .no_warn = false, \ } #define DECLARE_FAULT_ATTR(name) struct fault_attr name = FAULT_ATTR_INITIALIZER int setup_fault_attr(struct fault_attr *attr, char *str); +bool should_fail_ex(struct fault_attr *attr, ssize_t size, int flags); bool should_fail(struct fault_attr *attr, ssize_t size); #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
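The per-instance no_warn field becomes a per-call flag, so one fault_attr can be shared between callers that do and do not want the warning. A caller that previously set attr->no_warn would now be written roughly as (example_should_fail_quietly() is an illustrative name):

static bool example_should_fail_quietly(struct fault_attr *attr, ssize_t size)
{
	/* suppress the fault-injection warning for this check only */
	return should_fail_ex(attr, size, FAULT_NOWARN);
}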
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/fs.h
Changed
@@ -1899,7 +1899,7 @@ loff_t len, unsigned int remap_flags); int (*fadvise)(struct file *, loff_t, loff_t, int); - KABI_RESERVE(1) + KABI_USE(1, bool may_pollfree) KABI_RESERVE(2) KABI_RESERVE(3) KABI_RESERVE(4)
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/migrate.h
Changed
@@ -56,6 +56,8 @@ struct page *newpage, struct page *page); extern int migrate_page_move_mapping(struct address_space *mapping, struct page *newpage, struct page *page, int extra_count); +void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep, + spinlock_t *ptl); #else static inline void putback_movable_pages(struct list_head *l) {}
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/mmzone.h
Changed
@@ -1009,6 +1009,8 @@ size_t *, loff_t *); int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *, int, void *, size_t *, loff_t *); +int percpu_max_batchsize_sysctl_handler(struct ctl_table *, int, + void *, size_t *, loff_t *); int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *, int, void *, size_t *, loff_t *); int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *, int, @@ -1016,6 +1018,7 @@ int numa_zonelist_order_handler(struct ctl_table *, int, void *, size_t *, loff_t *); extern int percpu_pagelist_fraction; +extern int percpu_max_batchsize; extern char numa_zonelist_order[]; #define NUMA_ZONELIST_ORDER_LEN 16
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/stop_machine.h
Changed
@@ -121,6 +121,22 @@ */ int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus); +/** + * stop_core_cpuslocked: - stop all threads on just one core + * @cpu: any cpu in the targeted core + * @fn: the function to run + * @data: the data ptr for @fn() + * + * Same as above, but instead of every CPU, only the logical CPUs of a + * single core are affected. + * + * Context: Must be called from within a cpus_read_lock() protected region. + * + * Return: 0 if all executions of @fn returned 0, any non zero return + * value if any returned non zero. + */ +int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data); + int stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus); #else /* CONFIG_SMP || CONFIG_HOTPLUG_CPU */
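A usage sketch that follows the Context requirement in the kernel-doc above (run_on_one_core() is an illustrative wrapper; upstream, this primitive was introduced for Intel In-Field Scan, which matches the intel_ifs tracepoint added later in this revision):

static int run_on_one_core(unsigned int cpu, cpu_stop_fn_t fn, void *data)
{
	int ret;

	cpus_read_lock();	/* stop_core_cpuslocked() requires this held */
	ret = stop_core_cpuslocked(cpu, fn, data);
	cpus_read_unlock();

	return ret;
}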
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/swap.h
Changed
@@ -246,6 +246,11 @@ struct swap_cluster_info tail; }; +struct swap_extend_info { + struct percpu_ref users; /* indicate and keep swap device valid. */ + struct completion comp; /* seldom referenced */ +}; + /* * The in-memory structure used to track swap areas. */ @@ -293,7 +298,7 @@ */ struct work_struct discard_work; /* discard worker */ struct swap_cluster_list discard_clusters; /* discard clusters list */ - KABI_RESERVE(1) + KABI_USE(1, struct swap_extend_info *sei) KABI_RESERVE(2) struct plist_node avail_lists[]; /* * entries in swap_avail_heads, one @@ -535,7 +540,7 @@ static inline void put_swap_device(struct swap_info_struct *si) { - rcu_read_unlock(); + percpu_ref_put(&si->sei->users); } #else /* CONFIG_SWAP */ @@ -550,6 +555,15 @@ return NULL; } +static inline struct swap_info_struct *get_swap_device(swp_entry_t entry) +{ + return NULL; +} + +static inline void put_swap_device(struct swap_info_struct *si) +{ +} + #define swap_address_space(entry) (NULL) #define get_nr_swap_pages() 0L #define total_swap_pages 0L
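put_swap_device() moving from rcu_read_unlock() to percpu_ref_put() implies that the matching get side now takes a reference on the new sei->users ref instead of entering an RCU read section. A sketch of that get side, assuming the backport mirrors the upstream percpu_ref conversion of get_swap_device():

static struct swap_info_struct *example_get_swap_device(swp_entry_t entry)
{
	struct swap_info_struct *si = swp_swap_info(entry);

	if (si && percpu_ref_tryget_live(&si->sei->users))
		return si;	/* pairs with put_swap_device() above */
	return NULL;
}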
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/usb.h
Changed
@@ -580,6 +580,7 @@ * @devaddr: device address, XHCI: assigned by HW, others: same as devnum * @can_submit: URBs may be submitted * @persist_enabled: USB_PERSIST enabled for this device + * @reset_in_progress: the device is being reset * @have_langid: whether string_langid is valid * @authorized: policy has said we can use it; * (user space) policy determines if we authorize this device to be @@ -665,6 +666,7 @@ unsigned can_submit:1; unsigned persist_enabled:1; + unsigned reset_in_progress:1; unsigned have_langid:1; unsigned authorized:1; unsigned authenticated:1;
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/vdpa.h
Changed
@@ -6,9 +6,11 @@ #include <linux/device.h> #include <linux/interrupt.h> #include <linux/vhost_iotlb.h> +#include <linux/virtio_net.h> +#include <linux/if_ether.h> /** - * vDPA callback definition. + * struct vdpa_calllback - vDPA callback definition. * @callback: interrupt callback function * @private: the data passed to the callback function */ @@ -18,7 +20,7 @@ }; /** - * vDPA notification area + * struct vdpa_notification_area - vDPA notification area * @addr: base address of the notification area * @size: size of the notification area */ @@ -43,29 +45,33 @@ * @last_used_idx: used index */ struct vdpa_vq_state_packed { - u16 last_avail_counter:1; - u16 last_avail_idx:15; - u16 last_used_counter:1; - u16 last_used_idx:15; + u16 last_avail_counter:1; + u16 last_avail_idx:15; + u16 last_used_counter:1; + u16 last_used_idx:15; }; struct vdpa_vq_state { - union { - struct vdpa_vq_state_split split; - struct vdpa_vq_state_packed packed; - }; + union { + struct vdpa_vq_state_split split; + struct vdpa_vq_state_packed packed; + }; }; struct vdpa_mgmt_dev; /** - * vDPA device - representation of a vDPA device + * struct vdpa_device - representation of a vDPA device * @dev: underlying device * @dma_dev: the actual device that is performing DMA * @driver_override: driver name to force a match * @config: the configuration ops for this device. + * @cf_lock: Protects get and set access to configuration layout. * @index: device index * @features_valid: were features initialized? for legacy guests + * @ngroups: the number of virtqueue groups + * @nas: the number of address spaces + * @use_va: indicate whether virtual address must be used by this device * @nvqs: maximum number of supported virtqueues * @mdev: management device pointer; caller must setup when registering device as part * of dev_add() mgmtdev ops callback before invoking _vdpa_register_device(). @@ -75,14 +81,18 @@ struct device *dma_dev; const char *driver_override; const struct vdpa_config_ops *config; + struct rw_semaphore cf_lock; /* Protects get/set config */ unsigned int index; bool features_valid; - int nvqs; + bool use_va; + u32 nvqs; struct vdpa_mgmt_dev *mdev; + unsigned int ngroups; + unsigned int nas; }; /** - * vDPA IOVA range - the IOVA range support by the device + * struct vdpa_iova_range - the IOVA range support by the device * @first: start of the IOVA range * @last: end of the IOVA range */ @@ -91,8 +101,27 @@ u64 last; }; +struct vdpa_dev_set_config { + struct { + u8 mac[ETH_ALEN]; + u16 mtu; + u16 max_vq_pairs; + } net; + u64 mask; +}; + /** - * vDPA_config_ops - operations for configuring a vDPA device. + * Corresponding file area for device memory mapping + * @file: vma->vm_file for the mapping + * @offset: mapping offset in the vm_file + */ +struct vdpa_map_file { + struct file *file; + u64 offset; +}; + +/** + * struct vdpa_config_ops - operations for configuring a vDPA device. * Note: vDPA device drivers are required to implement all of the * operations unless it is mentioned to be optional in the following * list. 
@@ -133,7 +162,7 @@ * @vdev: vdpa device * @idx: virtqueue index * @state: pointer to returned state (last_avail_idx) - * @get_vq_notification: Get the notification area for a virtqueue + * @get_vq_notification: Get the notification area for a virtqueue (optional) * @vdev: vdpa device * @idx: virtqueue index * Returns the notifcation area @@ -147,20 +176,31 @@ * for the device * @vdev: vdpa device * Returns virtqueue algin requirement - * @get_features: Get virtio features supported by the device + * @get_vq_group: Get the group id for a specific + * virtqueue (optional) + * @vdev: vdpa device + * @idx: virtqueue index + * Returns u32: group id for this virtqueue + * @get_device_features: Get virtio features supported by the device * @vdev: vdpa device * Returns the virtio features support by the * device - * @set_features: Set virtio features supported by the driver + * @set_driver_features: Set virtio features supported by the driver * @vdev: vdpa device * @features: feature support by the driver * Returns integer: success (0) or error (< 0) + * @get_driver_features: Get the virtio driver features in action + * @vdev: vdpa device + * Returns the virtio features accepted * @set_config_cb: Set the config interrupt callback * @vdev: vdpa device * @cb: virtio-vdev interrupt callback structure * @get_vq_num_max: Get the max size of virtqueue * @vdev: vdpa device * Returns u16: max size of virtqueue + * @get_vq_num_min: Get the min size of virtqueue (optional) + * @vdev: vdpa device + * Returns u16: min size of virtqueue * @get_device_id: Get virtio device id * @vdev: vdpa device * Returns u32: virtio device id @@ -176,6 +216,9 @@ * @reset: Reset device * @vdev: vdpa device * Returns integer: success (0) or error (< 0) + * @suspend: Suspend or resume the device (optional) + * @vdev: vdpa device + * Returns integer: success (0) or error (< 0) * @get_config_size: Get the size of the configuration space includes * fields that are conditional on feature bits. * @vdev: vdpa device @@ -201,10 +244,17 @@ * @vdev: vdpa device * Returns the iova range supported by * the device. + * @set_group_asid: Set address space identifier for a + * virtqueue group (optional) + * @vdev: vdpa device + * @group: virtqueue group + * @asid: address space id for this group + * Returns integer: success (0) or error (< 0) * @set_map: Set device memory mapping (optional) * Needed for device that using device * specific DMA translation (on-chip IOMMU) * @vdev: vdpa device + * @asid: address space identifier * @iotlb: vhost memory mapping to be * used by the vDPA * Returns integer: success (0) or error (< 0) @@ -213,6 +263,7 @@ * specific DMA translation (on-chip IOMMU) * and preferring incremental map. * @vdev: vdpa device + * @asid: address space identifier * @iova: iova to be mapped * @size: size of the area * @pa: physical address for the map @@ -224,6 +275,7 @@ * specific DMA translation (on-chip IOMMU) * and preferring incremental unmap. * @vdev: vdpa device + * @asid: address space identifier * @iova: iova to be unmapped
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/vhost_iotlb.h
Changed
@@ -17,6 +17,7 @@ u32 perm; u32 flags_padding; u64 __subtree_last; + void *opaque; }; #define VHOST_IOTLB_FLAG_RETIRE 0x1 @@ -29,10 +30,14 @@ unsigned int flags; }; +int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb, u64 start, u64 last, + u64 addr, unsigned int perm, void *opaque); int vhost_iotlb_add_range(struct vhost_iotlb *iotlb, u64 start, u64 last, u64 addr, unsigned int perm); void vhost_iotlb_del_range(struct vhost_iotlb *iotlb, u64 start, u64 last); +void vhost_iotlb_init(struct vhost_iotlb *iotlb, unsigned int limit, + unsigned int flags); struct vhost_iotlb *vhost_iotlb_alloc(unsigned int limit, unsigned int flags); void vhost_iotlb_free(struct vhost_iotlb *iotlb); void vhost_iotlb_reset(struct vhost_iotlb *iotlb);
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/virtio.h
Changed
@@ -139,6 +139,7 @@ int virtio_device_freeze(struct virtio_device *dev); int virtio_device_restore(struct virtio_device *dev); #endif +void virtio_reset_device(struct virtio_device *dev); size_t virtio_max_dma_size(struct virtio_device *vdev);
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/virtio_pci_modern.h
Changed
@@ -5,6 +5,13 @@ #include <linux/pci.h> #include <linux/virtio_pci.h> +struct virtio_pci_modern_common_cfg { + struct virtio_pci_common_cfg cfg; + + __le16 queue_notify_data; /* read-write */ + __le16 queue_reset; /* read-write */ +}; + struct virtio_pci_modern_device { struct pci_dev *pci_dev; @@ -102,15 +109,10 @@ u16 vp_modern_get_queue_size(struct virtio_pci_modern_device *mdev, u16 idx); u16 vp_modern_get_num_queues(struct virtio_pci_modern_device *mdev); -u16 vp_modern_get_queue_notify_off(struct virtio_pci_modern_device *mdev, - u16 idx); -void __iomem *vp_modern_map_capability(struct virtio_pci_modern_device *mdev, int off, - size_t minlen, - u32 align, - u32 start, u32 size, - size_t *len, resource_size_t *pa); void __iomem *vp_modern_map_vq_notify(struct virtio_pci_modern_device *mdev, u16 index, resource_size_t *pa); int vp_modern_probe(struct virtio_pci_modern_device *mdev); void vp_modern_remove(struct virtio_pci_modern_device *mdev); +int vp_modern_get_queue_reset(struct virtio_pci_modern_device *mdev, u16 index); +void vp_modern_set_queue_reset(struct virtio_pci_modern_device *mdev, u16 index); #endif
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/net/sock.h
Changed
@@ -318,7 +318,7 @@ * @sk_tskey: counter to disambiguate concurrent tstamp requests * @sk_zckey: counter to order MSG_ZEROCOPY notifications * @sk_socket: Identd and reporting IO signals - * @sk_user_data: RPC layer private data + * @sk_user_data: RPC layer private data. Write-protected by @sk_callback_lock. * @sk_frag: cached page frag * @sk_peek_off: current peek_offset value * @sk_send_head: front of stuff to transmit
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/scsi/iscsi_if.h
Changed
@@ -761,7 +761,6 @@ and verification */ #define CAP_LOGIN_OFFLOAD 0x4000 /* offload session login */ -#define CAP_OPS_EXPAND 0x8000 /* oiscsi_transport->ops_expand flag */ /* * These flags describes reason of stop_conn() call */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/scsi/scsi_transport_iscsi.h
Changed
@@ -29,15 +29,6 @@ struct iscsi_bus_flash_session; struct iscsi_bus_flash_conn; -/* - * The expansion of iscsi_transport to fix kabi while adding members. - */ -struct iscsi_transport_expand { - int (*tgt_dscvr)(struct Scsi_Host *shost, enum iscsi_tgt_dscvr type, - uint32_t enable, struct sockaddr *dst_addr); - void (*unbind_conn)(struct iscsi_cls_conn *conn, bool is_active); -}; - /** * struct iscsi_transport - iSCSI Transport template * @@ -132,15 +123,8 @@ int non_blocking); int (*ep_poll) (struct iscsi_endpoint *ep, int timeout_ms); void (*ep_disconnect) (struct iscsi_endpoint *ep); -#ifdef __GENKSYMS__ int (*tgt_dscvr) (struct Scsi_Host *shost, enum iscsi_tgt_dscvr type, uint32_t enable, struct sockaddr *dst_addr); -#else - /* - * onece ops_expand is used, caps must be set to CAP_OPS_EXPAND - */ - struct iscsi_transport_expand *ops_expand; -#endif int (*set_path) (struct Scsi_Host *shost, struct iscsi_path *params); int (*set_iface_param) (struct Scsi_Host *shost, void *data, uint32_t len);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/trace/events/intel_ifs.h
Added
@@ -0,0 +1,41 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM intel_ifs + +#if !defined(_TRACE_IFS_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_IFS_H + +#include <linux/ktime.h> +#include <linux/tracepoint.h> + +TRACE_EVENT(ifs_status, + + TP_PROTO(int cpu, union ifs_scan activate, union ifs_status status), + + TP_ARGS(cpu, activate, status), + + TP_STRUCT__entry( + __field( u64, status ) + __field( int, cpu ) + __field( u8, start ) + __field( u8, stop ) + ), + + TP_fast_assign( + __entry->cpu = cpu; + __entry->start = activate.start; + __entry->stop = activate.stop; + __entry->status = status.data; + ), + + TP_printk("cpu: %d, start: %.2x, stop: %.2x, status: %llx", + __entry->cpu, + __entry->start, + __entry->stop, + __entry->status) +); + +#endif /* _TRACE_IFS_H */ + +/* This part must be outside protection */ +#include <trace/define_trace.h>
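For orientation, emitting this event looks like the sketch below; union ifs_scan and union ifs_status are defined by the In-Field Scan driver itself, so their availability is assumed here, and the printed line is only illustrative of the TP_printk format.

/* In exactly one .c file of the driver: */
#define CREATE_TRACE_POINTS
#include <trace/events/intel_ifs.h>

static void my_report(int cpu, union ifs_scan activate,
		      union ifs_status status)
{
	/* Appears under the intel_ifs trace system, e.g.:
	 * "cpu: 1, start: 00, stop: 7f, status: 0" (illustrative) */
	trace_ifs_status(cpu, activate, status);
}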
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/uapi/linux/if_link.h
Changed
@@ -689,8 +689,8 @@ enum ipvlan_mode { IPVLAN_MODE_L2 = 0, IPVLAN_MODE_L3, - IPVLAN_MODE_L2E, IPVLAN_MODE_L3S, + IPVLAN_MODE_L2E, IPVLAN_MODE_MAX };
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/uapi/linux/vdpa.h
Changed
@@ -17,11 +17,16 @@ VDPA_CMD_DEV_NEW, VDPA_CMD_DEV_DEL, VDPA_CMD_DEV_GET, /* can dump */ + VDPA_CMD_DEV_CONFIG_GET, /* can dump */ + VDPA_CMD_DEV_VSTATS_GET, }; enum vdpa_attr { VDPA_ATTR_UNSPEC, + /* Pad attribute for 64b alignment */ + VDPA_ATTR_PAD = VDPA_ATTR_UNSPEC, + /* bus name (optional) + dev name together make the parent device handle */ VDPA_ATTR_MGMTDEV_BUS_NAME, /* string */ VDPA_ATTR_MGMTDEV_DEV_NAME, /* string */ @@ -32,6 +37,20 @@ VDPA_ATTR_DEV_VENDOR_ID, /* u32 */ VDPA_ATTR_DEV_MAX_VQS, /* u32 */ VDPA_ATTR_DEV_MAX_VQ_SIZE, /* u16 */ + VDPA_ATTR_DEV_MIN_VQ_SIZE, /* u16 */ + + VDPA_ATTR_DEV_NET_CFG_MACADDR, /* binary */ + VDPA_ATTR_DEV_NET_STATUS, /* u8 */ + VDPA_ATTR_DEV_NET_CFG_MAX_VQP, /* u16 */ + VDPA_ATTR_DEV_NET_CFG_MTU, /* u16 */ + + VDPA_ATTR_DEV_NEGOTIATED_FEATURES, /* u64 */ + VDPA_ATTR_DEV_MGMTDEV_MAX_VQS, /* u32 */ + VDPA_ATTR_DEV_SUPPORTED_FEATURES, /* u64 */ + + VDPA_ATTR_DEV_QUEUE_INDEX, /* u32 */ + VDPA_ATTR_DEV_VENDOR_ATTR_NAME, /* string */ + VDPA_ATTR_DEV_VENDOR_ATTR_VALUE, /* u64 */ /* new attributes must be added above here */ VDPA_ATTR_MAX,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/uapi/linux/vhost.h
Changed
@@ -89,11 +89,6 @@ /* Set or get vhost backend capability */ -/* Use message type V2 */ -#define VHOST_BACKEND_F_IOTLB_MSG_V2 0x1 -/* IOTLB can accept batching hints */ -#define VHOST_BACKEND_F_IOTLB_BATCH 0x2 - #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64) #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64) @@ -150,11 +145,39 @@ /* Get the valid iova range */ #define VHOST_VDPA_GET_IOVA_RANGE _IOR(VHOST_VIRTIO, 0x78, \ struct vhost_vdpa_iova_range) - /* Get the config size */ #define VHOST_VDPA_GET_CONFIG_SIZE _IOR(VHOST_VIRTIO, 0x79, __u32) /* Get the count of all virtqueues */ #define VHOST_VDPA_GET_VQS_COUNT _IOR(VHOST_VIRTIO, 0x80, __u32) +/* Get the number of virtqueue groups. */ +#define VHOST_VDPA_GET_GROUP_NUM _IOR(VHOST_VIRTIO, 0x81, __u32) + +/* Get the number of address spaces. */ +#define VHOST_VDPA_GET_AS_NUM _IOR(VHOST_VIRTIO, 0x7A, unsigned int) + +/* Get the group for a virtqueue: read index, write group in num, + * The virtqueue index is stored in the index field of + * vhost_vring_state. The group for this specific virtqueue is + * returned via num field of vhost_vring_state. + */ +#define VHOST_VDPA_GET_VRING_GROUP _IOWR(VHOST_VIRTIO, 0x7B, \ + struct vhost_vring_state) +/* Set the ASID for a virtqueue group. The group index is stored in + * the index field of vhost_vring_state, the ASID associated with this + * group is stored at num field of vhost_vring_state. + */ +#define VHOST_VDPA_SET_GROUP_ASID _IOW(VHOST_VIRTIO, 0x7C, \ + struct vhost_vring_state) + +/* Suspend a device so it does not process virtqueue requests anymore + * + * After the return of ioctl the device must preserve all the necessary state + * (the virtqueue vring base plus the possible device specific states) that is + * required for restoring in the future. The device must not change its + * configuration after that point. + */ +#define VHOST_VDPA_SUSPEND _IO(VHOST_VIRTIO, 0x7D) + #endif
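Since the new ioctls are easiest to see end to end from userspace, a sketch of a vhost-vdpa client using them; the device node name is illustrative and error handling is elided.

#include <stdio.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
	int fd = open("/dev/vhost-vdpa-0", O_RDWR);	/* illustrative node */
	__u32 ngroups = 0;
	unsigned int nas = 0;
	struct vhost_vring_state s = { .index = 0 };

	ioctl(fd, VHOST_VDPA_GET_GROUP_NUM, &ngroups);
	ioctl(fd, VHOST_VDPA_GET_AS_NUM, &nas);
	printf("%u vq groups, %u address spaces\n", ngroups, nas);

	/* which group does virtqueue 0 belong to? */
	ioctl(fd, VHOST_VDPA_GET_VRING_GROUP, &s);
	printf("vq 0 -> group %u\n", s.num);

	/* bind that group to ASID 1 */
	s.index = s.num;
	s.num = 1;
	ioctl(fd, VHOST_VDPA_SET_GROUP_ASID, &s);

	/* stop the device from processing vrings; state is preserved */
	ioctl(fd, VHOST_VDPA_SUSPEND);
	return 0;
}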
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/uapi/linux/vhost_types.h
Changed
@@ -87,7 +87,7 @@ struct vhost_msg_v2 { __u32 type; - __u32 reserved; + __u32 asid; union { struct vhost_iotlb_msg iotlb; __u8 padding[64]; @@ -153,4 +153,15 @@ /* vhost-net should add virtio_net_hdr for RX, and strip for TX packets. */ #define VHOST_NET_F_VIRTIO_NET_HDR 27 +/* Use message type V2 */ +#define VHOST_BACKEND_F_IOTLB_MSG_V2 0x1 +/* IOTLB can accept batching hints */ +#define VHOST_BACKEND_F_IOTLB_BATCH 0x2 +/* IOTLB can accept address space identifier through V2 type of IOTLB + * message + */ +#define VHOST_BACKEND_F_IOTLB_ASID 0x3 +/* Device can be suspended */ +#define VHOST_BACKEND_F_SUSPEND 0x4 + #endif
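Complementing the ioctl sketch above, a hedged example of what the repurposed field enables: once VHOST_BACKEND_F_IOTLB_ASID is negotiated, each V2 IOTLB message carries the target address space (userspace side, error handling elided).

#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static void map_into_asid(int fd, __u32 asid, __u64 iova, __u64 size,
			  __u64 uaddr)
{
	__u64 features = (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2) |
			 (1ULL << VHOST_BACKEND_F_IOTLB_ASID);
	struct vhost_msg_v2 msg;

	ioctl(fd, VHOST_SET_BACKEND_FEATURES, &features);

	memset(&msg, 0, sizeof(msg));
	msg.type = VHOST_IOTLB_MSG_V2;
	msg.asid = asid;		/* previously the reserved field */
	msg.iotlb.iova = iova;
	msg.iotlb.size = size;
	msg.iotlb.uaddr = uaddr;
	msg.iotlb.perm = VHOST_ACCESS_RW;
	msg.iotlb.type = VHOST_IOTLB_UPDATE;

	/* vhost devices consume IOTLB messages via write() on the fd */
	write(fd, &msg, sizeof(msg));
}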
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/uapi/linux/virtio_pci.h
Changed
@@ -202,6 +202,8 @@ #define VIRTIO_PCI_COMMON_Q_AVAILHI 44 #define VIRTIO_PCI_COMMON_Q_USEDLO 48 #define VIRTIO_PCI_COMMON_Q_USEDHI 52 +#define VIRTIO_PCI_COMMON_Q_NDATA 56 +#define VIRTIO_PCI_COMMON_Q_RESET 58 #endif /* VIRTIO_PCI_NO_MODERN */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/entry/common.c
Changed
@@ -394,7 +394,7 @@ instrumentation_begin(); if (IS_ENABLED(CONFIG_PREEMPTION)) { -#ifdef CONFIG_PREEMT_DYNAMIC +#ifdef CONFIG_PREEMPT_DYNAMIC static_call(irqentry_exit_cond_resched)(); #else irqentry_exit_cond_resched();
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/fork.c
Changed
@@ -688,6 +688,7 @@ mmu_notifier_subscriptions_destroy(mm); check_mm(mm); put_user_ns(mm->user_ns); + mm_pasid_drop(mm); free_mm(mm); } EXPORT_SYMBOL_GPL(__mmdrop); @@ -856,7 +857,7 @@ static bool dup_resvd_task_struct(struct task_struct *dst, struct task_struct *orig, int node) { - dst->_resvd = kmalloc_node(sizeof(struct task_struct_resvd), + dst->_resvd = kzalloc_node(sizeof(struct task_struct_resvd), GFP_KERNEL, node); if (!dst->_resvd) return false; @@ -1137,7 +1138,6 @@ } if (mm->binfmt) module_put(mm->binfmt->module); - mm_pasid_drop(mm); mmdrop(mm); }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/jump_label.c
Changed
@@ -823,6 +823,7 @@ static void jump_label_update(struct static_key *key) { struct jump_entry *stop = __stop___jump_table; + bool init = system_state < SYSTEM_RUNNING; struct jump_entry *entry; #ifdef CONFIG_MODULES struct module *mod; @@ -834,15 +835,16 @@ preempt_disable(); mod = __module_address((unsigned long)key); - if (mod) + if (mod) { stop = mod->jump_entries + mod->num_jump_entries; + init = mod->state == MODULE_STATE_COMING; + } preempt_enable(); #endif entry = static_key_entries(key); /* if there are no users, entry can be NULL */ if (entry) - __jump_label_update(key, entry, stop, - system_state < SYSTEM_RUNNING); + __jump_label_update(key, entry, stop, init); } #ifdef CONFIG_STATIC_KEYS_SELFTEST
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/livepatch/core.c
Changed
@@ -1041,11 +1041,13 @@ func->old_name); return -ENOENT; } +#ifdef CONFIG_LIVEPATCH_STOP_MACHINE_CONSISTENCY if (func->old_size < KLP_MAX_REPLACE_SIZE) { pr_err("%s size less than limit (%lu < %zu)\n", func->old_name, func->old_size, KLP_MAX_REPLACE_SIZE); return -EINVAL; } +#endif #ifdef PPC64_ELF_ABI_v1 /* @@ -1195,6 +1197,7 @@ static inline int klp_static_call_register(struct module *mod) { return 0; } #endif +#ifdef CONFIG_LIVEPATCH_STOP_MACHINE_CONSISTENCY static int check_address_conflict(struct klp_patch *patch) { struct klp_object *obj; @@ -1231,6 +1234,7 @@ } return 0; } +#endif static int klp_init_patch(struct klp_patch *patch) { @@ -1278,11 +1282,11 @@ } module_enable_ro(patch->mod, true); +#ifdef CONFIG_LIVEPATCH_STOP_MACHINE_CONSISTENCY ret = check_address_conflict(patch); if (ret) return ret; -#ifdef CONFIG_LIVEPATCH_STOP_MACHINE_CONSISTENCY klp_for_each_object(patch, obj) klp_load_hook(obj); #endif
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/sched/autogroup.c
Changed
@@ -5,7 +5,7 @@ #include <linux/nospec.h> #include "sched.h" -unsigned int __read_mostly sysctl_sched_autogroup_enabled = 1; +unsigned int __read_mostly sysctl_sched_autogroup_enabled; static struct autogroup autogroup_default; static atomic_t autogroup_seq_nr;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/sched/core.c
Changed
@@ -9121,6 +9121,12 @@ return -EINVAL; /* + * Ensure burst equals to zero when quota is -1. + */ + if (quota == RUNTIME_INF && burst) + return -EINVAL; + + /* * Prevent race between setting of cfs_rq->runtime_enabled and * unthrottle_offline_cfs_rqs(). */ @@ -9179,8 +9185,10 @@ period = ktime_to_ns(tg->cfs_bandwidth.period); burst = tg->cfs_bandwidth.burst; - if (cfs_quota_us < 0) + if (cfs_quota_us < 0) { quota = RUNTIME_INF; + burst = 0; + } else if ((u64)cfs_quota_us <= U64_MAX / NSEC_PER_USEC) quota = (u64)cfs_quota_us * NSEC_PER_USEC; else
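The user-visible rule this hunk enforces: a nonzero burst is only accepted while the group has a finite quota, and writing -1 to the quota now also clears the burst. A userspace sketch, assuming the cgroup v1 cpu controller layout and that this tree exposes cpu.cfs_burst_us; the group path is illustrative.

#include <stdio.h>

static void write_val(const char *grp, const char *file, const char *val)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", grp, file);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return;
	}
	fprintf(f, "%s\n", val);
	if (fclose(f))		/* cgroupfs errors often surface on close */
		perror(path);
}

int main(void)
{
	const char *grp = "/sys/fs/cgroup/cpu/demo";	/* illustrative */

	write_val(grp, "cpu.cfs_burst_us", "10000");	/* EINVAL: quota is -1 */
	write_val(grp, "cpu.cfs_quota_us", "100000");	/* finite quota first */
	write_val(grp, "cpu.cfs_burst_us", "10000");	/* now accepted */
	write_val(grp, "cpu.cfs_quota_us", "-1");	/* unlimited; burst reads 0 */
	return 0;
}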
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/sched/fair.c
Changed
@@ -124,6 +124,13 @@
 #endif

 #ifdef CONFIG_QOS_SCHED
+
+/*
+ * To distinguish cfs bw, use QOS_THROTTLED mark cfs_rq->throttled
+ * when qos throttled(and cfs bw throttle mark cfs_rq->throttled as 1).
+ */
+#define QOS_THROTTLED	2
+
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct list_head, qos_throttled_cfs_rq);
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct hrtimer, qos_overload_timer);
 static DEFINE_PER_CPU(int, qos_cpu_overload);
@@ -4932,6 +4939,14 @@

 	se = cfs_rq->tg->se[cpu_of(rq)];

+#ifdef CONFIG_QOS_SCHED
+	/*
+	 * if this cfs_rq throttled by qos, not need unthrottle it.
+	 */
+	if (cfs_rq->throttled == QOS_THROTTLED)
+		return;
+#endif
+
 	cfs_rq->throttled = 0;

 	update_rq_clock(rq);
@@ -7278,26 +7293,6 @@

 static void start_qos_hrtimer(int cpu);

-static int qos_tg_unthrottle_up(struct task_group *tg, void *data)
-{
-	struct rq *rq = data;
-	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
-
-	cfs_rq->throttle_count--;
-
-	return 0;
-}
-
-static int qos_tg_throttle_down(struct task_group *tg, void *data)
-{
-	struct rq *rq = data;
-	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
-
-	cfs_rq->throttle_count++;
-
-	return 0;
-}
-
 static void throttle_qos_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	struct rq *rq = rq_of(cfs_rq);
@@ -7309,7 +7304,7 @@

 	/* freeze hierarchy runnable averages while throttled */
 	rcu_read_lock();
-	walk_tg_tree_from(cfs_rq->tg, qos_tg_throttle_down, tg_nop, (void *)rq);
+	walk_tg_tree_from(cfs_rq->tg, tg_throttle_down, tg_nop, (void *)rq);
 	rcu_read_unlock();

 	task_delta = cfs_rq->h_nr_running;
@@ -7320,8 +7315,13 @@
 		if (!se->on_rq)
 			break;

-		if (dequeue)
+		if (dequeue) {
 			dequeue_entity(qcfs_rq, se, DEQUEUE_SLEEP);
+		} else {
+			update_load_avg(qcfs_rq, se, 0);
+			se_update_runnable(se);
+		}
+
 		qcfs_rq->h_nr_running -= task_delta;
 		qcfs_rq->idle_h_nr_running -= idle_task_delta;
@@ -7339,7 +7339,7 @@
 	if (list_empty(&per_cpu(qos_throttled_cfs_rq, cpu_of(rq))))
 		start_qos_hrtimer(cpu_of(rq));

-	cfs_rq->throttled = 1;
+	cfs_rq->throttled = QOS_THROTTLED;

 	list_add(&cfs_rq->qos_throttled_list,
 		 &per_cpu(qos_throttled_cfs_rq, cpu_of(rq)));
@@ -7349,12 +7349,14 @@
 {
 	struct rq *rq = rq_of(cfs_rq);
 	struct sched_entity *se;
-	int enqueue = 1;
 	unsigned int prev_nr = cfs_rq->h_nr_running;
 	long task_delta, idle_task_delta;

 	se = cfs_rq->tg->se[cpu_of(rq)];

+	if (cfs_rq->throttled != QOS_THROTTLED)
+		return;
+
 	cfs_rq->throttled = 0;

 	update_rq_clock(rq);
@@ -7362,7 +7364,7 @@

 	/* update hierarchical throttle state */
 	rcu_read_lock();
-	walk_tg_tree_from(cfs_rq->tg, tg_nop, qos_tg_unthrottle_up, (void *)rq);
+	walk_tg_tree_from(cfs_rq->tg, tg_nop, tg_unthrottle_up, (void *)rq);
 	rcu_read_unlock();

 	if (!cfs_rq->load.weight)
@@ -7372,26 +7374,58 @@
 	idle_task_delta = cfs_rq->idle_h_nr_running;
 	for_each_sched_entity(se) {
 		if (se->on_rq)
-			enqueue = 0;
+			break;
+
+		cfs_rq = cfs_rq_of(se);
+		enqueue_entity(cfs_rq, se, ENQUEUE_WAKEUP);
+
+		cfs_rq->h_nr_running += task_delta;
+		cfs_rq->idle_h_nr_running += idle_task_delta;

+		if (cfs_rq_throttled(cfs_rq))
+			goto unthrottle_throttle;
+	}
+
+	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
-		if (enqueue)
-			enqueue_entity(cfs_rq, se, ENQUEUE_WAKEUP);
+
+		update_load_avg(cfs_rq, se, UPDATE_TG);
+		se_update_runnable(se);
+
 		cfs_rq->h_nr_running += task_delta;
 		cfs_rq->idle_h_nr_running += idle_task_delta;

+		/* end evaluation on encountering a throttled cfs_rq */
 		if (cfs_rq_throttled(cfs_rq))
-			break;
+			goto unthrottle_throttle;
+
+		/*
+		 * One parent has been throttled and cfs_rq removed from the
+		 * list. Add it back to not break the leaf list.
+ */ + if (throttled_hierarchy(cfs_rq)) + list_add_leaf_cfs_rq(cfs_rq); } - assert_list_leaf_cfs_rq(rq); + add_nr_running(rq, task_delta); + if (prev_nr < 2 && prev_nr + task_delta >= 2) + overload_set(rq); - if (!se) { - add_nr_running(rq, task_delta); - if (prev_nr < 2 && prev_nr + task_delta >= 2) - overload_set(rq); +unthrottle_throttle: + /* + * The cfs_rq_throttled() breaks in the above iteration can result in + * incomplete leaf list maintenance, resulting in triggering the + * assertion below. + */ + for_each_sched_entity(se) { + cfs_rq = cfs_rq_of(se); + + if (list_add_leaf_cfs_rq(cfs_rq)) + break; } + assert_list_leaf_cfs_rq(rq); + /* Determine whether we need to wake up potentially idle CPU: */ if (rq->curr == rq->idle && rq->cfs.nr_running) resched_curr(rq); @@ -12157,7 +12191,7 @@ for_each_possible_cpu(i) { #ifdef CONFIG_QOS_SCHED - if (tg->cfs_rq) + if (tg->cfs_rq && tg->cfs_rq[i]) unthrottle_qos_sched_group(tg->cfs_rq[i]); #endif if (tg->cfs_rq)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/stop_machine.c
Changed
@@ -624,6 +624,27 @@ } EXPORT_SYMBOL_GPL(stop_machine); +#ifdef CONFIG_SCHED_SMT +int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data) +{ + const struct cpumask *smt_mask = cpu_smt_mask(cpu); + + struct multi_stop_data msdata = { + .fn = fn, + .data = data, + .num_threads = cpumask_weight(smt_mask), + .active_cpus = smt_mask, + }; + + lockdep_assert_cpus_held(); + + /* Set the initial state and stop all online cpus. */ + set_state(&msdata, MULTI_STOP_PREPARE); + return stop_cpus(smt_mask, multi_cpu_stop, &msdata); +} +EXPORT_SYMBOL_GPL(stop_core_cpuslocked); +#endif + /** * stop_machine_from_inactive_cpu - stop_machine() from inactive CPU * @fn: the function to run
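A hedged sketch of a caller (the helper exists for the Intel In-Field Scan path, whose trace header appears earlier in this revision); my_* names are placeholders and CONFIG_SCHED_SMT must be set for the function to be built.

#include <linux/cpu.h>
#include <linux/stop_machine.h>

static int my_core_test(void *data)
{
	/* runs simultaneously on every SMT sibling of the target core */
	return 0;
}

static int my_run_on_core(unsigned int cpu)
{
	int ret;

	cpus_read_lock();	/* the helper asserts cpus are held */
	ret = stop_core_cpuslocked(cpu, my_core_test, NULL);
	cpus_read_unlock();

	return ret;
}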
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/sysctl.c
Changed
@@ -396,13 +396,14 @@ ppos); } -static size_t proc_skip_spaces(char **buf) +static void proc_skip_spaces(char **buf, size_t *size) { - size_t ret; - char *tmp = skip_spaces(*buf); - ret = tmp - *buf; - *buf = tmp; - return ret; + while (*size) { + if (!isspace(**buf)) + break; + (*size)--; + (*buf)++; + } } static void proc_skip_char(char **buf, size_t *size, const char v) @@ -471,13 +472,12 @@ unsigned long *val, bool *neg, const char *perm_tr, unsigned perm_tr_len, char *tr) { - int len; char *p, tmp[TMPBUFLEN]; + ssize_t len = *size; - if (!*size) + if (len <= 0) return -EINVAL; - len = *size; if (len > TMPBUFLEN - 1) len = TMPBUFLEN - 1; @@ -635,7 +635,7 @@ bool neg; if (write) { - left -= proc_skip_spaces(&p); + proc_skip_spaces(&p, &left); if (!left) break; @@ -662,7 +662,7 @@ if (!write && !first && left && !err) proc_put_char(&buffer, &left, '\n'); if (write && !err && left) - left -= proc_skip_spaces(&p); + proc_skip_spaces(&p, &left); if (write && first) return err ? : -EINVAL; *lenp -= left; @@ -704,7 +704,7 @@ if (left > PAGE_SIZE - 1) left = PAGE_SIZE - 1; - left -= proc_skip_spaces(&p); + proc_skip_spaces(&p, &left); if (!left) { err = -EINVAL; goto out_free; @@ -724,7 +724,7 @@ } if (!err && left) - left -= proc_skip_spaces(&p); + proc_skip_spaces(&p, &left); out_free: if (err) @@ -1182,7 +1182,7 @@ if (write) { bool neg; - left -= proc_skip_spaces(&p); + proc_skip_spaces(&p, &left); if (!left) break; @@ -1211,7 +1211,7 @@ if (!write && !first && left && !err) proc_put_char(&buffer, &left, '\n'); if (write && !err) - left -= proc_skip_spaces(&p); + proc_skip_spaces(&p, &left); if (write && first) return err ? : -EINVAL; *lenp -= left; @@ -2994,6 +2994,14 @@ .extra1 = SYSCTL_ZERO, }, { + .procname = "percpu_max_batchsize", + .data = &percpu_max_batchsize, + .maxlen = sizeof(percpu_max_batchsize), + .mode = 0644, + .proc_handler = percpu_max_batchsize_sysctl_handler, + .extra1 = SYSCTL_ZERO, + }, + { .procname = "page_lock_unfairness", .data = &sysctl_page_lock_unfairness, .maxlen = sizeof(sysctl_page_lock_unfairness),
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/time/timekeeping.c
Changed
@@ -52,6 +52,9 @@ u64 padding[8]; #endif seqcount_raw_spinlock_t seq; +#ifdef CONFIG_ARCH_LLC_128_LINE_SIZE + u64 padding2[2]; +#endif struct timekeeper timekeeper; #ifdef CONFIG_ARCH_LLC_128_LINE_SIZE } tk_core ____cacheline_aligned_128 = {
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/trace/trace_osnoise.c
Changed
@@ -2103,6 +2103,13 @@ return -EINVAL; } +static void osnoise_unhook_events(void) +{ + unhook_thread_events(); + unhook_softirq_events(); + unhook_irq_events(); +} + /* * osnoise_workload_start - start the workload and hook to events */ @@ -2135,7 +2142,14 @@ retval = start_per_cpu_kthreads(); if (retval) { - unhook_irq_events(); + trace_osnoise_callback_enabled = false; + /* + * Make sure that ftrace_nmi_enter/exit() see + * trace_osnoise_callback_enabled as false before continuing. + */ + barrier(); + + osnoise_unhook_events(); return retval; } @@ -2157,6 +2171,17 @@ if (osnoise_has_registered_instances()) return; + /* + * If callbacks were already disabled in a previous stop + * call, there is no need to disable then again. + * + * For instance, this happens when tracing is stopped via: + * echo 0 > tracing_on + * echo nop > current_tracer. + */ + if (!trace_osnoise_callback_enabled) + return; + trace_osnoise_callback_enabled = false; /* * Make sure that ftrace_nmi_enter/exit() see @@ -2166,9 +2191,7 @@ stop_per_cpu_kthreads(); - unhook_irq_events(); - unhook_softirq_events(); - unhook_thread_events(); + osnoise_unhook_events(); } static void osnoise_tracer_start(struct trace_array *tr)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/workqueue.c
Changed
@@ -4812,8 +4812,16 @@ for_each_pwq(pwq, wq) { raw_spin_lock_irqsave(&pwq->pool->lock, flags); - if (pwq->nr_active || !list_empty(&pwq->delayed_works)) + if (pwq->nr_active || !list_empty(&pwq->delayed_works)) { + /* + * Defer printing to avoid deadlocks in console + * drivers that queue work while holding locks + * also taken in their write paths. + */ + printk_safe_enter(); show_pwq(pwq); + printk_safe_exit(); + } raw_spin_unlock_irqrestore(&pwq->pool->lock, flags); /* * We could be printing a lot from atomic context, e.g. @@ -4831,7 +4839,12 @@ raw_spin_lock_irqsave(&pool->lock, flags); if (pool->nr_workers == pool->nr_idle) goto next_pool; - + /* + * Defer printing to avoid deadlocks in console drivers that + * queue work while holding locks also taken in their write + * paths. + */ + printk_safe_enter(); pr_info("pool %d:", pool->id); pr_cont_pool_info(pool); pr_cont(" hung=%us workers=%d", @@ -4846,6 +4859,7 @@ first = false; } pr_cont("\n"); + printk_safe_exit(); next_pool: raw_spin_unlock_irqrestore(&pool->lock, flags); /*
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/lib/fault-inject.c
Changed
@@ -41,9 +41,6 @@ static void fail_dump(struct fault_attr *attr) { - if (attr->no_warn) - return; - if (attr->verbose > 0 && __ratelimit(&attr->ratelimit_state)) { printk(KERN_NOTICE "FAULT_INJECTION: forcing a failure.\n" "name %pd, interval %lu, probability %lu, " @@ -103,7 +100,7 @@ * http://www.nongnu.org/failmalloc/ */ -bool should_fail(struct fault_attr *attr, ssize_t size) +bool should_fail_ex(struct fault_attr *attr, ssize_t size, int flags) { if (in_task()) { unsigned int fail_nth = READ_ONCE(current->fail_nth); @@ -146,13 +143,19 @@ return false; fail: - fail_dump(attr); + if (!(flags & FAULT_NOWARN)) + fail_dump(attr); if (atomic_read(&attr->times) != -1) atomic_dec_not_zero(&attr->times); return true; } + +bool should_fail(struct fault_attr *attr, ssize_t size) +{ + return should_fail_ex(attr, size, 0); +} EXPORT_SYMBOL_GPL(should_fail); #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
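A sketch of an injection site using the extended helper; my_* names are hypothetical, while FAULT_NOWARN and should_fail_ex() are the additions this hunk introduces.

#include <linux/fault-inject.h>

static DECLARE_FAULT_ATTR(my_fail_attr);

static bool my_should_fail(size_t size, bool quiet)
{
	/* With FAULT_NOWARN the "FAULT_INJECTION: forcing a failure"
	 * dump is suppressed per call, instead of via the old sticky
	 * attr->no_warn flag that this hunk removes. */
	return should_fail_ex(&my_fail_attr, size, quiet ? FAULT_NOWARN : 0);
}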
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/dynamic_hugetlb.c
Changed
@@ -182,25 +182,6 @@
 	}
 }

-static void clear_percpu_pools(struct dhugetlb_pool *hpool)
-{
-	struct percpu_pages_pool *percpu_pool;
-	int i;
-
-	lockdep_assert_held(&hpool->lock);
-
-	spin_unlock(&hpool->lock);
-	for (i = 0; i < NR_PERCPU_POOL; i++)
-		spin_lock(&hpool->percpu_pool[i].lock);
-	spin_lock(&hpool->lock);
-	for (i = 0; i < NR_PERCPU_POOL; i++) {
-		percpu_pool = &hpool->percpu_pool[i];
-		reclaim_pages_from_percpu_pool(hpool, percpu_pool, percpu_pool->free_pages);
-	}
-	for (i = 0; i < NR_PERCPU_POOL; i++)
-		spin_unlock(&hpool->percpu_pool[i].lock);
-}
-
 /* We only try 5 times to reclaim pages */
 #define HPOOL_RECLAIM_RETRIES	5

@@ -210,6 +191,7 @@
 	struct split_hugepage *split_page, *split_next;
 	unsigned long nr_pages, block_size;
 	struct page *page, *next, *p;
+	struct percpu_pages_pool *percpu_pool;
 	bool need_migrate = false, need_initial = false;
 	int i, try;
 	LIST_HEAD(wait_page_list);
@@ -241,7 +223,22 @@

 	try = 0;
 merge:
-	clear_percpu_pools(hpool);
+	/*
+	 * If we are merging 4K page to 2M page, we need to get
+	 * lock of percpu pool sequentially and clear percpu pool.
+	 */
+	if (hpages_pool_idx == HUGE_PAGES_POOL_2M) {
+		spin_unlock(&hpool->lock);
+		for (i = 0; i < NR_PERCPU_POOL; i++)
+			spin_lock(&hpool->percpu_pool[i].lock);
+		spin_lock(&hpool->lock);
+		for (i = 0; i < NR_PERCPU_POOL; i++) {
+			percpu_pool = &hpool->percpu_pool[i];
+			reclaim_pages_from_percpu_pool(hpool, percpu_pool,
+					percpu_pool->free_pages);
+		}
+	}
+
 	page = pfn_to_page(split_page->start_pfn);
 	for (i = 0; i < nr_pages; i+= block_size) {
 		p = pfn_to_page(split_page->start_pfn + i);
@@ -252,6 +249,14 @@
 			goto migrate;
 		}
 	}
+	if (hpages_pool_idx == HUGE_PAGES_POOL_2M) {
+		/*
+		 * All target 4K page are in src_hpages_pool, we
+		 * can unlock percpu pool.
+		 */
+		for (i = 0; i < NR_PERCPU_POOL; i++)
+			spin_unlock(&hpool->percpu_pool[i].lock);
+	}

 	list_del(&split_page->head_pages);
 	hpages_pool->split_normal_pages--;
@@ -284,8 +289,14 @@
 	trace_dynamic_hugetlb_split_merge(hpool, page, DHUGETLB_MERGE, page_size(page));
 	return 0;
 next:
+	if (hpages_pool_idx == HUGE_PAGES_POOL_2M) {
+		/* Unlock percpu pool before try next */
+		for (i = 0; i < NR_PERCPU_POOL; i++)
+			spin_unlock(&hpool->percpu_pool[i].lock);
+	}
 	continue;
 migrate:
+	/* page migration only used for HUGE_PAGES_POOL_2M */
 	if (try++ >= HPOOL_RECLAIM_RETRIES)
 		goto next;

@@ -300,7 +311,10 @@
 	}

 	/* Unlock and try migration. */
+	for (i = 0; i < NR_PERCPU_POOL; i++)
+		spin_unlock(&hpool->percpu_pool[i].lock);
 	spin_unlock(&hpool->lock);
+
 	for (i = 0; i < nr_pages; i+= block_size) {
 		p = pfn_to_page(split_page->start_pfn + i);
 		if (PagePool(p))
@@ -312,6 +326,10 @@
 	}

 	spin_lock(&hpool->lock);
+	/*
+	 * Move all isolate pages to src_hpages_pool and then try
+	 * merge again.
+ */ list_for_each_entry_safe(page, next, &wait_page_list, lru) { list_move_tail(&page->lru, &src_hpages_pool->hugepage_freelists); src_hpages_pool->free_normal_pages++; @@ -559,6 +577,10 @@ spin_lock_irqsave(&percpu_pool->lock, flags); ClearPagePool(page); + if (!free_pages_prepare(page, 0, true)) { + SetPagePool(page); + goto out; + } list_add(&page->lru, &percpu_pool->head_page); percpu_pool->free_pages++; percpu_pool->used_pages--; @@ -567,7 +589,7 @@ reclaim_pages_from_percpu_pool(hpool, percpu_pool, PERCPU_POOL_PAGE_BATCH); spin_unlock(&hpool->lock); } - +out: spin_unlock_irqrestore(&percpu_pool->lock, flags); put_hpool(hpool); } @@ -577,8 +599,7 @@ if (!dhugetlb_enabled || !PagePool(page)) return false; - if (free_pages_prepare(page, 0, true)) - __free_page_to_dhugetlb_pool(page); + __free_page_to_dhugetlb_pool(page); return true; } @@ -592,8 +613,7 @@ list_for_each_entry_safe(page, next, list, lru) { if (PagePool(page)) { list_del(&page->lru); - if (free_pages_prepare(page, 0, true)) - __free_page_to_dhugetlb_pool(page); + __free_page_to_dhugetlb_pool(page); } } } @@ -799,7 +819,8 @@ p->mapping = NULL; } set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); - + /* compound_nr and mapping are union in page, reset it. */ + set_compound_order(page, PUD_SHIFT - PAGE_SHIFT); nid = page_to_nid(page); SetHPageFreed(page); list_move(&page->lru, &h->hugepage_freelists[nid]);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/failslab.c
Changed
@@ -16,6 +16,8 @@ bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags) { + int flags = 0; + /* No fault-injection for bootstrap cache */ if (unlikely(s == kmem_cache)) return false; @@ -30,10 +32,16 @@ if (failslab.cache_filter && !(s->flags & SLAB_FAILSLAB)) return false; + /* + * In some cases, it expects to specify __GFP_NOWARN + * to avoid printing any information(not just a warning), + * thus avoiding deadlocks. See commit 6b9dbedbe349 for + * details. + */ if (gfpflags & __GFP_NOWARN) - failslab.attr.no_warn = true; + flags |= FAULT_NOWARN; - return should_fail(&failslab.attr, s->object_size); + return should_fail_ex(&failslab.attr, s->object_size, flags); } static int __init setup_failslab(char *str)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/filemap.c
Changed
@@ -21,6 +21,7 @@ #include <linux/gfp.h> #include <linux/mm.h> #include <linux/swap.h> +#include <linux/swapops.h> #include <linux/mman.h> #include <linux/pagemap.h> #include <linux/file.h> @@ -42,6 +43,7 @@ #include <linux/psi.h> #include <linux/ramfs.h> #include <linux/page_idle.h> +#include <linux/migrate.h> #include "internal.h" #define CREATE_TRACE_POINTS @@ -1323,6 +1325,95 @@ return wait->flags & WQ_FLAG_WOKEN ? 0 : -EINTR; } +#ifdef CONFIG_MIGRATION +/** + * migration_entry_wait_on_locked - Wait for a migration entry to be removed + * @entry: migration swap entry. + * @ptep: mapped pte pointer. Will return with the ptep unmapped. Only required + * for pte entries, pass NULL for pmd entries. + * @ptl: already locked ptl. This function will drop the lock. + * + * Wait for a migration entry referencing the given page to be removed. This is + * equivalent to put_and_wait_on_page_locked(page, TASK_UNINTERRUPTIBLE) except + * this can be called without taking a reference on the page. Instead this + * should be called while holding the ptl for the migration entry referencing + * the page. + * + * Returns after unmapping and unlocking the pte/ptl with pte_unmap_unlock(). + * + * This follows the same logic as wait_on_page_bit_common() so see the comments + * there. + */ +void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep, + spinlock_t *ptl) +{ + struct wait_page_queue wait_page; + wait_queue_entry_t *wait = &wait_page.wait; + bool thrashing = false; + bool delayacct = false; + unsigned long pflags; + wait_queue_head_t *q; + struct page *page = compound_head(migration_entry_to_page(entry)); + + q = page_waitqueue(page); + if (!PageUptodate(page) && PageWorkingset(page)) { + if (!PageSwapBacked(page)) { + delayacct_thrashing_start(); + delayacct = true; + } + psi_memstall_enter(&pflags); + thrashing = true; + } + + init_wait(wait); + wait->func = wake_page_function; + wait_page.page = page; + wait_page.bit_nr = PG_locked; + wait->flags = 0; + + spin_lock_irq(&q->lock); + SetPageWaiters(page); + if (!trylock_page_bit_common(page, PG_locked, wait)) + __add_wait_queue_entry_tail(q, wait); + spin_unlock_irq(&q->lock); + + /* + * If a migration entry exists for the page the migration path must hold + * a valid reference to the page, and it must take the ptl to remove the + * migration entry. So the page is valid until the ptl is dropped. + */ + if (ptep) + pte_unmap_unlock(ptep, ptl); + else + spin_unlock(ptl); + + for (;;) { + unsigned int flags; + + set_current_state(TASK_UNINTERRUPTIBLE); + + /* Loop until we've been woken or interrupted */ + flags = smp_load_acquire(&wait->flags); + if (!(flags & WQ_FLAG_WOKEN)) { + if (signal_pending_state(TASK_UNINTERRUPTIBLE, current)) + break; + + io_schedule(); + continue; + } + break; + } + + finish_wait(q, wait); + + if (thrashing) { + if (delayacct) + delayacct_thrashing_end(); + psi_memstall_leave(&pflags); + } +} +#endif + void wait_on_page_bit(struct page *page, int bit_nr) { wait_queue_head_t *q = page_waitqueue(page);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/madvise.c
Changed
@@ -221,6 +221,7 @@ if (page) put_page(page); } + cond_resched(); return 0; }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/memory.c
Changed
@@ -3366,6 +3366,7 @@ { struct vm_area_struct *vma = vmf->vma; struct page *page = NULL, *swapcache; + struct swap_info_struct *si = NULL; swp_entry_t entry; pte_t pte; int locked; @@ -3425,14 +3426,16 @@ goto out; } + /* Prevent swapoff from happening to us. */ + si = get_swap_device(entry); + if (unlikely(!si)) + goto out; delayacct_set_flag(DELAYACCT_PF_SWAPIN); page = lookup_swap_cache(entry, vma, vmf->address); swapcache = page; if (!page) { - struct swap_info_struct *si = swp_swap_info(entry); - if (data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1) { /* skip swapcache */ @@ -3604,6 +3607,8 @@ unlock: pte_unmap_unlock(vmf->pte, vmf->ptl); out: + if (si) + put_swap_device(si); return ret; out_nomap: pte_unmap_unlock(vmf->pte, vmf->ptl); @@ -3615,6 +3620,8 @@ unlock_page(swapcache); put_page(swapcache); } + if (si) + put_swap_device(si); return ret; }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/migrate.c
Changed
@@ -315,7 +315,6 @@ { pte_t pte; swp_entry_t entry; - struct page *page; spin_lock(ptl); pte = *ptep; @@ -326,18 +325,7 @@ if (!is_migration_entry(entry)) goto out; - page = migration_entry_to_page(entry); - page = compound_head(page); - - /* - * Once page cache replacement of page migration started, page_count - * is zero; but we must not call put_and_wait_on_page_locked() without - * a ref. Use get_page_unless_zero(), and just fault again if it fails. - */ - if (!get_page_unless_zero(page)) - goto out; - pte_unmap_unlock(ptep, ptl); - put_and_wait_on_page_locked(page); + migration_entry_wait_on_locked(entry, ptep, ptl); return; out: pte_unmap_unlock(ptep, ptl); @@ -362,16 +350,11 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd) { spinlock_t *ptl; - struct page *page; ptl = pmd_lock(mm, pmd); if (!is_pmd_migration_entry(*pmd)) goto unlock; - page = migration_entry_to_page(pmd_to_swp_entry(*pmd)); - if (!get_page_unless_zero(page)) - goto unlock; - spin_unlock(ptl); - put_and_wait_on_page_locked(page); + migration_entry_wait_on_locked(pmd_to_swp_entry(*pmd), NULL, ptl); return; unlock: spin_unlock(ptl); @@ -2558,22 +2541,8 @@ return false; /* Page from ZONE_DEVICE have one extra reference */ - if (is_zone_device_page(page)) { - /* - * Private page can never be pin as they have no valid pte and - * GUP will fail for those. Yet if there is a pending migration - * a thread might try to wait on the pte migration entry and - * will bump the page reference count. Sadly there is no way to - * differentiate a regular pin from migration wait. Hence to - * avoid 2 racing thread trying to migrate back to CPU to enter - * infinite loop (one stopping migration because the other is - * waiting on pte migration entry). We always return true here. - * - * FIXME proper solution is to rework migration_entry_wait() so - * it does not need to take a reference on page. - */ - return is_device_private_page(page); - } + if (is_zone_device_page(page)) + extra++; /* For file back page */ if (page_mapping(page))
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/mmap.c
Changed
@@ -1479,9 +1479,11 @@ pkey = 0; } +#ifdef CONFIG_ASCEND_FEATURES /* Physical address is within 4G */ if (flags & MAP_PA32BIT) vm_flags |= VM_PA32BIT; +#endif /* Do simple checking here so the lower-level routines won't have * to. we assume access permissions have been handled by the open
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/page_alloc.c
Changed
@@ -112,6 +112,8 @@ /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */ static DEFINE_MUTEX(pcp_batch_high_lock); #define MIN_PERCPU_PAGELIST_FRACTION (8) +#define MAX_PERCPU_MAX_BATCHSIZE ((512 * 1024) / PAGE_SIZE) +#define MIN_PERCPU_MAX_BATCHSIZE (MAX_PERCPU_MAX_BATCHSIZE / 8) #ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID DEFINE_PER_CPU(int, numa_node); @@ -167,6 +169,8 @@ unsigned long totalcma_pages __read_mostly; int percpu_pagelist_fraction; +int percpu_max_batchsize = MAX_PERCPU_MAX_BATCHSIZE / 2; + gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK; #ifdef CONFIG_INIT_ON_ALLOC_DEFAULT_ON DEFINE_STATIC_KEY_TRUE(init_on_alloc); @@ -3545,6 +3549,8 @@ static bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order) { + int flags = 0; + if (order < fail_page_alloc.min_order) return false; if (gfp_mask & __GFP_NOFAIL) @@ -3555,10 +3561,11 @@ (gfp_mask & __GFP_DIRECT_RECLAIM)) return false; + /* See comment in __should_failslab() */ if (gfp_mask & __GFP_NOWARN) - fail_page_alloc.attr.no_warn = true; + flags |= FAULT_NOWARN; - return should_fail(&fail_page_alloc.attr, 1 << order); + return should_fail_ex(&fail_page_alloc.attr, 1 << order, flags); } #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS @@ -6757,10 +6764,9 @@ * size of the zone. */ batch = zone_managed_pages(zone) / 1024; - /* But no more than a meg. */ - if (batch * PAGE_SIZE > 1024 * 1024) - batch = (1024 * 1024) / PAGE_SIZE; batch /= 4; /* We effectively *= 4 below */ + if (batch > percpu_max_batchsize) + batch = percpu_max_batchsize; if (batch < 1) batch = 1; @@ -8609,6 +8615,39 @@ goto out; for_each_populated_zone(zone) + zone_set_pageset_high_and_batch(zone); +out: + mutex_unlock(&pcp_batch_high_lock); + return ret; +} + +int percpu_max_batchsize_sysctl_handler(struct ctl_table *table, int write, + void *buffer, size_t *length, loff_t *ppos) +{ + struct zone *zone; + int old_percpu_max_batchsize; + int ret; + + mutex_lock(&pcp_batch_high_lock); + old_percpu_max_batchsize = percpu_max_batchsize; + + ret = proc_dointvec_minmax(table, write, buffer, length, ppos); + if (!write || ret < 0) + goto out; + + /* Sanity checking to avoid pcp imbalance */ + if (percpu_max_batchsize > MAX_PERCPU_MAX_BATCHSIZE || + percpu_max_batchsize < MIN_PERCPU_MAX_BATCHSIZE) { + percpu_max_batchsize = old_percpu_max_batchsize; + ret = -EINVAL; + goto out; + } + + /* No change? */ + if (percpu_max_batchsize == old_percpu_max_batchsize) + goto out; + + for_each_populated_zone(zone) zone_set_pageset_high_and_batch(zone); out: mutex_unlock(&pcp_batch_high_lock);
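For tuning, a small userspace sketch; the /proc path assumes the new entry lands in the vm table next to page_lock_unfairness, as the kernel/sysctl.c hunk suggests. With 4 KiB pages the accepted range works out to 16..128 pages (MIN/MAX_PERCPU_MAX_BATCHSIZE), with a default of 64.

#include <stdio.h>

int main(void)
{
	/* assumed path for the new knob */
	FILE *f = fopen("/proc/sys/vm/percpu_max_batchsize", "w");

	if (!f)
		return 1;
	/* cap each zone's per-cpu pagelist batch at 32 pages */
	fprintf(f, "32\n");
	return fclose(f) ? 1 : 0;
}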
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/shmem.c
Changed
@@ -1711,7 +1711,8 @@ struct address_space *mapping = inode->i_mapping; struct shmem_inode_info *info = SHMEM_I(inode); struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm; - struct page *page; + struct swap_info_struct *si; + struct page *page = NULL; swp_entry_t swap; int error; @@ -1719,6 +1720,12 @@ swap = radix_to_swp_entry(*pagep); *pagep = NULL; + /* Prevent swapoff from happening to us. */ + si = get_swap_device(swap); + if (!si) { + error = EINVAL; + goto failed; + } /* Look it up and read it in.. */ page = lookup_swap_cache(swap, NULL, 0); if (!page) { @@ -1780,6 +1787,8 @@ swap_free(swap); *pagep = page; + if (si) + put_swap_device(si); return 0; failed: if (!shmem_confirm_swap(mapping, index, swap)) @@ -1790,6 +1799,9 @@ put_page(page); } + if (si) + put_swap_device(si); + return error; }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/swapfile.c
Changed
@@ -39,6 +39,7 @@
 #include <linux/export.h>
 #include <linux/swap_slots.h>
 #include <linux/sort.h>
+#include <linux/completion.h>

 #include <asm/tlbflush.h>
 #include <linux/swapops.h>
@@ -512,6 +513,14 @@
 	spin_unlock(&si->lock);
 }

+static void swap_users_ref_free(struct percpu_ref *ref)
+{
+	struct swap_extend_info *sei;
+
+	sei = container_of(ref, struct swap_extend_info, users);
+	complete(&sei->comp);
+}
+
 static void alloc_cluster(struct swap_info_struct *si, unsigned long idx)
 {
 	struct swap_cluster_info *ci = si->cluster_info;
@@ -944,6 +953,11 @@
 scan:
 	spin_unlock(&si->lock);
 	while (++offset <= READ_ONCE(si->highest_bit)) {
+		if (unlikely(--latency_ration < 0)) {
+			cond_resched();
+			latency_ration = LATENCY_LIMIT;
+			scanned_many = true;
+		}
 		if (data_race(!si->swap_map[offset])) {
 			spin_lock(&si->lock);
 			goto checks;
@@ -953,14 +967,14 @@
 			spin_lock(&si->lock);
 			goto checks;
 		}
+	}
+	offset = si->lowest_bit;
+	while (offset < scan_base) {
 		if (unlikely(--latency_ration < 0)) {
 			cond_resched();
 			latency_ration = LATENCY_LIMIT;
 			scanned_many = true;
 		}
-	}
-	offset = si->lowest_bit;
-	while (offset < scan_base) {
 		if (data_race(!si->swap_map[offset])) {
 			spin_lock(&si->lock);
 			goto checks;
@@ -970,11 +984,6 @@
 			spin_lock(&si->lock);
 			goto checks;
 		}
-		if (unlikely(--latency_ration < 0)) {
-			cond_resched();
-			latency_ration = LATENCY_LIMIT;
-			scanned_many = true;
-		}
 		offset++;
 	}
 	spin_lock(&si->lock);
@@ -1274,18 +1283,12 @@
 	 * via preventing the swap device from being swapoff, until
 	 * put_swap_device() is called. Otherwise return NULL.
 	 *
-	 * The entirety of the RCU read critical section must come before the
-	 * return from or after the call to synchronize_rcu() in
-	 * enable_swap_info() or swapoff(). So if "si->flags & SWP_VALID" is
-	 * true, the si->map, si->cluster_info, etc. must be valid in the
-	 * critical section.
-	 *
 	 * Notice that swapoff or swapoff+swapon can still happen before the
-	 * rcu_read_lock() in get_swap_device() or after the rcu_read_unlock()
-	 * in put_swap_device() if there isn't any other way to prevent
-	 * swapoff, such as page lock, page table lock, etc. The caller must
-	 * be prepared for that. For example, the following situation is
-	 * possible.
+	 * percpu_ref_tryget_live() in get_swap_device() or after the
+	 * percpu_ref_put() in put_swap_device() if there isn't any other way
+	 * to prevent swapoff, such as page lock, page table lock, etc. The
+	 * caller must be prepared for that. For example, the following
+	 * situation is possible.
 	 *
 	 * CPU1				CPU2
 	 * do_swap_page()
@@ -1313,21 +1316,27 @@
 	si = swp_swap_info(entry);
 	if (!si)
 		goto bad_nofile;
-
-	rcu_read_lock();
-	if (data_race(!(si->flags & SWP_VALID)))
-		goto unlock_out;
+	if (!percpu_ref_tryget_live(&si->sei->users))
+		goto out;
+	/*
+	 * Guarantee the si->users are checked before accessing other
+	 * fields of swap_info_struct.
+	 *
+	 * Paired with the spin_unlock() after setup_swap_info() in
+	 * enable_swap_info().
+ */ + smp_rmb(); offset = swp_offset(entry); if (offset >= si->max) - goto unlock_out; + goto put_out; return si; bad_nofile: pr_err("%s: %s%08lx\n", __func__, Bad_file, entry.val); out: return NULL; -unlock_out: - rcu_read_unlock(); +put_out: + percpu_ref_put(&si->sei->users); return NULL; } @@ -2500,7 +2509,7 @@ static void _enable_swap_info(struct swap_info_struct *p) { - p->flags |= SWP_WRITEOK | SWP_VALID; + p->flags |= SWP_WRITEOK; atomic_long_add(p->pages, &nr_swap_pages); total_swap_pages += p->pages; @@ -2531,10 +2540,9 @@ spin_unlock(&p->lock); spin_unlock(&swap_lock); /* - * Guarantee swap_map, cluster_info, etc. fields are valid - * between get/put_swap_device() if SWP_VALID bit is set + * Finished initializing swap device, now it's safe to reference it. */ - synchronize_rcu(); + percpu_ref_resurrect(&p->sei->users); spin_lock(&swap_lock); spin_lock(&p->lock); _enable_swap_info(p); @@ -2650,16 +2658,16 @@ reenable_swap_slots_cache_unlock(); - spin_lock(&swap_lock); - spin_lock(&p->lock); - p->flags &= ~SWP_VALID; /* mark swap device as invalid */ - spin_unlock(&p->lock); - spin_unlock(&swap_lock); /* - * wait for swap operations protected by get/put_swap_device() - * to complete + * Wait for swap operations protected by get/put_swap_device() + * to complete. + * + * We need synchronize_rcu() here to protect the accessing to + * the swap cache data structure. */ + percpu_ref_kill(&p->sei->users); synchronize_rcu(); + wait_for_completion(&p->sei->comp); flush_work(&p->discard_work); @@ -2891,6 +2899,19 @@ if (!p) return ERR_PTR(-ENOMEM); + p->sei = kvzalloc(sizeof(struct swap_extend_info), GFP_KERNEL); + if (!p->sei) { + kvfree(p); + return ERR_PTR(-ENOMEM); + } + + if (percpu_ref_init(&p->sei->users, swap_users_ref_free, + PERCPU_REF_INIT_DEAD, GFP_KERNEL)) { + kvfree(p->sei); + kvfree(p); + return ERR_PTR(-ENOMEM); + } + spin_lock(&swap_lock); for (type = 0; type < nr_swapfiles; type++) { if (!(swap_info[type]->flags & SWP_USED)) @@ -2898,6 +2919,8 @@ } if (type >= MAX_SWAPFILES) { spin_unlock(&swap_lock); + percpu_ref_exit(&p->sei->users); + kvfree(p->sei); kvfree(p); return ERR_PTR(-EPERM); } @@ -2925,9 +2948,14 @@
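The swapoff fix above replaces the SWP_VALID flag plus RCU sync with a percpu_ref. A condensed, self-contained sketch of the same lifecycle; my_* names are placeholders, and the calls mirror swapon, enable_swap_info(), get/put_swap_device() and swapoff as wired up in the hunks above.

#include <linux/percpu-refcount.h>
#include <linux/completion.h>

struct my_dev {
	struct percpu_ref users;
	struct completion comp;
};

static void my_users_free(struct percpu_ref *ref)
{
	struct my_dev *d = container_of(ref, struct my_dev, users);

	complete(&d->comp);	/* like swap_users_ref_free() */
}

/* setup (swapon): start DEAD so readers fail until the device is ready */
static int my_setup(struct my_dev *d)
{
	init_completion(&d->comp);
	return percpu_ref_init(&d->users, my_users_free,
			       PERCPU_REF_INIT_DEAD, GFP_KERNEL);
}

static void my_enable(struct my_dev *d)
{
	percpu_ref_resurrect(&d->users);	/* like enable_swap_info() */
}

/* reader side, like get_swap_device()/put_swap_device() */
static bool my_get(struct my_dev *d)
{
	return percpu_ref_tryget_live(&d->users);
}

static void my_put(struct my_dev *d)
{
	percpu_ref_put(&d->users);
}

/* teardown, like swapoff */
static void my_disable(struct my_dev *d)
{
	percpu_ref_kill(&d->users);	/* no new readers succeed */
	wait_for_completion(&d->comp);	/* drain the existing ones */
	percpu_ref_exit(&d->users);
}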
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/zswap.c
Changed
@@ -79,6 +79,8 @@

 #define ZSWAP_PARAM_UNSET ""

+static int zswap_setup(void);
+
 /* Enable/disable zswap */
 static bool zswap_enabled = IS_ENABLED(CONFIG_ZSWAP_DEFAULT_ON);
 static int zswap_enabled_param_set(const char *,
@@ -203,11 +205,14 @@
 /* pool counter to provide unique names to zpool */
 static atomic_t zswap_pools_count = ATOMIC_INIT(0);

-/* used by param callback function */
-static bool zswap_init_started;
+#define ZSWAP_UNINIT 0
+#define ZSWAP_INIT_SUCCEED 1
+#define ZSWAP_INIT_FAILED 2

-/* fatal error during init */
-static bool zswap_init_failed;
+/* init state */
+static int zswap_init_state;
+/* used to ensure the integrity of initialization */
+static DEFINE_MUTEX(zswap_init_lock);

 /* init completed, but couldn't create the initial pool */
 static bool zswap_has_pool;
@@ -261,13 +266,13 @@
 **********************************/
 static struct kmem_cache *zswap_entry_cache;

-static int __init zswap_entry_cache_create(void)
+static int zswap_entry_cache_create(void)
 {
 	zswap_entry_cache = KMEM_CACHE(zswap_entry, 0);
 	return zswap_entry_cache == NULL;
 }

-static void __init zswap_entry_cache_destroy(void)
+static void zswap_entry_cache_destroy(void)
 {
 	kmem_cache_destroy(zswap_entry_cache);
 }
@@ -648,7 +653,7 @@
 	return NULL;
 }

-static __init struct zswap_pool *__zswap_pool_create_fallback(void)
+static struct zswap_pool *__zswap_pool_create_fallback(void)
 {
 	bool has_comp, has_zpool;

@@ -757,7 +762,7 @@
 	char *s = strstrip((char *)val);
 	int ret;

-	if (zswap_init_failed) {
+	if (zswap_init_state == ZSWAP_INIT_FAILED) {
 		pr_err("can't set param, initialization failed\n");
 		return -ENODEV;
 	}
@@ -766,11 +771,17 @@
 	if (!strcmp(s, *(char **)kp->arg) && zswap_has_pool)
 		return 0;

-	/* if this is load-time (pre-init) param setting,
+	/*
+	 * if zswap has not been initialized,
 	 * don't create a pool; that's done during init.
*/ - if (!zswap_init_started) - return param_set_charp(s, kp); + mutex_lock(&zswap_init_lock); + if (zswap_init_state == ZSWAP_UNINIT) { + ret = param_set_charp(s, kp); + mutex_unlock(&zswap_init_lock); + return ret; + } + mutex_unlock(&zswap_init_lock); if (!type) { if (!zpool_has_pool(s)) { @@ -860,11 +871,19 @@ static int zswap_enabled_param_set(const char *val, const struct kernel_param *kp) { - if (zswap_init_failed) { + if (system_state == SYSTEM_RUNNING) { + mutex_lock(&zswap_init_lock); + if (zswap_setup()) { + mutex_unlock(&zswap_init_lock); + return -ENODEV; + } + mutex_unlock(&zswap_init_lock); + } + if (zswap_init_state == ZSWAP_INIT_FAILED) { pr_err("can't enable, initialization failed\n"); return -ENODEV; } - if (!zswap_has_pool && zswap_init_started) { + if (!zswap_has_pool && zswap_init_state == ZSWAP_INIT_SUCCEED) { pr_err("can't enable, no pool configured\n"); return -ENODEV; } @@ -1390,7 +1409,7 @@ static struct dentry *zswap_debugfs_root; -static int __init zswap_debugfs_init(void) +static int zswap_debugfs_init(void) { if (!debugfs_initialized()) return -ENODEV; @@ -1426,7 +1445,7 @@ debugfs_remove_recursive(zswap_debugfs_root); } #else -static int __init zswap_debugfs_init(void) +static int zswap_debugfs_init(void) { return 0; } @@ -1434,15 +1453,13 @@ static void __exit zswap_debugfs_exit(void) { } #endif -/********************************* -* module init and exit -**********************************/ -static int __init init_zswap(void) +static int zswap_setup(void) { struct zswap_pool *pool; int ret; - zswap_init_started = true; + if (zswap_init_state != ZSWAP_UNINIT) + return 0; if (zswap_entry_cache_create()) { pr_err("entry cache creation failed\n"); @@ -1481,6 +1498,7 @@ frontswap_register_ops(&zswap_frontswap_ops); if (zswap_debugfs_init()) pr_warn("debugfs initialization failed\n"); + zswap_init_state = ZSWAP_INIT_SUCCEED; return 0; fallback_fail: @@ -1492,10 +1510,22 @@ zswap_entry_cache_destroy(); cache_fail: /* if built-in, we aren't unloaded on failure; don't allow use */ - zswap_init_failed = true; + zswap_init_state = ZSWAP_INIT_FAILED; zswap_enabled = false; return -ENOMEM; } + +/********************************* +* module init and exit +**********************************/ +static int __init init_zswap(void) +{ + /* skip init if zswap is disabled when system startup */ + if (!zswap_enabled) + return 0; + return zswap_setup(); +} + /* must be late so crypto has time to come up */ late_initcall(init_zswap);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/net/9p/trans_virtio.c
Changed
@@ -716,7 +716,7 @@ mutex_unlock(&virtio_9p_lock); - vdev->config->reset(vdev); + virtio_reset_device(vdev); vdev->config->del_vqs(vdev); sysfs_remove_file(&(vdev->dev.kobj), &dev_attr_mount_tag.attr);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/net/bluetooth/l2cap_core.c
Changed
@@ -1988,11 +1988,11 @@ src_match = !bacmp(&c->src, src); dst_match = !bacmp(&c->dst, dst); if (src_match && dst_match) { - c = l2cap_chan_hold_unless_zero(c); - if (c) { - read_unlock(&chan_list_lock); - return c; - } + if (!l2cap_chan_hold_unless_zero(c)) + continue; + + read_unlock(&chan_list_lock); + return c; } /* Closest match */ @@ -4440,7 +4440,8 @@ chan->ident = cmd->ident; l2cap_send_cmd(conn, cmd->ident, L2CAP_CONF_RSP, len, rsp); - chan->num_conf_rsp++; + if (chan->num_conf_rsp < L2CAP_CONF_MAX_CONF_RSP) + chan->num_conf_rsp++; /* Reset config buffer. */ chan->conf_len = 0; @@ -7621,6 +7622,7 @@ return; } + l2cap_chan_hold(chan); l2cap_chan_lock(chan); } else { BT_DBG("unknown cid 0x%4.4x", cid);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/net/l2tp/l2tp_core.c
Changed
@@ -1150,8 +1150,10 @@ } /* Remove hooks into tunnel socket */ + write_lock_bh(&sk->sk_callback_lock); sk->sk_destruct = tunnel->old_sk_destruct; sk->sk_user_data = NULL; + write_unlock_bh(&sk->sk_callback_lock); /* Call the original destructor */ if (sk->sk_destruct) @@ -1471,16 +1473,19 @@ sock = sockfd_lookup(tunnel->fd, &ret); if (!sock) goto err; - - ret = l2tp_validate_socket(sock->sk, net, tunnel->encap); - if (ret < 0) - goto err_sock; } + sk = sock->sk; + write_lock_bh(&sk->sk_callback_lock); + ret = l2tp_validate_socket(sk, net, tunnel->encap); + if (ret < 0) + goto err_inval_sock; + rcu_assign_sk_user_data(sk, tunnel); + write_unlock_bh(&sk->sk_callback_lock); + tunnel->l2tp_net = net; pn = l2tp_pernet(net); - sk = sock->sk; sock_hold(sk); tunnel->sock = sk; @@ -1505,8 +1510,6 @@ }; setup_udp_tunnel_sock(net, sock, &udp_cfg); - } else { - sk->sk_user_data = tunnel; } tunnel->old_sk_destruct = sk->sk_destruct; @@ -1523,6 +1526,11 @@ return 0; err_sock: + write_lock_bh(&sk->sk_callback_lock); + rcu_assign_sk_user_data(sk, NULL); +err_inval_sock: + write_unlock_bh(&sk->sk_callback_lock); + if (tunnel->fd < 0) sock_release(sock); else
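The locking rule this hunk (and the sock.h comment change earlier in this revision) establishes, as a generic sketch for any sk_user_data writer; my_* names are placeholders.

#include <net/sock.h>

static void my_attach(struct sock *sk, void *priv)
{
	/* Writers must hold sk_callback_lock; readers use the RCU
	 * accessor, so publication goes through rcu_assign_sk_user_data(). */
	write_lock_bh(&sk->sk_callback_lock);
	rcu_assign_sk_user_data(sk, priv);
	write_unlock_bh(&sk->sk_callback_lock);
}

static void my_detach(struct sock *sk)
{
	write_lock_bh(&sk->sk_callback_lock);
	rcu_assign_sk_user_data(sk, NULL);
	write_unlock_bh(&sk->sk_callback_lock);
}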
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/net/packet/af_packet.c
Changed
@@ -1896,8 +1896,10 @@ /* Move network header to the right position for VLAN tagged packets */ if (likely(skb->dev->type == ARPHRD_ETHER) && eth_type_vlan(skb->protocol) && - __vlan_get_protocol(skb, skb->protocol, &depth) != 0) - skb_set_network_header(skb, depth); + __vlan_get_protocol(skb, skb->protocol, &depth) != 0) { + if (pskb_may_pull(skb, depth)) + skb_set_network_header(skb, depth); + } skb_probe_transport_header(skb); }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/net/vmw_vsock/virtio_transport.c
Changed
@@ -641,7 +641,7 @@ virtio_vsock_reset_sock); /* Stop all work handlers to make sure no one is accessing the device, - * so we can safely call vdev->config->reset(). + * so we can safely call virtio_reset_device(). */ mutex_lock(&vsock->rx_lock); vsock->rx_run = false; @@ -658,7 +658,7 @@ /* Flush all device writes and interrupts, device will not use any * more buffers. */ - vdev->config->reset(vdev); + virtio_reset_device(vdev); mutex_lock(&vsock->rx_lock); while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_RX])))
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/openEuler/MAINTAINERS
Changed
@@ -67,6 +67,24 @@ first. When adding to this list, please keep the entries in alphabetical order. +KERNEL VIRTUAL MACHINE (KVM) +M: zhukeqian1@huawei.com +M: yuzenghui@huawei.com +S: Maintained +F: Documentation/virt/kvm/ +F: include/asm-generic/kvm* +F: include/kvm/ +F: include/linux/kvm* +F: include/trace/events/kvm.h +F: include/uapi/asm-generic/kvm* +F: include/uapi/linux/kvm* +F: tools/kvm/ +F: tools/testing/selftests/kvm/ +F: virt/kvm/ +F: arch/*/include/asm/kvm* +F: arch/*/include/uapi/asm/kvm* +F: arch/*/kvm/ + SCHEDULER M: zhengzucheng@huawei.com S: Maintained
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/security/integrity/ima/ima_template.c
Changed
@@ -292,8 +292,11 @@ template_desc->name = ""; template_desc->fmt = kstrdup(template_name, GFP_KERNEL); - if (!template_desc->fmt) + if (!template_desc->fmt) { + kfree(template_desc); + template_desc = NULL; goto out; + } spin_lock(&template_list); list_add_tail_rcu(&template_desc->list, &defined_templates);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/tools/include/uapi/linux/vhost.h
Changed
@@ -89,11 +89,6 @@ /* Set or get vhost backend capability */ -/* Use message type V2 */ -#define VHOST_BACKEND_F_IOTLB_MSG_V2 0x1 -/* IOTLB can accept batching hints */ -#define VHOST_BACKEND_F_IOTLB_BATCH 0x2 - #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64) #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64) @@ -150,4 +145,39 @@ /* Get the valid iova range */ #define VHOST_VDPA_GET_IOVA_RANGE _IOR(VHOST_VIRTIO, 0x78, \ struct vhost_vdpa_iova_range) +/* Get the config size */ +#define VHOST_VDPA_GET_CONFIG_SIZE _IOR(VHOST_VIRTIO, 0x79, __u32) + +/* Get the count of all virtqueues */ +#define VHOST_VDPA_GET_VQS_COUNT _IOR(VHOST_VIRTIO, 0x80, __u32) + +/* Get the number of virtqueue groups. */ +#define VHOST_VDPA_GET_GROUP_NUM _IOR(VHOST_VIRTIO, 0x81, __u32) + +/* Get the number of address spaces. */ +#define VHOST_VDPA_GET_AS_NUM _IOR(VHOST_VIRTIO, 0x7A, unsigned int) + +/* Get the group for a virtqueue: read index, write group in num, + * The virtqueue index is stored in the index field of + * vhost_vring_state. The group for this specific virtqueue is + * returned via num field of vhost_vring_state. + */ +#define VHOST_VDPA_GET_VRING_GROUP _IOWR(VHOST_VIRTIO, 0x7B, \ + struct vhost_vring_state) +/* Set the ASID for a virtqueue group. The group index is stored in + * the index field of vhost_vring_state, the ASID associated with this + * group is stored at num field of vhost_vring_state. + */ +#define VHOST_VDPA_SET_GROUP_ASID _IOW(VHOST_VIRTIO, 0x7C, \ + struct vhost_vring_state) + +/* Suspend a device so it does not process virtqueue requests anymore + * + * After the return of ioctl the device must preserve all the necessary state + * (the virtqueue vring base plus the possible device specific states) that is + * required for restoring in the future. The device must not change its + * configuration after that point. + */ +#define VHOST_VDPA_SUSPEND _IO(VHOST_VIRTIO, 0x7D) + #endif
View file
_service:tar_scm:openEuler_riscv64_defconfig
Changed
@@ -117,7 +117,14 @@ CONFIG_FAT_DEFAULT_UTF8=y CONFIG_EXFAT_FS=m CONFIG_EXFAT_DEFAULT_IOCHARSET="utf8" - +CONFIG_NLS_CODEPAGE_437=y +CONFIG_NLS_CODEPAGE_936=m +CONFIG_NLS_CODEPAGE_950=m +CONFIG_NLS_CODEPAGE_1250=m +CONFIG_NLS_CODEPAGE_1251=m +CONFIG_NLS_ASCII=m +CONFIG_NLS_ISO8859_1=m +CONFIG_NLS_UTF8=m ### End CONFIG_CRYPTO=y @@ -219,3 +226,72 @@ CONFIG_X509_CERTIFICATE_PARSER=y CONFIG_PKCS8_PRIVATE_KEY_PARSER=y CONFIG_PKCS7_MESSAGE_PARSER=y + +# for FUSE +CONFIG_FUSE_FS=m +CONFIG_CUSE=m +CONFIG_VIRTIO_FS=m +# for NFS server +CONFIG_NFSD=m +CONFIG_NFSD_V2_ACL=y +CONFIG_NFSD_V3_ACL=y +CONFIG_NFSD_V4=y +CONFIG_NFSD_PNFS=y +CONFIG_NFSD_BLOCKLAYOUT=y +CONFIG_NFSD_V4_SECURITY_LABEL=y +# for package clamav +CONFIG_FANOTIFY=y +CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y +# for package firewalld +CONFIG_NETFILTER_NETLINK_OSF=m +CONFIG_NETFILTER_CONNCOUNT=m +CONFIG_NETFILTER_SYNPROXY=m +CONFIG_NF_TABLES=m +CONFIG_NF_TABLES_INET=y +CONFIG_NF_TABLES_NETDEV=y +CONFIG_NFT_NUMGEN=m +CONFIG_NFT_CT=m +CONFIG_NFT_COUNTER=m +CONFIG_NFT_CONNLIMIT=m +CONFIG_NFT_LOG=m +CONFIG_NFT_LIMIT=m +CONFIG_NFT_MASQ=m +CONFIG_NFT_REDIR=m +CONFIG_NFT_NAT=m +CONFIG_NFT_TUNNEL=m +CONFIG_NFT_OBJREF=m +CONFIG_NFT_QUOTA=m +CONFIG_NFT_REJECT=m +CONFIG_NFT_REJECT_INET=m +CONFIG_NFT_COMPAT=m +CONFIG_NFT_HASH=m +CONFIG_NFT_XFRM=m +CONFIG_NFT_SOCKET=m +CONFIG_NFT_OSF=m +CONFIG_NFT_TPROXY=m +CONFIG_NFT_SYNPROXY=m +CONFIG_NF_DUP_NETDEV=m +CONFIG_NFT_DUP_NETDEV=m +CONFIG_NFT_FWD_NETDEV=m +CONFIG_NF_FLOW_TABLE_INET=m +CONFIG_NF_FLOW_TABLE=m +CONFIG_NF_SOCKET_IPV4=m +CONFIG_NF_TPROXY_IPV4=m +CONFIG_NF_TABLES_IPV4=y +CONFIG_NFT_REJECT_IPV4=m +CONFIG_NF_SOCKET_IPV6=m +CONFIG_NF_TPROXY_IPV6=m +CONFIG_NF_TABLES_IPV6=y +CONFIG_NFT_REJECT_IPV6=m +CONFIG_NF_CONNTRACK_BROADCAST=m +CONFIG_NF_CONNTRACK_NETBIOS_NS=m +CONFIG_NF_CONNTRACK_SNMP=m +CONFIG_NFT_FIB=m +CONFIG_NFT_FIB_INET=m +CONFIG_IP_SET=m +CONFIG_IP_SET_MAX=256 +CONFIG_NFT_FIB_IPV4=m +CONFIG_NF_NAT_SNMP_BASIC=m +CONFIG_IP_NF_RAW=m +CONFIG_IP_NF_SECURITY=m +CONFIG_NFT_FIB_IPV6=m