Factory:RISC-V:Base / risc-v-kernel
Changes of Revision 5
_service:tar_scm:kernel.spec
Changed
@@ -9,7 +9,7 @@
 Name: kernel
 Version: 5.10.0
-Release: 9
+Release: 11
 Summary: Linux Kernel for RISC-V
 URL: http://www.kernel.org/
 License: GPLv2
@@ -19,6 +19,7 @@
 Source2: cpupower.service
 Source3: cpupower.config
 Patch1: 0001-fix-task-size-min-macro.patch
+Patch2: 0002-backport-Make-mmap-with-PROT_WRITE-imply-PROT_READ.patch
 BuildRequires: module-init-tools, patch >= 2.5.4, bash >= 2.03, tar
 BuildRequires: bzip2, xz, findutils, gzip, m4, perl, make >= 3.78, diffutils, gawk
@@ -142,6 +143,7 @@
 cd linux-%{KernelVer}
 %patch1 -p1
+%patch2 -p1
 touch .scmversion
@@ -461,6 +463,12 @@
 %endif
 %changelog
+* Thu Jan 19 2023 mingzheng <xingmingzheng@iscas.ac.cn> - 5.10.0-11-riscv64
+- submit the backport patch to make mmap() with PROT_WRITE imply PROT_READ
+
+* Thu Dec 22 2022 laokz <zhangkai@iscas.ac.cn> - 5.10.0-10-riscv64
+- add some defconfig for FAT, clamav, NFS server, FUSE, firewalld
+
 * Wed Dec 07 2022 laokz <zhangkai@iscas.ac.cn> - 5.10.0-9-riscv64
 - submit @geasscore TASK_SIZE_MIN macro patch to fix EFI driver failure
_service:tar_scm:0002-backport-Make-mmap-with-PROT_WRITE-imply-PROT_READ.patch
Added
@@ -0,0 +1,66 @@
+From 75a0cfefa0f051fba68682d2b18c26be6cd1a06e Mon Sep 17 00:00:00 2001
+From: Mingzheng Xing <xingmingzheng@iscas.ac.cn>
+Date: Thu, 19 Jan 2023 12:31:34 +0800
+Subject: [PATCH] backport: Make mmap() with PROT_WRITE imply PROT_READ
+
+commit 8aeb7b17f04ef40f620c763502e2b644c5c73efd
+Merge: c45fc916c2b2 9e2e6042a7ec
+Author: Palmer Dabbelt <palmer@rivosinc.com>
+Date:   Thu Oct 13 12:49:12 2022 -0700
+
+    RISC-V: Make mmap() with PROT_WRITE imply PROT_READ
+
+    Commit 2139619bcad7 ("riscv: mmap with PROT_WRITE but no PROT_READ is
+    invalid") made mmap() reject mappings with only PROT_WRITE set in an
+    attempt to fix an observed inconsistency in behavior when attempting
+    to read from a PROT_WRITE-only mapping. The root cause of this behavior
+    was actually that while RISC-V's protection_map maps VM_WRITE to
+    readable PTE permissions (since write-only PTEs are considered reserved
+    by the privileged spec), the page fault handler considered loads from
+    VM_WRITE-only VMAs illegal accesses. Fix the underlying cause by
+    handling faults in VM_WRITE-only VMAs (patch 1) and then re-enable
+    use of mmap(PROT_WRITE) (patch 2), making RISC-V's behavior consistent
+    with all other architectures that don't support write-only PTEs.
+
+    * remotes/palmer/riscv-wonly:
+      riscv: Allow PROT_WRITE-only mmap()
+      riscv: Make VM_WRITE imply VM_READ
+
+    Link: https://lore.kernel.org/r/20220915193702.2201018-1-abrestic@rivosinc.com/
+    Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
+---
+ arch/riscv/kernel/sys_riscv.c | 3 ---
+ arch/riscv/mm/fault.c         | 3 ++-
+ 2 files changed, 2 insertions(+), 4 deletions(-)
+
+diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
+index 8a7880b9c433..bb402685057a 100644
+--- a/arch/riscv/kernel/sys_riscv.c
++++ b/arch/riscv/kernel/sys_riscv.c
+@@ -18,9 +18,6 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
+ 	if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
+ 		return -EINVAL;
+ 
+-	if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))
+-		return -EINVAL;
+-
+ 	return ksys_mmap_pgoff(addr, len, prot, flags, fd,
+ 			       offset >> (PAGE_SHIFT - page_shift_offset));
+ }
+diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
+index 3c8b9e433c67..8f84bbe0ac33 100644
+--- a/arch/riscv/mm/fault.c
++++ b/arch/riscv/mm/fault.c
+@@ -167,7 +167,8 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
+ 		}
+ 		break;
+ 	case EXC_LOAD_PAGE_FAULT:
+-		if (!(vma->vm_flags & VM_READ)) {
++		/* Write implies read */
++		if (!(vma->vm_flags & (VM_READ | VM_WRITE))) {
+ 			return true;
+ 		}
+ 		break;
+-- 
+2.34.1
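For context, a minimal user-space sketch (hypothetical test program, not part of this revision) of the behavior the backport changes: without the patch, riscv_sys_mmap() rejects a PROT_WRITE-only mapping with EINVAL; with it, the mapping is created and, since RISC-V has no write-only PTEs, the subsequent load is also accepted by the relaxed check in access_error().

/* wonly-demo.c -- illustrative sketch only, not part of this revision. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/* On an unpatched 5.10 riscv64 kernel this returns MAP_FAILED (EINVAL). */
	char *p = mmap(NULL, 4096, PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap(PROT_WRITE)");
		return 1;
	}
	p[0] = 42;				/* store permitted by VM_WRITE */
	printf("read back: %d\n", p[0]);	/* load now handled: VM_WRITE implies VM_READ */
	munmap(p, 4096);
	return 0;
}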
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/Documentation/ABI/testing/sysfs-bus-vdpa
Added
@@ -0,0 +1,37 @@
+What:		/sys/bus/vdpa/driver_autoprobe
+Date:		March 2020
+Contact:	virtualization@lists.linux-foundation.org
+Description:
+		This file determines whether new devices are immediately bound
+		to a driver after creation. It initially contains 1, which
+		means the kernel automatically binds devices to a compatible
+		driver immediately after they are created.
+
+		Writing "0" to this file disables this feature; any other
+		string enables it.
+
+What:		/sys/bus/vdpa/driver_probe
+Date:		March 2020
+Contact:	virtualization@lists.linux-foundation.org
+Description:
+		Writing a device name to this file will cause the kernel to
+		bind the device to a compatible driver.
+
+		This can be useful when /sys/bus/vdpa/driver_autoprobe is
+		disabled.
+
+What:		/sys/bus/vdpa/drivers/.../bind
+Date:		March 2020
+Contact:	virtualization@lists.linux-foundation.org
+Description:
+		Writing a device name to this file will cause the driver to
+		attempt to bind to the device. This is useful for overriding
+		default bindings.
+
+What:		/sys/bus/vdpa/drivers/.../unbind
+Date:		March 2020
+Contact:	virtualization@lists.linux-foundation.org
+Description:
+		Writing a device name to this file will cause the driver to
+		attempt to unbind from the device. This may be useful when
+		overriding default bindings.
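A sketch of how this ABI is driven from user space; the device name "vdpa0" and driver name "virtio_vdpa" below are assumptions for the example, not taken from this revision.

/* vdpa-bind-demo.c -- illustrative sketch only. */
#include <stdio.h>

static int sysfs_write(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	/* Disable automatic binding, then bind one device by hand. */
	sysfs_write("/sys/bus/vdpa/driver_autoprobe", "0");
	sysfs_write("/sys/bus/vdpa/drivers/virtio_vdpa/bind", "vdpa0");
	return 0;
}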
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/Documentation/ABI/testing/sysfs-platform-intel-ifs
Added
@@ -0,0 +1,39 @@
+What:		/sys/devices/virtual/misc/intel_ifs_<N>/run_test
+Date:		April 21 2022
+KernelVersion:	5.19
+Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
+Description:	Write <cpu#> to trigger IFS test for one online core.
+		Note that the test is per core. The cpu# can be
+		for any thread on the core. Running on one thread
+		completes the test for the core containing that thread.
+		Example: to test the core containing cpu5:
+		echo 5 > /sys/devices/platform/intel_ifs.<N>/run_test
+
+What:		/sys/devices/virtual/misc/intel_ifs_<N>/status
+Date:		April 21 2022
+KernelVersion:	5.19
+Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
+Description:	The status of the last test. It can be one of "pass", "fail"
+		or "untested".
+
+What:		/sys/devices/virtual/misc/intel_ifs_<N>/details
+Date:		April 21 2022
+KernelVersion:	5.19
+Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
+Description:	Additional information regarding the last test. The details
+		file reports the hex value of the SCAN_STATUS MSR. Note that
+		the error_code field may contain driver defined software code
+		not defined in the Intel SDM.
+
+What:		/sys/devices/virtual/misc/intel_ifs_<N>/image_version
+Date:		April 21 2022
+KernelVersion:	5.19
+Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
+Description:	Version (hexadecimal) of loaded IFS binary image. If no scan
+		image is loaded reports "none".
+
+What:		/sys/devices/virtual/misc/intel_ifs_<N>/reload
+Date:		April 21 2022
+KernelVersion:	5.19
+Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
+Description:	Write "1" (or "y" or "Y") to reload the IFS image from
+		/lib/firmware/intel/ifs/ff-mm-ss.scan.
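A corresponding user-space sketch, assuming IFS instance 0 and cpu5 as in the example above; the two path spellings mirror the entries in this ABI file.

/* ifs-demo.c -- illustrative sketch only. */
#include <stdio.h>

int main(void)
{
	char status[32];
	FILE *f;

	/* Trigger a scan of the core containing cpu5. */
	f = fopen("/sys/devices/platform/intel_ifs.0/run_test", "w");
	if (!f) {
		perror("run_test");
		return 1;
	}
	fputs("5\n", f);
	fclose(f);

	/* Result is "pass", "fail" or "untested" per the status entry. */
	f = fopen("/sys/devices/virtual/misc/intel_ifs_0/status", "r");
	if (f && fgets(status, sizeof(status), f))
		printf("status: %s", status);
	if (f)
		fclose(f);
	return 0;
}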
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/Documentation/admin-guide/sysctl/vm.rst
Changed
@@ -65,6 +65,7 @@
 - page-cluster
 - panic_on_oom
 - percpu_pagelist_fraction
+- percpu_max_batchsize
 - stat_interval
 - stat_refresh
 - numa_stat
@@ -856,6 +857,15 @@
 sysctl, it will revert to this default behavior.
 
 
+percpu_max_batchsize
+====================
+
+This is used to set up the max batch and high size of the per-CPU page
+lists in each zone.
+The default value is set to (256 * 1024) / PAGE_SIZE.
+The max value is limited to (512 * 1024) / PAGE_SIZE.
+The min value is limited to (64 * 1024) / PAGE_SIZE.
+
+
 stat_interval
 =============
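The documented bounds work out as follows on a 4 KiB PAGE_SIZE system. A small sketch, assuming the usual sysctl-to-procfs mapping (vm.percpu_max_batchsize -> /proc/sys/vm/percpu_max_batchsize); the path is an assumption, not quoted from this revision.

/* pcp-batch-demo.c -- illustrative sketch of the documented bounds. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);	/* typically 4096 */
	FILE *f;
	int cur;

	printf("default: %ld pages\n", 256L * 1024 / page);	/* 64  on 4 KiB */
	printf("max:     %ld pages\n", 512L * 1024 / page);	/* 128 on 4 KiB */
	printf("min:     %ld pages\n", 64L * 1024 / page);	/* 16  on 4 KiB */

	/* Assumed procfs path for the new sysctl. */
	f = fopen("/proc/sys/vm/percpu_max_batchsize", "r");
	if (f && fscanf(f, "%d", &cur) == 1)
		printf("current: %d\n", cur);
	if (f)
		fclose(f);
	return 0;
}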
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/Documentation/x86/ifs.rst
Added
@@ -0,0 +1,2 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. kernel-doc:: drivers/platform/x86/intel/ifs/ifs.h
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/Documentation/x86/index.rst
Changed
@@ -34,6 +34,7 @@
    usb-legacy-support
    i386/index
    x86_64/index
+   ifs
    sva
    sgx
    elf_auxvec
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/MAINTAINERS
Changed
@@ -8966,6 +8966,14 @@
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux.git
 F:	drivers/idle/intel_idle.c
 
+INTEL IN FIELD SCAN (IFS) DEVICE
+M:	Jithu Joseph <jithu.joseph@intel.com>
+R:	Ashok Raj <ashok.raj@intel.com>
+R:	Tony Luck <tony.luck@intel.com>
+S:	Maintained
+F:	drivers/platform/x86/intel/ifs
+F:	include/trace/events/intel_ifs.h
+
 INTEL INTEGRATED SENSOR HUB DRIVER
 M:	Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
 M:	Jiri Kosina <jikos@kernel.org>
@@ -18674,6 +18682,7 @@
 M:	Jason Wang <jasowang@redhat.com>
 L:	virtualization@lists.linux-foundation.org
 S:	Maintained
+F:	Documentation/ABI/testing/sysfs-bus-vdpa
 F:	Documentation/devicetree/bindings/virtio/
 F:	drivers/block/virtio_blk.c
 F:	drivers/crypto/virtio/
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/configs/hi3516dv300_smp_defconfig
Added
@@ -0,0 +1,2181 @@
+#
+# Automatically generated file; DO NOT EDIT.
+# Linux/arm 4.19.90 Kernel Configuration
+#
+
+#
+# Compiler: arm-himix410-linux-gcc (HC&C V1R3C00SPC200B042_20210105) 7.3.0
+#
+CONFIG_CC_IS_GCC=y
+CONFIG_GCC_VERSION=70300
+CONFIG_CLANG_VERSION=0
+CONFIG_CC_HAS_ASM_GOTO=y
+CONFIG_IRQ_WORK=y
+CONFIG_BUILDTIME_EXTABLE_SORT=y
+
+#
+# General setup
+#
+CONFIG_INIT_ENV_ARG_LIMIT=32
+# CONFIG_COMPILE_TEST is not set
+CONFIG_LOCALVERSION=""
+# CONFIG_LOCALVERSION_AUTO is not set
+CONFIG_BUILD_SALT=""
+CONFIG_HAVE_KERNEL_GZIP=y
+CONFIG_HAVE_KERNEL_LZMA=y
+CONFIG_HAVE_KERNEL_XZ=y
+CONFIG_HAVE_KERNEL_LZO=y
+CONFIG_HAVE_KERNEL_LZ4=y
+CONFIG_KERNEL_GZIP=y
+# CONFIG_KERNEL_LZMA is not set
+# CONFIG_KERNEL_XZ is not set
+# CONFIG_KERNEL_LZO is not set
+# CONFIG_KERNEL_LZ4 is not set
+CONFIG_DEFAULT_HOSTNAME="(none)"
+# CONFIG_SWAP is not set
+CONFIG_SYSVIPC=y
+CONFIG_SYSVIPC_SYSCTL=y
+CONFIG_CROSS_MEMORY_ATTACH=y
+CONFIG_USELIB=y
+
+#
+# IRQ subsystem
+#
+CONFIG_GENERIC_IRQ_PROBE=y
+CONFIG_GENERIC_IRQ_SHOW=y
+CONFIG_GENERIC_IRQ_SHOW_LEVEL=y
+CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
+CONFIG_GENERIC_IRQ_MIGRATION=y
+CONFIG_HARDIRQS_SW_RESEND=y
+CONFIG_IRQ_DOMAIN=y
+CONFIG_IRQ_DOMAIN_HIERARCHY=y
+CONFIG_HANDLE_DOMAIN_IRQ=y
+CONFIG_IRQ_FORCED_THREADING=y
+CONFIG_SPARSE_IRQ=y
+# CONFIG_GENERIC_IRQ_DEBUGFS is not set
+CONFIG_GENERIC_IRQ_MULTI_HANDLER=y
+CONFIG_ARCH_CLOCKSOURCE_DATA=y
+CONFIG_GENERIC_TIME_VSYSCALL=y
+CONFIG_GENERIC_CLOCKEVENTS=y
+CONFIG_ARCH_HAS_TICK_BROADCAST=y
+CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
+
+#
+# Timers subsystem
+#
+CONFIG_TICK_ONESHOT=y
+CONFIG_HZ_PERIODIC=y
+# CONFIG_NO_HZ_IDLE is not set
+# CONFIG_NO_HZ_FULL is not set
+# CONFIG_NO_HZ is not set
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_PREEMPT_NONE=y
+# CONFIG_PREEMPT_VOLUNTARY is not set
+# CONFIG_PREEMPT is not set
+
+#
+# CPU/Task time and stats accounting
+#
+CONFIG_TICK_CPU_ACCOUNTING=y
+# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
+# CONFIG_IRQ_TIME_ACCOUNTING is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_CPU_ISOLATION=y
+
+#
+# RCU Subsystem
+#
+CONFIG_TREE_RCU=y
+# CONFIG_RCU_EXPERT is not set
+CONFIG_SRCU=y
+CONFIG_TREE_SRCU=y
+CONFIG_RCU_STALL_COMMON=y
+CONFIG_RCU_NEED_SEGCBLIST=y
+# CONFIG_IKCONFIG is not set
+CONFIG_LOG_BUF_SHIFT=17
+CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
+CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13
+CONFIG_GENERIC_SCHED_CLOCK=y
+CONFIG_CGROUPS=y
+# CONFIG_MEMCG is not set
+# CONFIG_BLK_CGROUP is not set
+CONFIG_CGROUP_SCHED=y
+CONFIG_FAIR_GROUP_SCHED=y
+# CONFIG_CFS_BANDWIDTH is not set
+# CONFIG_RT_GROUP_SCHED is not set
+# CONFIG_CGROUP_PIDS is not set
+# CONFIG_CGROUP_RDMA is not set
+# CONFIG_CGROUP_FREEZER is not set
+# CONFIG_CPUSETS is not set
+# CONFIG_CGROUP_DEVICE is not set
+# CONFIG_CGROUP_CPUACCT is not set
+# CONFIG_CGROUP_PERF is not set
+# CONFIG_CGROUP_DEBUG is not set
+CONFIG_NAMESPACES=y
+CONFIG_UTS_NS=y
+CONFIG_IPC_NS=y
+# CONFIG_USER_NS is not set
+CONFIG_PID_NS=y
+# CONFIG_CHECKPOINT_RESTORE is not set
+# CONFIG_SCHED_AUTOGROUP is not set
+# CONFIG_SYSFS_DEPRECATED is not set
+# CONFIG_RELAY is not set
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_INITRAMFS_SOURCE=""
+# CONFIG_RD_GZIP is not set
+# CONFIG_RD_BZIP2 is not set
+# CONFIG_RD_LZMA is not set
+# CONFIG_RD_XZ is not set
+# CONFIG_RD_LZO is not set
+# CONFIG_RD_LZ4 is not set
+CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+CONFIG_SYSCTL=y
+CONFIG_ANON_INODES=y
+CONFIG_HAVE_UID16=y
+# CONFIG_EXPERT is not set
+CONFIG_UID16=y
+CONFIG_MULTIUSER=y
+CONFIG_SYSFS_SYSCALL=y
+CONFIG_FHANDLE=y
+CONFIG_POSIX_TIMERS=y
+CONFIG_PRINTK=y
+CONFIG_PRINTK_NMI=y
+CONFIG_BUG=y
+CONFIG_ELF_CORE=y
+CONFIG_BASE_FULL=y
+CONFIG_FUTEX=y
+CONFIG_FUTEX_PI=y
+CONFIG_EPOLL=y
+CONFIG_SIGNALFD=y
+CONFIG_TIMERFD=y
+CONFIG_EVENTFD=y
+CONFIG_SHMEM=y
+CONFIG_AIO=y
+CONFIG_ADVISE_SYSCALLS=y
+CONFIG_MEMBARRIER=y
+CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_ALL is not set
+CONFIG_KALLSYMS_BASE_RELATIVE=y
+# CONFIG_BPF_SYSCALL is not set
+# CONFIG_USERFAULTFD is not set
+CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
+CONFIG_RSEQ=y
+# CONFIG_EMBEDDED is not set
+CONFIG_HAVE_PERF_EVENTS=y
+CONFIG_PERF_USE_VMALLOC=y
+
+#
+# Kernel Performance Events And Counters
+#
+CONFIG_PERF_EVENTS=y
+# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
+CONFIG_VM_EVENT_COUNTERS=y
+CONFIG_SLUB_DEBUG=y
+CONFIG_COMPAT_BRK=y
+# CONFIG_SLAB is not set
+CONFIG_SLUB=y
+CONFIG_SLAB_MERGE_DEFAULT=y
+# CONFIG_SLAB_FREELIST_RANDOM is not set
+# CONFIG_SLAB_FREELIST_HARDENED is not set
+CONFIG_SLUB_CPU_PARTIAL=y
+# CONFIG_PROFILING is not set
+CONFIG_ARM=y
+CONFIG_ARM_HAS_SG_CHAIN=y
+CONFIG_MIGHT_HAVE_PCI=y
+CONFIG_SYS_SUPPORTS_APM_EMULATION=y
+CONFIG_HAVE_PROC_CPU=y
+CONFIG_STACKTRACE_SUPPORT=y
+CONFIG_LOCKDEP_SUPPORT=y
+CONFIG_TRACE_IRQFLAGS_SUPPORT=y
+CONFIG_RWSEM_XCHGADD_ALGORITHM=y
+CONFIG_FIX_EARLYCON_MEM=y
+CONFIG_GENERIC_HWEIGHT=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
+CONFIG_ARCH_SUPPORTS_UPROBES=y
+CONFIG_ARM_PATCH_PHYS_VIRT=y
+CONFIG_GENERIC_BUG=y
+CONFIG_PGTABLE_LEVELS=2
+
+#
+# System Type
+#
+CONFIG_MMU=y
+CONFIG_ARCH_MMAP_RND_BITS_MIN=8
+CONFIG_ARCH_MMAP_RND_BITS_MAX=16
+CONFIG_ARCH_MULTIPLATFORM=y
+# CONFIG_ARCH_EBSA110 is not set
+# CONFIG_ARCH_EP93XX is not set
+# CONFIG_ARCH_FOOTBRIDGE is not set
+# CONFIG_ARCH_NETX is not set
+# CONFIG_ARCH_IOP13XX is not set
+# CONFIG_ARCH_IOP32X is not set
+# CONFIG_ARCH_IOP33X is not set
+# CONFIG_ARCH_IXP4XX is not set
+# CONFIG_ARCH_DOVE is not set
+# CONFIG_ARCH_KS8695 is not set
+# CONFIG_ARCH_W90X900 is not set
+# CONFIG_ARCH_LPC32XX is not set
+# CONFIG_ARCH_PXA is not set
+# CONFIG_ARCH_RPC is not set
+# CONFIG_ARCH_SA1100 is not set
+# CONFIG_ARCH_S3C24XX is not set
+# CONFIG_ARCH_DAVINCI is not set
+# CONFIG_ARCH_OMAP1 is not set
+
+#
+# Multiple platform selection
+#
+
+#
+# CPU Core family selection
+#
+# CONFIG_ARCH_MULTI_V6 is not set
+CONFIG_ARCH_MULTI_V7=y
+CONFIG_ARCH_MULTI_V6_V7=y
+# CONFIG_ARCH_VIRT is not set
+# CONFIG_ARCH_ACTIONS is not set
+# CONFIG_ARCH_ALPINE is not set
+# CONFIG_ARCH_ARTPEC is not set
+# CONFIG_ARCH_AT91 is not set
+# CONFIG_ARCH_BCM is not set
+# CONFIG_ARCH_BERLIN is not set
+# CONFIG_ARCH_DIGICOLOR is not set
+# CONFIG_ARCH_EXYNOS is not set
+# CONFIG_ARCH_HIGHBANK is not set
+# CONFIG_ARCH_HISI is not set
+CONFIG_ARCH_HISI_BVT=y
+
+#
+# Hisilicon BVT platform type
+#
+# CONFIG_ARCH_HI3521DV200 is not set
+# CONFIG_ARCH_HI3520DV500 is not set
+# CONFIG_ARCH_HI3516A is not set
+# CONFIG_ARCH_HI3516CV500 is not set
+CONFIG_ARCH_HI3516DV300=y
+# CONFIG_ARCH_HI3516EV200 is not set
+# CONFIG_ARCH_HI3516EV300 is not set
+# CONFIG_ARCH_HI3518EV300 is not set
+# CONFIG_ARCH_HI3516DV200 is not set
+# CONFIG_ARCH_HI3556V200 is not set
+# CONFIG_ARCH_HI3559V200 is not set
+# CONFIG_ARCH_HI3536DV100 is not set
+# CONFIG_ARCH_HI3521A is not set
+# CONFIG_ARCH_HI3531A is not set
+# CONFIG_ARCH_HI3556AV100 is not set
+# CONFIG_ARCH_HI3519AV100 is not set
+# CONFIG_ARCH_HI3568V100 is not set
+# CONFIG_ARCH_HISI_BVT_AMP is not set
+# CONFIG_HISI_MC is not set
+CONFIG_HI_ZRELADDR=0x80008000
+CONFIG_HI_PARAMS_PHYS=0x00000100
+CONFIG_HI_INITRD_PHYS=0x00800000
+# CONFIG_ARCH_MXC is not set
+# CONFIG_ARCH_KEYSTONE is not set
+# CONFIG_ARCH_MEDIATEK is not set
+# CONFIG_ARCH_MESON is not set
+# CONFIG_ARCH_MMP is not set
+# CONFIG_ARCH_MVEBU is not set
+# CONFIG_ARCH_NPCM is not set
+
+#
+# TI OMAP/AM/DM/DRA Family
+#
+# CONFIG_ARCH_OMAP3 is not set
+# CONFIG_ARCH_OMAP4 is not set
+# CONFIG_SOC_OMAP5 is not set
+# CONFIG_SOC_AM33XX is not set
+# CONFIG_SOC_AM43XX is not set
+# CONFIG_SOC_DRA7XX is not set
+# CONFIG_ARCH_SIRF is not set
+# CONFIG_ARCH_QCOM is not set
+# CONFIG_ARCH_REALVIEW is not set
+# CONFIG_ARCH_ROCKCHIP is not set
+# CONFIG_ARCH_S5PV210 is not set
+# CONFIG_ARCH_RENESAS is not set
+# CONFIG_ARCH_SOCFPGA is not set
+# CONFIG_PLAT_SPEAR is not set
+# CONFIG_ARCH_STI is not set
+# CONFIG_ARCH_STM32 is not set
+# CONFIG_ARCH_SUNXI is not set
+# CONFIG_ARCH_TANGO is not set
+# CONFIG_ARCH_TEGRA is not set
+# CONFIG_ARCH_UNIPHIER is not set
+# CONFIG_ARCH_U8500 is not set
+# CONFIG_ARCH_VEXPRESS is not set
+# CONFIG_ARCH_WM8850 is not set
+# CONFIG_ARCH_ZX is not set
+# CONFIG_ARCH_ZYNQ is not set
+
+#
+# Processor Type
+#
+CONFIG_CPU_V7=y
+CONFIG_CPU_THUMB_CAPABLE=y
+CONFIG_CPU_32v6K=y
+CONFIG_CPU_32v7=y
+CONFIG_CPU_ABRT_EV7=y
+CONFIG_CPU_PABRT_V7=y
+CONFIG_CPU_CACHE_V7=y
+CONFIG_CPU_CACHE_VIPT=y
+CONFIG_CPU_COPY_V6=y
+CONFIG_CPU_TLB_V7=y
+CONFIG_CPU_HAS_ASID=y
+CONFIG_CPU_CP15=y
+CONFIG_CPU_CP15_MMU=y
+
+#
+# Processor Features
+#
+# CONFIG_ARM_LPAE is not set
+CONFIG_ARM_THUMB=y
+# CONFIG_ARM_THUMBEE is not set
+CONFIG_ARM_VIRT_EXT=y
+CONFIG_SWP_EMULATE=y
+# CONFIG_CPU_ICACHE_DISABLE is not set
+# CONFIG_CPU_BPREDICT_DISABLE is not set
+CONFIG_CPU_SPECTRE=y
+CONFIG_HARDEN_BRANCH_PREDICTOR=y
+CONFIG_KUSER_HELPERS=y
+CONFIG_VDSO=y
+CONFIG_MIGHT_HAVE_CACHE_L2X0=y
+# CONFIG_CACHE_L2X0 is not set
+CONFIG_ARM_L1_CACHE_SHIFT_6=y
+CONFIG_ARM_L1_CACHE_SHIFT=6
+CONFIG_ARM_DMA_MEM_BUFFERABLE=y
+CONFIG_DEBUG_ALIGN_RODATA=y
+# CONFIG_ARM_ERRATA_430973 is not set
+# CONFIG_ARM_ERRATA_643719 is not set
+# CONFIG_ARM_ERRATA_720789 is not set
+# CONFIG_ARM_ERRATA_754322 is not set
+# CONFIG_ARM_ERRATA_754327 is not set
+# CONFIG_ARM_ERRATA_764369 is not set
+# CONFIG_ARM_ERRATA_775420 is not set
+# CONFIG_ARM_ERRATA_798181 is not set
+# CONFIG_ARM_ERRATA_773022 is not set
+# CONFIG_ARM_ERRATA_818325_852422 is not set
+# CONFIG_ARM_ERRATA_821420 is not set
+# CONFIG_ARM_ERRATA_825619 is not set
+# CONFIG_ARM_ERRATA_852421 is not set
+# CONFIG_ARM_ERRATA_852423 is not set
+
+#
+# Bus support
+#
+# CONFIG_PCI is not set
+
+#
+# PCI Endpoint
+#
+# CONFIG_PCI_ENDPOINT is not set
+# CONFIG_PCCARD is not set
+
+#
+# Kernel Features
+#
+CONFIG_HAVE_SMP=y
+CONFIG_SMP=y
+CONFIG_SMP_ON_UP=y
+CONFIG_ARM_CPU_TOPOLOGY=y
+# CONFIG_SCHED_MC is not set
+# CONFIG_SCHED_SMT is not set
+CONFIG_HAVE_ARM_ARCH_TIMER=y
+# CONFIG_MCPM is not set
+# CONFIG_BIG_LITTLE is not set
+CONFIG_VMSPLIT_3G=y
+# CONFIG_VMSPLIT_3G_OPT is not set
+# CONFIG_VMSPLIT_2G is not set
+# CONFIG_VMSPLIT_1G is not set
+CONFIG_PAGE_OFFSET=0xC0000000
+CONFIG_NR_CPUS=2
+CONFIG_HOTPLUG_CPU=y
+# CONFIG_ARM_PSCI is not set
+CONFIG_ARCH_NR_GPIO=0
+CONFIG_HZ_FIXED=0
+# CONFIG_HZ_100 is not set
+# CONFIG_HZ_200 is not set
+# CONFIG_HZ_250 is not set
+# CONFIG_HZ_300 is not set
+# CONFIG_HZ_500 is not set
+CONFIG_HZ_1000=y
+CONFIG_HZ=1000
+CONFIG_SCHED_HRTICK=y
+# CONFIG_THUMB2_KERNEL is not set
+CONFIG_ARM_PATCH_IDIV=y
+CONFIG_AEABI=y
+CONFIG_OABI_COMPAT=y
+CONFIG_HAVE_ARCH_PFN_VALID=y
+CONFIG_HIGHMEM=y
+CONFIG_HIGHPTE=y
+CONFIG_CPU_SW_DOMAIN_PAN=y
+CONFIG_HW_PERF_EVENTS=y
+CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
+# CONFIG_ARM_MODULE_PLTS is not set
+CONFIG_FORCE_MAX_ZONEORDER=11
+CONFIG_ALIGNMENT_TRAP=y
+# CONFIG_UACCESS_WITH_MEMCPY is not set
+# CONFIG_SECCOMP is not set
+# CONFIG_PARAVIRT is not set
+# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
+# CONFIG_XEN is not set
+
+#
+# Boot options
+#
+CONFIG_USE_OF=y
+CONFIG_ATAGS=y
+# CONFIG_DEPRECATED_PARAM_STRUCT is not set
+CONFIG_ZBOOT_ROM_TEXT=0
+CONFIG_ZBOOT_ROM_BSS=0
+CONFIG_ARM_APPENDED_DTB=y
+CONFIG_ARM_ATAG_DTB_COMPAT=y
+CONFIG_ARM_ATAG_DTB_COMPAT_CMDLINE_FROM_BOOTLOADER=y
+# CONFIG_ARM_ATAG_DTB_COMPAT_CMDLINE_EXTEND is not set
+CONFIG_CMDLINE=""
+# CONFIG_KEXEC is not set
+# CONFIG_CRASH_DUMP is not set
+CONFIG_AUTO_ZRELADDR=y
+# CONFIG_EFI is not set
+
+#
+# CPU Power Management
+#
+
+#
+# CPU Frequency scaling
+#
+# CONFIG_CPU_FREQ is not set
+
+#
+# CPU Idle
+#
+# CONFIG_CPU_IDLE is not set
+
+#
+# Floating point emulation
+#
+
+#
+# At least one emulation must be selected
+#
+# CONFIG_FPE_NWFPE is not set
+# CONFIG_FPE_FASTFPE is not set
+CONFIG_VFP=y
+CONFIG_VFPv3=y
+CONFIG_NEON=y
+CONFIG_KERNEL_MODE_NEON=y
+
+#
+# Power management options
+#
+CONFIG_SUSPEND=y
+CONFIG_SUSPEND_FREEZER=y
+CONFIG_PM_SLEEP=y
+CONFIG_PM_SLEEP_SMP=y
+# CONFIG_PM_AUTOSLEEP is not set
+# CONFIG_PM_WAKELOCKS is not set
+CONFIG_PM=y
+# CONFIG_PM_DEBUG is not set
+# CONFIG_APM_EMULATION is not set
+CONFIG_PM_CLK=y
+# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
+CONFIG_CPU_PM=y
+CONFIG_ARCH_SUSPEND_POSSIBLE=y
+CONFIG_ARM_CPU_SUSPEND=y
+CONFIG_ARCH_HIBERNATION_POSSIBLE=y
+
+#
+# Firmware Drivers
+#
+# CONFIG_FW_CFG_SYSFS is not set
+CONFIG_HAVE_ARM_SMCCC=y
+# CONFIG_GOOGLE_FIRMWARE is not set
+
+#
+# Tegra firmware driver
+#
+# CONFIG_ARM_CRYPTO is not set
+# CONFIG_VIRTUALIZATION is not set
+
+#
+# General architecture-dependent options
+#
+CONFIG_HAVE_OPROFILE=y
+# CONFIG_KPROBES is not set
+# CONFIG_JUMP_LABEL is not set
+CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
+CONFIG_ARCH_USE_BUILTIN_BSWAP=y
+CONFIG_HAVE_KPROBES=y
+CONFIG_HAVE_KRETPROBES=y
+CONFIG_HAVE_OPTPROBES=y
+CONFIG_HAVE_NMI=y
+CONFIG_HAVE_ARCH_TRACEHOOK=y
+CONFIG_HAVE_DMA_CONTIGUOUS=y
+CONFIG_GENERIC_SMP_IDLE_THREAD=y
+CONFIG_GENERIC_IDLE_POLL_SETUP=y
+CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
+CONFIG_ARCH_HAS_SET_MEMORY=y
+CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
+CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
+CONFIG_HAVE_RSEQ=y
+CONFIG_HAVE_CLK=y
+CONFIG_HAVE_HW_BREAKPOINT=y
+CONFIG_HAVE_PERF_REGS=y
+CONFIG_HAVE_PERF_USER_STACK_DUMP=y
+CONFIG_HAVE_ARCH_JUMP_LABEL=y
+CONFIG_ARCH_WANT_IPC_PARSE_VERSION=y
+CONFIG_HAVE_STACKPROTECTOR=y
+CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
+CONFIG_STACKPROTECTOR=y
+CONFIG_STACKPROTECTOR_STRONG=y
+CONFIG_HAVE_CONTEXT_TRACKING=y
+CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
+CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
+CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
+CONFIG_MODULES_USE_ELF_REL=y
+CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
+CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
+CONFIG_HAVE_EXIT_THREAD=y
+CONFIG_ARCH_MMAP_RND_BITS=8
+CONFIG_CLONE_BACKWARDS=y
+CONFIG_OLD_SIGSUSPEND3=y
+CONFIG_OLD_SIGACTION=y
+CONFIG_ARCH_OPTIONAL_KERNEL_RWX=y
+CONFIG_ARCH_OPTIONAL_KERNEL_RWX_DEFAULT=y
+CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
+CONFIG_STRICT_KERNEL_RWX=y
+CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
+CONFIG_STRICT_MODULE_RWX=y
+CONFIG_ARCH_HAS_PHYS_TO_DMA=y
+CONFIG_REFCOUNT_FULL=y
+
+#
+# GCOV-based kernel profiling
+#
+# CONFIG_GCOV_KERNEL is not set
+CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
+CONFIG_PLUGIN_HOSTCC=""
+CONFIG_HAVE_GCC_PLUGINS=y
+CONFIG_RT_MUTEXES=y
+CONFIG_BASE_SMALL=0
+CONFIG_MODULES=y
+CONFIG_MODULE_FORCE_LOAD=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODULE_FORCE_UNLOAD=y
+# CONFIG_MODVERSIONS is not set
+# CONFIG_MODULE_SRCVERSION_ALL is not set
+# CONFIG_MODULE_SIG is not set
+# CONFIG_MODULE_COMPRESS is not set
+# CONFIG_TRIM_UNUSED_KSYMS is not set
+CONFIG_MODULES_TREE_LOOKUP=y
+CONFIG_BLOCK=y
+CONFIG_LBDAF=y
+# CONFIG_BLK_DEV_BSG is not set
+# CONFIG_BLK_DEV_BSGLIB is not set
+# CONFIG_BLK_DEV_INTEGRITY is not set
+# CONFIG_BLK_DEV_ZONED is not set
+CONFIG_BLK_CMDLINE_PARSER=y
+# CONFIG_BLK_WBT is not set
+CONFIG_BLK_DEBUG_FS=y
+# CONFIG_BLK_SED_OPAL is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_AIX_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
+CONFIG_MSDOS_PARTITION=y
+# CONFIG_BSD_DISKLABEL is not set
+# CONFIG_MINIX_SUBPARTITION is not set
+# CONFIG_SOLARIS_X86_PARTITION is not set
+# CONFIG_UNIXWARE_DISKLABEL is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_KARMA_PARTITION is not set
+CONFIG_EFI_PARTITION=y
+# CONFIG_SYSV68_PARTITION is not set
+CONFIG_CMDLINE_PARTITION=y
+
+#
+# IO Schedulers
+#
+CONFIG_IOSCHED_NOOP=y
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_IOSCHED_CFQ=m
+CONFIG_DEFAULT_NOOP=y
+CONFIG_DEFAULT_IOSCHED="noop"
+CONFIG_MQ_IOSCHED_DEADLINE=m
+CONFIG_MQ_IOSCHED_KYBER=m
+# CONFIG_IOSCHED_BFQ is not set
+CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
+CONFIG_INLINE_READ_UNLOCK=y
+CONFIG_INLINE_READ_UNLOCK_IRQ=y
+CONFIG_INLINE_WRITE_UNLOCK=y
+CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
+CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
+CONFIG_MUTEX_SPIN_ON_OWNER=y
+CONFIG_RWSEM_SPIN_ON_OWNER=y
+CONFIG_LOCK_SPIN_ON_OWNER=y
+CONFIG_FREEZER=y
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_ELF_FDPIC is not set
+CONFIG_ELFCORE=y
+CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
+CONFIG_BINFMT_SCRIPT=y
+# CONFIG_BINFMT_FLAT is not set
+# CONFIG_BINFMT_MISC is not set
+CONFIG_COREDUMP=y
+
+#
+# Memory Management options
+#
+CONFIG_FLATMEM=y
+CONFIG_FLAT_NODE_MEM_MAP=y
+CONFIG_HAVE_MEMBLOCK=y
+CONFIG_NO_BOOTMEM=y
+CONFIG_MEMORY_ISOLATION=y
+CONFIG_SPLIT_PTLOCK_CPUS=4
+CONFIG_COMPACTION=y
+CONFIG_MIGRATION=y
+CONFIG_BOUNCE=y
+# CONFIG_KSM is not set
+CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
+# CONFIG_CLEANCACHE is not set
+CONFIG_CMA=y
+# CONFIG_CMA_DEBUG is not set
+# CONFIG_CMA_DEBUGFS is not set
+CONFIG_CMA_AREAS=7
+# CONFIG_ZPOOL is not set
+# CONFIG_ZBUD is not set
+# CONFIG_ZSMALLOC is not set
+CONFIG_GENERIC_EARLY_IOREMAP=y
+# CONFIG_IDLE_PAGE_TRACKING is not set
+# CONFIG_PERCPU_STATS is not set
+# CONFIG_GUP_BENCHMARK is not set
+# CONFIG_NET is not set
+CONFIG_HAVE_EBPF_JIT=y
+
+#
+# Device Drivers
+#
+CONFIG_ARM_AMBA=y
+
+#
+# Generic Driver Options
+#
+CONFIG_UEVENT_HELPER=y
+CONFIG_UEVENT_HELPER_PATH=""
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+CONFIG_STANDALONE=y
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+
+#
+# Firmware loader
+#
+CONFIG_FW_LOADER=y
+CONFIG_EXTRA_FIRMWARE=""
+# CONFIG_FW_LOADER_USER_HELPER is not set
+CONFIG_ALLOW_DEV_COREDUMP=y
+# CONFIG_DEBUG_DRIVER is not set
+# CONFIG_DEBUG_DEVRES is not set
+# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
+# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
+CONFIG_GENERIC_CPU_AUTOPROBE=y
+CONFIG_REGMAP=y
+CONFIG_REGMAP_I2C=y
+CONFIG_REGMAP_SPI=y
+CONFIG_REGMAP_MMIO=y
+CONFIG_DMA_CMA=y
+
+#
+# Default contiguous memory area size:
+#
+CONFIG_CMA_SIZE_MBYTES=16
+CONFIG_CMA_SIZE_SEL_MBYTES=y
+# CONFIG_CMA_SIZE_SEL_PERCENTAGE is not set
+# CONFIG_CMA_SIZE_SEL_MIN is not set
+# CONFIG_CMA_SIZE_SEL_MAX is not set
+CONFIG_CMA_ALIGNMENT=8
+CONFIG_GENERIC_ARCH_TOPOLOGY=y
+
+#
+# Bus devices
+#
+# CONFIG_BRCMSTB_GISB_ARB is not set
+# CONFIG_SIMPLE_PM_BUS is not set
+# CONFIG_VEXPRESS_CONFIG is not set
+# CONFIG_GNSS is not set
+CONFIG_MTD=y
+# CONFIG_MTD_TESTS is not set
+# CONFIG_MTD_REDBOOT_PARTS is not set
+CONFIG_MTD_CMDLINE_PARTS=y
+# CONFIG_MTD_AFS_PARTS is not set
+# CONFIG_MTD_OF_PARTS is not set
+# CONFIG_MTD_AR7_PARTS is not set
+
+#
+# Partition parsers
+#
+
+#
+# User Modules And Translation Layers
+#
+CONFIG_MTD_BLKDEVS=y
+CONFIG_MTD_BLOCK=y
+# CONFIG_FTL is not set
+# CONFIG_NFTL is not set
+# CONFIG_INFTL is not set
+# CONFIG_RFD_FTL is not set
+# CONFIG_SSFDC is not set
+# CONFIG_SM_FTL is not set
+# CONFIG_MTD_OOPS is not set
+# CONFIG_MTD_PARTITIONED_MASTER is not set
+
+#
+# RAM/ROM/Flash chip drivers
+#
+# CONFIG_MTD_CFI is not set
+# CONFIG_MTD_JEDECPROBE is not set
+CONFIG_MTD_MAP_BANK_WIDTH_1=y
+CONFIG_MTD_MAP_BANK_WIDTH_2=y
+CONFIG_MTD_MAP_BANK_WIDTH_4=y
+CONFIG_MTD_CFI_I1=y
+CONFIG_MTD_CFI_I2=y
+# CONFIG_MTD_RAM is not set
+# CONFIG_MTD_ROM is not set
+# CONFIG_MTD_ABSENT is not set
+
+#
+# Mapping drivers for chip access
+#
+# CONFIG_MTD_COMPLEX_MAPPINGS is not set
+# CONFIG_MTD_PLATRAM is not set
+
+#
+# Self-contained MTD device drivers
+#
+# CONFIG_MTD_DATAFLASH is not set
+# CONFIG_MTD_M25P80 is not set
+# CONFIG_MTD_MCHP23K256 is not set
+# CONFIG_MTD_SST25L is not set
+# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_PHRAM is not set
+# CONFIG_MTD_MTDRAM is not set
+# CONFIG_MTD_BLOCK2MTD is not set
+
+#
+# Disk-On-Chip Device Drivers
+#
+# CONFIG_MTD_DOCG3 is not set
+# CONFIG_MTD_ONENAND is not set
+# CONFIG_HISI_NAND_FS_MAY_NO_YAFFS2 is not set
+# CONFIG_MTD_NAND is not set
+# CONFIG_MTD_NAND_HIFMC100 is not set
+# CONFIG_MTD_SPI_NAND is not set
+
+#
+# LPDDR & LPDDR2 PCM memory drivers
+#
+# CONFIG_MTD_LPDDR is not set
+# CONFIG_MTD_LPDDR2_NVM is not set
+CONFIG_MTD_SPI_NOR=y
+# CONFIG_MTD_MT81xx_NOR is not set
+# CONFIG_MTD_SPI_NOR_USE_4K_SECTORS is not set
+# CONFIG_SPI_CADENCE_QUADSPI is not set
+CONFIG_SPI_HISI_SFC=y
+# CONFIG_MTD_SPI_IDS is not set
+# CONFIG_CLOSE_SPI_8PIN_4IO is not set
+CONFIG_HISI_SPI_BLOCK_PROTECT=y
+# CONFIG_MTD_UBI is not set
+CONFIG_DTC=y
+CONFIG_OF=y
+# CONFIG_OF_UNITTEST is not set
+CONFIG_OF_FLATTREE=y
+CONFIG_OF_EARLY_FLATTREE=y
+CONFIG_OF_KOBJ=y
+CONFIG_OF_ADDRESS=y
+CONFIG_OF_IRQ=y
+CONFIG_OF_RESERVED_MEM=y
+# CONFIG_OF_OVERLAY is not set
+CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
+# CONFIG_PARPORT is not set
+CONFIG_BLK_DEV=y
+# CONFIG_BLK_DEV_NULL_BLK is not set
+# CONFIG_BLK_DEV_LOOP is not set
+
+#
+# DRBD disabled because PROC_FS or INET not selected
+#
+# CONFIG_BLK_DEV_RAM is not set
+# CONFIG_CDROM_PKTCDVD is not set
+
+#
+# NVME Support
+#
+# CONFIG_NVME_FC is not set
+# CONFIG_NVME_TARGET is not set
+
+#
+# Misc devices
+#
+# CONFIG_AD525X_DPOT is not set
+# CONFIG_DUMMY_IRQ is not set
+# CONFIG_ICS932S401 is not set
+# CONFIG_ENCLOSURE_SERVICES is not set
+# CONFIG_APDS9802ALS is not set
+# CONFIG_ISL29003 is not set
+# CONFIG_ISL29020 is not set
+# CONFIG_SENSORS_TSL2550 is not set
+# CONFIG_SENSORS_BH1770 is not set
+# CONFIG_SENSORS_APDS990X is not set
+# CONFIG_HMC6352 is not set
+# CONFIG_DS1682 is not set
+# CONFIG_USB_SWITCH_FSA9480 is not set
+# CONFIG_LATTICE_ECP3_CONFIG is not set
+# CONFIG_SRAM is not set
+# CONFIG_C2PORT is not set
+
+#
+# EEPROM support
+#
+# CONFIG_EEPROM_AT24 is not set
+# CONFIG_EEPROM_AT25 is not set
+# CONFIG_EEPROM_LEGACY is not set
+# CONFIG_EEPROM_MAX6875 is not set
+# CONFIG_EEPROM_93CX6 is not set
+# CONFIG_EEPROM_93XX46 is not set
+# CONFIG_EEPROM_IDT_89HPESX is not set
+
+#
+# Texas Instruments shared transport line discipline
+#
+# CONFIG_SENSORS_LIS3_SPI is not set
+# CONFIG_SENSORS_LIS3_I2C is not set
+# CONFIG_ALTERA_STAPL is not set
+
+#
+# Intel MIC & related support
+#
+
+#
+# Intel MIC Bus Driver
+#
+
+#
+# SCIF Bus Driver
+#
+
+#
+# VOP Bus Driver
+#
+
+#
+# Intel MIC Host Driver
+#
+
+#
+# Intel MIC Card Driver
+#
+
+#
+# SCIF Driver
+#
+
+#
+# Intel MIC Coprocessor State Management (COSM) Drivers
+#
+
+#
+# VOP Driver
+#
+# CONFIG_ECHO is not set
+
+#
+# SCSI device support
+#
+CONFIG_SCSI_MOD=y
+# CONFIG_RAID_ATTRS is not set
+# CONFIG_SCSI is not set
+# CONFIG_ATA is not set
+# CONFIG_MD is not set
+# CONFIG_TARGET_CORE is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+CONFIG_INPUT_FF_MEMLESS=y
+# CONFIG_INPUT_POLLDEV is not set
+# CONFIG_INPUT_SPARSEKMAP is not set
+# CONFIG_INPUT_MATRIXKMAP is not set
+
+#
+# Userland interfaces
+#
+# CONFIG_INPUT_MOUSEDEV is not set
+# CONFIG_INPUT_JOYDEV is not set
+CONFIG_INPUT_EVDEV=y
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TABLET is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+# CONFIG_RMI4_CORE is not set
+
+#
+# Hardware I/O ports
+#
+# CONFIG_SERIO is not set
+# CONFIG_GAMEPORT is not set
+
+#
+# Character devices
+#
+CONFIG_TTY=y
+CONFIG_VT=y
+CONFIG_CONSOLE_TRANSLATIONS=y
+CONFIG_VT_CONSOLE=y
+CONFIG_VT_CONSOLE_SLEEP=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_VT_HW_CONSOLE_BINDING is not set
+CONFIG_UNIX98_PTYS=y
+# CONFIG_LEGACY_PTYS is not set
+# CONFIG_SERIAL_NONSTANDARD is not set
+# CONFIG_TRACE_SINK is not set
+CONFIG_LDISC_AUTOLOAD=y
+CONFIG_DEVMEM=y
+CONFIG_DEVKMEM=y
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_EARLYCON=y
+# CONFIG_SERIAL_8250 is not set
+
+#
+# Non-8250 serial port support
+#
+# CONFIG_SERIAL_AMBA_PL010 is not set
+CONFIG_SERIAL_AMBA_PL011=y
+CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
+# CONFIG_SERIAL_EARLYCON_ARM_SEMIHOST is not set
+# CONFIG_SERIAL_MAX3100 is not set
+# CONFIG_SERIAL_MAX310X is not set
+# CONFIG_SERIAL_UARTLITE is not set
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+# CONFIG_SERIAL_SCCNXP is not set
+# CONFIG_SERIAL_SC16IS7XX is not set
+# CONFIG_SERIAL_BCM63XX is not set
+# CONFIG_SERIAL_ALTERA_JTAGUART is not set
+# CONFIG_SERIAL_ALTERA_UART is not set
+# CONFIG_SERIAL_IFX6X60 is not set
+# CONFIG_SERIAL_XILINX_PS_UART is not set
+# CONFIG_SERIAL_ARC is not set
+# CONFIG_SERIAL_FSL_LPUART is not set
+# CONFIG_SERIAL_CONEXANT_DIGICOLOR is not set
+# CONFIG_SERIAL_ST_ASC is not set
+# CONFIG_SERIAL_DEV_BUS is not set
+# CONFIG_HVC_DCC is not set
+# CONFIG_IPMI_HANDLER is not set
+# CONFIG_HW_RANDOM is not set
+# CONFIG_RAW_DRIVER is not set
+# CONFIG_TCG_TPM is not set
+# CONFIG_XILLYBUS is not set
+
+#
+# I2C support
+#
+CONFIG_I2C=y
+CONFIG_I2C_BOARDINFO=y
+# CONFIG_I2C_COMPAT is not set
+CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_MUX=y
+
+#
+# Multiplexer I2C Chip support
+#
+# CONFIG_I2C_ARB_GPIO_CHALLENGE is not set
+# CONFIG_I2C_MUX_GPIO is not set
+# CONFIG_I2C_MUX_GPMUX is not set
+# CONFIG_I2C_MUX_LTC4306 is not set
+# CONFIG_I2C_MUX_PCA9541 is not set
+# CONFIG_I2C_MUX_PCA954x is not set
+# CONFIG_I2C_MUX_PINCTRL is not set
+# CONFIG_I2C_MUX_REG is not set
+# CONFIG_I2C_DEMUX_PINCTRL is not set
+# CONFIG_I2C_MUX_MLXCPLD is not set
+# CONFIG_I2C_HELPER_AUTO is not set
+# CONFIG_I2C_SMBUS is not set
+
+#
+# I2C Algorithms
+#
+# CONFIG_I2C_ALGOBIT is not set
+# CONFIG_I2C_ALGOPCF is not set
+# CONFIG_I2C_ALGOPCA is not set
+
+#
+# I2C Hardware Bus support
+#
+
+#
+# I2C system bus drivers (mostly embedded / system-on-chip)
+#
+# CONFIG_I2C_CBUS_GPIO is not set
+# CONFIG_I2C_DESIGNWARE_PLATFORM is not set
+# CONFIG_I2C_EMEV2 is not set
+# CONFIG_I2C_GPIO is not set
+CONFIG_I2C_HIBVT=y
+# CONFIG_I2C_NOMADIK is not set
+# CONFIG_I2C_OCORES is not set
+# CONFIG_I2C_PCA_PLATFORM is not set
+# CONFIG_I2C_RK3X is not set
+# CONFIG_I2C_SIMTEC is not set
+# CONFIG_I2C_XILINX is not set
+
+#
+# External I2C/SMBus adapter drivers
+#
+# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_TAOS_EVM is not set
+
+#
+# Other I2C/SMBus bus drivers
+#
+CONFIG_DMA_MSG_MIN_LEN=5
+CONFIG_DMA_MSG_MAX_LEN=4090
+# CONFIG_I2C_STUB is not set
+# CONFIG_I2C_SLAVE is not set
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+CONFIG_SPI=y
+# CONFIG_SPI_DEBUG is not set
+CONFIG_SPI_MASTER=y
+# CONFIG_SPI_MEM is not set
+
+#
+# SPI Master Controller Drivers
+#
+# CONFIG_SPI_ALTERA is not set
+# CONFIG_SPI_AXI_SPI_ENGINE is not set
+# CONFIG_SPI_BITBANG is not set
+# CONFIG_SPI_CADENCE is not set
+# CONFIG_SPI_DESIGNWARE is not set
+# CONFIG_SPI_GPIO is not set
+# CONFIG_SPI_FSL_SPI is not set
+# CONFIG_SPI_OC_TINY is not set
+CONFIG_SPI_PL022=y
+# CONFIG_SPI_ROCKCHIP is not set
+# CONFIG_SPI_SC18IS602 is not set
+# CONFIG_SPI_XCOMM is not set
+# CONFIG_SPI_XILINX is not set
+# CONFIG_SPI_ZYNQMP_GQSPI is not set
+
+#
+# SPI Protocol Masters
+#
+# CONFIG_SPI_SPIDEV is not set
+# CONFIG_SPI_LOOPBACK_TEST is not set
+# CONFIG_SPI_TLE62X0 is not set
+# CONFIG_SPI_SLAVE is not set
+# CONFIG_SPMI is not set
+# CONFIG_HSI is not set
+# CONFIG_PPS is not set
+
+#
+# PTP clock support
+#
+
+#
+# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
+#
+CONFIG_PINCTRL=y
+# CONFIG_DEBUG_PINCTRL is not set
+# CONFIG_PINCTRL_AMD is not set
+# CONFIG_PINCTRL_MCP23S08 is not set
+# CONFIG_PINCTRL_SINGLE is not set
+# CONFIG_PINCTRL_SX150X is not set
+CONFIG_ARCH_HAVE_CUSTOM_GPIO_H=y
+CONFIG_GPIOLIB=y
+CONFIG_GPIOLIB_FASTPATH_LIMIT=512
+CONFIG_OF_GPIO=y
+CONFIG_GPIOLIB_IRQCHIP=y
+# CONFIG_DEBUG_GPIO is not set
+CONFIG_GPIO_SYSFS=y
+CONFIG_GPIO_GENERIC=y
+
+#
+# Memory mapped GPIO drivers
+#
+# CONFIG_GPIO_74XX_MMIO is not set
+# CONFIG_GPIO_ALTERA is not set
+# CONFIG_GPIO_DWAPB is not set
+# CONFIG_GPIO_FTGPIO010 is not set
+CONFIG_GPIO_GENERIC_PLATFORM=y
+# CONFIG_GPIO_GRGPIO is not set
+# CONFIG_GPIO_HLWD is not set
+# CONFIG_GPIO_MB86S7X is not set
+# CONFIG_GPIO_MOCKUP is not set
+# CONFIG_GPIO_MPC8XXX is not set
+CONFIG_GPIO_PL061=y
+# CONFIG_GPIO_SYSCON is not set
+# CONFIG_GPIO_XILINX is not set
+# CONFIG_GPIO_ZEVIO is not set
+
+#
+# I2C GPIO expanders
+#
+# CONFIG_GPIO_ADP5588 is not set
+# CONFIG_GPIO_ADNP is not set
+# CONFIG_GPIO_MAX7300 is not set
+# CONFIG_GPIO_MAX732X is not set
+# CONFIG_GPIO_PCA953X is not set
+# CONFIG_GPIO_PCF857X is not set
+# CONFIG_GPIO_TPIC2810 is not set
+
+#
+# MFD GPIO expanders
+#
+# CONFIG_HTC_EGPIO is not set
+
+#
+# SPI GPIO expanders
+#
+# CONFIG_GPIO_74X164 is not set
+# CONFIG_GPIO_MAX3191X is not set
+# CONFIG_GPIO_MAX7301 is not set
+# CONFIG_GPIO_MC33880 is not set
+# CONFIG_GPIO_PISOSR is not set
+# CONFIG_GPIO_XRA1403 is not set
+# CONFIG_W1 is not set
+# CONFIG_POWER_AVS is not set
+CONFIG_POWER_RESET=y
+# CONFIG_POWER_RESET_BRCMKONA is not set
+# CONFIG_POWER_RESET_BRCMSTB is not set
+# CONFIG_POWER_RESET_GPIO is not set
+# CONFIG_POWER_RESET_GPIO_RESTART is not set
+CONFIG_POWER_RESET_HISI=y
+# CONFIG_POWER_RESET_LTC2952 is not set
+# CONFIG_POWER_RESET_RESTART is not set
+# CONFIG_POWER_RESET_VERSATILE is not set
+CONFIG_POWER_RESET_SYSCON=y
+# CONFIG_POWER_RESET_SYSCON_POWEROFF is not set
+# CONFIG_SYSCON_REBOOT_MODE is not set
+CONFIG_POWER_SUPPLY=y
+# CONFIG_POWER_SUPPLY_DEBUG is not set
+# CONFIG_PDA_POWER is not set
+# CONFIG_TEST_POWER is not set
+# CONFIG_CHARGER_ADP5061 is not set
+# CONFIG_BATTERY_DS2780 is not set
+# CONFIG_BATTERY_DS2781 is not set
+# CONFIG_BATTERY_DS2782 is not set
+# CONFIG_BATTERY_SBS is not set
+# CONFIG_CHARGER_SBS is not set
+# CONFIG_MANAGER_SBS is not set
+# CONFIG_BATTERY_BQ27XXX is not set
+# CONFIG_BATTERY_MAX17040 is not set
+# CONFIG_BATTERY_MAX17042 is not set
+# CONFIG_CHARGER_MAX8903 is not set
+# CONFIG_CHARGER_LP8727 is not set
+# CONFIG_CHARGER_GPIO is not set
+# CONFIG_CHARGER_LTC3651 is not set
+# CONFIG_CHARGER_DETECTOR_MAX14656 is not set
+# CONFIG_CHARGER_BQ2415X is not set
+# CONFIG_CHARGER_BQ24257 is not set
+# CONFIG_CHARGER_BQ24735 is not set
+# CONFIG_CHARGER_BQ25890 is not set
+# CONFIG_CHARGER_SMB347 is not set
+# CONFIG_BATTERY_GAUGE_LTC2941 is not set
+# CONFIG_CHARGER_RT9455 is not set
+# CONFIG_HWMON is not set
+# CONFIG_THERMAL is not set
+# CONFIG_WATCHDOG is not set
+CONFIG_SSB_POSSIBLE=y
+# CONFIG_SSB is not set
+CONFIG_BCMA_POSSIBLE=y
+# CONFIG_BCMA is not set
+
+#
+# Multifunction device drivers
+#
+CONFIG_MFD_CORE=y
+# CONFIG_MFD_ACT8945A is not set
+# CONFIG_MFD_AS3711 is not set
+# CONFIG_MFD_AS3722 is not set
+# CONFIG_PMIC_ADP5520 is not set
+# CONFIG_MFD_AAT2870_CORE is not set
+# CONFIG_MFD_ATMEL_FLEXCOM is not set
+# CONFIG_MFD_ATMEL_HLCDC is not set
+# CONFIG_MFD_BCM590XX is not set
+# CONFIG_MFD_BD9571MWV is not set
+# CONFIG_MFD_AXP20X_I2C is not set
+# CONFIG_MFD_CROS_EC is not set
+# CONFIG_MFD_MADERA is not set
+# CONFIG_MFD_ASIC3 is not set
+# CONFIG_PMIC_DA903X is not set
+# CONFIG_MFD_DA9052_SPI is not set
+# CONFIG_MFD_DA9052_I2C is not set
+# CONFIG_MFD_DA9055 is not set
+# CONFIG_MFD_DA9062 is not set
+# CONFIG_MFD_DA9063 is not set
+# CONFIG_MFD_DA9150 is not set
+# CONFIG_MFD_MC13XXX_SPI is not set
+# CONFIG_MFD_MC13XXX_I2C is not set
+# CONFIG_MFD_HI6421_PMIC is not set
+CONFIG_MFD_HISI_FMC=y
+# CONFIG_HTC_PASIC3 is not set
+# CONFIG_HTC_I2CPLD is not set
+# CONFIG_MFD_KEMPLD is not set
+# CONFIG_MFD_88PM800 is not set
+# CONFIG_MFD_88PM805 is not set
+# CONFIG_MFD_88PM860X is not set
+# CONFIG_MFD_MAX14577 is not set
+# CONFIG_MFD_MAX77620 is not set
+# CONFIG_MFD_MAX77686 is not set
+# CONFIG_MFD_MAX77693 is not set
+# CONFIG_MFD_MAX77843 is not set
+# CONFIG_MFD_MAX8907 is not set
+# CONFIG_MFD_MAX8925 is not set
+# CONFIG_MFD_MAX8997 is not set
+# CONFIG_MFD_MAX8998 is not set
+# CONFIG_MFD_MT6397 is not set
+# CONFIG_MFD_MENF21BMC is not set
+# CONFIG_EZX_PCAP is not set
+# CONFIG_MFD_CPCAP is not set
+# CONFIG_MFD_RETU is not set
+# CONFIG_MFD_PCF50633 is not set
+# CONFIG_MFD_PM8XXX is not set
+# CONFIG_MFD_RT5033 is not set
+# CONFIG_MFD_RC5T583 is not set
+# CONFIG_MFD_RK808 is not set
+# CONFIG_MFD_RN5T618 is not set
+# CONFIG_MFD_SEC_CORE is not set
+# CONFIG_MFD_SI476X_CORE is not set
+# CONFIG_MFD_SM501 is not set
+# CONFIG_MFD_SKY81452 is not set
+# CONFIG_MFD_SMSC is not set
+# CONFIG_ABX500_CORE is not set
+# CONFIG_MFD_STMPE is not set
+CONFIG_MFD_SYSCON=y
+# CONFIG_MFD_TI_AM335X_TSCADC is not set
+# CONFIG_MFD_LP3943 is not set
+# CONFIG_MFD_LP8788 is not set
+# CONFIG_MFD_TI_LMU is not set
+# CONFIG_MFD_PALMAS is not set
+# CONFIG_TPS6105X is not set
+# CONFIG_TPS65010 is not set
+# CONFIG_TPS6507X is not set
+# CONFIG_MFD_TPS65086 is not set
+# CONFIG_MFD_TPS65090 is not set
+# CONFIG_MFD_TPS65217 is not set
+# CONFIG_MFD_TI_LP873X is not set
+# CONFIG_MFD_TI_LP87565 is not set
+# CONFIG_MFD_TPS65218 is not set
+# CONFIG_MFD_TPS6586X is not set
+# CONFIG_MFD_TPS65910 is not set
+# CONFIG_MFD_TPS65912_I2C is not set
+# CONFIG_MFD_TPS65912_SPI is not set
+# CONFIG_MFD_TPS80031 is not set
+# CONFIG_TWL4030_CORE is not set
+# CONFIG_TWL6040_CORE is not set
+# CONFIG_MFD_WL1273_CORE is not set
+# CONFIG_MFD_LM3533 is not set
+# CONFIG_MFD_TC3589X is not set
+# CONFIG_MFD_T7L66XB is not set
+# CONFIG_MFD_TC6387XB is not set
+# CONFIG_MFD_TC6393XB is not set
+# CONFIG_MFD_ARIZONA_I2C is not set
+# CONFIG_MFD_ARIZONA_SPI is not set
+# CONFIG_MFD_WM8400 is not set
+# CONFIG_MFD_WM831X_I2C is not set
+# CONFIG_MFD_WM831X_SPI is not set
+# CONFIG_MFD_WM8350_I2C is not set
+# CONFIG_MFD_WM8994 is not set
+# CONFIG_MFD_ROHM_BD718XX is not set
+# CONFIG_REGULATOR is not set
+# CONFIG_RC_CORE is not set
+CONFIG_MEDIA_SUPPORT=y
+
+#
+# Multimedia core support
+#
+CONFIG_MEDIA_CAMERA_SUPPORT=y
+# CONFIG_MEDIA_ANALOG_TV_SUPPORT is not set
+# CONFIG_MEDIA_DIGITAL_TV_SUPPORT is not set
+# CONFIG_MEDIA_RADIO_SUPPORT is not set
+# CONFIG_MEDIA_SDR_SUPPORT is not set
+# CONFIG_MEDIA_CEC_SUPPORT is not set
+# CONFIG_MEDIA_CONTROLLER is not set
+CONFIG_VIDEO_DEV=y
+CONFIG_VIDEO_V4L2=y
+# CONFIG_VIDEO_ADV_DEBUG is not set
+# CONFIG_VIDEO_FIXED_MINOR_RANGES is not set
+
+#
+# Media drivers
+#
+# CONFIG_V4L_PLATFORM_DRIVERS is not set
+# CONFIG_V4L_MEM2MEM_DRIVERS is not set
+# CONFIG_V4L_TEST_DRIVERS is not set
+
+#
+# Supported MMC/SDIO adapters
+#
+
+#
+# Media ancillary drivers (tuners, sensors, i2c, spi, frontends)
+#
+CONFIG_MEDIA_SUBDRV_AUTOSELECT=y
+
+#
+# Audio decoders, processors and mixers
+#
+
+#
+# RDS decoders
+#
+
+#
+# Video decoders
+#
+
+#
+# Video and audio decoders
+#
+
+#
+# Video encoders
+#
+
+#
+# Camera sensor devices
+#
+
+#
+# Flash devices
+#
+
+#
+# Video improvement chips
+#
+
+#
+# Audio/Video compression chips
+#
+
+#
+# SDR tuner chips
+#
+
+#
+# Miscellaneous helper chips
+#
+
+#
+# Sensors used on soc_camera driver
+#
+
+#
+# Media SPI Adapters
+#
+
+#
+# Tools to develop new frontends
+#
+
+#
+# Graphics support
+#
+# CONFIG_IMX_IPUV3_CORE is not set
+# CONFIG_DRM is not set
+# CONFIG_DRM_DP_CEC is not set
+
+#
+# ACP (Audio CoProcessor) Configuration
+#
+
+#
+# AMD Library routines
+#
+
+#
+# Frame buffer Devices
+#
+CONFIG_FB_CMDLINE=y
+CONFIG_FB_NOTIFY=y
+CONFIG_FB=y
+# CONFIG_FIRMWARE_EDID is not set
+# CONFIG_FB_FOREIGN_ENDIAN is not set
+# CONFIG_FB_MODE_HELPERS is not set
+# CONFIG_FB_TILEBLITTING is not set
+
+#
+# Frame buffer hardware drivers
+#
+# CONFIG_FB_ARMCLCD is not set
+# CONFIG_FB_OPENCORES is not set
+# CONFIG_FB_S1D13XXX is not set
+# CONFIG_FB_IBM_GXT4500 is not set
+# CONFIG_FB_VIRTUAL is not set
+# CONFIG_FB_METRONOME is not set
+# CONFIG_FB_BROADSHEET is not set
+# CONFIG_FB_SIMPLE is not set
+# CONFIG_FB_SSD1307 is not set
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
+
+#
+# Console display driver support
+#
+CONFIG_DUMMY_CONSOLE=y
+# CONFIG_FRAMEBUFFER_CONSOLE is not set
+# CONFIG_LOGO is not set
+# CONFIG_SOUND is not set
+
+#
+# HID support
+#
+# CONFIG_HID is not set
+
+#
+# I2C HID support
+#
+# CONFIG_I2C_HID is not set
+CONFIG_USB_OHCI_LITTLE_ENDIAN=y
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_UWB is not set
+# CONFIG_MMC is not set
+# CONFIG_MEMSTICK is not set
+# CONFIG_NEW_LEDS is not set
+# CONFIG_ACCESSIBILITY is not set
+CONFIG_EDAC_ATOMIC_SCRUB=y
+CONFIG_EDAC_SUPPORT=y
+CONFIG_RTC_LIB=y
+CONFIG_RTC_CLASS=y
+# CONFIG_RTC_HCTOSYS is not set
+CONFIG_RTC_SYSTOHC=y
+CONFIG_RTC_SYSTOHC_DEVICE="rtc0"
+# CONFIG_RTC_DEBUG is not set
+CONFIG_RTC_NVMEM=y
+
+#
+# RTC interfaces
+#
+CONFIG_RTC_INTF_SYSFS=y
+CONFIG_RTC_INTF_PROC=y
+CONFIG_RTC_INTF_DEV=y
+# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
+# CONFIG_RTC_DRV_TEST is not set
+
+#
+# I2C RTC drivers
+#
+# CONFIG_RTC_DRV_ABB5ZES3 is not set
+# CONFIG_RTC_DRV_ABX80X is not set
+# CONFIG_RTC_DRV_DS1307 is not set
+# CONFIG_RTC_DRV_DS1374 is not set
+# CONFIG_RTC_DRV_DS1672 is not set
+# CONFIG_RTC_DRV_HYM8563 is not set
+# CONFIG_RTC_DRV_MAX6900 is not set
+# CONFIG_RTC_DRV_RS5C372 is not set
+# CONFIG_RTC_DRV_ISL1208 is not set
+# CONFIG_RTC_DRV_ISL12022 is not set
+# CONFIG_RTC_DRV_ISL12026 is not set
+# CONFIG_RTC_DRV_X1205 is not set
+# CONFIG_RTC_DRV_PCF8523 is not set
+# CONFIG_RTC_DRV_PCF85063 is not set
+# CONFIG_RTC_DRV_PCF85363 is not set
+# CONFIG_RTC_DRV_PCF8563 is not set
+# CONFIG_RTC_DRV_PCF8583 is not set
+# CONFIG_RTC_DRV_M41T80 is not set
+# CONFIG_RTC_DRV_BQ32K is not set
+# CONFIG_RTC_DRV_S35390A is not set
+# CONFIG_RTC_DRV_FM3130 is not set
+# CONFIG_RTC_DRV_RX8010 is not set
+# CONFIG_RTC_DRV_RX8581 is not set
+# CONFIG_RTC_DRV_RX8025 is not set
+# CONFIG_RTC_DRV_EM3027 is not set
+# CONFIG_RTC_DRV_RV8803 is not set
+
+#
+# SPI RTC drivers
+#
+# CONFIG_RTC_DRV_M41T93 is not set
+# CONFIG_RTC_DRV_M41T94 is not set
+# CONFIG_RTC_DRV_DS1302 is not set
+# CONFIG_RTC_DRV_DS1305 is not set
+# CONFIG_RTC_DRV_DS1343 is not set
+# CONFIG_RTC_DRV_DS1347 is not set
+# CONFIG_RTC_DRV_DS1390 is not set
+# CONFIG_RTC_DRV_MAX6916 is not set
+# CONFIG_RTC_DRV_R9701 is not set
+# CONFIG_RTC_DRV_RX4581 is not set
+# CONFIG_RTC_DRV_RX6110 is not set
+# CONFIG_RTC_DRV_RS5C348 is not set
+# CONFIG_RTC_DRV_MAX6902 is not set
+# CONFIG_RTC_DRV_PCF2123 is not set
+# CONFIG_RTC_DRV_MCP795 is not set
+CONFIG_RTC_I2C_AND_SPI=y
+
+#
+# SPI and I2C RTC drivers
+#
+# CONFIG_RTC_DRV_DS3232 is not set
+# CONFIG_RTC_DRV_PCF2127 is not set
+# CONFIG_RTC_DRV_RV3029C2 is not set
+
+#
+# Platform RTC drivers
+#
+CONFIG_RTC_DRV_HIBVT=m
+# CONFIG_RTC_DRV_CMOS is not set
+# CONFIG_RTC_DRV_DS1286 is not set
+# CONFIG_RTC_DRV_DS1511 is not set
+# CONFIG_RTC_DRV_DS1553 is not set
+# CONFIG_RTC_DRV_DS1685_FAMILY is not set
+# CONFIG_RTC_DRV_DS1742 is not set
+# CONFIG_RTC_DRV_DS2404 is not set
+# CONFIG_RTC_DRV_STK17TA8 is not set
+# CONFIG_RTC_DRV_M48T86 is not set
+# CONFIG_RTC_DRV_M48T35 is not set
+# CONFIG_RTC_DRV_M48T59 is not set
+# CONFIG_RTC_DRV_MSM6242 is not set
+# CONFIG_RTC_DRV_BQ4802 is not set
+# CONFIG_RTC_DRV_RP5C01 is not set
+# CONFIG_RTC_DRV_V3020 is not set
+# CONFIG_RTC_DRV_ZYNQMP is not set
+
+#
+# on-CPU RTC drivers
+#
+# CONFIG_RTC_DRV_PL030 is not set
+# CONFIG_RTC_DRV_PL031 is not set
+# CONFIG_RTC_DRV_FTRTC010 is not set
+# CONFIG_RTC_DRV_SNVS is not set
+# CONFIG_RTC_DRV_R7301 is not set
+
+#
+# HID Sensor RTC drivers
+#
+# CONFIG_DMADEVICES is not set
+
+#
+# DMABUF options
+#
+# CONFIG_SYNC_FILE is not set
+# CONFIG_AUXDISPLAY is not set
+# CONFIG_UIO is not set
+# CONFIG_VIRT_DRIVERS is not set
+CONFIG_VIRTIO_MENU=y
+# CONFIG_VIRTIO_MMIO is not set
+
+#
+# Microsoft Hyper-V guest support
+#
+# CONFIG_STAGING is not set
+# CONFIG_GOLDFISH is not set
+# CONFIG_CHROME_PLATFORMS is not set
+# CONFIG_MELLANOX_PLATFORM is not set
+CONFIG_CLKDEV_LOOKUP=y
+CONFIG_HAVE_CLK_PREPARE=y
+CONFIG_COMMON_CLK=y
+
+#
+# Common Clock Framework
+#
+# CONFIG_CLK_HSDK is not set
+# CONFIG_COMMON_CLK_MAX9485 is not set
+# CONFIG_COMMON_CLK_SI5351 is not set
+# CONFIG_COMMON_CLK_SI514 is not set
+# CONFIG_COMMON_CLK_SI544 is not set
+# CONFIG_COMMON_CLK_SI570 is not set
+# CONFIG_COMMON_CLK_CDCE706 is not set
+# CONFIG_COMMON_CLK_CDCE925 is not set
+# CONFIG_COMMON_CLK_CS2000_CP is not set
+# CONFIG_CLK_QORIQ is not set
+# CONFIG_COMMON_CLK_VC5 is not set
+CONFIG_COMMON_CLK_HI3516DV300=y
+CONFIG_RESET_HISI=y
+# CONFIG_HWSPINLOCK is not set
+
+#
+# Clock Source drivers
+#
+CONFIG_TIMER_OF=y
+CONFIG_TIMER_PROBE=y
+CONFIG_CLKSRC_MMIO=y
+CONFIG_ARM_ARCH_TIMER=y
+CONFIG_ARM_ARCH_TIMER_EVTSTREAM=y
+# CONFIG_ARM_ARCH_TIMER_VCT_ACCESS is not set
+CONFIG_ARM_TIMER_SP804=y
+# CONFIG_TIMER_HISP804 is not set
+# CONFIG_MAILBOX is not set
+# CONFIG_IOMMU_SUPPORT is not set
+
+#
+# Remoteproc drivers
+#
+# CONFIG_REMOTEPROC is not set
+
+#
+# Rpmsg drivers
+#
+# CONFIG_RPMSG_VIRTIO is not set
+
+#
+# SOC (System On Chip) specific Drivers
+#
+
+#
+# Amlogic SoC drivers
+#
+
+#
+# Broadcom SoC drivers
+#
+# CONFIG_SOC_BRCMSTB is not set
+
+#
+# NXP/Freescale QorIQ SoC drivers
+#
+
+#
+# i.MX SoC drivers
+#
+
+#
+# Qualcomm SoC drivers
+#
+# CONFIG_SOC_TI is not set
+
+#
+# Xilinx SoC drivers
+#
+# CONFIG_XILINX_VCU is not set
+# CONFIG_PM_DEVFREQ is not set
+# CONFIG_EXTCON is not set
+# CONFIG_MEMORY is not set
+# CONFIG_IIO is not set
+# CONFIG_PWM is not set
+
+#
+# IRQ chip support
+#
+CONFIG_IRQCHIP=y
+CONFIG_ARM_GIC=y
+CONFIG_ARM_GIC_MAX_NR=1
+# CONFIG_IPACK_BUS is not set
+CONFIG_RESET_CONTROLLER=y
+# CONFIG_RESET_TI_SYSCON is not set
+# CONFIG_FMC is not set
+
+#
+# PHY Subsystem
+#
+# CONFIG_GENERIC_PHY is not set
+# CONFIG_BCM_KONA_USB2_PHY is not set
+# CONFIG_PHY_PXA_28NM_HSIC is not set
+# CONFIG_PHY_PXA_28NM_USB2 is not set
+# CONFIG_HI_USB_PHY is not set
+# CONFIG_POWERCAP is not set
+# CONFIG_MCB is not set
+
+#
+# Performance monitor support
+#
+# CONFIG_ARM_CCI_PMU is not set
+# CONFIG_ARM_CCN is not set
+CONFIG_ARM_PMU=y
+# CONFIG_RAS is not set
+# CONFIG_DAX is not set
+CONFIG_NVMEM=y
+
+#
+# HW tracing support
+#
+# CONFIG_STM is not set
+# CONFIG_INTEL_TH is not set
+# CONFIG_FPGA is not set
+# CONFIG_FSI is not set
+# CONFIG_TEE is not set
+# CONFIG_SIOX is not set
+# CONFIG_SLIMBUS is not set
+# CONFIG_HI_DMAC is not set
+# CONFIG_HIEDMAC is not set
+
+#
+# Hisilicon driver support
+#
+# CONFIG_CMA_MEM_SHARED is not set
+# CONFIG_CMA_ADVANCE_SHARE is not set
+
+#
+# File systems
+#
+CONFIG_DCACHE_WORD_ACCESS=y
+# CONFIG_EXT2_FS is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_EXT4_FS is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_GFS2_FS is not set
+# CONFIG_BTRFS_FS is not set
+# CONFIG_NILFS2_FS is not set
+# CONFIG_F2FS_FS is not set
+CONFIG_EXPORTFS=y
+# CONFIG_EXPORTFS_BLOCK_OPS is not set
+CONFIG_FILE_LOCKING=y
+CONFIG_MANDATORY_FILE_LOCKING=y
+# CONFIG_FS_ENCRYPTION is not set
+CONFIG_FSNOTIFY=y
+CONFIG_DNOTIFY=y
+CONFIG_INOTIFY_USER=y
+# CONFIG_FANOTIFY is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS4_FS is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_FUSE_FS is not set
+# CONFIG_OVERLAY_FS is not set
+
+#
+# Caches
+#
+# CONFIG_FSCACHE is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_MSDOS_FS is not set
+# CONFIG_VFAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_SYSCTL=y
+CONFIG_PROC_PAGE_MONITOR=y
+# CONFIG_PROC_CHILDREN is not set
+CONFIG_KERNFS=y
+CONFIG_SYSFS=y
+CONFIG_TMPFS=y
+# CONFIG_TMPFS_POSIX_ACL is not set
+# CONFIG_TMPFS_XATTR is not set
+CONFIG_MEMFD_CREATE=y
+CONFIG_CONFIGFS_FS=y
+CONFIG_MISC_FILESYSTEMS=y
+# CONFIG_ORANGEFS_FS is not set
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+CONFIG_JFFS2_FS=y
+CONFIG_JFFS2_FS_DEBUG=0
+CONFIG_JFFS2_FS_WRITEBUFFER=y
+# CONFIG_JFFS2_FS_WBUF_VERIFY is not set
+CONFIG_JFFS2_SUMMARY=y
+# CONFIG_JFFS2_FS_XATTR is not set
+# CONFIG_JFFS2_COMPRESSION_OPTIONS is not set
+CONFIG_JFFS2_ZLIB=y
+CONFIG_JFFS2_RTIME=y
+# CONFIG_CRAMFS is not set
+# CONFIG_SQUASHFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_OMFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_QNX6FS_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_PSTORE is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+# CONFIG_NLS is not set
+
+#
+# Security options
+#
+# CONFIG_KEYS is not set
+# CONFIG_SECURITY_DMESG_RESTRICT is not set
+# CONFIG_SECURITY is not set
+# CONFIG_SECURITYFS is not set
+CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
+# CONFIG_HARDENED_USERCOPY is not set
+# CONFIG_FORTIFY_SOURCE is not set
+# CONFIG_STATIC_USERMODEHELPER is not set
+CONFIG_DEFAULT_SECURITY_DAC=y
+CONFIG_DEFAULT_SECURITY=""
+CONFIG_CRYPTO=y
+
+#
+# Crypto core or helper
+#
+CONFIG_CRYPTO_ALGAPI=y
+CONFIG_CRYPTO_ALGAPI2=y
+CONFIG_CRYPTO_AEAD=m
+CONFIG_CRYPTO_AEAD2=y
+CONFIG_CRYPTO_BLKCIPHER2=y
+CONFIG_CRYPTO_HASH=m
+CONFIG_CRYPTO_HASH2=y
+CONFIG_CRYPTO_RNG=y
+CONFIG_CRYPTO_RNG2=y
+CONFIG_CRYPTO_RNG_DEFAULT=m
+CONFIG_CRYPTO_AKCIPHER2=y
+CONFIG_CRYPTO_KPP2=y
+CONFIG_CRYPTO_ACOMP2=y
+# CONFIG_CRYPTO_RSA is not set
+# CONFIG_CRYPTO_DH is not set
+# CONFIG_CRYPTO_ECDH is not set
+CONFIG_CRYPTO_MANAGER=m
+CONFIG_CRYPTO_MANAGER2=y
+CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
+# CONFIG_CRYPTO_GF128MUL is not set
+CONFIG_CRYPTO_NULL=m
+CONFIG_CRYPTO_NULL2=y
+# CONFIG_CRYPTO_PCRYPT is not set
+CONFIG_CRYPTO_WORKQUEUE=y
+# CONFIG_CRYPTO_CRYPTD is not set
+# CONFIG_CRYPTO_MCRYPTD is not set
+# CONFIG_CRYPTO_AUTHENC is not set
+# CONFIG_CRYPTO_TEST is not set
+
+#
+# Authenticated Encryption with Associated Data
+#
+# CONFIG_CRYPTO_CCM is not set
+# CONFIG_CRYPTO_GCM is not set
+# CONFIG_CRYPTO_CHACHA20POLY1305 is not set
+# CONFIG_CRYPTO_AEGIS128 is not set
+# CONFIG_CRYPTO_AEGIS128L is not set
+# CONFIG_CRYPTO_AEGIS256 is not set
+# CONFIG_CRYPTO_MORUS640 is not set
+# CONFIG_CRYPTO_MORUS1280 is not set
+# CONFIG_CRYPTO_SEQIV is not set
+CONFIG_CRYPTO_ECHAINIV=m
+
+#
+# Block modes
+#
+# CONFIG_CRYPTO_CBC is not set
+# CONFIG_CRYPTO_CFB is not set
+# CONFIG_CRYPTO_CTR is not set
+# CONFIG_CRYPTO_CTS is not set
+# CONFIG_CRYPTO_ECB is not set
+# CONFIG_CRYPTO_LRW is not set
+# CONFIG_CRYPTO_PCBC is not set
+# CONFIG_CRYPTO_XTS is not set
+# CONFIG_CRYPTO_KEYWRAP is not set
+
+#
+# Hash modes
+#
+# CONFIG_CRYPTO_CMAC is not set
+CONFIG_CRYPTO_HMAC=m
+# CONFIG_CRYPTO_XCBC is not set
+# CONFIG_CRYPTO_VMAC is not set
+
+#
+# Digest
+#
+CONFIG_CRYPTO_CRC32C=m
+# CONFIG_CRYPTO_CRC32 is not set
+# CONFIG_CRYPTO_CRCT10DIF is not set
+# CONFIG_CRYPTO_GHASH is not set
+# CONFIG_CRYPTO_POLY1305 is not set
+# CONFIG_CRYPTO_MD4 is not set
+# CONFIG_CRYPTO_MD5 is not set
+# CONFIG_CRYPTO_MICHAEL_MIC is not set
+# CONFIG_CRYPTO_RMD128 is not set
+# CONFIG_CRYPTO_RMD160 is not set
+# CONFIG_CRYPTO_RMD256 is not set
+# CONFIG_CRYPTO_RMD320 is not set
+# CONFIG_CRYPTO_SHA1 is not set
+CONFIG_CRYPTO_SHA256=m
+# CONFIG_CRYPTO_SHA512 is not set
+# CONFIG_CRYPTO_SHA3 is not set
+# CONFIG_CRYPTO_SM3 is not set
+# CONFIG_CRYPTO_TGR192 is not set
+# CONFIG_CRYPTO_WP512 is not set
+
+#
+# Ciphers
+#
+CONFIG_CRYPTO_AES=y
+# CONFIG_CRYPTO_AES_TI is not set
+# CONFIG_CRYPTO_ANUBIS is not set
+# CONFIG_CRYPTO_ARC4 is not set
+# CONFIG_CRYPTO_BLOWFISH is not set
+# CONFIG_CRYPTO_CAMELLIA is not set
+# CONFIG_CRYPTO_CAST5 is not set
+# CONFIG_CRYPTO_CAST6 is not set
+# CONFIG_CRYPTO_DES is not set
+# CONFIG_CRYPTO_FCRYPT is not set
+# CONFIG_CRYPTO_KHAZAD is not set
+# CONFIG_CRYPTO_SALSA20 is not set
+# CONFIG_CRYPTO_CHACHA20 is not set
+# CONFIG_CRYPTO_SEED is not set
+# CONFIG_CRYPTO_SERPENT is not set
+# CONFIG_CRYPTO_SM4 is not set
+# CONFIG_CRYPTO_TEA is not set
+# CONFIG_CRYPTO_TWOFISH is not set
+
+#
+# Compression
+#
+CONFIG_CRYPTO_DEFLATE=y
+CONFIG_CRYPTO_LZO=y
+# CONFIG_CRYPTO_842 is not set
+# CONFIG_CRYPTO_LZ4 is not set
+# CONFIG_CRYPTO_LZ4HC is not set
+# CONFIG_CRYPTO_ZSTD is not set
+
+#
+# Random Number Generation
+#
+CONFIG_CRYPTO_ANSI_CPRNG=y
+CONFIG_CRYPTO_DRBG_MENU=m
+CONFIG_CRYPTO_DRBG_HMAC=y
+# CONFIG_CRYPTO_DRBG_HASH is not set
+CONFIG_CRYPTO_DRBG=m
+CONFIG_CRYPTO_JITTERENTROPY=m
+CONFIG_CRYPTO_HW=y
+# CONFIG_CRYPTO_DEV_CCREE is not set
+
+#
+# Certificates for signature checking
+#
+
+#
+# Library routines
+#
+CONFIG_BITREVERSE=y
+CONFIG_HAVE_ARCH_BITREVERSE=y
+CONFIG_RATIONAL=y
+CONFIG_GENERIC_STRNCPY_FROM_USER=y
+CONFIG_GENERIC_STRNLEN_USER=y
+CONFIG_GENERIC_PCI_IOMAP=y
+CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
+# CONFIG_CRC_CCITT is not set
+CONFIG_CRC16=m
+# CONFIG_CRC_T10DIF is not set
+# CONFIG_CRC_ITU_T is not set
+CONFIG_CRC32=y
+# CONFIG_CRC32_SELFTEST is not set
+CONFIG_CRC32_SLICEBY8=y
+# CONFIG_CRC32_SLICEBY4 is not set
+# CONFIG_CRC32_SARWATE is not set
+# CONFIG_CRC32_BIT is not set
+# CONFIG_CRC64 is not set
+# CONFIG_CRC4 is not set
+# CONFIG_CRC7 is not set
+CONFIG_LIBCRC32C=m
+# CONFIG_CRC8 is not set
+# CONFIG_RANDOM32_SELFTEST is not set
+CONFIG_ZLIB_INFLATE=y
+CONFIG_ZLIB_DEFLATE=y
+CONFIG_LZO_COMPRESS=y
+CONFIG_LZO_DECOMPRESS=y
+# CONFIG_XZ_DEC is not set
+CONFIG_GENERIC_ALLOCATOR=y
+CONFIG_HAS_IOMEM=y
+CONFIG_HAS_IOPORT_MAP=y
+CONFIG_HAS_DMA=y
+CONFIG_NEED_DMA_MAP_STATE=y
+CONFIG_HAVE_GENERIC_DMA_COHERENT=y
+CONFIG_SGL_ALLOC=y
+# CONFIG_CORDIC is not set
+# CONFIG_DDR is not set
+# CONFIG_IRQ_POLL is not set
+CONFIG_LIBFDT=y
+CONFIG_ARCH_HAS_SG_CHAIN=y
+CONFIG_SBITMAP=y
+# CONFIG_STRING_SELFTEST is not set
+
+#
+# Kernel hacking
+#
+
+#
+# printk and dmesg options
+#
+CONFIG_PRINTK_TIME=y
+CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
+CONFIG_CONSOLE_LOGLEVEL_QUIET=4
+CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
+# CONFIG_BOOT_PRINTK_DELAY is not set
+# CONFIG_DYNAMIC_DEBUG is not set
+
+#
+# Compile-time checks and compiler options
+#
+CONFIG_DEBUG_INFO=y
+# CONFIG_DEBUG_INFO_REDUCED is not set
+# CONFIG_DEBUG_INFO_SPLIT is not set
+# CONFIG_DEBUG_INFO_DWARF4 is not set
+# CONFIG_GDB_SCRIPTS is not set
+CONFIG_ENABLE_MUST_CHECK=y
+CONFIG_FRAME_WARN=1024
+# CONFIG_STRIP_ASM_SYMS is not set
+# CONFIG_READABLE_ASM is not set
+# CONFIG_UNUSED_SYMBOLS is not set
+# CONFIG_PAGE_OWNER is not set
+CONFIG_DEBUG_FS=y
+# CONFIG_HEADERS_CHECK is not set
+CONFIG_DEBUG_SECTION_MISMATCH=y
+CONFIG_SECTION_MISMATCH_WARN_ONLY=y
+# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
+# CONFIG_MAGIC_SYSRQ is not set
+CONFIG_DEBUG_KERNEL=y
+
+#
+# Memory Debugging
+#
+# CONFIG_PAGE_EXTENSION is not set
+# CONFIG_DEBUG_PAGEALLOC is not set
+# CONFIG_PAGE_POISONING is not set
+# CONFIG_DEBUG_RODATA_TEST is not set
+# CONFIG_DEBUG_OBJECTS is not set
+# CONFIG_SLUB_DEBUG_ON is not set
+# CONFIG_SLUB_STATS is not set
+CONFIG_HAVE_DEBUG_KMEMLEAK=y
+# CONFIG_DEBUG_KMEMLEAK is not set
+# CONFIG_DEBUG_STACK_USAGE is not set
+# CONFIG_DEBUG_VM is not set
+CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
+# CONFIG_DEBUG_VIRTUAL is not set
+CONFIG_DEBUG_MEMORY_INIT=y
+# CONFIG_DEBUG_PER_CPU_MAPS is not set
+# CONFIG_DEBUG_HIGHMEM is not set
+CONFIG_ARCH_HAS_KCOV=y
+CONFIG_CC_HAS_SANCOV_TRACE_PC=y
+# CONFIG_KCOV is not set
+# CONFIG_DEBUG_SHIRQ is not set
+
+#
+# Debug Lockups and Hangs
+#
+CONFIG_LOCKUP_DETECTOR=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
+# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
+CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
+# CONFIG_DETECT_HUNG_TASK is not set
+# CONFIG_WQ_WATCHDOG is not set
+# CONFIG_PANIC_ON_OOPS is not set
+CONFIG_PANIC_ON_OOPS_VALUE=0
+CONFIG_PANIC_TIMEOUT=0
+CONFIG_SCHED_DEBUG=y
+# CONFIG_SCHEDSTATS is not set
+# CONFIG_SCHED_STACK_END_CHECK is not set
+# CONFIG_DEBUG_TIMEKEEPING is not set
+
+#
+# Lock Debugging (spinlocks, mutexes, etc...)
+#
+CONFIG_LOCK_DEBUGGING_SUPPORT=y
+# CONFIG_PROVE_LOCKING is not set
+# CONFIG_LOCK_STAT is not set
+# CONFIG_DEBUG_RT_MUTEXES is not set
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_MUTEXES is not set
+# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
+# CONFIG_DEBUG_RWSEMS is not set
+# CONFIG_DEBUG_LOCK_ALLOC is not set
+# CONFIG_DEBUG_ATOMIC_SLEEP is not set
+# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
+# CONFIG_LOCK_TORTURE_TEST is not set
+# CONFIG_WW_MUTEX_SELFTEST is not set
+CONFIG_STACKTRACE=y
+# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
+# CONFIG_DEBUG_KOBJECT is not set
+CONFIG_DEBUG_BUGVERBOSE=y
+# CONFIG_DEBUG_LIST is not set
+# CONFIG_DEBUG_PI_LIST is not set
+# CONFIG_DEBUG_SG is not set
+# CONFIG_DEBUG_NOTIFIERS is not set
+# CONFIG_DEBUG_CREDENTIALS is not set
+
+#
+# RCU Debugging
+#
+# CONFIG_RCU_PERF_TEST is not set
+# CONFIG_RCU_TORTURE_TEST is not set
+CONFIG_RCU_CPU_STALL_TIMEOUT=60
+CONFIG_RCU_TRACE=y
+# CONFIG_RCU_EQS_DEBUG is not set
+# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
+# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
+# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
+# CONFIG_NOTIFIER_ERROR_INJECTION is not set
+# CONFIG_FAULT_INJECTION is not set
+# CONFIG_LATENCYTOP is not set
+CONFIG_HAVE_FUNCTION_TRACER=y
+CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
+CONFIG_HAVE_DYNAMIC_FTRACE=y
+CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
+CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
+CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
+CONFIG_HAVE_C_RECORDMCOUNT=y
+CONFIG_TRACE_CLOCK=y
+# CONFIG_TRACING is not set
+CONFIG_TRACING_SUPPORT=y
+# CONFIG_FTRACE is not set
+# CONFIG_DMA_API_DEBUG is not set
+# CONFIG_RUNTIME_TESTING_MENU is not set
+# CONFIG_MEMTEST is not set
+# CONFIG_BUG_ON_DATA_CORRUPTION is not set
+# CONFIG_SAMPLES is not set
+CONFIG_HAVE_ARCH_KGDB=y
+# CONFIG_KGDB is not set
+# CONFIG_UBSAN is not set
+CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
+CONFIG_STRICT_DEVMEM=y
+# CONFIG_IO_STRICT_DEVMEM is not set
+# CONFIG_ARM_PTDUMP_DEBUGFS is not set
+# CONFIG_DEBUG_WX is not set
+CONFIG_ARM_UNWIND=y
+# CONFIG_DEBUG_USER is not set
+CONFIG_DEBUG_LL=y
+CONFIG_DEBUG_HI3516DV300_UART=y
+# CONFIG_DEBUG_ICEDCC is not set
+# CONFIG_DEBUG_SEMIHOSTING is not set
+# CONFIG_DEBUG_LL_UART_8250 is not set
+# CONFIG_DEBUG_LL_UART_PL01X is not set
+CONFIG_DEBUG_LL_INCLUDE="debug/pl01x.S"
+CONFIG_DEBUG_UART_PL01X=y
+CONFIG_DEBUG_UART_PHYS=0x120a0000
+CONFIG_DEBUG_UART_VIRT=0xfe4a0000
+# CONFIG_DEBUG_UNCOMPRESS is not set
+CONFIG_UNCOMPRESS_INCLUDE="debug/uncompress.h"
+# CONFIG_EARLY_PRINTK is not set
+# CONFIG_PID_IN_CONTEXTIDR is not set
+# CONFIG_CORESIGHT is not set
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt
Added
+(directory)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/Kconfig
Added
@@ -0,0 +1,258 @@ +config ARCH_HISI_BVT + bool "Hisilicon BVT SoC Support" + select ARM_AMBA + select ARM_GIC if ARCH_MULTI_V7 + select ARM_VIC if ARCH_MULTI_V5 + select ARM_TIMER_SP804 + select POWER_RESET + select POWER_SUPPLY + +if ARCH_HISI_BVT + +menu "Hisilicon BVT platform type" + +config ARCH_HI3521DV200 + bool "Hisilicon Hi3521DV200 Cortex-A7 family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select PINCTRL + select POWER_RESET_HISI + help + Support for Hisilicon Hi3521DV200 Soc family. + +config ARCH_HI3520DV500 + bool "Hisilicon Hi3520DV500 Cortex-A7 family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select PINCTRL + select POWER_RESET_HISI + help + Support for Hisilicon Hi3520DV500 Soc family. + +config ARCH_HI3516A + bool "Hisilicon Hi3516A Cortex-A7(Single) family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select ARM_GIC + select ARCH_HAS_RESET_CONTROLLER + select RESET_CONTROLLER + help + Support for Hisilicon Hi3516A Soc family. + +config ARCH_HI3516CV500 + bool "Hisilicon Hi3516CV500 Cortex-A7 family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select PINCTRL + select POWER_RESET_HISI + help + Support for Hisilicon Hi3516CV500 Soc family. + +config ARCH_HI3516DV300 + bool "Hisilicon Hi3516DV300 Cortex-A7 family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select PINCTRL + select POWER_RESET_HISI + help + Support for Hisilicon Hi3516DV300 Soc family. + +config ARCH_HI3516EV200 + bool "Hisilicon Hi3516EV200 Cortex-A7 family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select PINCTRL + select POWER_RESET_HISI + help + Support for Hisilicon Hi3516EV200 Soc family. + +config ARCH_HI3516EV300 + bool "Hisilicon Hi3516EV300 Cortex-A7 family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select PINCTRL + select POWER_RESET_HISI + help + Support for Hisilicon Hi3516EV300 Soc family. + +config ARCH_HI3518EV300 + bool "Hisilicon Hi3518EV300 Cortex-A7 family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select PINCTRL + select POWER_RESET_HISI + help + Support for Hisilicon Hi3518EV300 Soc family. + +config ARCH_HI3516DV200 + bool "Hisilicon Hi3516DV200 Cortex-A7 family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select PINCTRL + select POWER_RESET_HISI + help + Support for Hisilicon Hi3516DV200 Soc family. +config ARCH_HI3556V200 + bool "Hisilicon Hi3556V200 Cortex-A7 family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select PINCTRL + select POWER_RESET_HISI + help + Support for Hisilicon Hi3556V200 Soc family. + +config ARCH_HI3559V200 + bool "Hisilicon Hi3559V200 Cortex-A7 family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select PINCTRL + select POWER_RESET_HISI + help + Support for Hisilicon Hi3559V200 Soc family. + +config ARCH_HI3518EV20X + bool "Hisilicon Hi3518ev20x ARM926T(Single) family" + depends on ARCH_MULTI_V5 + select PINCTRL + select PINCTRL_SINGLE + help + Support for Hisilicon Hi3518ev20x Soc family. + +config ARCH_HI3536DV100 + bool "Hisilicon Hi3536DV100 Cortex-A7(Single) family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select PINCTRL + help + Support for Hisilicon Hi3536DV100 Soc family. + +config ARCH_HI3521A + bool "Hisilicon Hi3521A A7(Single) family" + depends on ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select ARM_GIC + select PINCTRL + select PINCTRL_SINGLE + help + Support for Hisilicon Hi3521a Soc family. 
+ +config ARCH_HI3531A + bool "Hisilicon Hi3531A A9 family" if ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select ARM_GIC + select CACHE_L2X0 + select PINCTRL + select PINCTRL_SINGLE + select HAVE_ARM_SCU if SMP + select NEED_MACH_IO_H if PCI + help + Support for Hisilicon Hi3531a Soc family. + +config ARCH_HI3556AV100 + bool "Hisilicon Hi3556AV100 Cortex-a53 family" if ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select ARM_CCI + select ARCH_HAS_RESET_CONTROLLER + select RESET_CONTROLLER + select PMC if SMP + help + Support for Hisilicon Hi3556AV100 Soc family +if ARCH_HI3556AV100 + +config PMC + bool + depends on ARCH_HI3556AV100 + help + support power control for Hi3556AV100 Cortex-a53 + +endif + +config ARCH_HI3519AV100 + bool "Hisilicon Hi3519AV100 Cortex-a53 family" if ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select ARM_CCI + select ARM_GIC + select ARCH_HAS_RESET_CONTROLLER + select RESET_CONTROLLER + select NEED_MACH_IO_H if PCI + select PMC if SMP + help + Support for Hisilicon Hi3519AV100 Soc family +if ARCH_HI3519AV100 + +config PMC + bool + depends on ARCH_HI3519AV100 + help + support power control for Hi3519AV100 Cortex-a53 + +endif + +config ARCH_HI3568V100 + bool "Hisilicon Hi3568V100 Cortex-a53 family" if ARCH_MULTI_V7 + select HAVE_ARM_ARCH_TIMER + select ARM_CCI + select ARM_GIC + select ARCH_HAS_RESET_CONTROLLER + select RESET_CONTROLLER + select NEED_MACH_IO_H if PCI + select PMC if SMP + help + Support for Hisilicon Hi3568V100 Soc family +if ARCH_HI3568V100 + +config PMC + bool + depends on ARCH_HI3568V100 + help + support power control for Hi3568V100 Cortex-a53 + +endif + +config ARCH_HISI_BVT_AMP + bool "Hisilicon AMP solution support" + depends on ARCH_HI3556AV100 || ARCH_HI3519AV100 || ARCH_HI3516CV500 || ARCH_HI3516DV300 || ARCH_HI3556V200 || ARCH_HI3559V200 || ARCH_HI3562V100 || ARCH_HI3566V100 || ARCH_HI3568V100 + help + support for Hisilicon AMP solution + +config HISI_MC + bool "Hisilicon mc platform solution" + default n + help + support for Hisilicon mc platform solution + +config AMP_ZRELADDR + hex 'amp zreladdr' + depends on ARCH_HISI_BVT_AMP + default "0x32008000" if ARCH_HI3556AV100 || ARCH_HI3519AV100 || ARCH_HI3568V100 + default "0x82008000" if ARCH_HI3516CV500 || ARCH_HI3516DV300 || ARCH_HI3556V200 || ARCH_HI3559V200 || ARCH_HI3562V100 || ARCH_HI3566V100 + default "0x42008000" if ARCH_HI3516EV200 || ARCH_HI3516EV300 || ARCH_HI3518EV300 || ARCH_HI3516DV200 +config HI_ZRELADDR + hex 'zreladdr' + default "0x40008000" if ARCH_HI3521DV200 + default "0x40008000" if ARCH_HI3520DV500 + default "0x80008000" if ARCH_HI3516CV500 + default "0x80008000" if ARCH_HI3516DV300 + default "0x80008000" if ARCH_HI3556V200 + default "0x80008000" if ARCH_HI3559V200 + default "0x80008000" if ARCH_HI3562V100 + default "0x80008000" if ARCH_HI3566V100 + default "0x80008000" if ARCH_HI3516A + default "0x80008000" if ARCH_HI3518EV20X + default "0x80008000" if ARCH_HI3536DV100 + default "0x80008000" if ARCH_HI3521A + default "0x40008000" if ARCH_HI3531A + default "0x40008000" if ARCH_HI3516EV200 || ARCH_HI3516EV300 || ARCH_HI3518EV300 || ARCH_HI3516DV200 + default "0x22008000" if ARCH_HI3556AV100 || ARCH_HI3519AV100 || ARCH_HI3568V100 + +config HI_PARAMS_PHYS + hex 'params_phys' + default "0x00000100" + +config HI_INITRD_PHYS + hex 'initrd_phys' + default "0x00800000" + +endmenu + +endif
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/Makefile
Added
@@ -0,0 +1,27 @@ +# +# Makefile for Hisilicon processors family +# + +obj-$(CONFIG_ARCH_HI3521DV200) += mach-hi3521dv200.o +obj-$(CONFIG_ARCH_HI3520DV500) += mach-hi3521dv200.o +obj-$(CONFIG_ARCH_HI3516A) += mach-hi3516a.o +obj-$(CONFIG_ARCH_HI3516CV500) += mach-hi3516cv500.o +obj-$(CONFIG_ARCH_HI3516EV200) += mach-hi3516ev200.o +obj-$(CONFIG_ARCH_HI3516EV300) += mach-hi3516ev300.o +obj-$(CONFIG_ARCH_HI3518EV300) += mach-hi3518ev300.o +obj-$(CONFIG_ARCH_HI3516DV200) += mach-hi3516dv200.o +obj-$(CONFIG_ARCH_HI3516DV300) += mach-hi3516dv300.o +obj-$(CONFIG_ARCH_HI3556V200) += mach-hi3556v200.o +obj-$(CONFIG_ARCH_HI3559V200) += mach-hi3559v200.o +obj-$(CONFIG_ARCH_HI3562V100) += mach-hi3559v200.o +obj-$(CONFIG_ARCH_HI3566V100) += mach-hi3559v200.o +obj-$(CONFIG_ARCH_HI3518EV20X) += mach-hi3518ev20x.o +obj-$(CONFIG_ARCH_HI3536DV100) += mach-hi3536dv100.o +obj-$(CONFIG_ARCH_HI3521A) += mach-hi3521a.o +obj-$(CONFIG_ARCH_HI3531A) += mach-hi3531a.o +obj-$(CONFIG_ARCH_HI3556AV100) += mach-hi3556av100.o +obj-$(CONFIG_ARCH_HI3519AV100) += mach-hi3519av100.o +obj-$(CONFIG_ARCH_HI3568V100) += mach-hi3519av100.o + + +obj-$(CONFIG_SMP) += platsmp.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/Makefile.boot
Added
@@ -0,0 +1,7 @@ +ifeq ($(CONFIG_ARCH_HISI_BVT_AMP), y) +zreladdr-$(CONFIG_ARCH_HISI_BVT) := $(CONFIG_AMP_ZRELADDR) +else +zreladdr-$(CONFIG_ARCH_HISI_BVT) := $(CONFIG_HI_ZRELADDR) +endif +params_phys-$(CONFIG_ARCH_HISI_BVT) := $(CONFIG_HI_PARAMS_PHYS) +initrd_phys-$(CONFIG_ARCH_HISI_BVT) := $(CONFIG_HI_INITRD_PHYS)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/include
Added
+(directory)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/include/mach
Added
+(directory)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/include/mach/hi3516dv300_io.h
Added
@@ -0,0 +1,27 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef __HI3516DV300_IO_H +#define __HI3516DV300_IO_H + +/* + * phy: 0x20000000 ~ 0x20700000 + * vir: 0xFE100000 ~ 0xFE800000 + */ +#define HI3516DV300_IOCH2_PHYS 0x20000000 +#define IO_OFFSET_HIGH 0xDE100000 +#define HI3516DV300_IOCH2_VIRT (HI3516DV300_IOCH2_PHYS + IO_OFFSET_HIGH) +#define HI3516DV300_IOCH2_SIZE 0x700000 + +/* phy: 0x10000000 ~ 0x100E0000 + * vir: 0xFE000000 ~ 0xFE0E0000 + */ +#define HI3516DV300_IOCH1_PHYS 0x10000000 +#define IO_OFFSET_LOW 0xEE000000 +#define HI3516DV300_IOCH1_VIRT (HI3516DV300_IOCH1_PHYS + IO_OFFSET_LOW) +#define HI3516DV300_IOCH1_SIZE 0xE0000 + +#define IO_ADDRESS(x) ((x) >= HI3516DV300_IOCH2_PHYS ? (x) + IO_OFFSET_HIGH \ + : (x) + IO_OFFSET_LOW) + +#define __io_address(n) ((void __iomem __force *)IO_ADDRESS(n)) + +#endif
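Note: IO_ADDRESS() above is a pure offset add — any physical address at or above HI3516DV300_IOCH2_PHYS gets the high-channel offset, everything below gets the low-channel one. A worked example using only the constants defined in this header (the register addresses themselves are illustrative, not taken from the patch):

    IO_ADDRESS(0x20010000) = 0x20010000 + 0xDE100000 = 0xFE110000  /* inside 0xFE100000..0xFE800000 */
    IO_ADDRESS(0x10020000) = 0x10020000 + 0xEE000000 = 0xFE020000  /* inside 0xFE000000..0xFE0E0000 */

Both results land in the virtual windows documented in the comments, so __io_address() can hand out a void __iomem * for a SoC register without a runtime ioremap(), provided the platform establishes these static mappings at boot.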
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/include/mach/hi3516dv300_platform.h
Added
@@ -0,0 +1,5 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef __HI3516DV300_CHIP_REGS_H__ +#define __HI3516DV300_CHIP_REGS_H__ + +#endif /* End of __HI3516DV300_CHIP_REGS_H__ */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/include/mach/io.h
Added
@@ -0,0 +1,53 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef __ASM_ARM_ARCH_IO_H +#define __ASM_ARM_ARCH_IO_H + +#ifdef CONFIG_ARCH_HI3516A +#include <mach/hi3516a_io.h> +#endif + +#ifdef CONFIG_ARCH_HI3518EV20X +#include <mach/hi3518ev20x_io.h> +#endif + +#ifdef CONFIG_ARCH_HI3536DV100 +#include <mach/hi3536dv100_io.h> +#endif + +#ifdef CONFIG_ARCH_HI3521A +#include <mach/hi3521a_io.h> +#endif + +#ifdef CONFIG_ARCH_HI3531A +#include <mach/hi3531a_io.h> +#endif + +#ifdef CONFIG_ARCH_HI3516CV500 +#include <mach/hi3516cv500_io.h> +#endif + +#ifdef CONFIG_ARCH_HI3516DV300 +#include <mach/hi3516dv300_io.h> +#endif + +#ifdef CONFIG_ARCH_HI3556V200 +#include <mach/hi3556v200_io.h> +#endif + +#ifdef CONFIG_ARCH_HI3559V200 +#include <mach/hi3559v200_io.h> +#endif + +#ifdef CONFIG_ARCH_HI3562V100 +#include <mach/hi3559v200_io.h> +#endif + +#ifdef CONFIG_ARCH_HI3566V100 +#include <mach/hi3559v200_io.h> +#endif + +#ifdef CONFIG_ARCH_HI3519AV100 +#include <mach/hi3519av100_io.h> +#endif + +#endif
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/include/mach/platform.h
Added
@@ -0,0 +1,53 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef __HISI_PLATFORM_H__ +#define __HISI_PLATFORM_H__ + +#ifdef CONFIG_ARCH_HI3536DV100 +#include <mach/hi3536dv100_platform.h> +#endif + +#ifdef CONFIG_ARCH_HI3521A +#include <mach/hi3521a_platform.h> +#endif + +#ifdef CONFIG_ARCH_HI3531A +#include <mach/hi3531a_platform.h> +#endif + +#ifdef CONFIG_ARCH_HI3516DV300 +#include <mach/hi3516dv300_platform.h> +#endif + +#ifdef CONFIG_ARCH_HI3516CV500 +#include <mach/hi3516cv500_platform.h> +#endif + +#ifdef CONFIG_ARCH_HI3556V200 +#include <mach/hi3556v200_platform.h> +#endif + +#ifdef CONFIG_ARCH_HI3559V200 +#include <mach/hi3559v200_platform.h> +#endif + +#ifdef CONFIG_ARCH_HI3562V100 +#include <mach/hi3559v200_platform.h> +#endif + +#ifdef CONFIG_ARCH_HI3566V100 +#include <mach/hi3559v200_platform.h> +#endif + +#ifdef CONFIG_ARCH_HI3556AV100 +#include <mach/hi3556av100_platform.h> +#endif + +#ifdef CONFIG_ARCH_HI3519AV100 +#include <mach/hi3519av100_platform.h> +#endif + +#ifdef CONFIG_ARCH_HI3568V100 +#include <mach/hi3519av100_platform.h> +#endif + +#endif /* End of __HISI_PLATFORM_H__ */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/mach-common.h
Added
@@ -0,0 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef __SMP_COMMON_H +#define __SMP_COMMON_H + +#ifdef CONFIG_SMP +void hi35xx_set_cpu(unsigned int cpu, bool enable); +void __init hi35xx_smp_prepare_cpus(unsigned int max_cpus); +int hi35xx_boot_secondary(unsigned int cpu, struct task_struct *idle); +#endif /* CONFIG_SMP */ +#endif /* __SMP_COMMON_H */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/mach-hi3516dv300.c
Added
@@ -0,0 +1,69 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2016-2017 HiSilicon Technologies Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + * +*/ + +#include <linux/of_address.h> +#include <asm/smp_scu.h> + +#include "mach-common.h" + +#ifdef CONFIG_SMP + +#define REG_CPU_SRST_CRG 0x78 +#define CPU1_SRST_REQ BIT(2) +#define DBG1_SRST_REQ BIT(4) + +void hi35xx_set_cpu(unsigned int cpu, bool enable) +{ + struct device_node *np = NULL; + unsigned int regval; + void __iomem *crg_base; + + np = of_find_compatible_node(NULL, NULL, "hisilicon,hi3516dv300-clock"); + if (!np) { + pr_err("failed to find hisilicon clock node\n"); + return; + } + + crg_base = of_iomap(np, 0); + if (!crg_base) { + pr_err("failed to map address\n"); + return; + } + + if (enable) { + /* clear the slave cpu reset */ + regval = readl(crg_base + REG_CPU_SRST_CRG); + regval &= ~CPU1_SRST_REQ; + writel(regval, (crg_base + REG_CPU_SRST_CRG)); + } else { + regval = readl(crg_base + REG_CPU_SRST_CRG); + regval |= (DBG1_SRST_REQ | CPU1_SRST_REQ); + writel(regval, (crg_base + REG_CPU_SRST_CRG)); + } + iounmap(crg_base); +} + +static const struct smp_operations hi35xx_smp_ops __initconst = { + .smp_prepare_cpus = hi35xx_smp_prepare_cpus, + .smp_boot_secondary = hi35xx_boot_secondary, +}; + +CPU_METHOD_OF_DECLARE(hi3516dv300_smp, "hisilicon,hi3516dv300", + &hi35xx_smp_ops); +#endif /* CONFIG_SMP */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm/mach-hibvt/platsmp.c
Added
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2013 Linaro Ltd.
+ * Copyright (c) 2013 Hisilicon Limited.
+ * Based on arch/arm/mach-vexpress/platsmp.c, Copyright (C) 2002 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ */
+
+#include <linux/io.h>
+#include <linux/smp.h>
+#include <asm/smp_scu.h>
+
+#include "mach-common.h"
+
+#define HI35XX_BOOT_ADDRESS 0x00000000
+
+void __init hi35xx_smp_prepare_cpus(unsigned int max_cpus)
+{
+	unsigned long base = 0;
+	void __iomem *scu_base = NULL;
+
+	if (scu_a9_has_base()) {
+		base = scu_a9_get_base();
+		scu_base = ioremap(base, PAGE_SIZE);
+		if (!scu_base) {
+			pr_err("ioremap(scu_base) failed\n");
+			return;
+		}
+
+		scu_enable(scu_base);
+		iounmap(scu_base);
+	}
+}
+
+void hi35xx_set_scu_boot_addr(phys_addr_t start_addr, phys_addr_t jump_addr)
+{
+	void __iomem *virt;
+
+	virt = ioremap(start_addr, PAGE_SIZE);
+	if (!virt) {
+		pr_err("ioremap(start_addr) failed\n");
+		return;
+	}
+
+	writel_relaxed(0xe51ff004, virt); /* ldr pc, [pc, #-4] */
+	writel_relaxed(jump_addr, virt + 4); /* pc jump phy address */
+	iounmap(virt);
+}
+
+int hi35xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
+{
+	phys_addr_t jumpaddr;
+
+	jumpaddr = virt_to_phys(secondary_startup);
+	hi35xx_set_scu_boot_addr(HI35XX_BOOT_ADDRESS, jumpaddr);
+	hi35xx_set_cpu(cpu, true);
+	arch_send_wakeup_ipi_mask(cpumask_of(cpu));
+	return 0;
+}
+
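Note: the pair of writel_relaxed() calls installs a two-word boot trampoline at HI35XX_BOOT_ADDRESS. 0xe51ff004 encodes the ARM instruction ldr pc, [pc, #-4]; since the ARM program counter reads as the current instruction address plus 8, the load at offset 0x0 fetches the word at offset 0x4, which holds the physical address of secondary_startup. The resulting layout, inferred from the code above:

    0x00000000: 0xe51ff004                      /* ldr pc, [pc, #-4] -> loads word at 0x4 */
    0x00000004: virt_to_phys(secondary_startup) /* entry point for the secondary CPU */

hi35xx_boot_secondary() then releases the secondary core from reset via hi35xx_set_cpu() and wakes it with an IPI, so it starts executing from the reset vector at address 0 and branches straight into the kernel.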
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/configs/openeuler_defconfig
Changed
@@ -139,7 +139,7 @@ CONFIG_CGROUP_V1_WRITEBACK=y CONFIG_CGROUP_SCHED=y CONFIG_QOS_SCHED=y -CONFIG_QOS_SCHED_SMT_EXPELLER=y +# CONFIG_QOS_SCHED_SMT_EXPELLER is not set CONFIG_FAIR_GROUP_SCHED=y CONFIG_CFS_BANDWIDTH=y CONFIG_RT_GROUP_SCHED=y @@ -2776,6 +2776,11 @@ CONFIG_NGBE_DEBUG_FS=y # CONFIG_NGBE_POLL_LINK_STATUS is not set CONFIG_NGBE_SYSFS=y +CONFIG_TXGBE=m +CONFIG_TXGBE_HWMON=y +CONFIG_TXGBE_DEBUG_FS=y +# CONFIG_TXGBE_POLL_LINK_STATUS is not set +CONFIG_TXGBE_SYSFS=y # CONFIG_JME is not set # CONFIG_NET_VENDOR_MARVELL is not set CONFIG_NET_VENDOR_MELLANOX=y @@ -3400,7 +3405,6 @@ CONFIG_TCG_TIS_ST33ZP24_SPI=m # CONFIG_XILLYBUS is not set CONFIG_PIN_MEMORY_DEV=m -CONFIG_HISI_SVM=m # CONFIG_RANDOM_TRUST_CPU is not set # CONFIG_RANDOM_TRUST_BOOTLOADER is not set # end of Character devices @@ -7294,6 +7298,7 @@ # CONFIG_CORESIGHT_CTI is not set CONFIG_CORESIGHT_TRBE=m CONFIG_ULTRASOC_SMB=m +CONFIG_ACPI_TRBE=y # end of arm64 Debugging #
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/include/asm/acpi.h
Changed
@@ -42,6 +42,9 @@ #define ACPI_MADT_GICC_SPE (offsetof(struct acpi_madt_generic_interrupt, \ spe_interrupt) + sizeof(u16)) +#define ACPI_MADT_GICC_TRBE (offsetof(struct acpi_madt_generic_interrupt, \ + trbe_interrupt) + sizeof(u16)) + /* Basic configuration for ACPI */ #ifdef CONFIG_ACPI pgprot_t __acpi_get_mem_attribute(phys_addr_t addr);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/include/asm/resctrl.h
Changed
@@ -545,5 +545,23 @@ DEFINE_INLINE_CTRL_FEATURE_ENABLE_FUNC(caMax); DEFINE_INLINE_CTRL_FEATURE_ENABLE_FUNC(caPrio); +/** + * rdtgroup_remove - the helper to remove resource group safely + * @rdtgrp: resource group to remove + * + * On resource group creation via a mkdir, an extra kernfs_node reference is + * taken to ensure that the rdtgroup structure remains accessible for the + * rdtgroup_kn_unlock() calls where it is removed. + * + * Drop the extra reference here, then free the rdtgroup structure. + * + * Return: void + */ +static inline void rdtgroup_remove(struct rdtgroup *rdtgrp) +{ + kernfs_put(rdtgrp->kn); + kfree(rdtgrp); +} + #endif #endif /* _ASM_ARM64_RESCTRL_H */
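Note: callers are expected to switch from a bare kfree(rdtgrp) to rdtgroup_remove(rdtgrp) so the mkdir-time kernfs reference is dropped exactly once before the structure is freed; the mpam_resctrl.c hunk below makes exactly that substitution, and the mpam_ctrlmon.c hunk drops the now-unneeded extra kernfs_get() calls.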
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kernel/Makefile
Changed
@@ -54,6 +54,7 @@ obj-$(CONFIG_ARMV8_DEPRECATED) += armv8_deprecated.o obj-$(CONFIG_ACPI) += acpi.o obj-$(CONFIG_ACPI_NUMA) += acpi_numa.o +obj-$(CONFIG_ACPI_TRBE) += acpi_trbe.o obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL) += acpi_parking_protocol.o obj-$(CONFIG_PARAVIRT) += paravirt.o paravirt-spinlocks.o obj-$(CONFIG_PARAVIRT_SPINLOCKS) += paravirt.o paravirt-spinlocks.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kernel/acpi_trbe.c
Added
@@ -0,0 +1,81 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * ACPI probing code for ARM Trace Buffer Extension.
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#include <linux/acpi.h>
+#include <linux/coresight.h>
+#include <linux/platform_device.h>
+#include <linux/init.h>
+
+static struct resource trbe_resources[] = {
+	{
+		/* irq */
+		.flags = IORESOURCE_IRQ,
+	}
+};
+
+static struct platform_device trbe_dev = {
+	.name = ARMV9_TRBE_PDEV_NAME,
+	.id = -1,
+	.resource = trbe_resources,
+	.num_resources = ARRAY_SIZE(trbe_resources)
+};
+
+static void arm_trbe_acpi_register_device(void)
+{
+	int cpu, hetid, irq, ret;
+	bool first = true;
+	u16 gsi = 0;
+
+	/*
+	 * Sanity check all the GICC tables for the same interrupt number.
+	 * For now, we only support homogeneous machines.
+	 */
+	for_each_possible_cpu(cpu) {
+		struct acpi_madt_generic_interrupt *gicc;
+
+		gicc = acpi_cpu_get_madt_gicc(cpu);
+		if (gicc->header.length < ACPI_MADT_GICC_TRBE)
+			return;
+
+		if (first) {
+			gsi = gicc->trbe_interrupt;
+			if (!gsi)
+				return;
+			hetid = find_acpi_cpu_topology_hetero_id(cpu);
+			first = false;
+		} else if ((gsi != gicc->trbe_interrupt) ||
+			   (hetid != find_acpi_cpu_topology_hetero_id(cpu))) {
+			pr_warn("ACPI: TRBE must be homogeneous\n");
+			return;
+		}
+	}
+
+	irq = acpi_register_gsi(NULL, gsi, ACPI_LEVEL_SENSITIVE,
+				ACPI_ACTIVE_HIGH);
+	if (irq < 0) {
+		pr_warn("ACPI: TRBE Unable to register interrupt: %d\n", gsi);
+		return;
+	}
+
+	trbe_resources[0].start = irq;
+	ret = platform_device_register(&trbe_dev);
+	if (ret < 0) {
+		pr_warn("ACPI: TRBE: Unable to register device\n");
+		acpi_unregister_gsi(gsi);
+	}
+}
+
+static int arm_acpi_trbe_init(void)
+{
+	if (acpi_disabled)
+		return 0;
+
+	arm_trbe_acpi_register_device();
+
+	return 0;
+}
+device_initcall(arm_acpi_trbe_init)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kernel/armv8_deprecated.c
Changed
@@ -208,10 +208,12 @@ loff_t *ppos) { int ret = 0; - struct insn_emulation *insn = container_of(table->data, struct insn_emulation, current_mode); - enum insn_emulation_mode prev_mode = insn->current_mode; + struct insn_emulation *insn; + enum insn_emulation_mode prev_mode; mutex_lock(&insn_emulation_mutex); + insn = container_of(table->data, struct insn_emulation, current_mode); + prev_mode = insn->current_mode; ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos); if (ret || !write || prev_mode == insn->current_mode)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kernel/mpam/mpam_ctrlmon.c
Changed
@@ -804,7 +804,6 @@
 	if (IS_ERR(kn_subdir))
 		return PTR_ERR(kn_subdir);
-	kernfs_get(kn_subdir);
 
 	ret = resctrl_group_kn_set_ugid(kn_subdir);
 	if (ret)
 		return ret;
@@ -830,7 +829,6 @@
 	*kn_info = kernfs_create_dir(parent_kn, "info", parent_kn->mode, NULL);
 	if (IS_ERR(*kn_info))
 		return PTR_ERR(*kn_info);
-	kernfs_get(*kn_info);
 
 	ret = resctrl_group_add_files(*kn_info, RF_TOP_INFO);
 	if (ret)
@@ -865,12 +863,6 @@
 		}
 	}
 
-	/*
-	 * This extra ref will be put in kernfs_remove() and guarantees
-	 * that @rdtgrp->kn is always accessible.
-	 */
-	kernfs_get(*kn_info);
-
 	ret = resctrl_group_kn_set_ugid(*kn_info);
 	if (ret)
 		goto out_destroy;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kernel/mpam/mpam_resctrl.c
Changed
@@ -310,11 +310,15 @@
 		return -EINVAL;
 	}
 
-	if (kstrtoul(buf, rr->ctrl_features[type].base, &data))
+	if (kstrtoul(buf, rr->ctrl_features[type].base, &data)) {
+		rdt_last_cmd_printf("Non-hex character in the mask %s\n", buf);
 		return -EINVAL;
+	}
 
-	if (data >= rr->ctrl_features[type].max_wd)
+	if (data >= rr->ctrl_features[type].max_wd) {
+		rdt_last_cmd_puts("Mask out of range\n");
 		return -EINVAL;
+	}
 
 	cfg->new_ctrl[type] = data;
 	cfg->have_new_ctrl = true;
@@ -338,25 +342,31 @@
 	switch (rr->ctrl_features[type].evt) {
 	case QOS_MBA_MAX_EVENT_ID:
 	case QOS_MBA_PBM_EVENT_ID:
-		if (kstrtoul(buf, rr->ctrl_features[type].base, &data))
-			return -EINVAL;
-		data = (data < r->mbw.min_bw) ? r->mbw.min_bw : data;
-		data = roundup(data, r->mbw.bw_gran);
-		break;
 	case QOS_MBA_MIN_EVENT_ID:
-		if (kstrtoul(buf, rr->ctrl_features[type].base, &data))
+		if (kstrtoul(buf, rr->ctrl_features[type].base, &data)) {
+			rdt_last_cmd_printf("Non-decimal digit in MB value %s\n", buf);
 			return -EINVAL;
-		/* for mbw min feature, 0 of setting is allowed */
+		}
+		if (data < r->mbw.min_bw) {
+			rdt_last_cmd_printf("MB value %ld out of range [%d,%d]\n", data,
+					    r->mbw.min_bw, rr->ctrl_features[type].max_wd - 1);
+			return -EINVAL;
+		}
 		data = roundup(data, r->mbw.bw_gran);
 		break;
 	default:
-		if (kstrtoul(buf, rr->ctrl_features[type].base, &data))
+		if (kstrtoul(buf, rr->ctrl_features[type].base, &data)) {
+			rdt_last_cmd_printf("Non-decimal digit in MB value %s\n", buf);
 			return -EINVAL;
+		}
 		break;
 	}
 
-	if (data >= rr->ctrl_features[type].max_wd)
+	if (data >= rr->ctrl_features[type].max_wd) {
+		rdt_last_cmd_printf("MB value %ld out of range [%d,%d]\n", data,
+				    r->mbw.min_bw, rr->ctrl_features[type].max_wd - 1);
 		return -EINVAL;
+	}
 
 	cfg->new_ctrl[type] = data;
 	cfg->have_new_ctrl = true;
@@ -1335,7 +1345,7 @@
 	    (rdtgrp->flags & RDT_DELETED)) {
 		current->closid = 0;
 		current->rmid = 0;
-		kfree(rdtgrp);
+		rdtgroup_remove(rdtgrp);
 	}
 
 	preempt_disable();
@@ -2280,6 +2290,8 @@
 	case QOS_MBA_MAX_EVENT_ID:
 		range = MBW_MAX_BWA_FRACT(res->class->bwa_wd);
 		mpam_cfg->mbw_max = (resctrl_cfg * range) / (MAX_MBA_BW - 1);
+		/* correct mbw_max if remainder is too large */
+		mpam_cfg->mbw_max += ((resctrl_cfg * range) % (MAX_MBA_BW - 1)) / range;
 		mpam_cfg->mbw_max = (mpam_cfg->mbw_max > range) ? range : mpam_cfg->mbw_max;
 		mpam_set_feature(mpam_feat_mbw_max, &mpam_cfg->valid);
@@ -2287,6 +2299,8 @@
 	case QOS_MBA_MIN_EVENT_ID:
 		range = MBW_MAX_BWA_FRACT(res->class->bwa_wd);
 		mpam_cfg->mbw_min = (resctrl_cfg * range) / (MAX_MBA_BW - 1);
+		/* correct mbw_min if remainder is too large */
+		mpam_cfg->mbw_min += ((resctrl_cfg * range) % (MAX_MBA_BW - 1)) / range;
 		mpam_cfg->mbw_min = (mpam_cfg->mbw_min > range) ? range : mpam_cfg->mbw_min;
 		mpam_set_feature(mpam_feat_mbw_min, &mpam_cfg->valid);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kernel/mpam/mpam_setup.c
Changed
@@ -172,6 +172,8 @@ list_del(&d->list); dom = container_of(d, struct mpam_resctrl_dom, resctrl_dom); kfree(dom); + + res->resctrl_res.dom_num--; } mpam_resctrl_clear_default_cpu(cpu); @@ -417,6 +419,9 @@ * of 1 would appear too fine to make percentage conversions. */ r->mbw.bw_gran = GRAN_MBA_BW; + /* do not allow mbw_max/min below mbw.bw_gran */ + if (r->mbw.min_bw < r->mbw.bw_gran) + r->mbw.min_bw = r->mbw.bw_gran; /* We will only pick a class that can monitor and control */ r->alloc_capable = true;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/kvm/hyp/include/hyp/switch.h
Changed
@@ -223,7 +223,11 @@ esr_ec != ESR_ELx_EC_SVE) return false; - vcpu->stat.fp_asimd_exit_stat++; + if (esr_ec == ESR_ELx_EC_FP_ASIMD) + vcpu->stat.fp_asimd_exit_stat++; + else /* SVE trap */ + vcpu->stat.sve_exit_stat++; + /* Don't handle SVE traps for non-SVE vcpus here: */ if (!sve_guest) if (esr_ec != ESR_ELx_EC_FP_ASIMD)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/arm64/mm/init.c
Changed
@@ -727,14 +727,20 @@ } } +#ifdef CONFIG_ASCEND_CHARGE_MIGRATE_HUGEPAGES +extern int enable_charge_mighp; +#endif + +#ifdef CONFIG_ARM64_PSEUDO_NMI +extern bool enable_pseudo_nmi; +#endif + void ascend_enable_all_features(void) { if (IS_ENABLED(CONFIG_ASCEND_DVPP_MMAP)) enable_mmap_dvpp = 1; #ifdef CONFIG_ASCEND_CHARGE_MIGRATE_HUGEPAGES - extern int enable_charge_mighp; - enable_charge_mighp = 1; #endif @@ -743,8 +749,6 @@ #endif #ifdef CONFIG_ARM64_PSEUDO_NMI - extern bool enable_pseudo_nmi; - enable_pseudo_nmi = true; #endif
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/configs/openeuler_defconfig
Changed
@@ -158,7 +158,7 @@ CONFIG_CGROUP_V1_WRITEBACK=y CONFIG_CGROUP_SCHED=y CONFIG_QOS_SCHED=y -CONFIG_QOS_SCHED_SMT_EXPELLER=y +# CONFIG_QOS_SCHED_SMT_EXPELLER is not set CONFIG_FAIR_GROUP_SCHED=y CONFIG_CFS_BANDWIDTH=y CONFIG_RT_GROUP_SCHED=y @@ -2743,12 +2743,16 @@ CONFIG_FM10K=m # CONFIG_IGC is not set CONFIG_NET_VENDOR_NETSWIFT=y -CONFIG_TXGBE=m CONFIG_NGBE=m CONFIG_NGBE_HWMON=y CONFIG_NGBE_DEBUG_FS=y # CONFIG_NGBE_POLL_LINK_STATUS is not set CONFIG_NGBE_SYSFS=y +CONFIG_TXGBE=m +CONFIG_TXGBE_HWMON=y +CONFIG_TXGBE_DEBUG_FS=y +# CONFIG_TXGBE_POLL_LINK_STATUS is not set +CONFIG_TXGBE_SYSFS=y # CONFIG_JME is not set # CONFIG_NET_VENDOR_MARVELL is not set CONFIG_NET_VENDOR_MELLANOX=y @@ -6504,6 +6508,7 @@ # CONFIG_GREYBUS is not set # CONFIG_STAGING is not set CONFIG_X86_PLATFORM_DEVICES=y +CONFIG_INTEL_IFS=m CONFIG_ACPI_WMI=m CONFIG_WMI_BMOF=m # CONFIG_ALIENWARE_WMI is not set
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/events/amd/uncore.c
Changed
@@ -12,11 +12,11 @@ #include <linux/init.h> #include <linux/cpu.h> #include <linux/cpumask.h> +#include <linux/cpufeature.h> +#include <linux/smp.h> -#include <asm/cpufeature.h> #include <asm/perf_event.h> #include <asm/msr.h> -#include <asm/smp.h> #define NUM_COUNTERS_NB 4 #define NUM_COUNTERS_L2 4 @@ -537,7 +537,7 @@ if (amd_uncore_llc) { uncore = *per_cpu_ptr(amd_uncore_llc, cpu); - uncore->id = per_cpu(cpu_llc_id, cpu); + uncore->id = get_llc_id(cpu); uncore = amd_uncore_find_online_sibling(uncore, amd_uncore_llc); *per_cpu_ptr(amd_uncore_llc, cpu) = uncore; @@ -755,11 +755,9 @@ fail_llc: if (boot_cpu_has(X86_FEATURE_PERFCTR_NB)) perf_pmu_unregister(&amd_nb_pmu); - if (amd_uncore_llc) - free_percpu(amd_uncore_llc); + free_percpu(amd_uncore_llc); fail_nb: - if (amd_uncore_nb) - free_percpu(amd_uncore_nb); + free_percpu(amd_uncore_nb); return ret; }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/include/asm/cpu.h
Changed
@@ -66,4 +66,23 @@ #else static inline void init_ia32_feat_ctl(struct cpuinfo_x86 *c) {} #endif + +struct ucode_cpu_info; + +int intel_cpu_collect_info(struct ucode_cpu_info *uci); + +static inline bool intel_cpu_signatures_match(unsigned int s1, unsigned int p1, + unsigned int s2, unsigned int p2) +{ + if (s1 != s2) + return false; + + /* Processor flags are either both 0 ... */ + if (!p1 && !p2) + return true; + + /* ... or they intersect. */ + return p1 & p2; +} + #endif /* _ASM_X86_CPU_H */
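Note: the pf fields compared by intel_cpu_signatures_match() are platform bitmasks. A microcode blob is acceptable either when neither side declares a platform restriction (both masks zero) or when the masks share at least one bit; for example (values illustrative), p1 = 0x02 and p2 = 0x06 intersect on bit 1 and match, while p1 = 0x02 and p2 = 0x04 have no common bit and do not.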
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/include/asm/cpufeature.h
Changed
@@ -31,7 +31,6 @@ CPUID_7_ECX, CPUID_8000_0007_EBX, CPUID_7_EDX, - CPUID_8000_001F_EAX, }; #ifdef CONFIG_X86_FEATURE_NAMES
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/include/asm/msr-index.h
Changed
@@ -78,6 +78,8 @@ /* Abbreviated from Intel SDM name IA32_CORE_CAPABILITIES */ #define MSR_IA32_CORE_CAPS 0x000000cf +#define MSR_IA32_CORE_CAPS_INTEGRITY_CAPS_BIT 2 +#define MSR_IA32_CORE_CAPS_INTEGRITY_CAPS BIT(MSR_IA32_CORE_CAPS_INTEGRITY_CAPS_BIT) #define MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT_BIT 5 #define MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT BIT(MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT_BIT) @@ -192,6 +194,11 @@ #define MSR_IA32_POWER_CTL 0x000001fc #define MSR_IA32_POWER_CTL_BIT_EE 19 +/* Abbreviated from Intel SDM name IA32_INTEGRITY_CAPABILITIES */ +#define MSR_INTEGRITY_CAPS 0x000002d9 +#define MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT 4 +#define MSR_INTEGRITY_CAPS_PERIODIC_BIST BIT(MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT) + #define MSR_LBR_NHM_FROM 0x00000680 #define MSR_LBR_NHM_TO 0x000006c0 #define MSR_LBR_CORE_FROM 0x00000040
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/include/asm/processor.h
Changed
@@ -829,6 +829,8 @@ DECLARE_PER_CPU(u64, msr_misc_features_shadow); +extern u16 get_llc_id(unsigned int cpu); + #ifdef CONFIG_CPU_SUP_AMD extern u16 amd_get_nb_id(int cpu); extern u32 amd_get_nodes_per_socket(void);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kernel/cpu/amd.c
Changed
@@ -445,7 +445,7 @@ node = numa_cpu_node(cpu); if (node == NUMA_NO_NODE) - node = per_cpu(cpu_llc_id, cpu); + node = get_llc_id(cpu); /* * On multi-fabric platform (e.g. Numascale NumaChip) a
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kernel/cpu/common.c
Changed
@@ -79,6 +79,12 @@
 /* Last level cache ID of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID;
 
+u16 get_llc_id(unsigned int cpu)
+{
+	return per_cpu(cpu_llc_id, cpu);
+}
+EXPORT_SYMBOL_GPL(get_llc_id);
+
 /* correctly size the local cpu masks */
 void __init setup_cpu_local_masks(void)
 {
@@ -957,9 +963,6 @@
 	if (c->extended_cpuid_level >= 0x8000000a)
 		c->x86_capability[CPUID_8000_000A_EDX] = cpuid_edx(0x8000000a);
 
-	if (c->extended_cpuid_level >= 0x8000001f)
-		c->x86_capability[CPUID_8000_001F_EAX] = cpuid_eax(0x8000001f);
-
 	init_scattered_cpuid_features(c);
 	init_speculation_control(c);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kernel/cpu/hygon.c
Changed
@@ -235,12 +235,12 @@ u32 ecx; ecx = cpuid_ecx(0x8000001e); - nodes_per_socket = ((ecx >> 8) & 7) + 1; + __max_die_per_package = nodes_per_socket = ((ecx >> 8) & 7) + 1; } else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) { u64 value; rdmsrl(MSR_FAM10H_NODE_ID, value); - nodes_per_socket = ((value >> 3) & 7) + 1; + __max_die_per_package = nodes_per_socket = ((value >> 3) & 7) + 1; } if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) &&
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kernel/cpu/intel.c
Changed
@@ -181,6 +181,38 @@
 	return false;
 }
 
+int intel_cpu_collect_info(struct ucode_cpu_info *uci)
+{
+	unsigned int val[2];
+	unsigned int family, model;
+	struct cpu_signature csig = { 0 };
+	unsigned int eax, ebx, ecx, edx;
+
+	memset(uci, 0, sizeof(*uci));
+
+	eax = 0x00000001;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+	csig.sig = eax;
+
+	family = x86_family(eax);
+	model = x86_model(eax);
+
+	if (model >= 5 || family > 6) {
+		/* get processor flags from MSR 0x17 */
+		native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
+		csig.pf = 1 << ((val[1] >> 18) & 7);
+	}
+
+	csig.rev = intel_get_microcode_revision();
+
+	uci->cpu_sig = csig;
+	uci->valid = 1;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(intel_cpu_collect_info);
+
 static void early_init_intel(struct cpuinfo_x86 *c)
 {
 	u64 misc_enable;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kernel/cpu/microcode/intel.c
Changed
@@ -45,20 +45,6 @@
 /* last level cache size per core */
 static int llc_size_per_core;
 
-static inline bool cpu_signatures_match(unsigned int s1, unsigned int p1,
-					unsigned int s2, unsigned int p2)
-{
-	if (s1 != s2)
-		return false;
-
-	/* Processor flags are either both 0 ... */
-	if (!p1 && !p2)
-		return true;
-
-	/* ... or they intersect. */
-	return p1 & p2;
-}
-
 /*
  * Returns 1 if update has been found, 0 otherwise.
  */
@@ -69,7 +55,7 @@
 	struct extended_signature *ext_sig;
 	int i;
 
-	if (cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf))
+	if (intel_cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf))
 		return 1;
 
 	/* Look for ext. headers: */
@@ -80,7 +66,7 @@
 	ext_sig = (void *)ext_hdr + EXT_HEADER_SIZE;
 	for (i = 0; i < ext_hdr->count; i++) {
-		if (cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf))
+		if (intel_cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf))
 			return 1;
 		ext_sig++;
 	}
@@ -342,37 +328,6 @@
 	return patch;
 }
 
-static int collect_cpu_info_early(struct ucode_cpu_info *uci)
-{
-	unsigned int val[2];
-	unsigned int family, model;
-	struct cpu_signature csig = { 0 };
-	unsigned int eax, ebx, ecx, edx;
-
-	memset(uci, 0, sizeof(*uci));
-
-	eax = 0x00000001;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	csig.sig = eax;
-
-	family = x86_family(eax);
-	model = x86_model(eax);
-
-	if ((model >= 5) || (family > 6)) {
-		/* get processor flags from MSR 0x17 */
-		native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
-		csig.pf = 1 << ((val[1] >> 18) & 7);
-	}
-
-	csig.rev = intel_get_microcode_revision();
-
-	uci->cpu_sig = csig;
-	uci->valid = 1;
-
-	return 0;
-}
-
 static void show_saved_mc(void)
 {
 #ifdef DEBUG
@@ -386,7 +341,7 @@
 		return;
 	}
 
-	collect_cpu_info_early(&uci);
+	intel_cpu_collect_info(&uci);
 
 	sig = uci.cpu_sig.sig;
 	pf = uci.cpu_sig.pf;
@@ -495,7 +450,7 @@
 	struct ucode_cpu_info uci;
 
 	if (delay_ucode_info) {
-		collect_cpu_info_early(&uci);
+		intel_cpu_collect_info(&uci);
 		print_ucode_info(&uci, current_mc_date);
 		delay_ucode_info = 0;
 	}
@@ -597,7 +552,7 @@
 	if (!(cp.data && cp.size))
 		return 0;
 
-	collect_cpu_info_early(&uci);
+	intel_cpu_collect_info(&uci);
 
 	scan_microcode(cp.data, cp.size, &uci, true);
 
@@ -630,7 +585,7 @@
 	if (!(cp.data && cp.size))
 		return NULL;
 
-	collect_cpu_info_early(uci);
+	intel_cpu_collect_info(uci);
 
 	return scan_microcode(cp.data, cp.size, uci, false);
 }
@@ -705,7 +660,7 @@
 	struct microcode_intel *p;
 	struct ucode_cpu_info uci;
 
-	collect_cpu_info_early(&uci);
+	intel_cpu_collect_info(&uci);
 
 	p = find_patch(&uci);
 	if (!p)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kernel/cpu/scattered.c
Changed
@@ -44,6 +44,11 @@ { X86_FEATURE_PROC_FEEDBACK, CPUID_EDX, 11, 0x80000007, 0 }, { X86_FEATURE_MBA, CPUID_EBX, 6, 0x80000008, 0 }, { X86_FEATURE_PERFMON_V2, CPUID_EAX, 0, 0x80000022, 0 }, + { X86_FEATURE_SME, CPUID_EAX, 0, 0x8000001f, 0 }, + { X86_FEATURE_SEV, CPUID_EAX, 1, 0x8000001f, 0 }, + { X86_FEATURE_VM_PAGE_FLUSH, CPUID_EAX, 2, 0x8000001f, 0 }, + { X86_FEATURE_SEV_ES, CPUID_EAX, 3, 0x8000001f, 0 }, + { X86_FEATURE_SME_COHERENT, CPUID_EAX, 10, 0x8000001f, 0 }, { 0, 0, 0, 0, 0 } };
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kvm/vmx/nested.c
Changed
@@ -4543,6 +4543,17 @@ vmx_switch_vmcs(vcpu, &vmx->vmcs01); + /* + * If IBRS is advertised to the vCPU, KVM must flush the indirect + * branch predictors when transitioning from L2 to L1, as L1 expects + * hardware (KVM in this case) to provide separate predictor modes. + * Bare metal isolates VMX root (host) from VMX non-root (guest), but + * doesn't isolate different VMCSs, i.e. in this case, doesn't provide + * separate modes for L2 vs L1. + */ + if (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL)) + indirect_branch_prediction_barrier(); + /* Update any VMCS fields that might have changed while L2 ran */ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr); vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/arch/x86/kvm/vmx/vmx.c
Changed
@@ -1454,8 +1454,10 @@ /* * No indirect branch prediction barrier needed when switching - * the active VMCS within a guest, e.g. on nested VM-Enter. - * The L1 VMM can protect itself with retpolines, IBPB or IBRS. + * the active VMCS within a vCPU, unless IBRS is advertised to + * the vCPU. To minimize the number of IBPBs executed, KVM + * performs IBPB on nested VM-Exit (a single nested transition + * may switch the active VMCS multiple times). */ if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev)) indirect_branch_prediction_barrier();
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/bfq-cgroup.c
Changed
@@ -613,6 +613,10 @@ struct bfq_group *bfqg; while (blkg) { + if (!blkg->online) { + blkg = blkg->parent; + continue; + } bfqg = blkg_to_bfqg(blkg); if (bfqg->online) { bio_associate_blkg_from_css(bio, &blkg->blkcg->css); @@ -907,6 +911,9 @@ unsigned long flags; int i; + if (!bfqg->online) + return; + spin_lock_irqsave(&bfqd->lock, flags); if (!entity) /* root group */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/bfq-iosched.c
Changed
@@ -2775,6 +2775,15 @@ bfqq != bfqd->in_service_queue) bfq_del_bfqq_busy(bfqd, bfqq, false); + /* + * __bfq_bic_change_cgroup() just reset bic->bfqq so that a new bfqq + * will be created to handle new io, while old bfqq will stay around + * until all the requests are completed. It's unsafe to keep bfqq->bic + * since they are not related anymore. + */ + if (bfqq_process_refs(bfqq) == 1) + bfqq->bic = NULL; + bfq_put_queue(bfqq); }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/blk-core.c
Changed
@@ -1304,6 +1304,37 @@
 	}
 }
 
+static void blk_account_io_latency(struct request *req, u64 now, const int sgrp)
+{
+#ifdef CONFIG_64BIT
+	u64 stat_time;
+	struct request_wrapper *rq_wrapper;
+
+	if (!(req->rq_flags & RQF_FROM_BLOCK)) {
+		part_stat_add(req->part, nsecs[sgrp], now - req->start_time_ns);
+		return;
+	}
+
+	rq_wrapper = request_to_wrapper(req);
+	stat_time = READ_ONCE(rq_wrapper->stat_time_ns);
+	/*
+	 * This might fail if 'stat_time_ns' is updated
+	 * in blk_mq_check_inflight_with_stat().
+	 */
+	if (likely(now > stat_time &&
+		   cmpxchg64(&rq_wrapper->stat_time_ns, stat_time, now)
+		   == stat_time)) {
+		u64 duration = stat_time ? now - stat_time :
+			now - req->start_time_ns;
+
+		part_stat_add(req->part, nsecs[sgrp], duration);
+	}
+#else
+	part_stat_add(req->part, nsecs[sgrp], now - req->start_time_ns);
+
+#endif
+}
+
 void blk_account_io_done(struct request *req, u64 now)
 {
 	/*
@@ -1315,36 +1346,15 @@
 	    !(req->rq_flags & RQF_FLUSH_SEQ)) {
 		const int sgrp = op_stat_group(req_op(req));
 		struct hd_struct *part;
-#ifdef CONFIG_64BIT
-		u64 stat_time;
-		struct request_wrapper *rq_wrapper = request_to_wrapper(req);
-#endif
 
 		part_stat_lock();
 		part = req->part;
 		update_io_ticks(part, jiffies, true);
 		part_stat_inc(part, ios[sgrp]);
-#ifdef CONFIG_64BIT
-		stat_time = READ_ONCE(rq_wrapper->stat_time_ns);
-		/*
-		 * This might fail if 'stat_time_ns' is updated
-		 * in blk_mq_check_inflight_with_stat().
-		 */
-		if (likely(now > stat_time &&
-			   cmpxchg64(&rq_wrapper->stat_time_ns, stat_time, now)
-			   == stat_time)) {
-			u64 duation = stat_time ? now - stat_time :
-				now - req->start_time_ns;
-
-			part_stat_add(req->part, nsecs[sgrp], duation);
-		}
-#else
-		part_stat_add(part, nsecs[sgrp], now - req->start_time_ns);
-#endif
+		blk_account_io_latency(req, now, sgrp);
 		if (precise_iostat)
 			part_stat_local_dec(part, in_flight[rq_data_dir(req)]);
 		part_stat_unlock();
-		hd_struct_put(part);
 	}
 }
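Note: the cmpxchg64() in blk_account_io_latency() atomically claims stat_time_ns, so each nanosecond of a request's lifetime is credited at most once: when an inflight scan has already accounted the interval up to stat_time_ns, completion adds only now - stat_time_ns; otherwise it adds the full now - req->start_time_ns.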
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/blk-flush.c
Changed
@@ -333,7 +333,7 @@
 	flush_rq->cmd_flags = REQ_OP_FLUSH | REQ_PREFLUSH;
 	flush_rq->cmd_flags |= (flags & REQ_DRV) | (flags & REQ_FAILFAST_MASK);
-	flush_rq->rq_flags |= RQF_FLUSH_SEQ;
+	flush_rq->rq_flags |= RQF_FLUSH_SEQ | RQF_FROM_BLOCK;
 	flush_rq->rq_disk = first_rq->rq_disk;
 	flush_rq->end_io = flush_end_io;
 	/*
@@ -470,7 +470,8 @@
 		gfp_t flags)
 {
 	struct blk_flush_queue *fq;
-	int rq_sz = sizeof(struct request_wrapper);
+	struct request_wrapper *wrapper;
+	int rq_sz = sizeof(struct request) + sizeof(struct request_wrapper);
 
 	fq = kzalloc_node(sizeof(*fq), flags, node);
 	if (!fq)
@@ -479,10 +480,11 @@
 	spin_lock_init(&fq->mq_flush_lock);
 
 	rq_sz = round_up(rq_sz + cmd_size, cache_line_size());
-	fq->flush_rq = kzalloc_node(rq_sz, flags, node);
-	if (!fq->flush_rq)
+	wrapper = kzalloc_node(rq_sz, flags, node);
+	if (!wrapper)
 		goto fail_rq;
 
+	fq->flush_rq = (struct request *)(wrapper + 1);
 	INIT_LIST_HEAD(&fq->flush_queue[0]);
 	INIT_LIST_HEAD(&fq->flush_queue[1]);
 	INIT_LIST_HEAD(&fq->flush_data_in_flight);
@@ -501,7 +503,7 @@
 	if (!fq)
 		return;
 
-	kfree(fq->flush_rq);
+	kfree(request_to_wrapper(fq->flush_rq));
 	kfree(fq);
 }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/blk-iocost.c
Changed
@@ -2747,8 +2747,13 @@ struct ioc_pcpu_stat *ccs; u64 on_q_ns, rq_wait_ns, size_nsec; int pidx, rw; + struct request_wrapper *rq_wrapper; - if (!ioc->enabled || !rq->alloc_time_ns || !rq->start_time_ns) + if (WARN_ON_ONCE(!(rq->rq_flags & RQF_FROM_BLOCK))) + return; + + rq_wrapper = request_to_wrapper(rq); + if (!ioc->enabled || !rq_wrapper->alloc_time_ns || !rq->start_time_ns) return; switch (req_op(rq) & REQ_OP_MASK) { @@ -2764,8 +2769,8 @@ return; } - on_q_ns = ktime_get_ns() - rq->alloc_time_ns; - rq_wait_ns = rq->start_time_ns - rq->alloc_time_ns; + on_q_ns = ktime_get_ns() - rq_wrapper->alloc_time_ns; + rq_wait_ns = rq->start_time_ns - rq_wrapper->alloc_time_ns; size_nsec = div64_u64(calc_size_vtime_cost(rq, ioc), VTIME_PER_NSEC); ccs = get_cpu_ptr(ioc->pcpu_stat);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/blk-mq-debugfs.c
Changed
@@ -360,8 +360,9 @@ blk_flags_show(m, rq->cmd_flags & ~REQ_OP_MASK, cmd_flag_name, ARRAY_SIZE(cmd_flag_name)); seq_puts(m, ", .rq_flags="); - blk_flags_show(m, (__force unsigned int)rq->rq_flags, rqf_name, - ARRAY_SIZE(rqf_name)); + blk_flags_show(m, + (__force unsigned int)(rq->rq_flags & ~RQF_FROM_BLOCK), + rqf_name, ARRAY_SIZE(rqf_name)); seq_printf(m, ", .state=%s", blk_mq_rq_state_name(blk_mq_rq_state(rq))); seq_printf(m, ", .tag=%d, .internal_tag=%d", rq->tag, rq->internal_tag);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/blk-mq.c
Changed
@@ -115,9 +115,8 @@
 	struct request_wrapper *rq_wrapper;
 
 	mi->inflight[rq_data_dir(rq)]++;
-	if (!rq->part)
+	if (!rq->part || !(rq->rq_flags & RQF_FROM_BLOCK))
 		return true;
-
 	/*
 	 * If the request is started after 'part->stat_time' is set,
 	 * don't update 'nsces' here.
@@ -375,7 +374,7 @@
 	rq->q = data->q;
 	rq->mq_ctx = data->ctx;
 	rq->mq_hctx = data->hctx;
-	rq->rq_flags = 0;
+	rq->rq_flags = RQF_FROM_BLOCK;
 	rq->cmd_flags = data->cmd_flags;
 	if (data->flags & BLK_MQ_REQ_PM)
 		rq->rq_flags |= RQF_PM;
@@ -387,7 +386,7 @@
 	rq->rq_disk = NULL;
 	rq->part = NULL;
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
-	rq->alloc_time_ns = alloc_time_ns;
+	request_to_wrapper(rq)->alloc_time_ns = alloc_time_ns;
 #endif
 	request_to_wrapper(rq)->stat_time_ns = 0;
 	if (blk_mq_need_time_stamp(rq))
@@ -2601,8 +2600,9 @@
 	 * rq_size is the size of the request plus driver payload, rounded
 	 * to the cacheline size
 	 */
-	rq_size = round_up(sizeof(struct request_wrapper) + set->cmd_size,
-			   cache_line_size());
+	rq_size = round_up(sizeof(struct request) +
+			   sizeof(struct request_wrapper) + set->cmd_size,
+			   cache_line_size());
 	left = rq_size * depth;
 
 	for (i = 0; i < depth; ) {
@@ -2642,7 +2642,7 @@
 		to_do = min(entries_per_page, depth - i);
 		left -= to_do * rq_size;
 		for (j = 0; j < to_do; j++) {
-			struct request *rq = p;
+			struct request *rq = p + sizeof(struct request_wrapper);
 
 			tags->static_rqs[i] = rq;
 			if (blk_mq_init_request(set, rq, hctx_idx, node)) {
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/blk-mq.h
Changed
@@ -37,6 +37,20 @@ struct kobject kobj; } ____cacheline_aligned_in_smp; +struct request_wrapper { + /* Time that I/O was counted in part_get_stat_info(). */ + u64 stat_time_ns; +#ifdef CONFIG_BLK_RQ_ALLOC_TIME + /* Time that the first bio started allocating this request. */ + u64 alloc_time_ns; +#endif +} ____cacheline_aligned; + +static inline struct request_wrapper *request_to_wrapper(void *rq) +{ + return rq - sizeof(struct request_wrapper); +} + void blk_mq_exit_queue(struct request_queue *q); int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr); void blk_mq_wake_waiters(struct request_queue *q);
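Note: request_to_wrapper() assumes the wrapper sits immediately in front of the struct request it annotates, which is how the blk-flush.c and blk-mq.c hunks above lay out the allocations. A minimal sketch of the convention (local variables illustrative, error handling omitted; cmd_size stands in for the driver payload):

    /* reserve the wrapper as a prefix, then hand out the request that follows it */
    struct request_wrapper *wrapper =
            kzalloc(sizeof(*wrapper) + sizeof(struct request) + cmd_size, GFP_KERNEL);
    struct request *rq = (struct request *)(wrapper + 1);

    WARN_ON(request_to_wrapper(rq) != wrapper); /* steps back sizeof(*wrapper) bytes */

Keeping stat_time_ns and alloc_time_ns in a prefix wrapper rather than in struct request itself leaves the size and layout of struct request unchanged, presumably to preserve KABI compatibility for existing consumers.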
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/block/elevator.c
Changed
@@ -630,7 +630,8 @@ if (q->tag_set && q->tag_set->flags & BLK_MQ_F_NO_SCHED_BY_DEFAULT) return NULL; - if (q->nr_hw_queues != 1) + if (q->nr_hw_queues != 1 && + !blk_mq_is_sbitmap_shared(q->tag_set->flags)) return NULL; return elevator_get(q, "mq-deadline", false);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/acpi/apei/einj.c
Changed
@@ -545,6 +545,8 @@ != REGION_INTERSECTS) && (region_intersects(base_addr, size, IORESOURCE_MEM, IORES_DESC_PERSISTENT_MEMORY) != REGION_INTERSECTS) && + (region_intersects(base_addr, size, IORESOURCE_MEM, IORES_DESC_SOFT_RESERVED) + != REGION_INTERSECTS) && !arch_is_platform_page(base_addr))) return -EINVAL;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/android/binder.c
Changed
@@ -6081,6 +6081,7 @@ .open = binder_open, .flush = binder_flush, .release = binder_release, + .may_pollfree = true, }; static int __init init_binder_device(const char *name)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/android/binder_alloc.c
Changed
@@ -212,7 +212,7 @@ mm = alloc->vma_vm_mm; if (mm) { - mmap_read_lock(mm); + mmap_write_lock(mm); vma = alloc->vma; } @@ -270,7 +270,7 @@ trace_binder_alloc_page_end(alloc, index); } if (mm) { - mmap_read_unlock(mm); + mmap_write_unlock(mm); mmput(mm); } return 0; @@ -303,7 +303,7 @@ } err_no_vma: if (mm) { - mmap_read_unlock(mm); + mmap_write_unlock(mm); mmput(mm); } return vma ? -ENOMEM : -ESRCH;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/block/virtio_blk.c
Changed
@@ -933,7 +933,7 @@ mutex_lock(&vblk->vdev_mutex); /* Stop all the virtqueues. */ - vdev->config->reset(vdev); + virtio_reset_device(vdev); /* Virtqueues are stopped, nothing can use vblk->vdev anymore. */ vblk->vdev = NULL; @@ -953,7 +953,7 @@ struct virtio_blk *vblk = vdev->priv; /* Ensure we don't receive any more interrupts */ - vdev->config->reset(vdev); + virtio_reset_device(vdev); /* Make sure no work handler is accessing the device. */ flush_work(&vblk->config_work);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/block/zram/zram_drv.c
Changed
@@ -1983,31 +1983,53 @@ static int zram_remove(struct zram *zram) { struct block_device *bdev; + bool claimed; bdev = bdget_disk(zram->disk, 0); if (!bdev) return -ENOMEM; mutex_lock(&bdev->bd_mutex); - if (bdev->bd_openers || zram->claim) { + if (bdev->bd_openers) { mutex_unlock(&bdev->bd_mutex); bdput(bdev); return -EBUSY; } - zram->claim = true; + claimed = zram->claim; + if (!claimed) + zram->claim = true; mutex_unlock(&bdev->bd_mutex); zram_debugfs_unregister(zram); - /* Make sure all the pending I/O are finished */ - fsync_bdev(bdev); - zram_reset_device(zram); + if (claimed) { + /* + * If we were claimed by reset_store(), del_gendisk() will + * wait until reset_store() is done, so nothing need to do. + */ + ; + } else { + /* Make sure all the pending I/O are finished */ + fsync_bdev(bdev); + zram_reset_device(zram); + } bdput(bdev); pr_info("Removed device: %s\n", zram->disk->disk_name); del_gendisk(zram->disk); + + /* del_gendisk drains pending reset_store */ + WARN_ON_ONCE(claimed && zram->claim); + + /* + * disksize_store() may be called in between zram_reset_device() + * and del_gendisk(), so run the last reset to avoid leaking + * anything allocated with disksize_store() + */ + zram_reset_device(zram); + blk_cleanup_queue(zram->disk->queue); put_disk(zram->disk); kfree(zram); @@ -2085,7 +2107,7 @@ static int zram_remove_cb(int id, void *ptr, void *data) { - zram_remove(ptr); + WARN_ON_ONCE(zram_remove(ptr)); return 0; }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/char/Kconfig
Changed
@@ -478,16 +478,6 @@ help pin memory driver -config HISI_SVM - tristate "Hisilicon svm driver" - depends on ARM64 && ARM_SMMU_V3 && MMU_NOTIFIER - default m - help - This driver provides character-level access to Hisilicon - SVM chipset. Typically, you can bind a task to the - svm and share the virtual memory with hisilicon svm device. - When in doubt, say "N". - config RANDOM_TRUST_CPU bool "Initialize RNG using CPU RNG instructions" default y
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/char/Makefile
Changed
@@ -48,4 +48,3 @@ obj-$(CONFIG_POWERNV_OP_PANEL) += powernv-op-panel.o obj-$(CONFIG_ADI) += adi.o obj-$(CONFIG_PIN_MEMORY_DEV) += pin_memory.o -obj-$(CONFIG_HISI_SVM) += svm.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/char/hw_random/virtio-rng.c
Changed
@@ -134,7 +134,7 @@ vi->hwrng_removed = true; vi->data_avail = 0; complete(&vi->have_data); - vdev->config->reset(vdev); + virtio_reset_device(vdev); vi->busy = false; if (vi->hwrng_register_done) hwrng_unregister(&vi->hwrng);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/char/svm.c
Deleted
@@ -1,1772 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Copyright (c) 2017-2018 Hisilicon Limited. - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - */ - -#include <asm/esr.h> -#include <linux/mmu_context.h> - -#include <linux/delay.h> -#include <linux/err.h> -#include <linux/interrupt.h> -#include <linux/io.h> -#include <linux/iommu.h> -#include <linux/miscdevice.h> -#include <linux/mman.h> -#include <linux/mmu_notifier.h> -#include <linux/module.h> -#include <linux/of.h> -#include <linux/of_address.h> -#include <linux/of_device.h> -#include <linux/platform_device.h> -#include <linux/ptrace.h> -#include <linux/security.h> -#include <linux/slab.h> -#include <linux/uaccess.h> -#include <linux/sched.h> -#include <linux/hugetlb.h> -#include <linux/sched/mm.h> -#include <linux/msi.h> -#include <linux/acpi.h> - -#define SVM_DEVICE_NAME "svm" -#define ASID_SHIFT 48 - -#define SVM_IOCTL_REMAP_PROC 0xfff4 -#define SVM_IOCTL_UNPIN_MEMORY 0xfff5 -#define SVM_IOCTL_PIN_MEMORY 0xfff7 -#define SVM_IOCTL_GET_PHYS 0xfff9 -#define SVM_IOCTL_LOAD_FLAG 0xfffa -#define SVM_IOCTL_SET_RC 0xfffc -#define SVM_IOCTL_PROCESS_BIND 0xffff - -#define CORE_SID 0 - -#define SVM_IOCTL_RELEASE_PHYS32 0xfff3 -#define SVM_REMAP_MEM_LEN_MAX (16 * 1024 * 1024) -#define MMAP_PHY32_MAX (16 * 1024 * 1024) - -static int probe_index; -static LIST_HEAD(child_list); -static DECLARE_RWSEM(svm_sem); -static struct rb_root svm_process_root = RB_ROOT; -static struct mutex svm_process_mutex; - -struct core_device { - struct device dev; - struct iommu_group *group; - struct iommu_domain *domain; - u8 smmu_bypass; - struct list_head entry; -}; - -struct svm_device { - unsigned long long id; - struct miscdevice miscdev; - struct device *dev; - phys_addr_t l2buff; - unsigned long l2size; -}; - -struct svm_bind_process { - pid_t vpid; - u64 ttbr; - u64 tcr; - int pasid; - u32 flags; -#define SVM_BIND_PID (1 << 0) -}; - -/* - *svm_process is released in svm_notifier_release() when mm refcnt - *goes down zero. We should access svm_process only in the context - *where mm_struct is valid, which means we should always get mm - *refcnt first. - */ -struct svm_process { - struct pid *pid; - struct mm_struct *mm; - unsigned long asid; - struct rb_node rb_node; - struct mmu_notifier notifier; - /* For postponed release */ - struct rcu_head rcu; - int pasid; - struct mutex mutex; - struct rb_root sdma_list; - struct svm_device *sdev; - struct iommu_sva *sva; -}; - -struct svm_sdma { - struct rb_node node; - unsigned long addr; - int nr_pages; - struct page **pages; - atomic64_t ref; -}; - -struct svm_proc_mem { - u32 dev_id; - u32 len; - u64 pid; - u64 vaddr; - u64 buf; -}; - -static char *svm_cmd_to_string(unsigned int cmd) -{ - switch (cmd) { - case SVM_IOCTL_PROCESS_BIND: - return "bind"; - case SVM_IOCTL_GET_PHYS: - return "get phys"; - case SVM_IOCTL_SET_RC: - return "set rc"; - case SVM_IOCTL_PIN_MEMORY: - return "pin memory"; - case SVM_IOCTL_UNPIN_MEMORY: - return "unpin memory"; - case SVM_IOCTL_REMAP_PROC: - return "remap proc"; - case SVM_IOCTL_LOAD_FLAG: - return "load flag"; - case SVM_IOCTL_RELEASE_PHYS32: - return "release phys"; - default: - return "unsupported"; - } - - return NULL; -} - -/* - * image word of slot - * SVM_IMAGE_WORD_INIT: initial value, indicating that the slot is not used. 
- * SVM_IMAGE_WORD_VALID: valid data is filled in the slot - * SVM_IMAGE_WORD_DONE: the DMA operation is complete when the TS uses this address, - * so, this slot can be freed. - */ -#define SVM_IMAGE_WORD_INIT 0x0 -#define SVM_IMAGE_WORD_VALID 0xaa55aa55 -#define SVM_IMAGE_WORD_DONE 0x55ff55ff - -/* - * The length of this structure must be 64 bytes, which is the agreement with the TS. - * And the data type and sequence cannot be changed, because the TS core reads data - * based on the data type and sequence. - * image_word: slot status. For details, see SVM_IMAGE_WORD_xxx - * pid: pid of process which ioctl svm device to get physical addr, it is used for - * verification by TS. - * data_type: used to determine the data type by TS. Currently, data type must be - * SVM_VA2PA_TYPE_DMA. - * char data48: for the data type SVM_VA2PA_TYPE_DMA, the DMA address is stored. - */ -struct svm_va2pa_slot { - int image_word; - int resv; - int pid; - int data_type; - union { - char user_defined_data48; - struct { - unsigned long phys; - unsigned long len; - char reserved32; - }; - }; -}; - -struct svm_va2pa_trunk { - struct svm_va2pa_slot *slots; - int slot_total; - int slot_used; - unsigned long *bitmap; - struct mutex mutex; -}; - -struct svm_va2pa_trunk va2pa_trunk; - -#define SVM_VA2PA_TRUNK_SIZE_MAX 0x3200000 -#define SVM_VA2PA_MEMORY_ALIGN 64 -#define SVM_VA2PA_SLOT_SIZE sizeof(struct svm_va2pa_slot) -#define SVM_VA2PA_TYPE_DMA 0x1 -#define SVM_MEM_REG "va2pa trunk" -#define SVM_VA2PA_CLEAN_BATCH_NUM 0x80 - -struct device_node *svm_find_mem_reg_node(struct device *dev, const char *compat) -{ - int index = 0; - struct device_node *tmp = NULL; - struct device_node *np = dev->of_node; - - for (; ; index++) { - tmp = of_parse_phandle(np, "memory-region", index); - if (!tmp) - break; - - if (of_device_is_compatible(tmp, compat)) - return tmp; - - of_node_put(tmp); - } - - return NULL; -} - -static int svm_parse_trunk_memory(struct device *dev, phys_addr_t *base, unsigned long *size) -{ - int err; - struct resource r; - struct device_node *trunk = NULL; - - trunk = svm_find_mem_reg_node(dev, SVM_MEM_REG); - if (!trunk) { - dev_err(dev, "Didn't find reserved memory\n"); - return -EINVAL; - } - - err = of_address_to_resource(trunk, 0, &r); - of_node_put(trunk); - if (err) { - dev_err(dev, "Couldn't address to resource for reserved memory\n"); - return -ENOMEM; - } - - *base = r.start; - *size = resource_size(&r); - - return 0; -} - -static int svm_setup_trunk(struct device *dev, phys_addr_t base, unsigned long size) -{ - int slot_total; - unsigned long *bitmap = NULL; - struct svm_va2pa_slot *slot = NULL; - - if (!IS_ALIGNED(base, SVM_VA2PA_MEMORY_ALIGN)) { - dev_err(dev, "Didn't aligned to %u\n", SVM_VA2PA_MEMORY_ALIGN); - return -EINVAL; - } - - if ((size == 0) || (size > SVM_VA2PA_TRUNK_SIZE_MAX)) { - dev_err(dev, "Size of reserved memory is not right\n"); - return -EINVAL; - } - - slot_total = size / SVM_VA2PA_SLOT_SIZE; - if (slot_total < BITS_PER_LONG) - return -EINVAL; - - bitmap = kvcalloc(slot_total / BITS_PER_LONG, sizeof(unsigned long), GFP_KERNEL); - if (!bitmap) { - dev_err(dev, "alloc memory failed\n"); - return -ENOMEM; - } - - slot = ioremap(base, size); - if (!slot) { - kvfree(bitmap); - dev_err(dev, "Ioremap trunk failed\n"); - return -ENXIO; - } - - va2pa_trunk.slots = slot; - va2pa_trunk.slot_used = 0; - va2pa_trunk.slot_total = slot_total; - va2pa_trunk.bitmap = bitmap; - mutex_init(&va2pa_trunk.mutex); - - return 0; -} - -static void svm_remove_trunk(struct device *dev) -{ - 
iounmap(va2pa_trunk.slots); - kvfree(va2pa_trunk.bitmap); - - va2pa_trunk.slots = NULL; - va2pa_trunk.bitmap = NULL; -} - -static void svm_set_slot_valid(unsigned long index, unsigned long phys, unsigned long len) -{ - struct svm_va2pa_slot *slot = &va2pa_trunk.slotsindex; - - slot->phys = phys; - slot->len = len; - slot->image_word = SVM_IMAGE_WORD_VALID; - slot->pid = current->tgid; - slot->data_type = SVM_VA2PA_TYPE_DMA; - __bitmap_set(va2pa_trunk.bitmap, index, 1); - va2pa_trunk.slot_used++; -} - -static void svm_set_slot_init(unsigned long index) -{ - struct svm_va2pa_slot *slot = &va2pa_trunk.slotsindex; - - slot->image_word = SVM_IMAGE_WORD_INIT; - __bitmap_clear(va2pa_trunk.bitmap, index, 1); - va2pa_trunk.slot_used--; -} - -static void svm_clean_done_slots(void) -{ - int used = va2pa_trunk.slot_used; - int count = 0; - long temp = -1; - phys_addr_t addr; - unsigned long *bitmap = va2pa_trunk.bitmap; - - for (; count < used && count < SVM_VA2PA_CLEAN_BATCH_NUM;) { - temp = find_next_bit(bitmap, va2pa_trunk.slot_total, temp + 1); - if (temp == va2pa_trunk.slot_total) - break; - - count++; - if (va2pa_trunk.slotstemp.image_word != SVM_IMAGE_WORD_DONE) - continue; - - addr = (phys_addr_t)va2pa_trunk.slotstemp.phys; - put_page(pfn_to_page(PHYS_PFN(addr))); - svm_set_slot_init(temp); - } -} - -static int svm_find_slot_init(unsigned long *index) -{ - int temp; - unsigned long *bitmap = va2pa_trunk.bitmap; - - temp = find_first_zero_bit(bitmap, va2pa_trunk.slot_total); - if (temp == va2pa_trunk.slot_total) - return -ENOSPC; - - *index = temp; - return 0; -} - -static int svm_va2pa_trunk_init(struct device *dev) -{ - int err; - phys_addr_t base; - unsigned long size; - - err = svm_parse_trunk_memory(dev, &base, &size); - if (err) - return err; - - err = svm_setup_trunk(dev, base, size); - if (err) - return err; - - return 0; -} - -static struct svm_process *find_svm_process(unsigned long asid) -{ - struct rb_node *node = svm_process_root.rb_node; - - while (node) { - struct svm_process *process = NULL; - - process = rb_entry(node, struct svm_process, rb_node); - if (asid < process->asid) - node = node->rb_left; - else if (asid > process->asid) - node = node->rb_right; - else - return process; - } - - return NULL; -} - -static void insert_svm_process(struct svm_process *process) -{ - struct rb_node **p = &svm_process_root.rb_node; - struct rb_node *parent = NULL; - - while (*p) { - struct svm_process *tmp_process = NULL; - - parent = *p; - tmp_process = rb_entry(parent, struct svm_process, rb_node); - if (process->asid < tmp_process->asid) - p = &(*p)->rb_left; - else if (process->asid > tmp_process->asid) - p = &(*p)->rb_right; - else { - WARN_ON_ONCE("asid already in the tree"); - return; - } - } - - rb_link_node(&process->rb_node, parent, p); - rb_insert_color(&process->rb_node, &svm_process_root); -} - -static void delete_svm_process(struct svm_process *process) -{ - rb_erase(&process->rb_node, &svm_process_root); - RB_CLEAR_NODE(&process->rb_node); -} - -static struct svm_device *file_to_sdev(struct file *file) -{ - return container_of(file->private_data, - struct svm_device, miscdev); -} - -static inline struct core_device *to_core_device(struct device *d) -{ - return container_of(d, struct core_device, dev); -} - -static struct svm_sdma *svm_find_sdma(struct svm_process *process, - unsigned long addr, int nr_pages) -{ - struct rb_node *node = process->sdma_list.rb_node; - - while (node) { - struct svm_sdma *sdma = NULL; - - sdma = rb_entry(node, struct svm_sdma, node); - if (addr < 
sdma->addr) - node = node->rb_left; - else if (addr > sdma->addr) - node = node->rb_right; - else if (nr_pages < sdma->nr_pages) - node = node->rb_left; - else if (nr_pages > sdma->nr_pages) - node = node->rb_right; - else - return sdma; - } - - return NULL; -} - -static int svm_insert_sdma(struct svm_process *process, struct svm_sdma *sdma) -{ - struct rb_node **p = &process->sdma_list.rb_node; - struct rb_node *parent = NULL; - - while (*p) { - struct svm_sdma *tmp_sdma = NULL; - - parent = *p; - tmp_sdma = rb_entry(parent, struct svm_sdma, node); - if (sdma->addr < tmp_sdma->addr) - p = &(*p)->rb_left; - else if (sdma->addr > tmp_sdma->addr) - p = &(*p)->rb_right; - else if (sdma->nr_pages < tmp_sdma->nr_pages) - p = &(*p)->rb_left; - else if (sdma->nr_pages > tmp_sdma->nr_pages) - p = &(*p)->rb_right; - else { - /* - * add reference count and return -EBUSY - * to free former alloced one. - */ - atomic64_inc(&tmp_sdma->ref); - return -EBUSY; - } - } - - rb_link_node(&sdma->node, parent, p); - rb_insert_color(&sdma->node, &process->sdma_list); - - return 0; -} - -static void svm_remove_sdma(struct svm_process *process, - struct svm_sdma *sdma, bool try_rm) -{ - int null_count = 0; - - if (try_rm && (!atomic64_dec_and_test(&sdma->ref))) - return; - - rb_erase(&sdma->node, &process->sdma_list); - RB_CLEAR_NODE(&sdma->node); - - while (sdma->nr_pages--) { - if (sdma->pagessdma->nr_pages == NULL) { - pr_err("null pointer, nr_pages:%d.\n", sdma->nr_pages); - null_count++; - continue; - } - - put_page(sdma->pagessdma->nr_pages); - } - - if (null_count) - dump_stack(); - - kvfree(sdma->pages); - kfree(sdma); -} - -static int svm_pin_pages(unsigned long addr, int nr_pages, - struct page **pages) -{ - int err; - - err = get_user_pages_fast(addr, nr_pages, 1, pages); - if (err > 0 && err < nr_pages) { - while (err--) - put_page(pageserr); - err = -EFAULT; - } else if (err == 0) { - err = -EFAULT; - } - - return err; -} - -static int svm_add_sdma(struct svm_process *process, - unsigned long addr, unsigned long size) -{ - int err; - struct svm_sdma *sdma = NULL; - - sdma = kzalloc(sizeof(struct svm_sdma), GFP_KERNEL); - if (sdma == NULL) - return -ENOMEM; - - atomic64_set(&sdma->ref, 1); - sdma->addr = addr & PAGE_MASK; - sdma->nr_pages = (PAGE_ALIGN(size + addr) >> PAGE_SHIFT) - - (sdma->addr >> PAGE_SHIFT); - sdma->pages = kvcalloc(sdma->nr_pages, sizeof(char *), GFP_KERNEL); - if (sdma->pages == NULL) { - err = -ENOMEM; - goto err_free_sdma; - } - - /* - * If always pin the same addr with the same nr_pages, pin pages - * maybe should move after insert sdma with mutex lock. 
- */ - err = svm_pin_pages(sdma->addr, sdma->nr_pages, sdma->pages); - if (err < 0) { - pr_err("%s: failed to pin pages addr 0x%pK, size 0x%lx\n", - __func__, (void *)addr, size); - goto err_free_pages; - } - - err = svm_insert_sdma(process, sdma); - if (err < 0) { - err = 0; - pr_debug("%s: sdma already exist!\n", __func__); - goto err_unpin_pages; - } - - return err; - -err_unpin_pages: - while (sdma->nr_pages--) - put_page(sdma->pagessdma->nr_pages); -err_free_pages: - kvfree(sdma->pages); -err_free_sdma: - kfree(sdma); - - return err; -} - -static int svm_pin_memory(unsigned long __user *arg) -{ - int err; - struct svm_process *process = NULL; - unsigned long addr, size, asid; - - if (!acpi_disabled) - return -EPERM; - - if (arg == NULL) - return -EINVAL; - - if (get_user(addr, arg)) - return -EFAULT; - - if (get_user(size, arg + 1)) - return -EFAULT; - - if ((addr + size <= addr) || (size >= (u64)UINT_MAX) || (addr == 0)) - return -EINVAL; - - asid = arm64_mm_context_get(current->mm); - if (!asid) - return -ENOSPC; - - mutex_lock(&svm_process_mutex); - process = find_svm_process(asid); - if (process == NULL) { - mutex_unlock(&svm_process_mutex); - err = -ESRCH; - goto out; - } - mutex_unlock(&svm_process_mutex); - - mutex_lock(&process->mutex); - err = svm_add_sdma(process, addr, size); - mutex_unlock(&process->mutex); - -out: - arm64_mm_context_put(current->mm); - - return err; -} - -static int svm_unpin_memory(unsigned long __user *arg) -{ - int err = 0, nr_pages; - struct svm_sdma *sdma = NULL; - unsigned long addr, size, asid; - struct svm_process *process = NULL; - - if (!acpi_disabled) - return -EPERM; - - if (arg == NULL) - return -EINVAL; - - if (get_user(addr, arg)) - return -EFAULT; - - if (get_user(size, arg + 1)) - return -EFAULT; - - if (ULONG_MAX - addr < size) - return -EINVAL; - - asid = arm64_mm_context_get(current->mm); - if (!asid) - return -ENOSPC; - - nr_pages = (PAGE_ALIGN(size + addr) >> PAGE_SHIFT) - - ((addr & PAGE_MASK) >> PAGE_SHIFT); - addr &= PAGE_MASK; - - mutex_lock(&svm_process_mutex); - process = find_svm_process(asid); - if (process == NULL) { - mutex_unlock(&svm_process_mutex); - err = -ESRCH; - goto out; - } - mutex_unlock(&svm_process_mutex); - - mutex_lock(&process->mutex); - sdma = svm_find_sdma(process, addr, nr_pages); - if (sdma == NULL) { - mutex_unlock(&process->mutex); - err = -ESRCH; - goto out; - } - - svm_remove_sdma(process, sdma, true); - mutex_unlock(&process->mutex); - -out: - arm64_mm_context_put(current->mm); - - return err; -} - -static void svm_unpin_all(struct svm_process *process) -{ - struct rb_node *node = NULL; - - while ((node = rb_first(&process->sdma_list))) - svm_remove_sdma(process, - rb_entry(node, struct svm_sdma, node), - false); -} - -static int svm_acpi_bind_core(struct core_device *cdev, void *data) -{ - struct task_struct *task = NULL; - struct svm_process *process = data; - - if (cdev->smmu_bypass) - return 0; - - task = get_pid_task(process->pid, PIDTYPE_PID); - if (!task) { - pr_err("failed to get task_struct\n"); - return -ESRCH; - } - - process->sva = iommu_sva_bind_device(&cdev->dev, task->mm, NULL); - if (!process->sva) { - pr_err("failed to bind device\n"); - return PTR_ERR(process->sva); - } - - process->pasid = task->mm->pasid; - put_task_struct(task); - - return 0; -} - -static int svm_dt_bind_core(struct device *dev, void *data) -{ - struct task_struct *task = NULL; - struct svm_process *process = data; - struct core_device *cdev = to_core_device(dev); - - if (cdev->smmu_bypass) - return 0; - - task = 
get_pid_task(process->pid, PIDTYPE_PID); - if (!task) { - pr_err("failed to get task_struct\n"); - return -ESRCH; - } - - process->sva = iommu_sva_bind_device(dev, task->mm, NULL); - if (!process->sva) { - pr_err("failed to bind device\n"); - return PTR_ERR(process->sva); - } - - process->pasid = task->mm->pasid; - put_task_struct(task); - - return 0; -} - -static void svm_dt_bind_cores(struct svm_process *process) -{ - device_for_each_child(process->sdev->dev, process, svm_dt_bind_core); -} - -static void svm_acpi_bind_cores(struct svm_process *process) -{ - struct core_device *pos = NULL; - - list_for_each_entry(pos, &child_list, entry) { - svm_acpi_bind_core(pos, process); - } -} - -static void svm_process_free(struct mmu_notifier *mn) -{ - struct svm_process *process = NULL; - - process = container_of(mn, struct svm_process, notifier); - svm_unpin_all(process); - arm64_mm_context_put(process->mm); - kfree(process); -} - -static void svm_process_release(struct svm_process *process) -{ - delete_svm_process(process); - put_pid(process->pid); - - mmu_notifier_put(&process->notifier); -} - -static void svm_notifier_release(struct mmu_notifier *mn, - struct mm_struct *mm) -{ - struct svm_process *process = NULL; - - process = container_of(mn, struct svm_process, notifier); - - /* - * No need to call svm_unbind_cores(), as iommu-sva will do the - * unbind in its mm_notifier callback. - */ - - mutex_lock(&svm_process_mutex); - svm_process_release(process); - mutex_unlock(&svm_process_mutex); -} - -static struct mmu_notifier_ops svm_process_mmu_notifier = { - .release = svm_notifier_release, - .free_notifier = svm_process_free, -}; - -static struct svm_process * -svm_process_alloc(struct svm_device *sdev, struct pid *pid, - struct mm_struct *mm, unsigned long asid) -{ - struct svm_process *process = kzalloc(sizeof(*process), GFP_ATOMIC); - - if (!process) - return ERR_PTR(-ENOMEM); - - process->sdev = sdev; - process->pid = pid; - process->mm = mm; - process->asid = asid; - process->sdma_list = RB_ROOT; //lint !e64 - mutex_init(&process->mutex); - process->notifier.ops = &svm_process_mmu_notifier; - - return process; -} - -static struct task_struct *svm_get_task(struct svm_bind_process params) -{ - struct task_struct *task = NULL; - - if (params.flags & ~SVM_BIND_PID) - return ERR_PTR(-EINVAL); - - if (params.flags & SVM_BIND_PID) { - struct mm_struct *mm = NULL; - - task = find_get_task_by_vpid(params.vpid); - if (task == NULL) - return ERR_PTR(-ESRCH); - - /* check the permission */ - mm = mm_access(task, PTRACE_MODE_ATTACH_REALCREDS); - if (IS_ERR_OR_NULL(mm)) { - pr_err("cannot access mm\n"); - put_task_struct(task); - return ERR_PTR(-ESRCH); - } - - mmput(mm); - } else { - get_task_struct(current); - task = current; - } - - return task; -} - -static int svm_process_bind(struct task_struct *task, - struct svm_device *sdev, u64 *ttbr, u64 *tcr, int *pasid) -{ - int err; - unsigned long asid; - struct pid *pid = NULL; - struct svm_process *process = NULL; - struct mm_struct *mm = NULL; - - if ((ttbr == NULL) || (tcr == NULL) || (pasid == NULL)) - return -EINVAL; - - pid = get_task_pid(task, PIDTYPE_PID); - if (pid == NULL) - return -EINVAL; - - mm = get_task_mm(task); - if (!mm) { - err = -EINVAL; - goto err_put_pid; - } - - asid = arm64_mm_context_get(mm); - if (!asid) { - err = -ENOSPC; - goto err_put_mm; - } - - /* If a svm_process already exists, use it */ - mutex_lock(&svm_process_mutex); - process = find_svm_process(asid); - if (process == NULL) { - process = svm_process_alloc(sdev, 
pid, mm, asid); - if (IS_ERR(process)) { - err = PTR_ERR(process); - mutex_unlock(&svm_process_mutex); - goto err_put_mm_context; - } - err = mmu_notifier_register(&process->notifier, mm); - if (err) { - mutex_unlock(&svm_process_mutex); - goto err_free_svm_process; - } - - insert_svm_process(process); - - if (acpi_disabled) - svm_dt_bind_cores(process); - else - svm_acpi_bind_cores(process); - - mutex_unlock(&svm_process_mutex); - } else { - mutex_unlock(&svm_process_mutex); - arm64_mm_context_put(mm); - put_pid(pid); - } - - - *ttbr = virt_to_phys(mm->pgd) | asid << ASID_SHIFT; - *tcr = read_sysreg(tcr_el1); - *pasid = process->pasid; - - mmput(mm); - return 0; - -err_free_svm_process: - kfree(process); -err_put_mm_context: - arm64_mm_context_put(mm); -err_put_mm: - mmput(mm); -err_put_pid: - put_pid(pid); - - return err; -} - -static pte_t *svm_get_pte(struct vm_area_struct *vma, - pud_t *pud, - unsigned long addr, - unsigned long *page_size, - unsigned long *offset) -{ - pte_t *pte = NULL; - unsigned long size = 0; - - if (is_vm_hugetlb_page(vma)) { - if (pud_present(*pud)) { - if (pud_val(*pud) && !(pud_val(*pud) & PUD_TABLE_BIT)) { - pte = (pte_t *)pud; - *offset = addr & (PUD_SIZE - 1); - size = PUD_SIZE; - } else { - pte = (pte_t *)pmd_offset(pud, addr); - *offset = addr & (PMD_SIZE - 1); - size = PMD_SIZE; - } - } else { - pr_err("%s:hugetlb but pud not present\n", __func__); - } - } else { - pmd_t *pmd = pmd_offset(pud, addr); - - if (pmd_none(*pmd)) - return NULL; - - if (pmd_trans_huge(*pmd)) { - pte = (pte_t *)pmd; - *offset = addr & (PMD_SIZE - 1); - size = PMD_SIZE; - } else { - pte = pte_offset_map(pmd, addr); - *offset = addr & (PAGE_SIZE - 1); - size = PAGE_SIZE; - } - } - - if (page_size) - *page_size = size; - - return pte; -} - -/* Must be called with mmap_lock held */ -static pte_t *svm_walk_pt(unsigned long addr, unsigned long *page_size, - unsigned long *offset) -{ - pgd_t *pgd = NULL; - p4d_t *p4d = NULL; - pud_t *pud = NULL; - struct mm_struct *mm = current->mm; - struct vm_area_struct *vma = NULL; - - vma = find_vma(mm, addr); - if (!vma) - return NULL; - - pgd = pgd_offset(mm, addr); - if (pgd_none(*pgd)) - return NULL; - - p4d = p4d_offset(pgd, addr); - if (p4d_none(*p4d)) - return NULL; - - pud = pud_offset(p4d, addr); - if (pud_none(*pud)) - return NULL; - - return svm_get_pte(vma, pud, addr, page_size, offset); -} - -static int svm_get_phys(unsigned long __user *arg) -{ - int err; - pte_t *ptep = NULL; - pte_t pte; - unsigned long index = 0; - struct page *page; - unsigned long addr, phys, offset; - struct mm_struct *mm = current->mm; - struct vm_area_struct *vma = NULL; - unsigned long len; - - if (!acpi_disabled) - return -EPERM; - - if (get_user(addr, arg)) - return -EFAULT; - - down_read(&mm->mmap_lock); - ptep = svm_walk_pt(addr, NULL, &offset); - if (!ptep) { - up_read(&mm->mmap_lock); - return -EINVAL; - } - - pte = READ_ONCE(*ptep); - if (!pte_present(pte) || !(pfn_in_present_section(pte_pfn(pte)))) { - up_read(&mm->mmap_lock); - return -EINVAL; - } - - page = pte_page(pte); - get_page(page); - - phys = PFN_PHYS(pte_pfn(pte)) + offset; - - /* fix ts problem, which need the len to check out memory */ - len = 0; - vma = find_vma(mm, addr); - if (vma) - len = vma->vm_end - addr; - - up_read(&mm->mmap_lock); - - mutex_lock(&va2pa_trunk.mutex); - svm_clean_done_slots(); - if (va2pa_trunk.slot_used == va2pa_trunk.slot_total) { - err = -ENOSPC; - goto err_mutex_unlock; - } - - err = svm_find_slot_init(&index); - if (err) - goto err_mutex_unlock; - - 
svm_set_slot_valid(index, phys, len); - - err = put_user(index * SVM_VA2PA_SLOT_SIZE, (unsigned long __user *)arg); - if (err) - goto err_slot_init; - - mutex_unlock(&va2pa_trunk.mutex); - return 0; - -err_slot_init: - svm_set_slot_init(index); -err_mutex_unlock: - mutex_unlock(&va2pa_trunk.mutex); - put_page(page); - return err; -} - -static struct bus_type svm_bus_type = { - .name = "svm_bus", -}; - -static int svm_open(struct inode *inode, struct file *file) -{ - return 0; -} - -static int svm_proc_load_flag(int __user *arg) -{ - static atomic_t l2buf_load_flag = ATOMIC_INIT(0); - int flag; - - if (!acpi_disabled) - return -EPERM; - - if (arg == NULL) - return -EINVAL; - - if (0 == (atomic_cmpxchg(&l2buf_load_flag, 0, 1))) - flag = 0; - else - flag = 1; - - return put_user(flag, arg); -} - -static int svm_mmap(struct file *file, struct vm_area_struct *vma) -{ - int err; - struct svm_device *sdev = file_to_sdev(file); - - if (!acpi_disabled) - return -EPERM; - - if (vma->vm_flags & VM_PA32BIT) { - unsigned long vm_size = vma->vm_end - vma->vm_start; - struct page *page = NULL; - - if ((vma->vm_end < vma->vm_start) || (vm_size > MMAP_PHY32_MAX)) - return -EINVAL; - - /* vma->vm_pgoff transfer the nid */ - if (vma->vm_pgoff == 0) - page = alloc_pages(GFP_KERNEL | GFP_DMA32, - get_order(vm_size)); - else - page = alloc_pages_node((int)vma->vm_pgoff, - GFP_KERNEL | __GFP_THISNODE, - get_order(vm_size)); - if (!page) { - dev_err(sdev->dev, "fail to alloc page on node 0x%lx\n", - vma->vm_pgoff); - return -ENOMEM; - } - - err = remap_pfn_range(vma, - vma->vm_start, - page_to_pfn(page), - vm_size, vma->vm_page_prot); - if (err) - dev_err(sdev->dev, - "fail to remap 0x%pK err=%d\n", - (void *)vma->vm_start, err); - } else { - if ((vma->vm_end < vma->vm_start) || - ((vma->vm_end - vma->vm_start) > sdev->l2size)) - return -EINVAL; - - vma->vm_page_prot = __pgprot((~PTE_SHARED) & - vma->vm_page_prot.pgprot); - - err = remap_pfn_range(vma, - vma->vm_start, - sdev->l2buff >> PAGE_SHIFT, - vma->vm_end - vma->vm_start, - __pgprot(vma->vm_page_prot.pgprot | PTE_DIRTY)); - if (err) - dev_err(sdev->dev, - "fail to remap 0x%pK err=%d\n", - (void *)vma->vm_start, err); - } - - return err; -} - -static int svm_release_phys32(unsigned long __user *arg) -{ - struct mm_struct *mm = current->mm; - struct vm_area_struct *vma = NULL; - struct page *page = NULL; - pte_t *pte = NULL; - unsigned long phys, addr, offset; - unsigned int len = 0; - - if (arg == NULL) - return -EINVAL; - - if (get_user(addr, arg)) - return -EFAULT; - - down_read(&mm->mmap_lock); - pte = svm_walk_pt(addr, NULL, &offset); - if (pte && pte_present(*pte)) { - phys = PFN_PHYS(pte_pfn(*pte)) + offset; - } else { - up_read(&mm->mmap_lock); - return -EINVAL; - } - - vma = find_vma(mm, addr); - if (!vma) { - up_read(&mm->mmap_lock); - return -EFAULT; - } - - page = phys_to_page(phys); - len = vma->vm_end - vma->vm_start; - - __free_pages(page, get_order(len)); - - up_read(&mm->mmap_lock); - - return 0; -} - -static long svm_ioctl(struct file *file, unsigned int cmd, - unsigned long arg) -{ - int err = -EINVAL; - struct svm_bind_process params; - struct svm_device *sdev = file_to_sdev(file); - struct task_struct *task; - - if (!arg) - return -EINVAL; - - if (cmd == SVM_IOCTL_PROCESS_BIND) { - err = copy_from_user(¶ms, (void __user *)arg, - sizeof(params)); - if (err) { - dev_err(sdev->dev, "fail to copy params %d\n", err); - return -EFAULT; - } - } - - switch (cmd) { - case SVM_IOCTL_PROCESS_BIND: - task = svm_get_task(params); - if (IS_ERR(task)) 
{ - dev_err(sdev->dev, "failed to get task\n"); - return PTR_ERR(task); - } - - err = svm_process_bind(task, sdev, ¶ms.ttbr, - ¶ms.tcr, ¶ms.pasid); - if (err) { - put_task_struct(task); - dev_err(sdev->dev, "failed to bind task %d\n", err); - return err; - } - - put_task_struct(task); - err = copy_to_user((void __user *)arg, ¶ms, - sizeof(params)); - if (err) { - dev_err(sdev->dev, "failed to copy to user!\n"); - return -EFAULT; - } - break; - case SVM_IOCTL_GET_PHYS: - err = svm_get_phys((unsigned long __user *)arg); - break; - case SVM_IOCTL_PIN_MEMORY: - err = svm_pin_memory((unsigned long __user *)arg); - break; - case SVM_IOCTL_UNPIN_MEMORY: - err = svm_unpin_memory((unsigned long __user *)arg); - break; - case SVM_IOCTL_LOAD_FLAG: - err = svm_proc_load_flag((int __user *)arg); - break; - case SVM_IOCTL_RELEASE_PHYS32: - err = svm_release_phys32((unsigned long __user *)arg); - break; - default: - err = -EINVAL; - } - - if (err) - dev_err(sdev->dev, "%s: %s failed err = %d\n", __func__, - svm_cmd_to_string(cmd), err); - - return err; -} - -static const struct file_operations svm_fops = { - .owner = THIS_MODULE, - .open = svm_open, - .mmap = svm_mmap, - .unlocked_ioctl = svm_ioctl, -}; - -static void cdev_device_release(struct device *dev) -{ - struct core_device *cdev = to_core_device(dev); - - if (!acpi_disabled) - list_del(&cdev->entry); - - kfree(cdev); -} - -static int svm_remove_core(struct device *dev, void *data) -{ - struct core_device *cdev = to_core_device(dev); - - if (!cdev->smmu_bypass) { - iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_SVA); - iommu_detach_group(cdev->domain, cdev->group); - iommu_group_put(cdev->group); - iommu_domain_free(cdev->domain); - } - - device_unregister(&cdev->dev); - - return 0; -} - -#ifdef CONFIG_ACPI -static int svm_acpi_add_core(struct svm_device *sdev, - struct acpi_device *children, int id) -{ - int err; - struct core_device *cdev = NULL; - char *name = NULL; - enum dev_dma_attr attr; - const union acpi_object *obj; - - name = devm_kasprintf(sdev->dev, GFP_KERNEL, "svm_child_dev%d", id); - if (name == NULL) - return -ENOMEM; - - cdev = kzalloc(sizeof(*cdev), GFP_KERNEL); - if (cdev == NULL) - return -ENOMEM; - cdev->dev.fwnode = &children->fwnode; - cdev->dev.parent = sdev->dev; - cdev->dev.bus = &svm_bus_type; - cdev->dev.release = cdev_device_release; - cdev->smmu_bypass = 0; - list_add(&cdev->entry, &child_list); - dev_set_name(&cdev->dev, "%s", name); - - err = device_register(&cdev->dev); - if (err) { - dev_info(&cdev->dev, "core_device register failed\n"); - list_del(&cdev->entry); - kfree(cdev); - return err; - } - - attr = device_get_dma_attr(&children->dev); - if (attr != DEV_DMA_NOT_SUPPORTED) { - err = acpi_dma_configure(&cdev->dev, attr); - if (err) { - dev_dbg(&cdev->dev, "acpi_dma_configure failed\n"); - return err; - } - } - - err = acpi_dev_get_property(children, "hisi,smmu-bypass", - DEV_PROP_U8, &obj); - if (err) - dev_info(&children->dev, "read smmu bypass failed\n"); - - cdev->smmu_bypass = *(u8 *)obj->integer.value; - - cdev->group = iommu_group_get(&cdev->dev); - if (IS_ERR_OR_NULL(cdev->group)) { - dev_err(&cdev->dev, "smmu is not right configured\n"); - return -ENXIO; - } - - cdev->domain = iommu_domain_alloc(sdev->dev->bus); - if (cdev->domain == NULL) { - dev_info(&cdev->dev, "failed to alloc domain\n"); - return -ENOMEM; - } - - err = iommu_attach_group(cdev->domain, cdev->group); - if (err) { - dev_err(&cdev->dev, "failed group to domain\n"); - return err; - } - - err = iommu_dev_enable_feature(&cdev->dev, 
IOMMU_DEV_FEAT_IOPF); - if (err) { - dev_err(&cdev->dev, "failed to enable iopf feature, %d\n", err); - return err; - } - - err = iommu_dev_enable_feature(&cdev->dev, IOMMU_DEV_FEAT_SVA); - if (err) { - dev_err(&cdev->dev, "failed to enable sva feature\n"); - return err; - } - - return 0; -} - -static int svm_acpi_init_core(struct svm_device *sdev) -{ - int err = 0; - struct device *dev = sdev->dev; - struct acpi_device *adev = ACPI_COMPANION(sdev->dev); - struct acpi_device *cdev = NULL; - int id = 0; - - down_write(&svm_sem); - if (!svm_bus_type.iommu_ops) { - err = bus_register(&svm_bus_type); - if (err) { - up_write(&svm_sem); - dev_err(dev, "failed to register svm_bus_type\n"); - return err; - } - - err = bus_set_iommu(&svm_bus_type, dev->bus->iommu_ops); - if (err) { - up_write(&svm_sem); - dev_err(dev, "failed to set iommu for svm_bus_type\n"); - goto err_unregister_bus; - } - } else if (svm_bus_type.iommu_ops != dev->bus->iommu_ops) { - err = -EBUSY; - up_write(&svm_sem); - dev_err(dev, "iommu_ops configured, but changed!\n"); - return err; - } - up_write(&svm_sem); - - list_for_each_entry(cdev, &adev->children, node) { - err = svm_acpi_add_core(sdev, cdev, id++); - if (err) - device_for_each_child(dev, NULL, svm_remove_core); - } - - return err; - -err_unregister_bus: - bus_unregister(&svm_bus_type); - - return err; -} -#else -static int svm_acpi_init_core(struct svm_device *sdev) { return 0; } -#endif - -static int svm_of_add_core(struct svm_device *sdev, struct device_node *np) -{ - int err; - struct resource res; - struct core_device *cdev = NULL; - char *name = NULL; - - name = devm_kasprintf(sdev->dev, GFP_KERNEL, "svm%llu_%s", - sdev->id, np->name); - if (name == NULL) - return -ENOMEM; - - cdev = kzalloc(sizeof(*cdev), GFP_KERNEL); - if (cdev == NULL) - return -ENOMEM; - - cdev->dev.of_node = np; - cdev->dev.parent = sdev->dev; - cdev->dev.bus = &svm_bus_type; - cdev->dev.release = cdev_device_release; - cdev->smmu_bypass = of_property_read_bool(np, "hisi,smmu_bypass"); - dev_set_name(&cdev->dev, "%s", name); - - err = device_register(&cdev->dev); - if (err) { - dev_info(&cdev->dev, "core_device register failed\n"); - kfree(cdev); - return err; - } - - err = of_dma_configure(&cdev->dev, np, true); - if (err) { - dev_dbg(&cdev->dev, "of_dma_configure failed\n"); - return err; - } - - err = of_address_to_resource(np, 0, &res); - if (err) { - dev_info(&cdev->dev, "no reg, FW should install the sid\n"); - } else { - /* If the reg specified, install sid for the core */ - void __iomem *core_base = NULL; - int sid = cdev->dev.iommu->fwspec->ids0; - - core_base = ioremap(res.start, resource_size(&res)); - if (core_base == NULL) { - dev_err(&cdev->dev, "ioremap failed\n"); - return -ENOMEM; - } - - writel_relaxed(sid, core_base + CORE_SID); - iounmap(core_base); - } - - cdev->group = iommu_group_get(&cdev->dev); - if (IS_ERR_OR_NULL(cdev->group)) { - dev_err(&cdev->dev, "smmu is not right configured\n"); - return -ENXIO; - } - - cdev->domain = iommu_domain_alloc(sdev->dev->bus); - if (cdev->domain == NULL) { - dev_info(&cdev->dev, "failed to alloc domain\n"); - return -ENOMEM; - } - - err = iommu_attach_group(cdev->domain, cdev->group); - if (err) { - dev_err(&cdev->dev, "failed group to domain\n"); - return err; - } - - err = iommu_dev_enable_feature(&cdev->dev, IOMMU_DEV_FEAT_IOPF); - if (err) { - dev_err(&cdev->dev, "failed to enable iopf feature, %d\n", err); - return err; - } - - err = iommu_dev_enable_feature(&cdev->dev, IOMMU_DEV_FEAT_SVA); - if (err) { - dev_err(&cdev->dev, 
"failed to enable sva feature, %d\n", err); - return err; - } - - return 0; -} - -static int svm_dt_init_core(struct svm_device *sdev, struct device_node *np) -{ - int err = 0; - struct device_node *child = NULL; - struct device *dev = sdev->dev; - - down_write(&svm_sem); - if (svm_bus_type.iommu_ops == NULL) { - err = bus_register(&svm_bus_type); - if (err) { - up_write(&svm_sem); - dev_err(dev, "failed to register svm_bus_type\n"); - return err; - } - - err = bus_set_iommu(&svm_bus_type, dev->bus->iommu_ops); - if (err) { - up_write(&svm_sem); - dev_err(dev, "failed to set iommu for svm_bus_type\n"); - goto err_unregister_bus; - } - } else if (svm_bus_type.iommu_ops != dev->bus->iommu_ops) { - err = -EBUSY; - up_write(&svm_sem); - dev_err(dev, "iommu_ops configured, but changed!\n"); - return err; - } - up_write(&svm_sem); - - for_each_available_child_of_node(np, child) { - err = svm_of_add_core(sdev, child); - if (err) - device_for_each_child(dev, NULL, svm_remove_core); - } - - return err; - -err_unregister_bus: - bus_unregister(&svm_bus_type); - - return err; -} - -int svm_get_pasid(pid_t vpid, int dev_id __maybe_unused) -{ - int pasid; - unsigned long asid; - struct task_struct *task = NULL; - struct mm_struct *mm = NULL; - struct svm_process *process = NULL; - struct svm_bind_process params; - - params.flags = SVM_BIND_PID; - params.vpid = vpid; - params.pasid = -1; - params.ttbr = 0; - params.tcr = 0; - task = svm_get_task(params); - if (IS_ERR(task)) - return PTR_ERR(task); - - mm = get_task_mm(task); - if (mm == NULL) { - pasid = -EINVAL; - goto put_task; - } - - asid = arm64_mm_context_get(mm); - if (!asid) { - pasid = -ENOSPC; - goto put_mm; - } - - mutex_lock(&svm_process_mutex); - process = find_svm_process(asid); - mutex_unlock(&svm_process_mutex); - if (process) - pasid = process->pasid; - else - pasid = -ESRCH; - - arm64_mm_context_put(mm); -put_mm: - mmput(mm); -put_task: - put_task_struct(task); - - return pasid; -} -EXPORT_SYMBOL_GPL(svm_get_pasid); - -static int svm_dt_setup_l2buff(struct svm_device *sdev, struct device_node *np) -{ - struct device_node *l2buff = of_parse_phandle(np, "memory-region", 0); - - if (l2buff) { - struct resource r; - int err = of_address_to_resource(l2buff, 0, &r); - - if (err) { - of_node_put(l2buff); - return err; - } - - sdev->l2buff = r.start; - sdev->l2size = resource_size(&r); - } - - of_node_put(l2buff); - return 0; -} - -static int svm_device_probe(struct platform_device *pdev) -{ - int err = -1; - struct device *dev = &pdev->dev; - struct svm_device *sdev = NULL; - struct device_node *np = dev->of_node; - int alias_id; - - if (acpi_disabled && np == NULL) - return -ENODEV; - - if (!dev->bus) { - dev_dbg(dev, "this dev bus is NULL\n"); - return -EPROBE_DEFER; - } - - if (!dev->bus->iommu_ops) { - dev_dbg(dev, "defer probe svm device\n"); - return -EPROBE_DEFER; - } - - sdev = devm_kzalloc(dev, sizeof(*sdev), GFP_KERNEL); - if (sdev == NULL) - return -ENOMEM; - - if (!acpi_disabled) { - err = device_property_read_u64(dev, "svmid", &sdev->id); - if (err) { - dev_err(dev, "failed to get this svm device id\n"); - return err; - } - } else { - alias_id = of_alias_get_id(np, "svm"); - if (alias_id < 0) - sdev->id = probe_index; - else - sdev->id = alias_id; - } - - sdev->dev = dev; - sdev->miscdev.minor = MISC_DYNAMIC_MINOR; - sdev->miscdev.fops = &svm_fops; - sdev->miscdev.name = devm_kasprintf(dev, GFP_KERNEL, - SVM_DEVICE_NAME"%llu", sdev->id); - if (sdev->miscdev.name == NULL) - return -ENOMEM; - - dev_set_drvdata(dev, sdev); - err = 
misc_register(&sdev->miscdev); - if (err) { - dev_err(dev, "Unable to register misc device\n"); - return err; - } - - if (!acpi_disabled) { - err = svm_acpi_init_core(sdev); - if (err) { - dev_err(dev, "failed to init acpi cores\n"); - goto err_unregister_misc; - } - } else { - /* - * Get the l2buff phys address and size, if it do not exist - * just warn and continue, and runtime can not use L2BUFF. - */ - err = svm_dt_setup_l2buff(sdev, np); - if (err) - dev_warn(dev, "Cannot get l2buff\n"); - - if (svm_va2pa_trunk_init(dev)) { - dev_err(dev, "failed to init va2pa trunk\n"); - goto err_unregister_misc; - } - - err = svm_dt_init_core(sdev, np); - if (err) { - dev_err(dev, "failed to init dt cores\n"); - goto err_remove_trunk; - } - - probe_index++; - } - - mutex_init(&svm_process_mutex); - - return err; - -err_remove_trunk: - svm_remove_trunk(dev); - -err_unregister_misc: - misc_deregister(&sdev->miscdev); - - return err; -} - -static int svm_device_remove(struct platform_device *pdev) -{ - struct device *dev = &pdev->dev; - struct svm_device *sdev = dev_get_drvdata(dev); - - device_for_each_child(sdev->dev, NULL, svm_remove_core); - misc_deregister(&sdev->miscdev); - - return 0; -} - -static const struct acpi_device_id svm_acpi_match = { - { "HSVM1980", 0}, - { } -}; -MODULE_DEVICE_TABLE(acpi, svm_acpi_match); - -static const struct of_device_id svm_of_match = { - { .compatible = "hisilicon,svm" }, - { } -}; -MODULE_DEVICE_TABLE(of, svm_of_match); - -/*svm acpi probe and remove*/ -static struct platform_driver svm_driver = { - .probe = svm_device_probe, - .remove = svm_device_remove, - .driver = { - .name = SVM_DEVICE_NAME, - .acpi_match_table = ACPI_PTR(svm_acpi_match), - .of_match_table = svm_of_match, - }, -}; - -module_platform_driver(svm_driver); - -MODULE_DESCRIPTION("Hisilicon SVM driver"); -MODULE_AUTHOR("Fang Lijun <fanglijun3@huawei.com>"); -MODULE_LICENSE("GPL v2");
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/char/virtio_console.c
Changed
@@ -1967,7 +1967,7 @@
	flush_work(&portdev->config_work);

	/* Disable interrupts for vqs */
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
	/* Finish up work that's lined up */
	if (use_multiport(portdev))
		cancel_work_sync(&portdev->control_work);
@@ -2149,7 +2149,7 @@
	portdev = vdev->priv;

-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);

	if (use_multiport(portdev))
		virtqueue_disable_cb(portdev->c_ivq);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/Kconfig
Changed
@@ -22,6 +22,38 @@
	help
	  Build the clock driver for hi3660.

+config COMMON_CLK_HI3531DV200
+	tristate "Hi3531DV200 Clock Driver"
+	depends on ARCH_HI3531DV200 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for Hi3531DV200.
+
+config COMMON_CLK_HI3535AV100
+	tristate "Hi3535AV100 Clock Driver"
+	depends on ARCH_HI3535AV100 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for Hi3535AV100.
+
+config COMMON_CLK_HI3521DV200
+	tristate "Hi3521DV200 Clock Driver"
+	depends on ARCH_HI3521DV200 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3521DV200.
+
+config COMMON_CLK_HI3520DV500
+	tristate "Hi3520DV500 Clock Driver"
+	depends on ARCH_HI3520DV500 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3520DV500.
+
 config COMMON_CLK_HI3670
	bool "Hi3670 Clock Driver"
	depends on ARCH_HISI || COMPILE_TEST
@@ -37,6 +69,166 @@
	help
	  Build the clock driver for hi3798cv200.

+config COMMON_CLK_HI3516A
+	tristate "Hi3516A Clock Driver"
+	depends on ARCH_HI3516A || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI
+	help
+	  Build the clock driver for hi3516A.
+
+config COMMON_CLK_HI3516CV500
+	tristate "Hi3516CV500 Clock Driver"
+	depends on ARCH_HI3516CV500 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3516CV500.
+
+config COMMON_CLK_HI3516EV200
+	tristate "Hi3516EV200 Clock Driver"
+	depends on ARCH_HI3516EV200 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI
+	help
+	  Build the clock driver for hi3516EV200.
+
+config COMMON_CLK_HI3516EV300
+	tristate "Hi3516EV300 Clock Driver"
+	depends on ARCH_HI3516EV300 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI
+	help
+	  Build the clock driver for hi3516EV300.
+
+config COMMON_CLK_HI3518EV300
+	tristate "Hi3518EV300 Clock Driver"
+	depends on ARCH_HI3518EV300 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI
+	help
+	  Build the clock driver for hi3518EV300.
+
+config COMMON_CLK_HI3516DV200
+	tristate "Hi3516DV200 Clock Driver"
+	depends on ARCH_HI3516DV200 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI
+	help
+	  Build the clock driver for hi3516DV200.
+
+config COMMON_CLK_HI3516DV300
+	tristate "Hi3516DV300 Clock Driver"
+	depends on ARCH_HI3516DV300 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3516DV300.
+
+config COMMON_CLK_HI3556V200
+	tristate "Hi3556V200 Clock Driver"
+	depends on ARCH_HI3556V200 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3556V200.
+
+config COMMON_CLK_HI3559V200
+	tristate "Hi3559V200 Clock Driver"
+	depends on ARCH_HI3559V200 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3559V200.
+
+config COMMON_CLK_HI3562V100
+	tristate "Hi3562V100 Clock Driver"
+	depends on ARCH_HI3562V100 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3562V100.
+
+config COMMON_CLK_HI3566V100
+	tristate "Hi3566V100 Clock Driver"
+	depends on ARCH_HI3566V100 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3566V100.
+
+config COMMON_CLK_HI3518EV20X
+	tristate "Hi3518EV20X Clock Driver"
+	depends on ARCH_HI3518EV20X || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3516A.
+
+config COMMON_CLK_HI3536DV100
+	tristate "Hi3536DV100 Clock Driver"
+	depends on ARCH_HI3536DV100 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI
+	help
+	  Build the clock driver for hi3536DV100.
+
+config COMMON_CLK_HI3559AV100
+	tristate "Hi3559AV100 Clock Driver"
+	depends on ARCH_HI3559AV100 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3559av100.
+
+config COMMON_CLK_HI3569V100
+	tristate "Hi3569V100 Clock Driver"
+	depends on ARCH_HI3569V100 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3569v100.
+
+config COMMON_CLK_HI3521A
+	tristate "Hi3521A Clock Driver"
+	depends on ARCH_HI3521A || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3521A.
+
+config COMMON_CLK_HI3531A
+	tristate "Hi3531A Clock Driver"
+	depends on ARCH_HI3531A || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI_BVT
+	help
+	  Build the clock driver for hi3531A.
+
+config COMMON_CLK_HI3556AV100
+	tristate "Hi3556AV100 Clock Driver"
+	depends on ARCH_HI3556AV100 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI
+	help
+	  Build the clock driver for hi3556av100.
+
+config COMMON_CLK_HI3519AV100
+	tristate "Hi3519AV100 Clock Driver"
+	depends on ARCH_HI3519AV100 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI
+	help
+	  Build the clock driver for hi3519av100.
+
+config COMMON_CLK_HI3568V100
+	tristate "Hi3568V100 Clock Driver"
+	depends on ARCH_HI3568V100 || COMPILE_TEST
+	select RESET_HISI
+	default ARCH_HISI
+	help
+	  Build the clock driver for hi3568v100.
+
 config COMMON_CLK_HI6220
	bool "Hi6220 Clock Driver"
	depends on ARCH_HISI || COMPILE_TEST
@@ -46,7 +238,7 @@

 config RESET_HISI
	bool "HiSilicon Reset Controller Driver"
-	depends on ARCH_HISI || COMPILE_TEST
+	depends on ARCH_HISI || COMPILE_TEST || ARCH_HISI_BVT
	select RESET_CONTROLLER
	help
	  Build reset controller driver for HiSilicon device chipsets.
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/Makefile
Changed
@@ -14,6 +14,7 @@
 obj-$(CONFIG_COMMON_CLK_HI3670)	+= clk-hi3670.o
 obj-$(CONFIG_COMMON_CLK_HI3798CV200)	+= crg-hi3798cv200.o
 obj-$(CONFIG_COMMON_CLK_HI6220)	+= clk-hi6220.o
+obj-$(CONFIG_COMMON_CLK_HI3516DV300)	+= clk-hi3516dv300.o
 obj-$(CONFIG_RESET_HISI)	+= reset.o
 obj-$(CONFIG_STUB_CLK_HI6220)	+= clk-hi6220-stub.o
 obj-$(CONFIG_STUB_CLK_HI3660)	+= clk-hi3660-stub.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/clk-hi3516dv300.c
Added
@@ -0,0 +1,272 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2016-2017 HiSilicon Technologies Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; either version 2 of the License, or (at your + * option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + * + */ + +#include <dt-bindings/clock/hi3516dv300-clock.h> +#include <linux/clk-provider.h> +#include <linux/module.h> +#include <linux/of_device.h> +#include <linux/platform_device.h> +#include "clk.h" +#include "crg.h" +#include "reset.h" + +static struct hisi_fixed_rate_clock hi3516dv300_fixed_rate_clks __initdata = { + { HI3516DV300_FIXED_3M, "3m", NULL, 0, 3000000, }, + { HI3516DV300_FIXED_6M, "6m", NULL, 0, 6000000, }, + { HI3516DV300_FIXED_12M, "12m", NULL, 0, 12000000, }, + { HI3516DV300_FIXED_24M, "24m", NULL, 0, 24000000, }, + { HI3516DV300_FIXED_25M, "25m", NULL, 0, 25000000, }, + { HI3516DV300_FIXED_50M, "50m", NULL, 0, 50000000, }, + { HI3516DV300_FIXED_54M, "54m", NULL, 0, 54000000, }, + { HI3516DV300_FIXED_83P3M, "83.3m", NULL, 0, 83300000, }, + { HI3516DV300_FIXED_100M, "100m", NULL, 0, 100000000, }, + { HI3516DV300_FIXED_125M, "125m", NULL, 0, 125000000, }, + { HI3516DV300_FIXED_150M, "150m", NULL, 0, 150000000, }, + { HI3516DV300_FIXED_163M, "163m", NULL, 0, 163000000, }, + { HI3516DV300_FIXED_200M, "200m", NULL, 0, 200000000, }, + { HI3516DV300_FIXED_250M, "250m", NULL, 0, 250000000, }, + { HI3516DV300_FIXED_257M, "257m", NULL, 0, 257000000, }, + { HI3516DV300_FIXED_300M, "300m", NULL, 0, 300000000, }, + { HI3516DV300_FIXED_324M, "324m", NULL, 0, 324000000, }, + { HI3516DV300_FIXED_342M, "342m", NULL, 0, 342000000, }, + { HI3516DV300_FIXED_342M, "375m", NULL, 0, 375000000, }, + { HI3516DV300_FIXED_396M, "396m", NULL, 0, 396000000, }, + { HI3516DV300_FIXED_400M, "400m", NULL, 0, 400000000, }, + { HI3516DV300_FIXED_448M, "448m", NULL, 0, 448000000, }, + { HI3516DV300_FIXED_500M, "500m", NULL, 0, 500000000, }, + { HI3516DV300_FIXED_540M, "540m", NULL, 0, 540000000, }, + { HI3516DV300_FIXED_600M, "600m", NULL, 0, 600000000, }, + { HI3516DV300_FIXED_750M, "750m", NULL, 0, 750000000, }, + { HI3516DV300_FIXED_1000M, "1000m", NULL, 0, 1000000000, }, + { HI3516DV300_FIXED_1500M, "1500m", NULL, 0, 1500000000UL, }, +}; + +static const char *sysaxi_mux_p __initconst = { + "24m", "200m", "300m" +}; +static const char *sysapb_mux_p __initconst = {"24m", "50m"}; +static const char *uart_mux_p __initconst = {"24m", "6m"}; +static const char *fmc_mux_p __initconst = {"24m", "100m", "150m", + "163m", "200m", "257m", "300m", "396m"}; +static const char *eth_mux_p __initconst = {"100m", "54m"}; +static const char *mmc_mux_p __initconst = {"100m", "50m", "25m"}; +static const char *pwm_mux_p __initconst = {"3m", "50m", "24m", "24m"}; + +static u32 sysaxi_mux_table = {0, 1, 2}; +static u32 sysapb_mux_table = {0, 1}; +static u32 uart_mux_table = {0, 1}; +static u32 fmc_mux_table = {0, 1, 2, 3, 4, 5, 6, 7}; +static u32 eth_mux_table = {0, 1}; +static u32 mmc_mux_table = {1, 2, 3}; +static u32 pwm_mux_table = {0, 1, 2, 3}; + 
+static struct hisi_mux_clock hi3516dv300_mux_clks __initdata = { + { + HI3516DV300_SYSAXI_CLK, "sysaxi_mux", sysaxi_mux_p, + ARRAY_SIZE(sysaxi_mux_p), + CLK_SET_RATE_PARENT, 0x80, 6, 2, 0, sysaxi_mux_table, + }, + { + HI3516DV300_SYSAPB_CLK, "sysapb_mux", sysapb_mux_p, + ARRAY_SIZE(sysapb_mux_p), + CLK_SET_RATE_PARENT, 0x80, 10, 1, 0, sysapb_mux_table, + }, + { + HI3516DV300_FMC_MUX, "fmc_mux", fmc_mux_p, ARRAY_SIZE(fmc_mux_p), + CLK_SET_RATE_PARENT, 0x144, 2, 3, 0, fmc_mux_table, + }, + { + HI3516DV300_MMC0_MUX, "mmc0_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p), + CLK_SET_RATE_PARENT, 0x148, 2, 2, 0, mmc_mux_table, + }, + { + HI3516DV300_MMC1_MUX, "mmc1_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p), + CLK_SET_RATE_PARENT, 0x160, 2, 2, 0, mmc_mux_table, + }, + { + HI3516DV300_MMC2_MUX, "mmc2_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p), + CLK_SET_RATE_PARENT, 0x154, 2, 2, 0, mmc_mux_table, + }, + { + HI3516DV300_UART_MUX, "uart_mux0", uart_mux_p, + ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1bc, 18, 1, 0, uart_mux_table, + }, + { + HI3516DV300_UART1_MUX, "uart_mux1", uart_mux_p, + ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1bc, 19, 1, 0, uart_mux_table, + }, + { + HI3516DV300_UART2_MUX, "uart_mux2", uart_mux_p, + ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1bc, 20, 1, 0, uart_mux_table, + }, + { + HI3516DV300_UART3_MUX, "uart_mux3", uart_mux_p, + ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1bc, 21, 1, 0, uart_mux_table, + }, + { + HI3516DV300_UART4_MUX, "uart_mux4", uart_mux_p, + ARRAY_SIZE(uart_mux_p), + CLK_SET_RATE_PARENT, 0x1bc, 22, 1, 0, uart_mux_table, + }, + { + HI3516DV300_PWM_MUX, "pwm_mux", pwm_mux_p, + ARRAY_SIZE(pwm_mux_p), + CLK_SET_RATE_PARENT, 0x1bc, 8, 2, 0, pwm_mux_table, + }, + /* ethernet clock select */ + { + HI3516DV300_ETH_MUX, "eth_mux", eth_mux_p, ARRAY_SIZE(eth_mux_p), + CLK_SET_RATE_PARENT, 0x16c, 7, 1, 0, eth_mux_table, + }, +}; + +static struct hisi_fixed_factor_clock hi3516dv300_fixed_factor_clks __initdata + = { + { + HI3516DV300_SYSAXI_CLK, "clk_sysaxi", "sysaxi_mux", 1, 4, + CLK_SET_RATE_PARENT + }, +}; + +static struct hisi_gate_clock hi3516dv300_gate_clks __initdata = { + { + HI3516DV300_FMC_CLK, "clk_fmc", "fmc_mux", + CLK_SET_RATE_PARENT, 0x144, 1, 0, + }, + { + HI3516DV300_MMC0_CLK, "clk_mmc0", "mmc0_mux", + CLK_SET_RATE_PARENT, 0x148, 1, 0, + }, + { + HI3516DV300_MMC1_CLK, "clk_mmc1", "mmc1_mux", + CLK_SET_RATE_PARENT, 0x160, 1, 0, + }, + { + HI3516DV300_MMC2_CLK, "clk_mmc2", "mmc2_mux", + CLK_SET_RATE_PARENT, 0x154, 1, 0, + }, + { + HI3516DV300_UART0_CLK, "clk_uart0", "uart_mux0", + CLK_SET_RATE_PARENT, 0x1b8, 0, 0, + }, + { + HI3516DV300_UART1_CLK, "clk_uart1", "uart_mux1", + CLK_SET_RATE_PARENT, 0x1b8, 1, 0, + }, + { + HI3516DV300_UART2_CLK, "clk_uart2", "uart_mux2", + CLK_SET_RATE_PARENT, 0x1b8, 2, 0, + }, + { + HI3516DV300_UART3_CLK, "clk_uart3", "uart_mux3", + CLK_SET_RATE_PARENT, 0x1b8, 3, 0, + }, + { + HI3516DV300_UART4_CLK, "clk_uart4", "uart_mux4", + CLK_SET_RATE_PARENT, 0x1b8, 4, 0, + }, + { + HI3516DV300_I2C0_CLK, "clk_i2c0", "50m", + CLK_SET_RATE_PARENT, 0x1b8, 11, 0, + }, + { + HI3516DV300_I2C1_CLK, "clk_i2c1", "50m", + CLK_SET_RATE_PARENT, 0x1b8, 12, 0, + }, + { + HI3516DV300_I2C2_CLK, "clk_i2c2", "50m", + CLK_SET_RATE_PARENT, 0x1b8, 13, 0, + }, + { + HI3516DV300_I2C3_CLK, "clk_i2c3", "50m", + CLK_SET_RATE_PARENT, 0x1b8, 14, 0, + }, + { + HI3516DV300_I2C4_CLK, "clk_i2c4", "50m", + CLK_SET_RATE_PARENT, 0x1b8, 15, 0, + }, + { + HI3516DV300_I2C5_CLK, "clk_i2c5", "50m", + CLK_SET_RATE_PARENT, 0x1b8, 16, 0, + }, + { + 
HI3516DV300_I2C6_CLK, "clk_i2c6", "50m", + CLK_SET_RATE_PARENT, 0x1b8, 17, 0, + }, + { + HI3516DV300_I2C7_CLK, "clk_i2c7", "50m", + CLK_SET_RATE_PARENT, 0x1b8, 18, 0, + }, + { + HI3516DV300_SPI0_CLK, "clk_spi0", "100m", + CLK_SET_RATE_PARENT, 0x1bc, 12, 0, + }, + { + HI3516DV300_SPI1_CLK, "clk_spi1", "100m", + CLK_SET_RATE_PARENT, 0x1bc, 13, 0, + }, + { + HI3516DV300_SPI2_CLK, "clk_spi2", "100m", + CLK_SET_RATE_PARENT, 0x1bc, 14, 0, + }, + { + HI3516DV300_ETH0_CLK, "clk_eth0", "eth_mux", + CLK_SET_RATE_PARENT, 0x16c, 1, 0, + }, + { + HI3516DV300_DMAC_CLK, "clk_dmac", NULL, + CLK_SET_RATE_PARENT, 0x194, 1, 0, + }, + { + HI3516DV300_DMAC_AXICLK, "axiclk_dmac", NULL, + CLK_SET_RATE_PARENT, 0x194, 2, 0, + }, + { + HI3516DV300_PWM_CLK, "clk_pwm", "pwm_mux", + CLK_SET_RATE_PARENT, 0x1bc, 7, 0, + }, +}; + +static void __init hi3516dv300_clk_init(struct device_node *np) +{ + struct hisi_clock_data *clk_data; + + clk_data = hisi_clk_init(np, HI3516DV300_NR_CLKS); + if (!clk_data) + return; + + hisi_clk_register_fixed_rate(hi3516dv300_fixed_rate_clks, + ARRAY_SIZE(hi3516dv300_fixed_rate_clks), + clk_data); + hisi_clk_register_mux(hi3516dv300_mux_clks, ARRAY_SIZE(hi3516dv300_mux_clks), + clk_data); + hisi_clk_register_fixed_factor(hi3516dv300_fixed_factor_clks, + ARRAY_SIZE(hi3516dv300_fixed_factor_clks), clk_data); + hisi_clk_register_gate(hi3516dv300_gate_clks, + ARRAY_SIZE(hi3516dv300_gate_clks), clk_data); +} + +MODULE_LICENSE("GPL"); +CLK_OF_DECLARE(hi3516dv300_clk, "hisilicon,hi3516dv300-clock", + hi3516dv300_clk_init); +
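A consumer picks these clocks up through the "hisilicon,hi3516dv300-clock" provider registered by CLK_OF_DECLARE above, using the HI3516DV300_* indices from dt-bindings/clock/hi3516dv300-clock.h in its clocks property. A hypothetical driver fragment (names illustrative only):

	#include <linux/clk.h>
	#include <linux/err.h>
	#include <linux/platform_device.h>

	static int demo_probe(struct platform_device *pdev)
	{
		/* assumes the DT node carries e.g. clocks = <&crg HI3516DV300_UART0_CLK> */
		struct clk *clk = devm_clk_get(&pdev->dev, NULL);

		if (IS_ERR(clk))
			return PTR_ERR(clk);

		return clk_prepare_enable(clk);	/* ungates via the 0x1b8 gate registers above */
	}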
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/clk-hi3519av100.c
Added
@@ -0,0 +1,561 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Hi3519A Clock Driver
+ *
+ * Copyright (c) 2016-2017 HiSilicon Technologies Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+#include <linux/of_address.h>
+#include <dt-bindings/clock/hi3519av100-clock.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include "clk.h"
+#include "reset.h"
+
+struct hi3519av100_pll_clock {
+	u32	id;
+	const char	*name;
+	const char	*parent_name;
+	u32	ctrl_reg1;
+	u8	frac_shift;
+	u8	frac_width;
+	u8	postdiv1_shift;
+	u8	postdiv1_width;
+	u8	postdiv2_shift;
+	u8	postdiv2_width;
+	u32	ctrl_reg2;
+	u8	fbdiv_shift;
+	u8	fbdiv_width;
+	u8	refdiv_shift;
+	u8	refdiv_width;
+};
+
+struct hi3519av100_clk_pll {
+	struct clk_hw	hw;
+	u32	id;
+	void __iomem	*ctrl_reg1;
+	u8	frac_shift;
+	u8	frac_width;
+	u8	postdiv1_shift;
+	u8	postdiv1_width;
+	u8	postdiv2_shift;
+	u8	postdiv2_width;
+	void __iomem	*ctrl_reg2;
+	u8	fbdiv_shift;
+	u8	fbdiv_width;
+	u8	refdiv_shift;
+	u8	refdiv_width;
+};
+
+static struct hi3519av100_pll_clock hi3519av100_pll_clks[] __initdata = {
+	{
+		HI3519AV100_APLL_CLK, "apll", NULL, 0x0, 0, 24, 24, 3, 28, 3,
+		0x4, 0, 12, 12, 6
+	},
+};
+
+#define to_pll_clk(_hw) container_of(_hw, struct hi3519av100_clk_pll, hw)
+
+/* soc clk config */
+static struct hisi_fixed_rate_clock hi3519av100_fixed_rate_clks[] __initdata = {
+	{ HI3519AV100_FIXED_2376M, "2376m", NULL, 0, 2376000000UL, },
+	{ HI3519AV100_FIXED_1188M, "1188m", NULL, 0, 1188000000, },
+	{ HI3519AV100_FIXED_594M, "594m", NULL, 0, 594000000, },
+	{ HI3519AV100_FIXED_297M, "297m", NULL, 0, 297000000, },
+	{ HI3519AV100_FIXED_148P5M, "148p5m", NULL, 0, 148500000, },
+	{ HI3519AV100_FIXED_74P25M, "74p25m", NULL, 0, 74250000, },
+	{ HI3519AV100_FIXED_792M, "792m", NULL, 0, 792000000, },
+	{ HI3519AV100_FIXED_475M, "475m", NULL, 0, 475000000, },
+	{ HI3519AV100_FIXED_340M, "340m", NULL, 0, 340000000, },
+	{ HI3519AV100_FIXED_72M, "72m", NULL, 0, 72000000, },
+	{ HI3519AV100_FIXED_400M, "400m", NULL, 0, 400000000, },
+	{ HI3519AV100_FIXED_200M, "200m", NULL, 0, 200000000, },
+	{ HI3519AV100_FIXED_54M, "54m", NULL, 0, 54000000, },
+	{ HI3519AV100_FIXED_27M, "27m", NULL, 0, 27000000, },
+	{ HI3519AV100_FIXED_37P125M, "37p125m", NULL, 0, 37125000, },
+	{ HI3519AV100_FIXED_3000M, "3000m", NULL, 0, 3000000000UL, },
+	{ HI3519AV100_FIXED_1500M, "1500m", NULL, 0, 1500000000, },
+	{ HI3519AV100_FIXED_500M, "500m", NULL, 0, 500000000, },
+	{ HI3519AV100_FIXED_250M, "250m", NULL, 0, 250000000, },
+	{ HI3519AV100_FIXED_125M, "125m", NULL, 0, 125000000, },
+	{ HI3519AV100_FIXED_1000M, "1000m", NULL, 0, 1000000000, },
+	{ HI3519AV100_FIXED_600M, "600m", NULL, 0, 600000000, },
+	{ HI3519AV100_FIXED_750M, "750m", NULL, 0, 750000000, },
+	{ HI3519AV100_FIXED_150M, "150m", NULL, 0, 150000000, },
+	{ HI3519AV100_FIXED_75M, "75m", NULL, 0, 75000000, },
+	{ HI3519AV100_FIXED_300M, "300m", NULL, 0, 300000000, },
+	{ HI3519AV100_FIXED_60M, "60m", NULL, 0, 60000000, },
+	{ HI3519AV100_FIXED_214M, "214m", NULL, 0, 214000000, },
+	{ HI3519AV100_FIXED_107M, "107m", NULL, 0, 107000000, },
+	{ HI3519AV100_FIXED_100M, "100m", NULL, 0, 100000000, },
+	{ HI3519AV100_FIXED_50M, "50m", NULL, 0, 50000000, },
+	{ HI3519AV100_FIXED_25M, "25m", NULL, 0, 25000000, },
+	{ HI3519AV100_FIXED_24M, "24m", NULL, 0, 24000000, },
+	{ HI3519AV100_FIXED_3M, "3m", NULL, 0, 3000000, },
+	{ HI3519AV100_FIXED_100K, "100k", NULL, 0, 100000, },
+	{ HI3519AV100_FIXED_400K, "400k", NULL, 0, 400000, },
+	{ HI3519AV100_FIXED_49P5M, "49p5m", NULL, 0, 49500000, },
+	{ HI3519AV100_FIXED_99M, "99m", NULL, 0, 99000000, },
+	{ HI3519AV100_FIXED_187P5M, "187p5m", NULL, 0, 187500000, },
+	{ HI3519AV100_FIXED_198M, "198m", NULL, 0, 198000000, },
+};
+
+static const char *fmc_mux_p[] __initconst = {
+	"24m", "100m", "150m", "198m", "250m", "300m", "396m"
+};
+static u32 fmc_mux_table[] = {0, 1, 2, 3, 4, 5, 6};
+
+static const char *mmc_mux_p[] __initconst = {
+	"100k", "25m", "49p5m", "99m", "187p5m", "150m", "198m", "400k"
+};
+static u32 mmc_mux_table[] = {0, 1, 2, 3, 4, 5, 6, 7};
+
+static const char *sysapb_mux_p[] __initconst = {
+	"24m", "50m",
+};
+static u32 sysapb_mux_table[] = {0, 1};
+
+static const char *sysbus_mux_p[] __initconst = {
+	"24m", "300m"
+};
+static u32 sysbus_mux_table[] = {0, 1};
+
+static const char *uart_mux_p[] __initconst = {"50m", "24m", "3m"};
+static u32 uart_mux_table[] = {0, 1, 2};
+
+static const char *a53_1_clksel_mux_p[] __initconst = {
+	"24m", "apll", "vpll", "792m"
+};
+static u32 a53_1_clksel_mux_table[] = {0, 1, 2, 3};
+
+static struct hisi_mux_clock hi3519av100_mux_clks[] __initdata = {
+	{
+		HI3519AV100_FMC_MUX, "fmc_mux", fmc_mux_p, ARRAY_SIZE(fmc_mux_p),
+		CLK_SET_RATE_PARENT, 0x170, 2, 3, 0, fmc_mux_table,
+	},
+
+	{
+		HI3519AV100_MMC0_MUX, "mmc0_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p),
+		CLK_SET_RATE_PARENT, 0x1a8, 24, 3, 0, mmc_mux_table,
+	},
+
+	{
+		HI3519AV100_MMC1_MUX, "mmc1_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p),
+		CLK_SET_RATE_PARENT, 0x1ec, 24, 3, 0, mmc_mux_table,
+	},
+
+	{
+		HI3519AV100_MMC2_MUX, "mmc2_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p),
+		CLK_SET_RATE_PARENT, 0x214, 24, 3, 0, mmc_mux_table,
+	},
+
+	{
+		HI3519AV100_SYSAPB_MUX, "sysapb_mux", sysapb_mux_p, ARRAY_SIZE(sysapb_mux_p),
+		CLK_SET_RATE_PARENT, 0xe8, 3, 1, 0, sysapb_mux_table
+	},
+
+	{
+		HI3519AV100_SYSBUS_MUX, "sysbus_mux", sysbus_mux_p, ARRAY_SIZE(sysbus_mux_p),
+		CLK_SET_RATE_PARENT, 0xe8, 0, 1, 1, sysbus_mux_table
+	},
+
+	{
+		HI3519AV100_UART0_MUX, "uart0_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p),
+		CLK_SET_RATE_PARENT, 0x1a4, 0, 2, 1, uart_mux_table
+	},
+
+	{
+		HI3519AV100_UART1_MUX, "uart1_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p),
+		CLK_SET_RATE_PARENT, 0x1a4, 2, 2, 1, uart_mux_table
+	},
+
+	{
+		HI3519AV100_UART2_MUX, "uart2_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p),
+		CLK_SET_RATE_PARENT, 0x1a4, 4, 2, 1, uart_mux_table
+	},
+
+	{
+		HI3519AV100_UART3_MUX, "uart3_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p),
+		CLK_SET_RATE_PARENT, 0x1a4, 6, 2, 1, uart_mux_table
+	},
+
+	{
+		HI3519AV100_UART4_MUX, "uart4_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p),
+		CLK_SET_RATE_PARENT, 0x1a4, 8, 2, 1, uart_mux_table
+	},
+
+	{
+		HI3519AV100_UART5_MUX, "uart5_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p),
+		CLK_SET_RATE_PARENT, 0x1a4, 10, 2, 1, uart_mux_table
+	},
+
+	{
+		HI3519AV100_UART6_MUX, "uart6_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p),
+		CLK_SET_RATE_PARENT, 0x1a4, 12, 2, 1, uart_mux_table
+	},
+
+	{
+		HI3519AV100_UART7_MUX, "uart7_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p),
+		CLK_SET_RATE_PARENT, 0x1a4, 14, 2, 1, uart_mux_table
+	},
+
+	{
+		HI3519AV100_UART8_MUX, "uart8_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p),
+		CLK_SET_RATE_PARENT, 0x1a4, 28, 2, 1, uart_mux_table
+	},
+
+	{
+		HI3519AV100_A53_1_MUX, "a53_1_mux", a53_1_clksel_mux_p,
+		ARRAY_SIZE(a53_1_clksel_mux_p), CLK_SET_RATE_PARENT,
+		0xe4, 10, 2, 3, a53_1_clksel_mux_table
+	},
+
+};
+
+static struct hisi_fixed_factor_clock hi3519av100_fixed_factor_clks[] __initdata
+	= {
+
+};
+
+static struct hisi_gate_clock hi3519av100_gate_clks[] __initdata = {
+	{
+		HI3519AV100_FMC_CLK, "clk_fmc", "fmc_mux",
+		CLK_SET_RATE_PARENT, 0x170, 1, 0,
+	},
+	{
+		HI3519AV100_MMC0_CLK, "clk_mmc0", "mmc0_mux",
+		CLK_SET_RATE_PARENT, 0x1a8, 28, 0,
+	},
+	{
+		HI3519AV100_MMC1_CLK, "clk_mmc1", "mmc1_mux",
+		CLK_SET_RATE_PARENT, 0x1ec, 28, 0,
+	},
+	{
+		HI3519AV100_MMC2_CLK, "clk_mmc2", "mmc2_mux",
+		CLK_SET_RATE_PARENT, 0x214, 28, 0,
+	},
+	{
+		HI3519AV100_UART0_CLK, "clk_uart0", "uart0_mux",
+		CLK_SET_RATE_PARENT, 0x198, 16, 0,
+	},
+	{
+		HI3519AV100_UART1_CLK, "clk_uart1", "uart1_mux",
+		CLK_SET_RATE_PARENT, 0x198, 17, 0,
+	},
+	{
+		HI3519AV100_UART2_CLK, "clk_uart2", "uart2_mux",
+		CLK_SET_RATE_PARENT, 0x198, 18, 0,
+	},
+	{
+		HI3519AV100_UART3_CLK, "clk_uart3", "uart3_mux",
+		CLK_SET_RATE_PARENT, 0x198, 19, 0,
+	},
+	{
+		HI3519AV100_UART4_CLK, "clk_uart4", "uart4_mux",
+		CLK_SET_RATE_PARENT, 0x198, 20, 0,
+	},
+	{
+		HI3519AV100_UART5_CLK, "clk_uart5", "uart5_mux",
+		CLK_SET_RATE_PARENT, 0x198, 21, 0,
+	},
+	{
+		HI3519AV100_UART6_CLK, "clk_uart6", "uart6_mux",
+		CLK_SET_RATE_PARENT, 0x198, 22, 0,
+	},
+	{
+		HI3519AV100_UART7_CLK, "clk_uart7", "uart7_mux",
+		CLK_SET_RATE_PARENT, 0x198, 23, 0,
+	},
+	{
+		HI3519AV100_UART8_CLK, "clk_uart8", "uart8_mux",
+		CLK_SET_RATE_PARENT, 0x198, 29, 0,
+	},
+	{
+		HI3519AV100_ETH_CLK, "clk_eth", NULL,
+		CLK_SET_RATE_PARENT, 0x0174, 1, 0,
+	},
+	{
+		HI3519AV100_ETH_MACIF_CLK, "clk_eth_macif", NULL,
+		CLK_SET_RATE_PARENT, 0x0174, 5, 0,
+	},
+	/* i2c */
+	{
+		HI3519AV100_I2C0_CLK, "clk_i2c0", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 16, 0,
+	},
+	{
+		HI3519AV100_I2C1_CLK, "clk_i2c1", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 17, 0,
+	},
+	{
+		HI3519AV100_I2C2_CLK, "clk_i2c2", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 18, 0,
+	},
+	{
+		HI3519AV100_I2C3_CLK, "clk_i2c3", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 19, 0,
+	},
+	{
+		HI3519AV100_I2C4_CLK, "clk_i2c4", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 20, 0,
+	},
+	{
+		HI3519AV100_I2C5_CLK, "clk_i2c5", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 21, 0,
+	},
+	{
+		HI3519AV100_I2C6_CLK, "clk_i2c6", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 22, 0,
+	},
+	{
+		HI3519AV100_I2C7_CLK, "clk_i2c7", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 23, 0,
+	},
+	{
+		HI3519AV100_I2C8_CLK, "clk_i2c8", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 24, 0,
+	},
+	{
+		HI3519AV100_I2C9_CLK, "clk_i2c9", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 25, 0,
+	},
+	{
+		HI3519AV100_SPI0_CLK, "clk_spi0", "100m",
+		CLK_SET_RATE_PARENT, 0x0198, 24, 0,
+	},
+	{
+		HI3519AV100_SPI1_CLK, "clk_spi1", "100m",
+		CLK_SET_RATE_PARENT, 0x0198, 25, 0,
+	},
+	{
+		HI3519AV100_SPI2_CLK, "clk_spi2", "100m",
+		CLK_SET_RATE_PARENT, 0x0198, 26, 0,
+	},
+	{
+		HI3519AV100_SPI3_CLK, "clk_spi3", "100m",
+		CLK_SET_RATE_PARENT, 0x0198, 27, 0,
+	},
+	{
+		HI3519AV100_SPI4_CLK, "clk_spi4", "100m",
+		CLK_SET_RATE_PARENT, 0x0198, 28, 0,
+	},
+	{
+		HI3519AV100_EDMAC_AXICLK, "axi_clk_edmac", NULL,
+		CLK_SET_RATE_PARENT, 0x16c, 6, 0,
+	},
+	{
+		HI3519AV100_EDMAC_CLK, "clk_edmac", NULL,
+		CLK_SET_RATE_PARENT, 0x16c, 5, 0,
+	},
+	{
+		HI3519AV100_EDMAC1_AXICLK, "axi_clk_edmac1", NULL,
+		CLK_SET_RATE_PARENT, 0x16c, 9, 0,
+	},
+	{
+		HI3519AV100_EDMAC1_CLK, "clk_edmac1", NULL,
+		CLK_SET_RATE_PARENT, 0x16c, 8, 0,
+	},
+	{
+		HI3519AV100_VDMAC_CLK, "clk_vdmac", NULL,
+		CLK_SET_RATE_PARENT, 0x14c, 5, 0,
+	},
+};
+
+static void hi3519av100_calc_pll(u32 *frac_val,
+				 u32 *postdiv1_val,
+				 u32 *postdiv2_val,
+				 u32 *fbdiv_val,
+				 u32 *refdiv_val,
+				 u64 rate)
+{
+	u64 rem;
+
+	*frac_val = 0;
+	rem = do_div(rate, 1000000);
+	*fbdiv_val = rate;
+	*refdiv_val = 24;
+	if ((rem * (1 << 24)) > ULLONG_MAX) {
+		pr_err("Data over limits!\n");
+		return;
+	}
+
+	rem = rem * (1 << 24);
+	do_div(rem, 1000000);
+	*frac_val = rem;
+}
+
+static int clk_pll_set_rate(struct clk_hw *hw,
+			    unsigned long rate,
+			    unsigned long parent_rate)
+{
+	struct hi3519av100_clk_pll *clk = to_pll_clk(hw);
+	u32 frac_val, postdiv1_val, postdiv2_val, fbdiv_val, refdiv_val;
+	u32 val;
+
+	postdiv1_val = postdiv2_val = 0;
+
+	hi3519av100_calc_pll(&frac_val, &postdiv1_val, &postdiv2_val,
+			     &fbdiv_val, &refdiv_val, rate);
+
+	val = readl_relaxed(clk->ctrl_reg1);
+	val &= ~(((1 << clk->frac_width) - 1) << clk->frac_shift);
+	val &= ~(((1 << clk->postdiv1_width) - 1) << clk->postdiv1_shift);
+	val &= ~(((1 << clk->postdiv2_width) - 1) << clk->postdiv2_shift);
+
+	val |= frac_val << clk->frac_shift;
+	val |= postdiv1_val << clk->postdiv1_shift;
+	val |= postdiv2_val << clk->postdiv2_shift;
+	writel_relaxed(val, clk->ctrl_reg1);
+
+	val = readl_relaxed(clk->ctrl_reg2);
+	val &= ~(((1 << clk->fbdiv_width) - 1) << clk->fbdiv_shift);
+	val &= ~(((1 << clk->refdiv_width) - 1) << clk->refdiv_shift);
+
+	val |= fbdiv_val << clk->fbdiv_shift;
+	val |= refdiv_val << clk->refdiv_shift;
+	writel_relaxed(val, clk->ctrl_reg2);
+
+	return 0;
+}
+
+static unsigned long clk_pll_recalc_rate(struct clk_hw *hw,
+					 unsigned long parent_rate)
+{
+	struct hi3519av100_clk_pll *clk = to_pll_clk(hw);
+	u64 frac_val, fbdiv_val;
+	u32 val;
+	u64 tmp, rate;
+	u32 refdiv_val;
+
+	val = readl_relaxed(clk->ctrl_reg1);
+	val = val >> clk->frac_shift;
+	val &= ((1 << clk->frac_width) - 1);
+	frac_val = val;
+
+	val = readl_relaxed(clk->ctrl_reg2);
+	val = val >> clk->fbdiv_shift;
+	val &= ((1 << clk->fbdiv_width) - 1);
+	fbdiv_val = val;
+
+	val = readl_relaxed(clk->ctrl_reg2);
+	val = val >> clk->refdiv_shift;
+	val &= ((1 << clk->refdiv_width) - 1);
+	refdiv_val = val;
+
+	/* rate = 24000000 * (fbdiv + frac / (1 << 24)) / refdiv */
+	rate = 0;
+	if ((24000000 * fbdiv_val) > ULLONG_MAX) {
+		pr_err("Data over limits!\n");
+		return 0;
+	}
+
+	tmp = 24000000 * fbdiv_val;
+	rate += tmp;
+	do_div(rate, refdiv_val);
+
+	return rate;
+}
+
+static int clk_pll_determine_rate(struct clk_hw *hw,
+				  struct clk_rate_request *req)
+{
+	return req->rate;
+}
+
+static const struct clk_ops clk_pll_ops = {
+	.set_rate = clk_pll_set_rate,
+	.determine_rate = clk_pll_determine_rate,
+	.recalc_rate = clk_pll_recalc_rate,
+};
+
+void __init hi3519av100_clk_register_pll(struct hi3519av100_pll_clock *clks,
+					 int nums, struct hisi_clock_data *data)
+{
+	int i;
+	void __iomem *base = NULL;
+
+	if (clks == NULL || data == NULL)
+		return;
+
+	base = data->base;
+
+	for (i = 0; i < nums; i++) {
+		struct hi3519av100_clk_pll *p_clk = NULL;
+		struct clk *clk = NULL;
+		struct clk_init_data init;
+
+		p_clk = kzalloc(sizeof(*p_clk), GFP_KERNEL);
+		if (!p_clk)
+			return;
+
+		init.name = clks[i].name;
+		init.flags = CLK_IS_BASIC | CLK_SET_RATE_PARENT;
+		init.parent_names =
+			(clks[i].parent_name ? &clks[i].parent_name : NULL);
+		init.num_parents = (clks[i].parent_name ? 1 : 0);
+		init.ops = &clk_pll_ops;
+
+		p_clk->ctrl_reg1 = base + clks[i].ctrl_reg1;
+		p_clk->frac_shift = clks[i].frac_shift;
+		p_clk->frac_width = clks[i].frac_width;
+		p_clk->postdiv1_shift = clks[i].postdiv1_shift;
+		p_clk->postdiv1_width = clks[i].postdiv1_width;
+		p_clk->postdiv2_shift = clks[i].postdiv2_shift;
+		p_clk->postdiv2_width = clks[i].postdiv2_width;
+
+		p_clk->ctrl_reg2 = base + clks[i].ctrl_reg2;
+		p_clk->fbdiv_shift = clks[i].fbdiv_shift;
+		p_clk->fbdiv_width = clks[i].fbdiv_width;
+		p_clk->refdiv_shift = clks[i].refdiv_shift;
+		p_clk->refdiv_width = clks[i].refdiv_width;
+		p_clk->hw.init = &init;
+
+		clk = clk_register(NULL, &p_clk->hw);
+		if (IS_ERR(clk)) {
+			kfree(p_clk);
+			pr_err("%s: failed to register clock %s\n",
+			       __func__, clks[i].name);
+			continue;
+		}
+
+		data->clk_data.clks[clks[i].id] = clk;
+	}
+}
+
+static void __init hi3519av100_clk_init(struct device_node *np)
+{
+	struct hisi_clock_data *clk_data;
+
+	clk_data = hisi_clk_init(np, HI3519AV100_NR_CLKS);
+	if (!clk_data)
+		return;
+
+	if (IS_ENABLED(CONFIG_RESET_CONTROLLER))
+		hibvt_reset_init(np, HI3519AV100_NR_RSTS);
+
+	hisi_clk_register_fixed_rate(hi3519av100_fixed_rate_clks,
+				     ARRAY_SIZE(hi3519av100_fixed_rate_clks),
+				     clk_data);
+	hisi_clk_register_mux(hi3519av100_mux_clks,
+			      ARRAY_SIZE(hi3519av100_mux_clks), clk_data);
+	hisi_clk_register_fixed_factor(hi3519av100_fixed_factor_clks,
+				       ARRAY_SIZE(hi3519av100_fixed_factor_clks),
+				       clk_data);
+	hisi_clk_register_gate(hi3519av100_gate_clks,
+			       ARRAY_SIZE(hi3519av100_gate_clks), clk_data);
+
+	hi3519av100_clk_register_pll(hi3519av100_pll_clks,
+				     ARRAY_SIZE(hi3519av100_pll_clks), clk_data);
+}
+
+CLK_OF_DECLARE(hi3519av100_clk, "hisilicon,hi3519av100-clock",
+	       hi3519av100_clk_init);
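Note on the PLL math above: hi3519av100_calc_pll() splits a requested rate into an integer feedback divider (fbdiv, effectively in MHz because refdiv is fixed at 24 against the 24 MHz reference) and a 24-bit fractional field, and clk_pll_recalc_rate() inverts that per the comment rate = 24000000 * (fbdiv + frac / 2^24) / refdiv. The stand-alone program below re-implements the same integer arithmetic in user space so the round-trip can be sanity-checked; the constants come from the driver, everything else is illustrative.

/*
 * Illustrative only: user-space model of the fbdiv/frac split used by
 * hi3519av100_calc_pll() and its inverse. Assumes a 24 MHz reference,
 * refdiv = 24 and a 24-bit fractional field, as in the driver above.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t rate = 1188000000ULL;		/* requested rate in Hz */
	uint64_t fbdiv = rate / 1000000;	/* integer MHz -> feedback divider */
	uint64_t rem = rate % 1000000;		/* sub-MHz remainder */
	uint64_t frac = (rem << 24) / 1000000;	/* 24-bit fractional field */
	uint64_t refdiv = 24;

	/* rate = 24 MHz * (fbdiv + frac / 2^24) / refdiv */
	uint64_t back = (24000000ULL * ((fbdiv << 24) + frac) / refdiv) >> 24;

	printf("fbdiv=%llu frac=%llu -> %llu Hz\n",
	       (unsigned long long)fbdiv, (unsigned long long)frac,
	       (unsigned long long)back);
	return 0;
}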
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/clk.c
Changed
@@ -82,6 +82,10 @@
 	of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data->clk_data);
 	return clk_data;
 err_data:
+	if (base) {
+		iounmap(base);
+		base = NULL;
+	}
 	kfree(clk_data);
 err:
 	return NULL;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/crg-hi3516cv300.c
Changed
@@ -170,7 +170,7 @@
 	return ERR_PTR(ret);
 }
 
-static void hi3516cv300_clk_unregister(struct platform_device *pdev)
+static void hi3516cv300_clk_unregister(const struct platform_device *pdev)
 {
 	struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
 
@@ -229,7 +229,7 @@
 	return ERR_PTR(ret);
 }
 
-static void hi3516cv300_sysctrl_clk_unregister(struct platform_device *pdev)
+static void hi3516cv300_sysctrl_clk_unregister(const struct platform_device *pdev)
 {
 	struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/crg-hi3798cv200.c
Changed
@@ -251,7 +251,7 @@
 	return ERR_PTR(ret);
 }
 
-static void hi3798cv200_clk_unregister(struct platform_device *pdev)
+static void hi3798cv200_clk_unregister(const struct platform_device *pdev)
 {
 	struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
 
@@ -316,7 +316,7 @@
 	return ERR_PTR(ret);
 }
 
-static void hi3798cv200_sysctrl_clk_unregister(struct platform_device *pdev)
+static void hi3798cv200_sysctrl_clk_unregister(const struct platform_device *pdev)
 {
 	struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/crg.h
Changed
@@ -13,7 +13,7 @@
 
 struct hisi_crg_funcs {
 	struct hisi_clock_data*	(*register_clks)(struct platform_device *pdev);
-	void (*unregister_clks)(struct platform_device *pdev);
+	void (*unregister_clks)(const struct platform_device *pdev);
 };
 
 struct hisi_crg_dev {
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/reset.c
Changed
@@ -87,6 +87,36 @@
 	.deassert	= hisi_reset_deassert,
 };
 
+#ifdef CONFIG_ARCH_HISI_BVT
+int __init hibvt_reset_init(struct device_node *np,
+			    int nr_rsts)
+{
+	struct hisi_reset_controller *rstc;
+
+	rstc = kzalloc(sizeof(*rstc), GFP_KERNEL);
+	if (!rstc)
+		return -ENOMEM;
+
+	rstc->membase = of_iomap(np, 0);
+	if (!rstc->membase) {
+		kfree(rstc);
+		return -EINVAL;
+	}
+
+	spin_lock_init(&rstc->lock);
+
+	rstc->rcdev.owner = THIS_MODULE;
+	rstc->rcdev.nr_resets = nr_rsts;
+	rstc->rcdev.ops = &hisi_reset_ops;
+	rstc->rcdev.of_node = np;
+	rstc->rcdev.of_reset_n_cells = 2;
+	rstc->rcdev.of_xlate = hisi_reset_of_xlate;
+
+	return reset_controller_register(&rstc->rcdev);
+}
+EXPORT_SYMBOL_GPL(hibvt_reset_init);
+#endif
+
 struct hisi_reset_controller *hisi_reset_init(struct platform_device *pdev)
 {
 	struct hisi_reset_controller *rstc;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/hisilicon/reset.h
Changed
@@ -11,6 +11,9 @@
 
 #ifdef CONFIG_RESET_CONTROLLER
 struct hisi_reset_controller *hisi_reset_init(struct platform_device *pdev);
+#ifdef CONFIG_ARCH_HISI_BVT
+int __init hibvt_reset_init(struct device_node *np, int nr_rsts);
+#endif
 void hisi_reset_exit(struct hisi_reset_controller *rstc);
 #else
 static inline
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/clk/imx/clk.c
Changed
@@ -173,6 +173,8 @@
 	int i;
 
 	imx_uart_clocks = kcalloc(clk_count, sizeof(struct clk *), GFP_KERNEL);
+	if (!imx_uart_clocks)
+		return;
 
 	if (!of_stdout)
 		return;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/cpuidle/cpuidle-haltpoll.c
Changed
@@ -18,9 +18,17 @@
 #include <linux/kvm_para.h>
 #include <linux/cpuidle_haltpoll.h>
 
-static bool force __read_mostly;
-module_param(force, bool, 0444);
-MODULE_PARM_DESC(force, "Load unconditionally");
+static bool force;
+MODULE_PARM_DESC(force, "bool, enable haltpoll driver");
+static int enable_haltpoll_driver(const char *val, const struct kernel_param *kp);
+static int register_haltpoll_driver(void);
+static void unregister_haltpoll_driver(void);
+
+static const struct kernel_param_ops enable_haltpoll_ops = {
+	.set = enable_haltpoll_driver,
+	.get = param_get_bool,
+};
+module_param_cb(force, &enable_haltpoll_ops, &force, 0644);
 
 static struct cpuidle_device __percpu *haltpoll_cpuidle_devices;
 static enum cpuhp_state haltpoll_hp_state;
@@ -36,6 +44,42 @@
 	return index;
 }
 
+
+static int enable_haltpoll_driver(const char *val, const struct kernel_param *kp)
+{
+#ifdef CONFIG_ARM64
+	int ret;
+	bool do_enable;
+
+	if (!val)
+		return 0;
+
+	ret = strtobool(val, &do_enable);
+
+	if (ret || force == do_enable)
+		return ret;
+
+	if (do_enable) {
+		ret = register_haltpoll_driver();
+
+		if (!ret) {
+			pr_info("Enable haltpoll driver.\n");
+			force = 1;
+		} else {
+			pr_err("Fail to enable haltpoll driver.\n");
+		}
+	} else {
+		unregister_haltpoll_driver();
+		force = 0;
+		pr_info("Unregister haltpoll driver.\n");
+	}
+
+	return ret;
+#else
+	return -1;
+#endif
+}
+
 static struct cpuidle_driver haltpoll_driver = {
 	.name = "haltpoll",
 	.governor = "haltpoll",
@@ -84,22 +128,18 @@
 	return 0;
 }
 
-static void haltpoll_uninit(void)
-{
-	if (haltpoll_hp_state)
-		cpuhp_remove_state(haltpoll_hp_state);
-	cpuidle_unregister_driver(&haltpoll_driver);
-
-	free_percpu(haltpoll_cpuidle_devices);
-	haltpoll_cpuidle_devices = NULL;
-}
 
 static bool haltpoll_want(void)
 {
 	return kvm_para_has_hint(KVM_HINTS_REALTIME);
 }
 
-static int __init haltpoll_init(void)
+static void haltpoll_uninit(void)
+{
+	unregister_haltpoll_driver();
+}
+
+static int register_haltpoll_driver(void)
 {
 	int ret;
 	struct cpuidle_driver *drv = &haltpoll_driver;
@@ -112,9 +152,6 @@
 
 	cpuidle_poll_state_init(drv);
 
-	if (!force && (!kvm_para_available() || !haltpoll_want()))
-		return -ENODEV;
-
 	ret = cpuidle_register_driver(drv);
 	if (ret < 0)
 		return ret;
@@ -137,9 +174,35 @@
 	return ret;
 }
 
+static void unregister_haltpoll_driver(void)
+{
+	if (haltpoll_hp_state)
+		cpuhp_remove_state(haltpoll_hp_state);
+	cpuidle_unregister_driver(&haltpoll_driver);
+
+	free_percpu(haltpoll_cpuidle_devices);
+	haltpoll_cpuidle_devices = NULL;
+
+}
+
+static int __init haltpoll_init(void)
+{
+	int ret = 0;
+#ifdef CONFIG_X86
+	/* Do not load haltpoll if idle= is passed */
+	if (boot_option_idle_override != IDLE_NO_OVERRIDE)
+		return -ENODEV;
+#endif
+	if (force || (haltpoll_want() && kvm_para_available()))
+		ret = register_haltpoll_driver();
+
+	return ret;
+}
+
 static void __exit haltpoll_exit(void)
 {
-	haltpoll_uninit();
+	if (haltpoll_cpuidle_devices)
+		haltpoll_uninit();
 }
 
 module_init(haltpoll_init);
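For context, the hunks above convert a plain module_param() into module_param_cb() so that writing the force parameter at runtime triggers driver (un)registration rather than just flipping a bool. A minimal, self-contained module showing the same kernel_param_ops pattern, with made-up names, might look like this:

/*
 * Sketch of the module_param_cb() pattern: a bool whose setter runs side
 * effects instead of only storing the value. All names are illustrative.
 */
#include <linux/module.h>
#include <linux/moduleparam.h>

static bool demo_on;

static int demo_on_set(const char *val, const struct kernel_param *kp)
{
	bool want;
	int ret = kstrtobool(val, &want);

	if (ret)
		return ret;
	if (want != demo_on)
		pr_info("demo_on: %d -> %d\n", demo_on, want);
	demo_on = want;	/* a real driver would register/unregister here */
	return 0;
}

static const struct kernel_param_ops demo_on_ops = {
	.set = demo_on_set,
	.get = param_get_bool,	/* reads still behave like a plain bool param */
};
module_param_cb(demo_on, &demo_on_ops, &demo_on, 0644);

MODULE_LICENSE("GPL");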
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/cpuidle/governors/haltpoll.c
Changed
@@ -39,7 +39,7 @@
 static bool guest_halt_poll_allow_shrink __read_mostly = true;
 module_param(guest_halt_poll_allow_shrink, bool, 0644);
 
-static bool enable __read_mostly;
+static bool enable __read_mostly = true;
 module_param(enable, bool, 0444);
 MODULE_PARM_DESC(enable, "Load unconditionally");
 
@@ -144,7 +144,7 @@
 
 static int __init init_haltpoll(void)
 {
-	if (kvm_para_available() || enable)
+	if (enable)
 		return cpuidle_register_governor(&haltpoll_governor);
 
 	return 0;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/crypto/virtio/virtio_crypto_core.c
Changed
@@ -404,7 +404,7 @@
 free_engines:
 	virtcrypto_clear_crypto_engines(vcrypto);
 free_vqs:
-	vcrypto->vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	virtcrypto_del_vqs(vcrypto);
 free_dev:
 	virtcrypto_devmgr_rm_dev(vcrypto);
@@ -436,7 +436,7 @@
 	if (virtcrypto_dev_started(vcrypto))
 		virtcrypto_dev_stop(vcrypto);
 
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	virtcrypto_free_unused_reqs(vcrypto);
 	virtcrypto_clear_crypto_engines(vcrypto);
 	virtcrypto_del_vqs(vcrypto);
@@ -456,7 +456,7 @@
 {
 	struct virtio_crypto *vcrypto = vdev->priv;
 
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	virtcrypto_free_unused_reqs(vcrypto);
 	if (virtcrypto_dev_started(vcrypto))
 		virtcrypto_dev_stop(vcrypto);
@@ -492,7 +492,7 @@
 free_engines:
 	virtcrypto_clear_crypto_engines(vcrypto);
 free_vqs:
-	vcrypto->vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	virtcrypto_del_vqs(vcrypto);
 	return err;
 }
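These hunks, and the matching ones in virtgpu_kms.c, virtio-iommu.c and caif_virtio.c below, switch open-coded vdev->config->reset(vdev) calls to the virtio_reset_device() wrapper introduced upstream around v5.17. Upstream the wrapper is essentially the following; the backported tree is assumed to carry an equivalent definition in drivers/virtio/virtio.c:

/*
 * Sketch of the helper the callers above now use: one central place to
 * reset a virtio device, so later synchronization fixes land in one spot.
 */
void virtio_reset_device(struct virtio_device *dev)
{
	dev->config->reset(dev);
}
EXPORT_SYMBOL_GPL(virtio_reset_device);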
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/firmware/efi/libstub/efi-stub.c
Changed
@@ -40,7 +40,7 @@
 
 #ifdef CONFIG_ARM64
 # define EFI_RT_VIRTUAL_LIMIT	DEFAULT_MAP_WINDOW_64
-#elif defined(CONFIG_RISCV) || defined(CONFIG_LOONGARCH)
+#elif defined(CONFIG_LOONGARCH)
 # define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE_MIN
 #else
 # define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
Changed
@@ -408,6 +408,9 @@
 		return -ENODEV;
 	/* same everything but the other direction */
 	props2 = kmemdup(props, sizeof(*props2), GFP_KERNEL);
+	if (!props2)
+		return -ENOMEM;
+
 	props2->node_from = id_to;
 	props2->node_to = id_from;
 	props2->kobj = NULL;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/gpu/drm/i915/gt/intel_gt.c
Changed
@@ -745,6 +745,10 @@
 		if (!i915_mmio_reg_offset(rb.reg))
 			continue;
 
+		if (INTEL_GEN(i915) == 12 && (engine->class == VIDEO_DECODE_CLASS ||
+		    engine->class == VIDEO_ENHANCEMENT_CLASS))
+			rb.bit = _MASKED_BIT_ENABLE(rb.bit);
+
 		intel_uncore_write_fw(uncore, rb.reg, rb.bit);
 	}
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/gpu/drm/virtio/virtgpu_kms.c
Changed
@@ -232,7 +232,7 @@
 	flush_work(&vgdev->ctrlq.dequeue_work);
 	flush_work(&vgdev->cursorq.dequeue_work);
 	flush_work(&vgdev->config_changed_work);
-	vgdev->vdev->config->reset(vgdev->vdev);
+	virtio_reset_device(vgdev->vdev);
 	vgdev->vdev->config->del_vqs(vgdev->vdev);
 }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/hwtracing/coresight/Kconfig
Changed
@@ -200,3 +200,7 @@
 	  called ultrasoc-smb.
 
 endif
+
+config ACPI_TRBE
+	depends on ARM64 && ACPI
+	def_bool y
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/hwtracing/coresight/coresight-etm4x-core.c
Changed
@@ -428,8 +428,10 @@
 	etm4x_relaxed_write32(csa, config->vipcssctlr, TRCVIPCSSCTLR);
 	for (i = 0; i < drvdata->nrseqstate - 1; i++)
 		etm4x_relaxed_write32(csa, config->seq_ctrl[i], TRCSEQEVRn(i));
-	etm4x_relaxed_write32(csa, config->seq_rst, TRCSEQRSTEVR);
-	etm4x_relaxed_write32(csa, config->seq_state, TRCSEQSTR);
+	if (drvdata->nrseqstate) {
+		etm4x_relaxed_write32(csa, config->seq_rst, TRCSEQRSTEVR);
+		etm4x_relaxed_write32(csa, config->seq_state, TRCSEQSTR);
+	}
 	etm4x_relaxed_write32(csa, config->ext_inp, TRCEXTINSELR);
 
 	for (i = 0; i < drvdata->nr_cntr; i++) {
@@ -1622,9 +1624,10 @@
 	for (i = 0; i < drvdata->nrseqstate - 1; i++)
 		state->trcseqevr[i] = etm4x_read32(csa, TRCSEQEVRn(i));
 
-	state->trcseqrstevr = etm4x_read32(csa, TRCSEQRSTEVR);
-
-	state->trcseqstr = etm4x_read32(csa, TRCSEQSTR);
+	if (drvdata->nrseqstate) {
+		state->trcseqrstevr = etm4x_read32(csa, TRCSEQRSTEVR);
+		state->trcseqstr = etm4x_read32(csa, TRCSEQSTR);
+	}
 	state->trcextinselr = etm4x_read32(csa, TRCEXTINSELR);
 
 	for (i = 0; i < drvdata->nr_cntr; i++) {
@@ -1751,8 +1754,10 @@
 	for (i = 0; i < drvdata->nrseqstate - 1; i++)
 		etm4x_relaxed_write32(csa, state->trcseqevr[i], TRCSEQEVRn(i));
 
-	etm4x_relaxed_write32(csa, state->trcseqrstevr, TRCSEQRSTEVR);
-	etm4x_relaxed_write32(csa, state->trcseqstr, TRCSEQSTR);
+	if (drvdata->nrseqstate) {
+		etm4x_relaxed_write32(csa, state->trcseqrstevr, TRCSEQRSTEVR);
+		etm4x_relaxed_write32(csa, state->trcseqstr, TRCSEQSTR);
+	}
 	etm4x_relaxed_write32(csa, state->trcextinselr, TRCEXTINSELR);
 
 	for (i = 0; i < drvdata->nr_cntr; i++) {
@@ -2135,12 +2140,19 @@
 	{}
 };
 
+static const struct acpi_device_id static_ete_ids[] = {
+	{"HISI0461", 0},
+	{}
+};
+MODULE_DEVICE_TABLE(acpi, static_ete_ids);
+
 static struct platform_driver etm4_platform_driver = {
 	.probe		= etm4_probe_platform_dev,
 	.remove		= etm4_remove_platform_dev,
 	.driver			= {
 		.name			= "coresight-etm4x",
 		.of_match_table		= etm4_sysreg_match,
+		.acpi_match_table	= static_ete_ids,
 		.suppress_bind_attrs	= true,
 	},
 };
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/hwtracing/coresight/coresight-platform.c
Changed
@@ -844,15 +844,15 @@
 	struct coresight_platform_data *pdata = NULL;
 	struct fwnode_handle *fwnode = dev_fwnode(dev);
 
-	if (IS_ERR_OR_NULL(fwnode))
-		goto error;
-
 	pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
 	if (!pdata) {
 		ret = -ENOMEM;
 		goto error;
 	}
 
+	if (IS_ERR_OR_NULL(fwnode))
+		return pdata;
+
 	if (is_of_node(fwnode))
 		ret = of_get_coresight_platform_data(dev, pdata);
 	else if (is_acpi_device_node(fwnode))
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/hwtracing/coresight/coresight-trbe.c
Changed
@@ -1094,6 +1094,7 @@
 
 static void arm_trbe_remove_cpuhp(struct trbe_drvdata *drvdata)
 {
+	cpuhp_state_remove_instance(drvdata->trbe_online, &drvdata->hotplug_node);
 	cpuhp_remove_multi_state(drvdata->trbe_online);
 }
 
@@ -1188,7 +1189,14 @@
 };
 MODULE_DEVICE_TABLE(of, arm_trbe_of_match);
 
+static const struct platform_device_id arm_trbe_match[] = {
+	{ ARMV9_TRBE_PDEV_NAME, 0},
+	{}
+};
+MODULE_DEVICE_TABLE(platform, arm_trbe_match);
+
 static struct platform_driver arm_trbe_driver = {
+	.id_table = arm_trbe_match,
 	.driver	= {
 		.name = DRVNAME,
 		.of_match_table = of_match_ptr(arm_trbe_of_match),
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/i2c/busses/i2c-hix5hd2.c
Changed
@@ -360,7 +360,11 @@
 	pm_runtime_get_sync(priv->dev);
 
 	for (i = 0; i < num; i++, msgs++) {
-		stop = (i == num - 1);
+		if ((i == num - 1) || (msgs->flags & I2C_M_STOP))
+			stop = 1;
+		else
+			stop = 0;
+
 		ret = hix5hd2_i2c_xfer_msg(priv, msgs, stop);
 		if (ret < 0)
 			goto out;
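The change above makes the adapter honor I2C_M_STOP, which requests a STOP condition after a message instead of a repeated START. A hypothetical caller (device address and payload invented) would set the flag like this; note that I2C_M_STOP is only valid on adapters advertising I2C_FUNC_PROTOCOL_MANGLING:

/*
 * Illustrative caller only: two writes to one client, with a forced STOP
 * between them. Payload bytes and register values are made up.
 */
#include <linux/i2c.h>

static int demo_two_writes(struct i2c_client *client)
{
	u8 a[] = { 0x10, 0x01 };
	u8 b[] = { 0x20, 0x02 };
	struct i2c_msg msgs[2] = {
		{
			.addr	= client->addr,
			.flags	= I2C_M_STOP,	/* STOP, not repeated START */
			.len	= sizeof(a),
			.buf	= a,
		},
		{
			.addr	= client->addr,
			.flags	= 0,
			.len	= sizeof(b),
			.buf	= b,
		},
	};
	int ret = i2c_transfer(client->adapter, msgs, 2);

	return ret == 2 ? 0 : (ret < 0 ? ret : -EIO);
}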
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/i2c/busses/i2c-ismt.c
Changed
@@ -507,6 +507,9 @@
 	if (read_write == I2C_SMBUS_WRITE) {
 		/* Block Write */
 		dev_dbg(dev, "I2C_SMBUS_BLOCK_DATA:  WRITE\n");
+		if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX)
+			return -EINVAL;
+
 		dma_size = data->block[0] + 1;
 		dma_direction = DMA_TO_DEVICE;
 		desc->wr_len_cmd = dma_size;
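The added check matters because data->block[0] is a caller-controlled length byte that then sizes a DMA transfer; without the bound it can overrun the adapter's fixed-size buffer. A user-space model of the same validate-before-copy pattern (the buffer size mirrors I2C_SMBUS_BLOCK_MAX, everything else is illustrative):

/* Demonstration of range-checking an untrusted length byte before it
 * sizes a copy into a fixed buffer. */
#include <stdio.h>
#include <string.h>

#define BLOCK_MAX 32	/* stand-in for I2C_SMBUS_BLOCK_MAX */

static int copy_block(unsigned char *dst, const unsigned char *block)
{
	unsigned int len = block[0];

	if (len < 1 || len > BLOCK_MAX)
		return -1;		/* reject before the copy, not after */
	memcpy(dst, block, len + 1);	/* +1 for the length byte itself */
	return 0;
}

int main(void)
{
	unsigned char src[BLOCK_MAX + 1] = { 4, 'a', 'b', 'c', 'd' };
	unsigned char dst[BLOCK_MAX + 1];

	printf("valid: %d\n", copy_block(dst, src));	/* 0 */
	src[0] = 200;	/* oversized length byte */
	printf("oversized: %d\n", copy_block(dst, src));	/* -1 */
	return 0;
}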
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/ide/ide-cd_ioctl.c
Changed
@@ -298,7 +298,7 @@
 
 	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, 0);
 	ide_req(rq)->type = ATA_PRIV_MISC;
-	rq->rq_flags = RQF_QUIET;
+	rq->rq_flags |= RQF_QUIET;
 	blk_execute_rq(drive->queue, cd->disk, rq, 0);
 	ret = scsi_req(rq)->result ? -EIO : 0;
 	blk_put_request(rq);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/infiniband/hw/hns/hns_roce_bond.c
Changed
@@ -26,26 +26,43 @@
 	return hr_dev;
 }
 
-bool hns_roce_bond_is_active(struct hns_roce_dev *hr_dev)
+static struct hns_roce_bond_group *hns_roce_get_bond_grp(struct hns_roce_dev *hr_dev)
 {
+	struct hns_roce_bond_group *bond_grp = NULL;
 	struct net_device *upper_dev;
 	struct net_device *net_dev;
 
-	if (!netif_is_lag_port(hr_dev->iboe.netdevs[0]))
-		return false;
-
 	rcu_read_lock();
+
 	upper_dev = netdev_master_upper_dev_get_rcu(hr_dev->iboe.netdevs[0]);
+
 	for_each_netdev_in_bond_rcu(upper_dev, net_dev) {
 		hr_dev = hns_roce_get_hrdev_by_netdev(net_dev);
-		if (hr_dev && hr_dev->bond_grp &&
-		    hr_dev->bond_grp->bond_state == HNS_ROCE_BOND_IS_BONDED) {
-			rcu_read_unlock();
-			return true;
+		if (hr_dev && hr_dev->bond_grp) {
+			bond_grp = hr_dev->bond_grp;
+			break;
 		}
 	}
+
 	rcu_read_unlock();
 
+	return bond_grp;
+}
+
+bool hns_roce_bond_is_active(struct hns_roce_dev *hr_dev)
+{
+	struct hns_roce_bond_group *bond_grp;
+
+	if (!netif_is_lag_port(hr_dev->iboe.netdevs[0]))
+		return false;
+
+	bond_grp = hns_roce_get_bond_grp(hr_dev);
+
+	if (bond_grp &&
+	    (bond_grp->bond_state == HNS_ROCE_BOND_REGISTERING ||
+	     bond_grp->bond_state == HNS_ROCE_BOND_IS_BONDED))
+		return true;
+
 	return false;
 }
 
@@ -61,12 +78,15 @@
 	if (!netif_is_lag_port(hr_dev->iboe.netdevs[0]))
 		return NULL;
 
-	if (!bond_grp)
-		return NULL;
+	if (!bond_grp) {
+		bond_grp = hns_roce_get_bond_grp(hr_dev);
+		if (!bond_grp)
+			return NULL;
+	}
 
 	mutex_lock(&bond_grp->bond_mutex);
 
-	if (bond_grp->bond_state != HNS_ROCE_BOND_IS_BONDED)
+	if (bond_grp->bond_state == HNS_ROCE_BOND_NOT_BONDED)
 		goto out;
 
 	if (bond_grp->tx_type == NETDEV_LAG_TX_TYPE_ACTIVEBACKUP) {
@@ -154,8 +174,8 @@
 	int ret;
 	int i;
 
-	hns_roce_bond_get_active_slave(bond_grp);
-	/* bond_grp will be kfree during uninit_instance of main_hr_dev.
+	/*
+	 * bond_grp will be kfree during uninit_instance of main_hr_dev.
 	 * Thus the main_hr_dev is switched before the uninit_instance
 	 * of the previous main_hr_dev.
 	 */
@@ -165,7 +185,7 @@
 			hns_roce_bond_uninit_client(bond_grp, i);
 	}
 
-	bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED;
+	bond_grp->bond_state = HNS_ROCE_BOND_REGISTERING;
 
 	for (i = 0; i < ROCE_BOND_FUNC_MAX; i++) {
 		net_dev = bond_grp->bond_func_info[i].net_dev;
@@ -183,17 +203,18 @@
 			}
 		}
 	}
-
 	if (!hr_dev)
 		return;
 
 	hns_roce_bond_uninit_client(bond_grp, main_func_idx);
 
+	hns_roce_bond_get_active_slave(bond_grp);
 	ret = hns_roce_cmd_bond(hr_dev, HNS_ROCE_SET_BOND);
 	if (ret) {
 		ibdev_err(&hr_dev->ib_dev, "failed to set RoCE bond!\n");
 		return;
 	}
 
+	bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED;
 	ibdev_info(&hr_dev->ib_dev, "RoCE set bond finished!\n");
 }
 
@@ -239,7 +260,6 @@
 	int ret;
 
 	hns_roce_bond_get_active_slave(bond_grp);
-	bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED;
 
 	ret = hns_roce_cmd_bond(bond_grp->main_hr_dev, HNS_ROCE_CHANGE_BOND);
 	if (ret) {
@@ -248,6 +268,7 @@
 		return;
 	}
 
+	bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED;
 	ibdev_info(&bond_grp->main_hr_dev->ib_dev,
 		   "RoCE slave changestate finished!\n");
 }
@@ -258,8 +279,6 @@
 	u8 inc_func_idx = 0;
 	int ret;
 
-	hns_roce_bond_get_active_slave(bond_grp);
-
 	while (inc_slave_map > 0) {
 		if (inc_slave_map & 1)
 			hns_roce_bond_uninit_client(bond_grp, inc_func_idx);
@@ -267,8 +286,7 @@
 		inc_func_idx++;
 	}
 
-	bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED;
-
+	hns_roce_bond_get_active_slave(bond_grp);
 	ret = hns_roce_cmd_bond(bond_grp->main_hr_dev, HNS_ROCE_CHANGE_BOND);
 	if (ret) {
 		ibdev_err(&bond_grp->main_hr_dev->ib_dev,
@@ -276,6 +294,7 @@
 		return;
 	}
 
+	bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED;
 	ibdev_info(&bond_grp->main_hr_dev->ib_dev,
 		   "RoCE slave increase finished!\n");
 }
@@ -290,8 +309,6 @@
 	int ret;
 	int i;
 
-	hns_roce_bond_get_active_slave(bond_grp);
-
-	bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED;
 
 	main_func_idx = PCI_FUNC(bond_grp->main_hr_dev->pci_dev->devfn);
 
@@ -300,6 +317,7 @@
 	for (i = 0; i < ROCE_BOND_FUNC_MAX; i++) {
 		net_dev = bond_grp->bond_func_info[i].net_dev;
 		if (!(dec_slave_map & (1 << i)) && net_dev) {
+			bond_grp->bond_state = HNS_ROCE_BOND_REGISTERING;
 			hr_dev = hns_roce_bond_init_client(bond_grp, i);
 			if (hr_dev) {
 				bond_grp->main_hr_dev = hr_dev;
@@ -321,6 +339,7 @@
 		dec_func_idx++;
 	}
 
+	hns_roce_bond_get_active_slave(bond_grp);
 	if (bond_grp->slave_map_diff & (1 << main_func_idx))
 		ret = hns_roce_cmd_bond(hr_dev, HNS_ROCE_SET_BOND);
 	else
@@ -332,6 +351,7 @@
 		return;
 	}
 
+	bond_grp->bond_state = HNS_ROCE_BOND_IS_BONDED;
 	ibdev_info(&bond_grp->main_hr_dev->ib_dev,
 		   "RoCE slave decrease finished!\n");
 }
@@ -493,13 +513,13 @@
 			struct netdev_notifier_changeupper_info *info)
 {
 	struct hns_roce_bond_group *bond_grp = hr_dev->bond_grp;
+	struct netdev_lag_upper_info *bond_upper_info = NULL;
 	struct net_device *upper_dev = info->upper_dev;
-	struct netdev_lag_upper_info *bond_upper_info;
-	u32 pre_slave_map = bond_grp->slave_map;
-	u8 pre_slave_num = bond_grp->slave_num;
 	bool changed = false;
+	u32 pre_slave_map;
+	u8 pre_slave_num;
 
-	if (!upper_dev || !netif_is_lag_master(upper_dev))
+	if (!bond_grp || !upper_dev || !netif_is_lag_master(upper_dev))
 		return false;
 
 	if (info->linking)
@@ -510,15 +530,11 @@
 	if (bond_upper_info)
 		bond_grp->tx_type = bond_upper_info->tx_type;
 
+	pre_slave_map = bond_grp->slave_map;
+	pre_slave_num = bond_grp->slave_num;
 	hns_roce_bond_info_record(bond_grp, upper_dev);
 
 	bond_grp->bond = netdev_priv(upper_dev);
-	if (!hns_roce_bond_mode_is_supported(bond_grp->tx_type) ||
-	    bond_grp->slave_num <= 1) {
-		changed = bond_grp->bond_ready;
-		bond_grp->bond_ready = false;
-		goto out;
-	}
 
 	if (bond_grp->bond_state == HNS_ROCE_BOND_NOT_BONDED) {
 		bond_grp->bond_ready = true;
@@ -533,7 +549,6 @@
 		changed = true;
 	}
 
-out:
 	mutex_unlock(&bond_grp->bond_mutex);
 
 	return changed;
@@ -560,8 +575,7 @@
 	return bond_grp;
 }
 
-static bool hns_roce_is_slave(struct net_device *bond,
-			      struct net_device *net_dev)
+static struct net_device *get_upper_dev_from_ndev(struct net_device *net_dev)
 {
 	struct net_device *upper_dev;
 
@@ -569,7 +583,14 @@
 	upper_dev = netdev_master_upper_dev_get_rcu(net_dev);
 	rcu_read_unlock();
 
-	return bond == upper_dev;
+	return upper_dev;
+}
+
+static bool hns_roce_is_slave(struct net_device *upper_dev,
+			      struct hns_roce_dev *hr_dev)
+{
+	return (hr_dev->bond_grp && upper_dev == hr_dev->bond_grp->upper_dev) ||
+	       upper_dev == get_upper_dev_from_ndev(hr_dev->iboe.netdevs[0]);
 }
 
 static bool hns_roce_is_bond_grp_exist(struct net_device *upper_dev)
@@ -590,51 +611,103 @@
 	return false;
 }
 
+static enum bond_support_type
+	check_bond_support(struct hns_roce_dev *hr_dev,
+			   struct net_device **upper_dev,
+			   struct netdev_notifier_changeupper_info *info)
+{
+	struct netdev_lag_upper_info *bond_upper_info = NULL;
+	bool bond_grp_exist = false;
+	struct net_device *net_dev;
+	bool support = true;
+	u8 slave_num = 0;
+	int bus_num = -1;
+
+	*upper_dev = info->upper_dev;
+	if (hr_dev->bond_grp || hns_roce_is_bond_grp_exist(*upper_dev))
+		bond_grp_exist = true;
+
+	if (!info->linking && !bond_grp_exist)
+		return BOND_NOT_SUPPORT;
+
+	if (info->linking)
+		bond_upper_info = info->upper_info;
+
+	if (bond_upper_info &&
+	    !hns_roce_bond_mode_is_supported(bond_upper_info->tx_type))
+		return BOND_NOT_SUPPORT;
+
+	rcu_read_lock();
+	for_each_netdev_in_bond_rcu(*upper_dev, net_dev) {
+		hr_dev = hns_roce_get_hrdev_by_netdev(net_dev);
+		if (hr_dev) {
+			slave_num++;
+			if (bus_num == -1)
+				bus_num = hr_dev->pci_dev->bus->number;
+			if (hr_dev->is_vf ||
+			    bus_num != hr_dev->pci_dev->bus->number) {
+				support = false;
+				break;
+			}
+		}
+	}
+	rcu_read_unlock();
+
+	if (slave_num <= 1)
+		support = false;
+	if (support)
+		return BOND_SUPPORT;
+
+	return bond_grp_exist ?
+	       BOND_EXISTING_NOT_SUPPORT : BOND_NOT_SUPPORT;
+}
+
 int hns_roce_bond_event(struct notifier_block *self,
 			unsigned long event, void *ptr)
 {
 	struct net_device *net_dev = netdev_notifier_info_to_dev(ptr);
 	struct hns_roce_dev *hr_dev =
 		container_of(self, struct hns_roce_dev, bond_nb);
+	enum bond_support_type support = BOND_SUPPORT;
 	struct net_device *upper_dev;
 	bool changed;
 
 	if (event != NETDEV_CHANGEUPPER && event != NETDEV_CHANGELOWERSTATE)
 		return NOTIFY_DONE;
 
-	rcu_read_lock();
-	upper_dev = netdev_master_upper_dev_get_rcu(net_dev);
-	rcu_read_unlock();
-	if (event == NETDEV_CHANGELOWERSTATE && !upper_dev &&
-	    hr_dev != hns_roce_get_hrdev_by_netdev(net_dev))
-		return NOTIFY_DONE;
-
-	if (upper_dev) {
-		if (!hns_roce_is_slave(upper_dev, hr_dev->iboe.netdevs[0]))
+	if (event == NETDEV_CHANGEUPPER) {
+		support = check_bond_support(hr_dev, &upper_dev, ptr);
+		if (support == BOND_NOT_SUPPORT)
 			return NOTIFY_DONE;
+	} else {
+		upper_dev = get_upper_dev_from_ndev(net_dev);
+	}
 
-		mutex_lock(&roce_bond_mutex);
+	if (upper_dev && !hns_roce_is_slave(upper_dev, hr_dev))
+		return NOTIFY_DONE;
+	else if (!upper_dev && hr_dev != hns_roce_get_hrdev_by_netdev(net_dev))
+		return NOTIFY_DONE;
+
+	if (event == NETDEV_CHANGEUPPER) {
 		if (!hr_dev->bond_grp) {
-			if (hns_roce_is_bond_grp_exist(upper_dev)) {
-				mutex_unlock(&roce_bond_mutex);
+			if (hns_roce_is_bond_grp_exist(upper_dev))
 				return NOTIFY_DONE;
-			}
 			hr_dev->bond_grp = hns_roce_alloc_bond_grp(hr_dev,
 								   upper_dev);
 			if (!hr_dev->bond_grp) {
 				ibdev_err(&hr_dev->ib_dev,
 					  "failed to alloc RoCE bond_grp!\n");
-				mutex_unlock(&roce_bond_mutex);
 				return NOTIFY_DONE;
 			}
 		}
-		mutex_unlock(&roce_bond_mutex);
+		if (support == BOND_EXISTING_NOT_SUPPORT) {
+			hr_dev->bond_grp->bond_ready = false;
+			hns_roce_queue_bond_work(hr_dev, HZ);
+			return NOTIFY_DONE;
+		}
+		changed = hns_roce_bond_upper_event(hr_dev, ptr);
+	} else {
+		changed = hns_roce_bond_lowerstate_event(hr_dev, ptr);
 	}
-
-	changed = (event == NETDEV_CHANGEUPPER) ?
-		  hns_roce_bond_upper_event(hr_dev, ptr) :
-		  hns_roce_bond_lowerstate_event(hr_dev, ptr);
-
 	if (changed)
 		hns_roce_queue_bond_work(hr_dev, HZ);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/infiniband/hw/hns/hns_roce_bond.h
Changed
@@ -17,9 +17,20 @@
 	BOND_MODE_2_4,
 };
 
+enum bond_support_type {
+	BOND_NOT_SUPPORT,
+	/*
+	 * bond_grp already exists, but in the current
+	 * conditions it's no longer supported
+	 */
+	BOND_EXISTING_NOT_SUPPORT,
+	BOND_SUPPORT,
+};
+
 enum hns_roce_bond_state {
 	HNS_ROCE_BOND_NOT_BONDED,
 	HNS_ROCE_BOND_IS_BONDED,
+	HNS_ROCE_BOND_REGISTERING,
 	HNS_ROCE_BOND_SLAVE_INC,
 	HNS_ROCE_BOND_SLAVE_DEC,
 	HNS_ROCE_BOND_SLAVE_CHANGESTATE,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/infiniband/hw/hns/hns_roce_main.c
Changed
@@ -47,6 +47,30 @@
 #include "hns_roce_dca.h"
 #include "hns_roce_debugfs.h"
 
+static struct net_device *hns_roce_get_netdev(struct ib_device *ib_dev,
+					      u8 port_num)
+{
+	struct hns_roce_dev *hr_dev = to_hr_dev(ib_dev);
+	struct net_device *ndev;
+
+	if (port_num < 1 || port_num > hr_dev->caps.num_ports)
+		return NULL;
+
+	ndev = hr_dev->hw->get_bond_netdev(hr_dev);
+
+	rcu_read_lock();
+
+	if (!ndev)
+		ndev = hr_dev->iboe.netdevs[port_num - 1];
+
+	if (ndev)
+		dev_hold(ndev);
+
+	rcu_read_unlock();
+
+	return ndev;
+}
+
 static int hns_roce_set_mac(struct hns_roce_dev *hr_dev, u32 port,
 			    const u8 *addr)
 {
@@ -677,6 +701,7 @@
 	.disassociate_ucontext = hns_roce_disassociate_ucontext,
 	.get_dma_mr = hns_roce_get_dma_mr,
 	.get_link_layer = hns_roce_get_link_layer,
+	.get_netdev = hns_roce_get_netdev,
 	.get_port_immutable = hns_roce_port_immutable,
 	.mmap = hns_roce_mmap,
 	.mmap_free = hns_roce_free_mmap,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/infiniband/ulp/iser/iscsi_iser.c
Changed
@@ -979,22 +979,16 @@
 	.track_queue_depth	= 1,
 };
 
-static struct iscsi_transport_expand iscsi_iser_expand = {
-	.unbind_conn = iscsi_conn_unbind,
-};
-
 static struct iscsi_transport iscsi_iser_transport = {
 	.owner = THIS_MODULE,
 	.name = "iser",
-	.caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_TEXT_NEGO
-		| CAP_OPS_EXPAND,
+	.caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_TEXT_NEGO,
 	/* session management */
 	.create_session = iscsi_iser_session_create,
 	.destroy_session = iscsi_iser_session_destroy,
 	/* connection management */
 	.create_conn = iscsi_iser_conn_create,
 	.bind_conn = iscsi_iser_conn_bind,
-	.ops_expand = &iscsi_iser_expand,
 	.destroy_conn = iscsi_conn_teardown,
 	.attr_is_visible = iser_attr_is_visible,
 	.set_param = iscsi_iser_set_param,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
Changed
@@ -3662,29 +3662,6 @@
 	return smmu_domain->ssid ?: -EINVAL;
 }
 
-#ifdef CONFIG_SMMU_BYPASS_DEV
-static int arm_smmu_device_domain_type(struct device *dev)
-{
-	int i;
-	struct pci_dev *pdev;
-
-	if (!dev_is_pci(dev))
-		return 0;
-
-	pdev = to_pci_dev(dev);
-	for (i = 0; i < smmu_bypass_devices_num; i++) {
-		if ((smmu_bypass_devices[i].vendor == pdev->vendor) &&
-		    (smmu_bypass_devices[i].device == pdev->device)) {
-			dev_info(dev, "device 0x%hx:0x%hx uses identity mapping.",
-				 pdev->vendor, pdev->device);
-			return IOMMU_DOMAIN_IDENTITY;
-		}
-	}
-
-	return 0;
-}
-#endif
-
 static int arm_smmu_set_mpam(struct arm_smmu_device *smmu,
 			     int sid, int ssid, int partid, int pmg, int s1mpam)
 {
@@ -3933,16 +3910,40 @@
 #define IS_HISI_PTT_DEVICE(pdev)	((pdev)->vendor == PCI_VENDOR_ID_HUAWEI && \
 					 (pdev)->device == 0xa12e)
 
+#ifdef CONFIG_SMMU_BYPASS_DEV
+static int arm_smmu_bypass_dev_domain_type(struct device *dev)
+{
+	int i;
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	for (i = 0; i < smmu_bypass_devices_num; i++) {
+		if ((smmu_bypass_devices[i].vendor == pdev->vendor) &&
+		    (smmu_bypass_devices[i].device == pdev->device)) {
+			dev_info(dev, "device 0x%hx:0x%hx uses identity mapping.",
+				 pdev->vendor, pdev->device);
+			return IOMMU_DOMAIN_IDENTITY;
+		}
+	}
+
+	return 0;
+}
+#endif
+
 static int arm_smmu_def_domain_type(struct device *dev)
 {
+	int ret = 0;
+
 	if (dev_is_pci(dev)) {
 		struct pci_dev *pdev = to_pci_dev(dev);
 
 		if (IS_HISI_PTT_DEVICE(pdev))
 			return IOMMU_DOMAIN_IDENTITY;
+	#ifdef CONFIG_SMMU_BYPASS_DEV
+		ret = arm_smmu_bypass_dev_domain_type(dev);
+	#endif
 	}
 
-	return 0;
+	return ret;
 }
 
 static struct iommu_ops arm_smmu_ops = {
@@ -3979,9 +3980,6 @@
 	.aux_attach_dev		= arm_smmu_aux_attach_dev,
 	.aux_detach_dev		= arm_smmu_aux_detach_dev,
 	.aux_get_pasid		= arm_smmu_aux_get_pasid,
-#ifdef CONFIG_SMMU_BYPASS_DEV
-	.def_domain_type	= arm_smmu_device_domain_type,
-#endif
 	.dev_get_config		= arm_smmu_device_get_config,
 	.dev_set_config		= arm_smmu_device_set_config,
 	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
@@ -4167,7 +4165,7 @@
 	struct pci_dev *pdev;
 	struct arm_smmu_device *smmu = (struct arm_smmu_device *)data;
 
-	if (!arm_smmu_device_domain_type(dev))
+	if (!arm_smmu_def_domain_type(dev))
 		return 0;
 
 	pdev = to_pci_dev(dev);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/iommu/intel/iommu.c
Changed
@@ -4995,8 +4995,10 @@
 	}
 
 	iommu = device_to_iommu(dev, &bus, &devfn);
-	if (!iommu)
-		return -ENODEV;
+	if (!iommu) {
+		ret = -ENODEV;
+		goto unlock;
+	}
 	info = dmar_search_domain_by_dev_info(iommu->segment, bus, devfn);
 	if (!info) {
 		pn->dev->bus->iommu_ops = &intel_iommu_ops;
@@ -5011,8 +5013,10 @@
 	}
 	if (!pn_dev) {
 		iommu = device_to_iommu(dev, &bus, &devfn);
-		if (!iommu)
-			return -ENODEV;
+		if (!iommu) {
+			ret = -ENODEV;
+			goto unlock;
+		}
 		info = dmar_search_domain_by_dev_info(iommu->segment, bus, devfn);
 		if (!info) {
 			dev->bus->iommu_ops = &intel_iommu_ops;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/iommu/virtio-iommu.c
Changed
@@ -1115,7 +1115,7 @@
 	iommu_device_unregister(&viommu->iommu);
 
 	/* Stop all virtqueues */
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	vdev->config->del_vqs(vdev);
 
 	dev_info(&vdev->dev, "device removed\n");
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/md/dm-thin-metadata.c
Changed
@@ -701,6 +701,15 @@
 		goto bad_cleanup_data_sm;
 	}
 
+	/*
+	 * For pool metadata opening process, root setting is redundant
+	 * because it will be set again in __begin_transaction(). But dm
+	 * pool aborting process really needs to get last transaction's
+	 * root to avoid accessing broken btree.
+	 */
+	pmd->root = le64_to_cpu(disk_super->data_mapping_root);
+	pmd->details_root = le64_to_cpu(disk_super->device_details_root);
+
 	__setup_btree_details(pmd);
 	dm_bm_unlock(sblock);
 
@@ -753,13 +762,15 @@
 	return r;
 }
 
-static void __destroy_persistent_data_objects(struct dm_pool_metadata *pmd)
+static void __destroy_persistent_data_objects(struct dm_pool_metadata *pmd,
+					      bool destroy_bm)
 {
 	dm_sm_destroy(pmd->data_sm);
 	dm_sm_destroy(pmd->metadata_sm);
 	dm_tm_destroy(pmd->nb_tm);
 	dm_tm_destroy(pmd->tm);
-	dm_block_manager_destroy(pmd->bm);
+	if (destroy_bm)
+		dm_block_manager_destroy(pmd->bm);
 }
 
 static int __begin_transaction(struct dm_pool_metadata *pmd)
@@ -966,7 +977,7 @@
 	}
 	pmd_write_unlock(pmd);
 	if (!pmd->fail_io)
-		__destroy_persistent_data_objects(pmd);
+		__destroy_persistent_data_objects(pmd, true);
 
 	kfree(pmd);
 	return 0;
@@ -1873,19 +1884,52 @@
 int dm_pool_abort_metadata(struct dm_pool_metadata *pmd)
 {
 	int r = -EINVAL;
+	struct dm_block_manager *old_bm = NULL, *new_bm = NULL;
+
+	/* fail_io is double-checked with pmd->root_lock held below */
+	if (unlikely(pmd->fail_io))
+		return r;
+
+	/*
+	 * Replacement block manager (new_bm) is created and old_bm destroyed outside of
+	 * pmd root_lock to avoid ABBA deadlock that would result (due to life-cycle of
+	 * shrinker associated with the block manager's bufio client vs pmd root_lock).
+	 * - must take shrinker_rwsem without holding pmd->root_lock
+	 */
+	new_bm = dm_block_manager_create(pmd->bdev, THIN_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
+					 THIN_MAX_CONCURRENT_LOCKS);
 
 	pmd_write_lock(pmd);
-	if (pmd->fail_io)
+	if (pmd->fail_io) {
+		pmd_write_unlock(pmd);
 		goto out;
+	}
 
 	__set_abort_with_changes_flags(pmd);
-	__destroy_persistent_data_objects(pmd);
-	r = __create_persistent_data_objects(pmd, false);
+	__destroy_persistent_data_objects(pmd, false);
+	old_bm = pmd->bm;
+	if (IS_ERR(new_bm)) {
+		DMERR("could not create block manager during abort");
+		pmd->bm = NULL;
+		r = PTR_ERR(new_bm);
+		goto out_unlock;
+	}
+
+	pmd->bm = new_bm;
+	r = __open_or_format_metadata(pmd, false);
+	if (r) {
+		pmd->bm = NULL;
+		goto out_unlock;
+	}
+	new_bm = NULL;
+out_unlock:
 	if (r)
 		pmd->fail_io = true;
-
-out:
 	pmd_write_unlock(pmd);
+	dm_block_manager_destroy(old_bm);
+out:
+	if (new_bm && !IS_ERR(new_bm))
+		dm_block_manager_destroy(new_bm);
 
 	return r;
 }
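The reworked dm_pool_abort_metadata() above follows a general lock-ordering recipe: construct the replacement object and destroy the old one outside the contended lock (because their life-cycle takes other locks, here the shrinker lock behind the bufio client), and hold the lock only for the pointer swap. A stand-alone user-space sketch of the recipe, names invented:

/*
 * Illustrative only: allocate outside the lock, swap under it, free
 * outside it, so the lock never nests inside allocation/teardown paths.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t root_lock = PTHREAD_MUTEX_INITIALIZER;
static int *live;			/* protected by root_lock */

static int swap_in_fresh(void)
{
	int *fresh = malloc(sizeof(*fresh));	/* may block/take other locks: do it unlocked */
	int *old;

	if (!fresh)
		return -1;
	*fresh = 42;

	pthread_mutex_lock(&root_lock);
	old = live;
	live = fresh;				/* only the pointer swap is under the lock */
	pthread_mutex_unlock(&root_lock);

	free(old);				/* teardown also unlocked */
	return 0;
}

int main(void)
{
	swap_in_fresh();
	printf("live=%d\n", *live);
	free(live);
	return 0;
}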
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/media/dvb-core/dmxdev.c
Changed
@@ -800,6 +800,11 @@
 	if (mutex_lock_interruptible(&dmxdev->mutex))
 		return -ERESTARTSYS;
 
+	if (dmxdev->exit) {
+		mutex_unlock(&dmxdev->mutex);
+		return -ENODEV;
+	}
+
 	for (i = 0; i < dmxdev->filternum; i++)
 		if (dmxdev->filter[i].state == DMXDEV_STATE_FREE)
 			break;
@@ -1458,7 +1463,10 @@
 
 void dvb_dmxdev_release(struct dmxdev *dmxdev)
 {
+	mutex_lock(&dmxdev->mutex);
 	dmxdev->exit = 1;
+	mutex_unlock(&dmxdev->mutex);
+
 	if (dmxdev->dvbdev->users > 1) {
 		wait_event(dmxdev->dvbdev->wait_queue,
 			   dmxdev->dvbdev->users == 1);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/media/rc/mceusb.c
Changed
@@ -1416,42 +1416,37 @@
 {
 	int ret;
 	struct device *dev = ir->dev;
-	char *data;
-
-	data = kzalloc(USB_CTRL_MSG_SZ, GFP_KERNEL);
-	if (!data) {
-		dev_err(dev, "%s: memory allocation failed!", __func__);
-		return;
-	}
+	char data[USB_CTRL_MSG_SZ];
 
 	/*
 	 * This is a strange one. Windows issues a set address to the device
 	 * on the receive control pipe and expect a certain value pair back
 	 */
-	ret = usb_control_msg(ir->usbdev, usb_rcvctrlpipe(ir->usbdev, 0),
-			      USB_REQ_SET_ADDRESS, USB_TYPE_VENDOR, 0, 0,
-			      data, USB_CTRL_MSG_SZ, 3000);
+	ret = usb_control_msg_recv(ir->usbdev, 0, USB_REQ_SET_ADDRESS,
+				   USB_DIR_IN | USB_TYPE_VENDOR,
+				   0, 0, data, USB_CTRL_MSG_SZ, 3000,
+				   GFP_KERNEL);
 	dev_dbg(dev, "set address - ret = %d", ret);
 	dev_dbg(dev, "set address - data[0] = %d, data[1] = %d",
 		data[0], data[1]);
 
 	/* set feature: bit rate 38400 bps */
-	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
-			      USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
-			      0xc04e, 0x0000, NULL, 0, 3000);
+	ret = usb_control_msg_send(ir->usbdev, 0,
+				   USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
+				   0xc04e, 0x0000, NULL, 0, 3000, GFP_KERNEL);
 
 	dev_dbg(dev, "set feature - ret = %d", ret);
 
 	/* bRequest 4: set char length to 8 bits */
-	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
-			      4, USB_TYPE_VENDOR,
-			      0x0808, 0x0000, NULL, 0, 3000);
+	ret = usb_control_msg_send(ir->usbdev, 0,
+				   4, USB_TYPE_VENDOR,
+				   0x0808, 0x0000, NULL, 0, 3000, GFP_KERNEL);
 	dev_dbg(dev, "set char length - retB = %d", ret);
 
 	/* bRequest 2: set handshaking to use DTR/DSR */
-	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
-			      2, USB_TYPE_VENDOR,
-			      0x0000, 0x0100, NULL, 0, 3000);
+	ret = usb_control_msg_send(ir->usbdev, 0,
+				   2, USB_TYPE_VENDOR,
+				   0x0000, 0x0100, NULL, 0, 3000, GFP_KERNEL);
 	dev_dbg(dev, "set handshake - retC = %d", ret);
 
 	/* device resume */
@@ -1459,8 +1454,6 @@
 
 	/* get hw/sw revision? */
 	mce_command_out(ir, GET_REVISION, sizeof(GET_REVISION));
-
-	kfree(data);
 }
 
 static void mceusb_gen2_init(struct mceusb_dev *ir)
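The mceusb hunk above leans on usb_control_msg_recv()/usb_control_msg_send(), available since v5.10, which bounce data through an internal DMA-safe buffer. That is why the on-stack data[] array is legal here where it would not be with raw usb_control_msg(), and why short transfers come back as errors. A minimal, hypothetical read using the same helper:

/* Illustrative only: the bRequest value and 2-byte reply are invented. */
#include <linux/usb.h>

static int demo_vendor_read(struct usb_device *udev)
{
	u8 buf[2];	/* stack buffer is fine with the _recv helper */
	int ret;

	ret = usb_control_msg_recv(udev, 0, 0x01 /* made-up bRequest */,
				   USB_DIR_IN | USB_TYPE_VENDOR,
				   0, 0, buf, sizeof(buf), 3000, GFP_KERNEL);
	if (ret)
		return ret;	/* negative errno; short reads already rejected */

	return buf[0] | (buf[1] << 8);
}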
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/misc/sgi-gru/grufault.c
Changed
@@ -648,6 +648,7 @@
 	if ((cb & (GRU_HANDLE_STRIDE - 1)) || ucbnum >= GRU_NUM_CB)
 		return -EINVAL;
 
+again:
 	gts = gru_find_lock_gts(cb);
 	if (!gts)
 		return -EINVAL;
@@ -656,7 +657,11 @@
 	if (ucbnum >= gts->ts_cbr_au_count * GRU_CBR_AU_SIZE)
 		goto exit;
 
-	gru_check_context_placement(gts);
+	if (gru_check_context_placement(gts)) {
+		gru_unlock_gts(gts);
+		gru_unload_context(gts, 1);
+		goto again;
+	}
 
 	/*
 	 * CCH may contain stale data if ts_force_cch_reload is set.
@@ -874,7 +879,11 @@
 		} else {
 			gts->ts_user_blade_id = req.val1;
 			gts->ts_user_chiplet_id = req.val0;
-			gru_check_context_placement(gts);
+			if (gru_check_context_placement(gts)) {
+				gru_unlock_gts(gts);
+				gru_unload_context(gts, 1);
+				return ret;
+			}
 		}
 		break;
 	case sco_gseg_owner:
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/misc/sgi-gru/grumain.c
Changed
@@ -716,9 +716,10 @@
  * chiplet. Misassignment can occur if the process migrates to a different
  * blade or if the user changes the selected blade/chiplet.
  */
-void gru_check_context_placement(struct gru_thread_state *gts)
+int gru_check_context_placement(struct gru_thread_state *gts)
 {
 	struct gru_state *gru;
+	int ret = 0;
 
 	/*
 	 * If the current task is the context owner, verify that the
@@ -726,15 +727,23 @@
 	 * references. Pthread apps use non-owner references to the CBRs.
 	 */
 	gru = gts->ts_gru;
+	/*
+	 * If gru or gts->ts_tgid_owner isn't initialized properly, return
+	 * success to indicate that the caller does not need to unload the
+	 * gru context. The caller is responsible for inspecting and
+	 * reinitializing it if needed.
+	 */
 	if (!gru || gts->ts_tgid_owner != current->tgid)
-		return;
+		return ret;
 
 	if (!gru_check_chiplet_assignment(gru, gts)) {
 		STAT(check_context_unload);
-		gru_unload_context(gts, 1);
+		ret = -EINVAL;
 	} else if (gru_retarget_intr(gts)) {
 		STAT(check_context_retarget_intr);
 	}
+
+	return ret;
 }
 
 
@@ -934,7 +943,12 @@
 
 	mutex_lock(&gts->ts_ctxlock);
 	preempt_disable();
-	gru_check_context_placement(gts);
+	if (gru_check_context_placement(gts)) {
+		preempt_enable();
+		mutex_unlock(&gts->ts_ctxlock);
+		gru_unload_context(gts, 1);
+		return VM_FAULT_NOPAGE;
+	}
 
 	if (!gts->ts_gru) {
 		STAT(load_user_context);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/misc/sgi-gru/grutables.h
Changed
@@ -637,7 +637,7 @@
 extern int gru_user_unload_context(unsigned long arg);
 extern int gru_get_exception_detail(unsigned long arg);
 extern int gru_set_context_option(unsigned long address);
-extern void gru_check_context_placement(struct gru_thread_state *gts);
+extern int gru_check_context_placement(struct gru_thread_state *gts);
 extern int gru_cpu_fault_map_id(void);
 extern struct vm_area_struct *gru_find_vma(unsigned long vaddr);
 extern void gru_flush_all_tlb(struct gru_state *gru);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/mtd/maps/physmap-core.c
Changed
@@ -307,6 +307,9 @@
 	const char *probe_type;
 
 	match = of_match_device(of_flash_match, &dev->dev);
+	if (!match)
+		return NULL;
+
 	probe_type = match->data;
 	if (probe_type)
 		return probe_type;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/caif/caif_virtio.c
Changed
@@ -764,7 +764,7 @@
 
 	debugfs_remove_recursive(cfv->debugfs);
 	vringh_kiov_cleanup(&cfv->ctx.riov);
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	vdev->vringh_config->del_vrhs(cfv->vdev);
 	cfv->vr_rx = NULL;
 	vdev->config->del_vqs(cfv->vdev);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hnae3.h
Changed
@@ -845,7 +845,6 @@
 
 	const struct hnae3_dcb_ops *dcb_ops;
 	u16 int_rl_setting;
-	enum pkt_hash_types rss_type;
 	void __iomem *io_base;
 };
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h
Changed
@@ -294,8 +294,8 @@
 	HCLGE_PPP_CMD0_INT_CMD = 0x2100,
 	HCLGE_PPP_CMD1_INT_CMD = 0x2101,
 	HCLGE_MAC_ETHERTYPE_IDX_RD = 0x2105,
-	HCLGE_OPC_WOL_CFG = 0x2200,
 	HCLGE_OPC_WOL_GET_SUPPORTED_MODE = 0x2201,
+	HCLGE_OPC_WOL_CFG = 0x2202,
 	HCLGE_NCSI_INT_EN = 0x2401,
 
 	/* ROH MAC commands */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_rss.c
Changed
@@ -191,23 +191,6 @@
 	return HCLGE_COMM_RSS_KEY_SIZE;
 }
 
-void hclge_comm_get_rss_type(struct hnae3_handle *nic,
-			     struct hclge_comm_rss_tuple_cfg *rss_tuple_sets)
-{
-	if (rss_tuple_sets->ipv4_tcp_en ||
-	    rss_tuple_sets->ipv4_udp_en ||
-	    rss_tuple_sets->ipv4_sctp_en ||
-	    rss_tuple_sets->ipv6_tcp_en ||
-	    rss_tuple_sets->ipv6_udp_en ||
-	    rss_tuple_sets->ipv6_sctp_en)
-		nic->kinfo.rss_type = PKT_HASH_TYPE_L4;
-	else if (rss_tuple_sets->ipv4_fragment_en ||
-		 rss_tuple_sets->ipv6_fragment_en)
-		nic->kinfo.rss_type = PKT_HASH_TYPE_L3;
-	else
-		nic->kinfo.rss_type = PKT_HASH_TYPE_NONE;
-}
-
 int hclge_comm_parse_rss_hfunc(struct hclge_comm_rss_cfg *rss_cfg,
 			       const u8 hfunc, u8 *hash_algo)
 {
@@ -344,9 +327,6 @@
 	req->ipv6_sctp_en = rss_cfg->rss_tuple_sets.ipv6_sctp_en;
 	req->ipv6_fragment_en = rss_cfg->rss_tuple_sets.ipv6_fragment_en;
 
-	if (is_pf)
-		hclge_comm_get_rss_type(nic, &rss_cfg->rss_tuple_sets);
-
 	ret = hclge_comm_cmd_send(hw, &desc, 1);
 	if (ret)
 		dev_err(&hw->cmq.csq.pdev->dev,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_rss.h
Changed
@@ -95,8 +95,6 @@
 };
 
 u32 hclge_comm_get_rss_key_size(struct hnae3_handle *handle);
-void hclge_comm_get_rss_type(struct hnae3_handle *nic,
-			     struct hclge_comm_rss_tuple_cfg *rss_tuple_sets);
 void hclge_comm_rss_indir_init_cfg(struct hnae3_ae_dev *ae_dev,
 				   struct hclge_comm_rss_cfg *rss_cfg);
 int hclge_comm_get_rss_tuple(struct hclge_comm_rss_cfg *rss_cfg, int flow_type,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
Changed
@@ -110,26 +110,28 @@
 };
 MODULE_DEVICE_TABLE(pci, hns3_pci_tbl);

-#define HNS3_RX_PTYPE_ENTRY(ptype, l, s, t) \
+#define HNS3_RX_PTYPE_ENTRY(ptype, l, s, t, h) \
	{ ptype, \
	  l, \
	  CHECKSUM_##s, \
	  HNS3_L3_TYPE_##t, \
-	  1 }
+	  1, \
+	  h}

 #define HNS3_RX_PTYPE_UNUSED_ENTRY(ptype) \
-	{ ptype, 0, CHECKSUM_NONE, HNS3_L3_TYPE_PARSE_FAIL, 0 }
+	{ ptype, 0, CHECKSUM_NONE, HNS3_L3_TYPE_PARSE_FAIL, 0, \
+	  PKT_HASH_TYPE_NONE }

 static const struct hns3_rx_ptype hns3_rx_ptype_tbl[] = {
 HNS3_RX_PTYPE_UNUSED_ENTRY(0),
- HNS3_RX_PTYPE_ENTRY(1, 0, COMPLETE, ARP),
- HNS3_RX_PTYPE_ENTRY(2, 0, COMPLETE, RARP),
- HNS3_RX_PTYPE_ENTRY(3, 0, COMPLETE, LLDP),
- HNS3_RX_PTYPE_ENTRY(4, 0, COMPLETE, PARSE_FAIL),
- HNS3_RX_PTYPE_ENTRY(5, 0, COMPLETE, PARSE_FAIL),
- HNS3_RX_PTYPE_ENTRY(6, 0, COMPLETE, PARSE_FAIL),
- HNS3_RX_PTYPE_ENTRY(7, 0, COMPLETE, CNM),
- HNS3_RX_PTYPE_ENTRY(8, 0, NONE, PARSE_FAIL),
+ HNS3_RX_PTYPE_ENTRY(1, 0, COMPLETE, ARP, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(2, 0, COMPLETE, RARP, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(3, 0, COMPLETE, LLDP, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(4, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(5, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(6, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(7, 0, COMPLETE, CNM, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(8, 0, NONE, PARSE_FAIL, PKT_HASH_TYPE_NONE),
 HNS3_RX_PTYPE_UNUSED_ENTRY(9),
 HNS3_RX_PTYPE_UNUSED_ENTRY(10),
 HNS3_RX_PTYPE_UNUSED_ENTRY(11),
@@ -137,36 +139,36 @@
 HNS3_RX_PTYPE_UNUSED_ENTRY(13),
 HNS3_RX_PTYPE_UNUSED_ENTRY(14),
 HNS3_RX_PTYPE_UNUSED_ENTRY(15),
- HNS3_RX_PTYPE_ENTRY(16, 0, COMPLETE, PARSE_FAIL),
- HNS3_RX_PTYPE_ENTRY(17, 0, COMPLETE, IPV4),
- HNS3_RX_PTYPE_ENTRY(18, 0, COMPLETE, IPV4),
- HNS3_RX_PTYPE_ENTRY(19, 0, UNNECESSARY, IPV4),
- HNS3_RX_PTYPE_ENTRY(20, 0, UNNECESSARY, IPV4),
- HNS3_RX_PTYPE_ENTRY(21, 0, NONE, IPV4),
- HNS3_RX_PTYPE_ENTRY(22, 0, UNNECESSARY, IPV4),
- HNS3_RX_PTYPE_ENTRY(23, 0, NONE, IPV4),
- HNS3_RX_PTYPE_ENTRY(24, 0, NONE, IPV4),
- HNS3_RX_PTYPE_ENTRY(25, 0, UNNECESSARY, IPV4),
+ HNS3_RX_PTYPE_ENTRY(16, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(17, 0, COMPLETE, IPV4, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(18, 0, COMPLETE, IPV4, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(19, 0, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(20, 0, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(21, 0, NONE, IPV4, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(22, 0, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(23, 0, NONE, IPV4, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(24, 0, NONE, IPV4, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(25, 0, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4),
 HNS3_RX_PTYPE_UNUSED_ENTRY(26),
 HNS3_RX_PTYPE_UNUSED_ENTRY(27),
 HNS3_RX_PTYPE_UNUSED_ENTRY(28),
- HNS3_RX_PTYPE_ENTRY(29, 0, COMPLETE, PARSE_FAIL),
- HNS3_RX_PTYPE_ENTRY(30, 0, COMPLETE, PARSE_FAIL),
- HNS3_RX_PTYPE_ENTRY(31, 0, COMPLETE, IPV4),
- HNS3_RX_PTYPE_ENTRY(32, 0, COMPLETE, IPV4),
- HNS3_RX_PTYPE_ENTRY(33, 1, UNNECESSARY, IPV4),
- HNS3_RX_PTYPE_ENTRY(34, 1, UNNECESSARY, IPV4),
- HNS3_RX_PTYPE_ENTRY(35, 1, UNNECESSARY, IPV4),
- HNS3_RX_PTYPE_ENTRY(36, 0, COMPLETE, IPV4),
- HNS3_RX_PTYPE_ENTRY(37, 0, COMPLETE, IPV4),
+ HNS3_RX_PTYPE_ENTRY(29, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(30, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(31, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(32, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(33, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(34, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(35, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(36, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(37, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3),
 HNS3_RX_PTYPE_UNUSED_ENTRY(38),
- HNS3_RX_PTYPE_ENTRY(39, 0, COMPLETE, IPV6),
- HNS3_RX_PTYPE_ENTRY(40, 0, COMPLETE, IPV6),
- HNS3_RX_PTYPE_ENTRY(41, 1, UNNECESSARY, IPV6),
- HNS3_RX_PTYPE_ENTRY(42, 1, UNNECESSARY, IPV6),
- HNS3_RX_PTYPE_ENTRY(43, 1, UNNECESSARY, IPV6),
- HNS3_RX_PTYPE_ENTRY(44, 0, COMPLETE, IPV6),
- HNS3_RX_PTYPE_ENTRY(45, 0, COMPLETE, IPV6),
+ HNS3_RX_PTYPE_ENTRY(39, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(40, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(41, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(42, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(43, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(44, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(45, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3),
 HNS3_RX_PTYPE_UNUSED_ENTRY(46),
 HNS3_RX_PTYPE_UNUSED_ENTRY(47),
 HNS3_RX_PTYPE_UNUSED_ENTRY(48),
@@ -232,35 +234,35 @@
 HNS3_RX_PTYPE_UNUSED_ENTRY(108),
 HNS3_RX_PTYPE_UNUSED_ENTRY(109),
 HNS3_RX_PTYPE_UNUSED_ENTRY(110),
- HNS3_RX_PTYPE_ENTRY(111, 0, COMPLETE, IPV6),
- HNS3_RX_PTYPE_ENTRY(112, 0, COMPLETE, IPV6),
- HNS3_RX_PTYPE_ENTRY(113, 0, UNNECESSARY, IPV6),
- HNS3_RX_PTYPE_ENTRY(114, 0, UNNECESSARY, IPV6),
- HNS3_RX_PTYPE_ENTRY(115, 0, NONE, IPV6),
- HNS3_RX_PTYPE_ENTRY(116, 0, UNNECESSARY, IPV6),
- HNS3_RX_PTYPE_ENTRY(117, 0, NONE, IPV6),
- HNS3_RX_PTYPE_ENTRY(118, 0, NONE, IPV6),
- HNS3_RX_PTYPE_ENTRY(119, 0, UNNECESSARY, IPV6),
+ HNS3_RX_PTYPE_ENTRY(111, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(112, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(113, 0, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(114, 0, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(115, 0, NONE, IPV6, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(116, 0, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(117, 0, NONE, IPV6, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(118, 0, NONE, IPV6, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(119, 0, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4),
 HNS3_RX_PTYPE_UNUSED_ENTRY(120),
 HNS3_RX_PTYPE_UNUSED_ENTRY(121),
 HNS3_RX_PTYPE_UNUSED_ENTRY(122),
- HNS3_RX_PTYPE_ENTRY(123, 0, COMPLETE, PARSE_FAIL),
- HNS3_RX_PTYPE_ENTRY(124, 0, COMPLETE, PARSE_FAIL),
- HNS3_RX_PTYPE_ENTRY(125, 0, COMPLETE, IPV4),
- HNS3_RX_PTYPE_ENTRY(126, 0, COMPLETE, IPV4),
- HNS3_RX_PTYPE_ENTRY(127, 1, UNNECESSARY, IPV4),
- HNS3_RX_PTYPE_ENTRY(128, 1, UNNECESSARY, IPV4),
- HNS3_RX_PTYPE_ENTRY(129, 1, UNNECESSARY, IPV4),
- HNS3_RX_PTYPE_ENTRY(130, 0, COMPLETE, IPV4),
- HNS3_RX_PTYPE_ENTRY(131, 0, COMPLETE, IPV4),
+ HNS3_RX_PTYPE_ENTRY(123, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(124, 0, COMPLETE, PARSE_FAIL, PKT_HASH_TYPE_NONE),
+ HNS3_RX_PTYPE_ENTRY(125, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(126, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(127, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(128, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(129, 1, UNNECESSARY, IPV4, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(130, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(131, 0, COMPLETE, IPV4, PKT_HASH_TYPE_L3),
 HNS3_RX_PTYPE_UNUSED_ENTRY(132),
- HNS3_RX_PTYPE_ENTRY(133, 0, COMPLETE, IPV6),
- HNS3_RX_PTYPE_ENTRY(134, 0, COMPLETE, IPV6),
- HNS3_RX_PTYPE_ENTRY(135, 1, UNNECESSARY, IPV6),
- HNS3_RX_PTYPE_ENTRY(136, 1, UNNECESSARY, IPV6),
- HNS3_RX_PTYPE_ENTRY(137, 1, UNNECESSARY, IPV6),
- HNS3_RX_PTYPE_ENTRY(138, 0, COMPLETE, IPV6),
- HNS3_RX_PTYPE_ENTRY(139, 0, COMPLETE, IPV6),
+ HNS3_RX_PTYPE_ENTRY(133, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(134, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(135, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(136, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(137, 1, UNNECESSARY, IPV6, PKT_HASH_TYPE_L4),
+ HNS3_RX_PTYPE_ENTRY(138, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3),
+ HNS3_RX_PTYPE_ENTRY(139, 0, COMPLETE, IPV6, PKT_HASH_TYPE_L3),
 HNS3_RX_PTYPE_UNUSED_ENTRY(140),
 HNS3_RX_PTYPE_UNUSED_ENTRY(141),
 HNS3_RX_PTYPE_UNUSED_ENTRY(142),
@@ -3856,8 +3858,8 @@
 		desc_cb->reuse_flag = 1;
 	} else if (frag_size <= ring->rx_copybreak) {
 		ret = hns3_handle_rx_copybreak(skb, i, ring, pull_len, desc_cb);
-		if (ret)
-			goto out;
+		if (!ret)
+			return;
 	}

 out:
@@ -4251,15 +4253,35 @@
 }

 static void hns3_set_rx_skb_rss_type(struct hns3_enet_ring *ring,
-				     struct sk_buff *skb, u32 rss_hash)
+				     struct sk_buff *skb, u32 rss_hash,
+				     u32 l234info, u32 ol_info)
 {
-	struct hnae3_handle *handle = ring->tqp->handle;
-	enum pkt_hash_types rss_type;
+	enum pkt_hash_types rss_type = PKT_HASH_TYPE_NONE;
+	struct net_device *netdev = ring_to_netdev(ring);
+	struct hns3_nic_priv *priv = netdev_priv(netdev);

-	if (rss_hash)
-		rss_type = handle->kinfo.rss_type;
-	else
-		rss_type = PKT_HASH_TYPE_NONE;
+	if (test_bit(HNS3_NIC_STATE_RXD_ADV_LAYOUT_ENABLE, &priv->state)) {
+		u32 ptype = hnae3_get_field(ol_info, HNS3_RXD_PTYPE_M,
+					    HNS3_RXD_PTYPE_S);
+
+		rss_type = hns3_rx_ptype_tbl[ptype].hash_type;
+	} else {
+		int l3_type = hnae3_get_field(l234info, HNS3_RXD_L3ID_M,
+					      HNS3_RXD_L3ID_S);
+		int l4_type = hnae3_get_field(l234info, HNS3_RXD_L4ID_M,
+					      HNS3_RXD_L4ID_S);
+
+		if (l3_type == HNS3_L3_TYPE_IPV4 ||
+		    l3_type == HNS3_L3_TYPE_IPV6) {
+			if (l4_type == HNS3_L4_TYPE_UDP ||
+			    l4_type == HNS3_L4_TYPE_TCP ||
+			    l4_type == HNS3_L4_TYPE_SCTP)
+				rss_type = PKT_HASH_TYPE_L4;
+			else if (l4_type == HNS3_L4_TYPE_IGMP ||
+				 l4_type == HNS3_L4_TYPE_ICMP)
+				rss_type = PKT_HASH_TYPE_L3;
+		}
+	}

 	skb_set_hash(skb, rss_hash, rss_type);
 }
@@ -4362,7 +4384,8 @@

 	ring->tqp_vector->rx_group.total_bytes += len;

-	hns3_set_rx_skb_rss_type(ring, skb, le32_to_cpu(desc->rx.rss_hash));
+	hns3_set_rx_skb_rss_type(ring, skb, le32_to_cpu(desc->rx.rss_hash),
+				 l234info, ol_info);

 	return 0;
 }
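Editor's note: the hunks above give the RSS hash type two sources. With the advanced RX descriptor layout, the new hash_type column of hns3_rx_ptype_tbl[] is indexed by the hardware packet type; otherwise the driver classifies from the descriptor's L3/L4 ID fields. A minimal stand-alone sketch of that fallback decision tree (the enum names are illustrative stand-ins for the HNS3_* and PKT_HASH_TYPE_* constants in the hunk):

#include <stdio.h>

/* Stand-ins for HNS3_L3_TYPE_* / HNS3_L4_TYPE_* / PKT_HASH_TYPE_*. */
enum l3 { L3_IPV4, L3_IPV6, L3_OTHER };
enum l4 { L4_TCP, L4_UDP, L4_SCTP, L4_ICMP, L4_IGMP, L4_OTHER };
enum hash_type { HASH_NONE, HASH_L3, HASH_L4 };

/* Same decision tree as the else-branch of hns3_set_rx_skb_rss_type(). */
static enum hash_type classify(enum l3 l3t, enum l4 l4t)
{
	if (l3t != L3_IPV4 && l3t != L3_IPV6)
		return HASH_NONE;	/* non-IP: report no usable hash */
	if (l4t == L4_TCP || l4t == L4_UDP || l4t == L4_SCTP)
		return HASH_L4;		/* full 4-tuple was hashed */
	if (l4t == L4_ICMP || l4t == L4_IGMP)
		return HASH_L3;		/* only the addresses were hashed */
	return HASH_NONE;
}

int main(void)
{
	/* Prints 2 1 0: TCP/IPv4 -> L4, ICMP/IPv6 -> L3, non-IP -> none. */
	printf("%d %d %d\n", classify(L3_IPV4, L4_TCP),
	       classify(L3_IPV6, L4_ICMP), classify(L3_OTHER, L4_TCP));
	return 0;
}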
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
Changed
@@ -414,6 +414,7 @@
 	u32 ip_summed : 2;
 	u32 l3_type : 4;
 	u32 valid : 1;
+	u32 hash_type: 3;
 };

 struct ring_stats {
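Editor's note: the new 3-bit hash_type field holds an enum pkt_hash_types value, and PKT_HASH_TYPE_NONE through PKT_HASH_TYPE_L4 are only 0..3, so the field has headroom. A hypothetical compile-time guard one could keep next to the struct (not part of this patch):

#include <linux/build_bug.h>
#include <linux/skbuff.h>

/* Trips the build if enum pkt_hash_types ever outgrows the 3-bit field. */
static inline void hns3_hash_type_field_check(void)
{
	BUILD_BUG_ON(PKT_HASH_TYPE_L4 > 7);
}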
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
Changed
@@ -3331,6 +3331,7 @@
 	hdev->hw.mac.autoneg = cmd.base.autoneg;
 	hdev->hw.mac.speed = cmd.base.speed;
 	hdev->hw.mac.duplex = cmd.base.duplex;
+	linkmode_copy(hdev->hw.mac.advertising, cmd.link_modes.advertising);

 	return 0;
 }
@@ -4974,7 +4975,6 @@
 		return ret;
 	}

-	hclge_comm_get_rss_type(&vport->nic, &hdev->rss_cfg.rss_tuple_sets);
 	return 0;
 }
@@ -12199,12 +12199,10 @@
 	struct hclge_vport *vport = hclge_get_vport(handle);
 	struct hclge_dev *hdev = vport->back;
 	struct hclge_wol_info *wol_info = &hdev->hw.mac.wol;
-	u32 wol_supported;
 	u32 wol_mode;

-	wol_supported = hclge_wol_mode_from_ethtool(wol->supported);
 	wol_mode = hclge_wol_mode_from_ethtool(wol->wolopts);
-	if (wol_mode & ~wol_supported)
+	if (wol_mode & ~wol_info->wol_support_mode)
 		return -EINVAL;

 	wol_info->wol_current_mode = wol_mode;
@@ -12305,9 +12303,12 @@
 	if (ret)
 		goto err_msi_irq_uninit;

-	if (hdev->hw.mac.media_type == HNAE3_MEDIA_TYPE_COPPER &&
-	    !hnae3_dev_phy_imp_supported(hdev)) {
-		ret = hclge_mac_mdio_config(hdev);
+	if (hdev->hw.mac.media_type == HNAE3_MEDIA_TYPE_COPPER) {
+		if (hnae3_dev_phy_imp_supported(hdev))
+			ret = hclge_update_tp_port_info(hdev);
+		else
+			ret = hclge_mac_mdio_config(hdev);
+
 		if (ret)
 			goto err_msi_irq_uninit;
 	}
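Editor's note: the WoL hunk changes where the capability mask comes from. Instead of recomputing it from the ethtool request's supported field, the driver now validates against the wol_support_mode it keeps in its own state. The shape of the check, as a self-contained sketch (names taken from the hunk):

#include <linux/errno.h>
#include <linux/types.h>

/* Reject any requested WoL bit outside the driver-maintained mask. */
static int wol_mode_ok(u32 requested_mode, u32 support_mode)
{
	return (requested_mode & ~support_mode) ? -EINVAL : 0;
}

The linkmode_copy() addition in the first hunk is the complementary direction: the user-requested advertising mask is captured into driver state (hdev->hw.mac.advertising) whenever link settings change.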
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/Kconfig
Changed
@@ -16,7 +16,6 @@

 if NET_VENDOR_NETSWIFT

-source "drivers/net/ethernet/netswift/txgbe/Kconfig"
 config NGBE
 	tristate "Netswift PCI-Express Gigabit Ethernet support"
 	depends on PCI
@@ -73,4 +72,58 @@

 	  If unsure, say N.

+config TXGBE
+	tristate "Netswift PCI-Express 10Gigabit Ethernet support"
+	depends on PCI
+	imply PTP_1588_CLOCK
+	help
+	  This driver supports Netswift 10gigabit ethernet adapters.
+	  For more information on how to identify your adapter, go
+	  to <http://www.net-swift.com>
+
+	  To compile this driver as a module, choose M here. The module
+	  will be called txgbe.
+
+config TXGBE_HWMON
+	bool "Netswift PCI-Express 10Gigabit adapters HWMON support"
+	default n
+	depends on TXGBE && HWMON && !(TXGBE=y && HWMON=m)
+	help
+	  Say Y if you want to expose thermal sensor data on these devices.
+	  For more information on how to use your adapter, go
+	  to <http://www.net-swift.com>
+
+	  If unsure, say N.
+
+config TXGBE_DEBUG_FS
+	bool "Netswift PCI-Express 10Gigabit adapters debugfs support"
+	default n
+	depends on TXGBE
+	help
+	  Say Y if you want to setup debugfs for these devices.
+	  For more information on how to use your adapter, go
+	  to <http://www.net-swift.com>
+
+	  If unsure, say N.
+
+config TXGBE_POLL_LINK_STATUS
+	bool "Netswift PCI-Express 10Gigabit adapters poll mode support"
+	default n
+	depends on TXGBE
+	help
+	  Say Y if you want to turn these devices to poll mode instead of interrupt-trigged TX/RX.
+	  For more information on how to use your adapter, go
+	  to <http://www.net-swift.com>
+
+	  If unsure, say N.
+config TXGBE_SYSFS
+	bool "Netswift PCI-Express 10Gigabit adapters sysfs support"
+	default n
+	depends on TXGBE
+	help
+	  Say Y if you want to setup sysfs for these devices.
+	  For more information on how to use your adapter, go
+	  to <http://www.net-swift.com>
+
+	  If unsure, say N.
 endif # NET_VENDOR_NETSWIFT
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/Kconfig
Deleted
@@ -1,13 +0,0 @@
-#
-# Netswift driver configuration
-#
-
-config TXGBE
-	tristate "Netswift 10G Network Interface Card"
-	default n
-	depends on PCI_MSI && NUMA && PCI_IOV && DCB
-	help
-	  This driver supports Netswift 10G Ethernet cards.
-	  To compile this driver as part of the kernel, choose Y here.
-	  If unsure, choose N.
-	  The default is N.
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/Makefile
Changed
@@ -9,3 +9,7 @@
 txgbe-objs := txgbe_main.o txgbe_ethtool.o \
 	      txgbe_hw.o txgbe_phy.o txgbe_bp.o \
 	      txgbe_mbx.o txgbe_mtd.o txgbe_param.o txgbe_lib.o txgbe_ptp.o
+
+txgbe-$(CONFIG_TXGBE_HWMON) += txgbe_sysfs.o
+txgbe-$(CONFIG_TXGBE_DEBUG_FS) += txgbe_debugfs.o
+txgbe-$(CONFIG_TXGBE_SYSFS) += txgbe_sysfs.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe.h
Changed
@@ -67,6 +67,10 @@
 #define CL72_KRTR_PRBS_MODE_EN 0x2fff /*deepinsw : 512 default to 256*/
 #endif

+#ifndef TXGBE_STATIC_ITR
+#define TXGBE_STATIC_ITR 1 /* static itr configure */
+#endif
+
 #ifndef SFI_SET
 #define SFI_SET 0
 #define SFI_MAIN 24
@@ -95,7 +99,6 @@
 #define KX_POST 16
 #endif

-
 #ifndef KX4_TXRX_PIN
 #define KX4_TXRX_PIN 0 /*rx : 0xf tx : 0xf0 */
 #endif
@@ -118,8 +121,8 @@
 #define KR_CL72_TRAINING 1
 #endif

-#ifndef KR_REINITED
-#define KR_REINITED 1
+#ifndef KR_NOREINITED
+#define KR_NOREINITED 0
 #endif

 #ifndef KR_AN73_PRESET
@@ -140,17 +143,17 @@
 #define TXGBE_DEFAULT_TX_WORK DEFAULT_TX_WORK
 #else
 #define TXGBE_DEFAULT_TXD 512
-#define TXGBE_DEFAULT_TX_WORK 256
+#define TXGBE_DEFAULT_TX_WORK 256
 #endif

 #define TXGBE_MAX_TXD 8192
 #define TXGBE_MIN_TXD 128

 #if (PAGE_SIZE < 8192)
 #define TXGBE_DEFAULT_RXD 512
-#define TXGBE_DEFAULT_RX_WORK 256
+#define TXGBE_DEFAULT_RX_WORK 256
 #else
 #define TXGBE_DEFAULT_RXD 256
-#define TXGBE_DEFAULT_RX_WORK 128
+#define TXGBE_DEFAULT_RX_WORK 128
 #endif

 #define TXGBE_MAX_RXD 8192
@@ -474,6 +477,7 @@

 #define MAX_RX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1)
 #define MAX_TX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1)
+#define MAX_XDP_QUEUES (TXGBE_MAX_FDIR_INDICES + 1)

 #define TXGBE_MAX_L2A_QUEUES 4
 #define TXGBE_BAD_L2A_QUEUE 3
@@ -552,6 +556,26 @@
 	struct txgbe_ring ring[0] ____cacheline_internodealigned_in_smp;
 };

+#ifdef CONFIG_TXGBE_HWMON
+
+#define TXGBE_HWMON_TYPE_TEMP 0
+#define TXGBE_HWMON_TYPE_ALARMTHRESH 1
+#define TXGBE_HWMON_TYPE_DALARMTHRESH 2
+
+struct hwmon_attr {
+	struct device_attribute dev_attr;
+	struct txgbe_hw *hw;
+	struct txgbe_thermal_diode_data *sensor;
+	char name[19];
+};
+
+struct hwmon_buff {
+	struct device *device;
+	struct hwmon_attr *hwmon_list;
+	unsigned int n_hwmon;
+};
+#endif /* CONFIG_TXGBE_HWMON */
+
 /*
  * microsecond values for various ITR rates shifted by 2 to fit itr register
  * with the first 3 bits reserved 0
@@ -603,6 +627,13 @@
 #define TXGBE_MAC_STATE_MODIFIED 0x2
 #define TXGBE_MAC_STATE_IN_USE 0x4

+#ifdef CONFIG_TXGBE_PROCFS
+struct txgbe_therm_proc_data {
+	struct txgbe_hw *hw;
+	struct txgbe_thermal_diode_data *sensor_data;
+};
+#endif
+
 /*
  * Only for array allocations in our adapter struct.
  * we can actually assign 64 queue vectors based on our extended-extended
@@ -718,16 +749,17 @@
 	 */
 	u32 flags;
 	u32 flags2;
-	u32 vf_mode;
-	u32 backplane_an;
-	u32 an73;
-	u32 an37;
-	u32 ffe_main;
-	u32 ffe_pre;
-	u32 ffe_post;
-	u32 ffe_set;
-	u32 backplane_mode;
-	u32 backplane_auto;
+	u8 an73_mode;
+	u8 vf_mode;
+	u8 backplane_an;
+	u8 an73;
+	u8 an37;
+	u16 ffe_main;
+	u16 ffe_pre;
+	u16 ffe_post;
+	u8 ffe_set;
+	u8 backplane_mode;
+	u8 backplane_auto;

 	bool cloud_mode;
@@ -744,6 +776,10 @@
 	unsigned int num_vmdqs; /* does not include pools assigned to VFs */
 	unsigned int queues_per_pool;

+	/* XDP */
+	int num_xdp_queues;
+	struct txgbe_ring *xdp_ring[MAX_XDP_QUEUES];
+
 	/* TX */
 	struct txgbe_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp;
@@ -798,6 +834,9 @@
 	struct timer_list service_timer;
 	struct work_struct service_task;
+#ifdef CONFIG_TXGBE_POLL_LINK_STATUS
+	struct timer_list link_check_timer;
+#endif
 	struct hlist_head fdir_filter_list;
 	unsigned long fdir_overflow; /* number of times ATR was backed off */
 	union txgbe_atr_input fdir_mask;
@@ -845,6 +884,23 @@
 	__le16 vxlan_port;
 	__le16 geneve_port;

+#ifdef CONFIG_TXGBE_SYSFS
+#ifdef CONFIG_TXGBE_HWMON
+	struct hwmon_buff txgbe_hwmon_buff;
+#endif /* CONFIG_TXGBE_HWMON */
+#else /* CONFIG_TXGBE_SYSFS */
+#ifdef CONFIG_TXGBE_PROCFS
+	struct proc_dir_entry *eth_dir;
+	struct proc_dir_entry *info_dir;
+	u64 old_lsc;
+	struct proc_dir_entry *therm_dir;
+	struct txgbe_therm_proc_data therm_data;
+#endif /* CONFIG_TXGBE_PROCFS */
+#endif /* CONFIG_TXGBE_SYSFS */
+
+#ifdef CONFIG_TXGBE_DEBUG_FS
+	struct dentry *txgbe_dbg_adapter;
+#endif /*CONFIG_TXGBE_DEBUG_FS*/

 	u8 default_up;
 	unsigned long fwd_bitmask; /* bitmask indicating in use pools */
@@ -914,6 +970,10 @@

 /* ESX txgbe CIM IOCTL definition */

+#ifdef CONFIG_TXGBE_SYSFS
+void txgbe_sysfs_exit(struct txgbe_adapter *adapter);
+int txgbe_sysfs_init(struct txgbe_adapter *adapter);
+#endif /* CONFIG_TXGBE_SYSFS */

 extern struct dcbnl_rtnl_ops dcbnl_ops;
 int txgbe_copy_dcb_cfg(struct txgbe_adapter *adapter, int tc_max);
@@ -974,6 +1034,37 @@
 void txgbe_vlan_strip_enable(struct txgbe_adapter *adapter);
 void txgbe_vlan_strip_disable(struct txgbe_adapter *adapter);

+#if IS_ENABLED(CONFIG_FCOE)
+void txgbe_configure_fcoe(struct txgbe_adapter *adapter);
+int txgbe_fso(struct txgbe_ring *tx_ring,
+	      struct txgbe_tx_buffer *first,
+	      u8 *hdr_len);
+int txgbe_fcoe_ddp(struct txgbe_adapter *adapter,
+		   union txgbe_rx_desc *rx_desc,
+		   struct sk_buff *skb);
+int txgbe_fcoe_ddp_get(struct net_device *netdev, u16 xid,
+		       struct scatterlist *sgl, unsigned int sgc);
+int txgbe_fcoe_ddp_target(struct net_device *netdev, u16 xid,
+			  struct scatterlist *sgl, unsigned int sgc);
+int txgbe_fcoe_ddp_put(struct net_device *netdev, u16 xid);
+int txgbe_setup_fcoe_ddp_resources(struct txgbe_adapter *adapter);
+void txgbe_free_fcoe_ddp_resources(struct txgbe_adapter *adapter);
+int txgbe_fcoe_enable(struct net_device *netdev);
+int txgbe_fcoe_disable(struct net_device *netdev);
+#if IS_ENABLED(CONFIG_DCB)
+u8 txgbe_fcoe_getapp(struct net_device *netdev);
+u8 txgbe_fcoe_setapp(struct txgbe_adapter *adapter, u8 up);
+#endif /* CONFIG_DCB */
+u8 txgbe_fcoe_get_tc(struct txgbe_adapter *adapter);
+int txgbe_fcoe_get_wwn(struct net_device *netdev, u64 *wwn, int type);
+#endif /* CONFIG_FCOE */
+
+#ifdef CONFIG_TXGBE_DEBUG_FS
+void txgbe_dbg_adapter_init(struct txgbe_adapter *adapter);
+void txgbe_dbg_adapter_exit(struct txgbe_adapter *adapter);
+void txgbe_dbg_init(void);
+void txgbe_dbg_exit(void);
+#endif /* CONFIG_TXGBE_DEBUG_FS */

 void txgbe_dump(struct txgbe_adapter *adapter);

 static inline struct netdev_queue *txring_txq(const struct txgbe_ring *ring)
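Editor's note: struct hwmon_attr embeds the struct device_attribute that sysfs hands back to attribute callbacks, so a show() routine recovers the wrapper with container_of() and finds its hw/sensor pointers there. A hypothetical callback shape (the function name and returned value are illustrative, not from the patch; it assumes the struct hwmon_attr defined in the hunk above):

#include <linux/device.h>
#include <linux/kernel.h>

static ssize_t txgbe_hwmon_show_temp(struct device *dev,
				     struct device_attribute *attr, char *buf)
{
	/* sysfs passes the embedded member; step back out to the wrapper. */
	struct hwmon_attr *txgbe_attr =
		container_of(attr, struct hwmon_attr, dev_attr);

	/* ...read the sensor via txgbe_attr->hw / txgbe_attr->sensor... */
	return sprintf(buf, "%u\n", 0);
}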
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_bp.c
Changed
@@ -18,23 +18,23 @@ #include "txgbe_bp.h" -int Handle_bkp_an73_flow(unsigned char byLinkMode, struct txgbe_adapter *adapter); -int WaitBkpAn73XnpDone(struct txgbe_adapter *adapter); -int GetBkpAn73Ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner, - struct txgbe_adapter *adapter); -int Get_bkp_an73_ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner, - struct txgbe_adapter *adapter); -int ClearBkpAn73Interrupt(unsigned int intIndex, unsigned int intIndexHi, struct txgbe_adapter *adapter); -int CheckBkpAn73Interrupt(unsigned int intIndex, struct txgbe_adapter *adapter); -int Check_bkp_an73_ability(bkpan73ability tBkpAn73Ability, bkpan73ability tLpBkpAn73Ability, - struct txgbe_adapter *adapter); +int handle_bkp_an73_flow(unsigned char bp_link_mode, struct txgbe_adapter *adapter); +int wait_bkp_an73_xnp_done(struct txgbe_adapter *adapter); +int get_bkp_an73_ability(bkpan73ability *pt_bkp_an73_ability, + unsigned char byLinkPartner, struct txgbe_adapter *adapter); +int clr_bkp_an73_int(unsigned int intIndex, unsigned int intIndexHi, + struct txgbe_adapter *adapter); +int chk_bkp_an73_Int(unsigned int intIndex, struct txgbe_adapter *adapter); +int chk_bkp_an73_ability(bkpan73ability tBkpAn73Ability, + bkpan73ability tLpBkpAn73Ability, + struct txgbe_adapter *adapter); void txgbe_bp_close_protect(struct txgbe_adapter *adapter) { adapter->flags2 |= TXGBE_FLAG2_KR_PRO_DOWN; - if (adapter->flags2 & TXGBE_FLAG2_KR_PRO_REINIT) { + while (adapter->flags2 & TXGBE_FLAG2_KR_PRO_REINIT) { msleep(100); - printk("wait to reinited ok..%x\n", adapter->flags2); + e_dev_info("wait to reinited ok..%x\n", adapter->flags2); } } @@ -49,16 +49,12 @@ if (adapter->backplane_mode == TXGBE_BP_M_KR) { hw->subsystem_device_id = TXGBE_ID_WX1820_KR_KX_KX4; - hw->subsystem_id = TXGBE_ID_WX1820_KR_KX_KX4; } else if (adapter->backplane_mode == TXGBE_BP_M_KX4) { hw->subsystem_device_id = TXGBE_ID_WX1820_MAC_XAUI; - hw->subsystem_id = TXGBE_ID_WX1820_MAC_XAUI; } else if (adapter->backplane_mode == TXGBE_BP_M_KX) { hw->subsystem_device_id = TXGBE_ID_WX1820_MAC_SGMII; - hw->subsystem_id = TXGBE_ID_WX1820_MAC_SGMII; } else if (adapter->backplane_mode == TXGBE_BP_M_SFI) { hw->subsystem_device_id = TXGBE_ID_WX1820_SFP; - hw->subsystem_id = TXGBE_ID_WX1820_SFP; } if (adapter->backplane_auto == TXGBE_BP_M_AUTO) { @@ -99,7 +95,7 @@ static int txgbe_kr_subtask(struct txgbe_adapter *adapter) { - Handle_bkp_an73_flow(0, adapter); + handle_bkp_an73_flow(0, adapter); return 0; } @@ -127,32 +123,26 @@ void txgbe_bp_down_event(struct txgbe_adapter *adapter) { struct txgbe_hw *hw = &adapter->hw; + if (adapter->backplane_an == 1) { if (KR_NORESET == 1) { - txgbe_wr32_epcs(hw, 0x78003, 0x0000); - txgbe_wr32_epcs(hw, 0x70000, 0x0000); + txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, 0x0000); + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000); txgbe_wr32_epcs(hw, 0x78001, 0x0000); - msleep(1050); + msleep(1000); txgbe_set_link_to_kr(hw, 1); - } else if (KR_REINITED == 1) { - txgbe_wr32_epcs(hw, 0x78003, 0x0000); - txgbe_wr32_epcs(hw, 0x70000, 0x0000); + } else if (KR_NOREINITED == 1) { + txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, 0x0000); + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000); txgbe_wr32_epcs(hw, 0x78001, 0x0000); - txgbe_wr32_epcs(hw, 0x18035, 0x00FF); - txgbe_wr32_epcs(hw, 0x18055, 0x00FF); msleep(1050); - txgbe_wr32_epcs(hw, 0x78003, 0x0001); - txgbe_wr32_epcs(hw, 0x70000, 0x3200); + txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, 0x0001); + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x3200); 
txgbe_wr32_epcs(hw, 0x78001, 0x0007); - txgbe_wr32_epcs(hw, 0x18035, 0x00FC); - txgbe_wr32_epcs(hw, 0x18055, 0x00FC); } else { - msleep(1000); - if (!(adapter->flags2&TXGBE_FLAG2_KR_PRO_DOWN)) { - adapter->flags2 |= TXGBE_FLAG2_KR_PRO_REINIT; + msleep(200); + if (!(adapter->flags2&TXGBE_FLAG2_KR_PRO_DOWN)) txgbe_reinit_locked(adapter); - adapter->flags2 &= ~TXGBE_FLAG2_KR_PRO_REINIT; - } } } } @@ -170,18 +160,18 @@ /*1. Get the local AN73 Base Page Ability*/ if (KR_MODE) e_dev_info("<1>. Get the local AN73 Base Page Ability ...\n"); - GetBkpAn73Ability(&tBkpAn73Ability, 0, adapter); + get_bkp_an73_ability(&tBkpAn73Ability, 0, adapter); /*2. Check the AN73 Interrupt Status*/ if (KR_MODE) e_dev_info("<2>. Check the AN73 Interrupt Status ...\n"); /*3.Clear the AN_PG_RCV interrupt*/ - ClearBkpAn73Interrupt(2, 0x0, adapter); + clr_bkp_an73_int(2, 0x0, adapter); /*3.1. Get the link partner AN73 Base Page Ability*/ if (KR_MODE) e_dev_info("<3.1>. Get the link partner AN73 Base Page Ability ...\n"); - Get_bkp_an73_ability(&tLpBkpAn73Ability, 1, adapter); + get_bkp_an73_ability(&tLpBkpAn73Ability, 1, adapter); /*3.2. Check the AN73 Link Ability with Link Partner*/ if (KR_MODE) { @@ -189,7 +179,7 @@ e_dev_info(" Local Link Ability: 0x%x\n", tBkpAn73Ability.linkAbility); e_dev_info(" Link Partner Link Ability: 0x%x\n", tLpBkpAn73Ability.linkAbility); } - Check_bkp_an73_ability(tBkpAn73Ability, tLpBkpAn73Ability, adapter); + chk_bkp_an73_ability(tBkpAn73Ability, tLpBkpAn73Ability, adapter); return 0; } @@ -200,7 +190,7 @@ ** 0 : current link mode matched, wait AN73 to be completed ** 1 : current link mode not matched, set to matched link mode, re-start AN73 external */ -int Check_bkp_an73_ability(bkpan73ability tBkpAn73Ability, bkpan73ability tLpBkpAn73Ability, +int chk_bkp_an73_ability(bkpan73ability tBkpAn73Ability, bkpan73ability tLpBkpAn73Ability, struct txgbe_adapter *adapter) { unsigned int comLinkAbility; @@ -215,8 +205,10 @@ comLinkAbility = tBkpAn73Ability.linkAbility & tLpBkpAn73Ability.linkAbility; if (KR_MODE) e_dev_info("comLinkAbility= 0x%x, linkAbility= 0x%x, lpLinkAbility= 0x%x\n", - comLinkAbility, tBkpAn73Ability.linkAbility, tLpBkpAn73Ability.linkAbility); + comLinkAbility, tBkpAn73Ability.linkAbility, + tLpBkpAn73Ability.linkAbility); + /*only support kr*/ if (comLinkAbility == 0) { if (KR_MODE) e_dev_info("WARNING: The Link Partner does not support any compatible speed mode!!!\n\n"); @@ -234,33 +226,8 @@ txgbe_set_link_to_kr(hw, 1); return 1; } - } else if (comLinkAbility & 0x40) { - if (tBkpAn73Ability.currentLinkMode == 0x10) { - if (KR_MODE) - e_dev_info("Link mode is matched with Link Partner: LINK_KX4.\n"); - return 0; - } else { - if (KR_MODE) { - e_dev_info("Link mode is not matched with Link Partner: LINK_KX4.\n"); - e_dev_info("Set the local link mode to LINK_KX4 ...\n"); - } - txgbe_set_link_to_kx4(hw, 1); - return 1; - } - } else if (comLinkAbility & 0x20) { - if (tBkpAn73Ability.currentLinkMode == 0x1) { - if (KR_MODE) - e_dev_info("Link mode is matched with Link Partner: LINK_KX.\n"); - return 0; - } else { - if (KR_MODE) { - e_dev_info("Link mode is not matched with Link Partner: LINK_KX.\n"); - e_dev_info("Set the local link mode to LINK_KX ...\n"); - } - txgbe_set_link_to_kx(hw, 1, 1); - return 1; - } } + return 0; } @@ -271,7 +238,7 @@ **- 2: Get Link Partner Next Page (only get NXP Ability Register 1 at the moment) **- 0: Get Local Device Base Page */ -int Get_bkp_an73_ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner, +int 
get_bkp_an73_ability(bkpan73ability *pt_bkp_an73_ability, unsigned char byLinkPartner, struct txgbe_adapter *adapter) { int status = 0; @@ -279,7 +246,7 @@ struct txgbe_hw *hw = &adapter->hw; if (KR_MODE) { - e_dev_info("GetBkpAn73Ability(): byLinkPartner = %d\n", byLinkPartner); + e_dev_info("get_bkp_an73_ability(): byLinkPartner = %d\n", byLinkPartner); e_dev_info("----------------------------------------\n"); } @@ -288,20 +255,20 @@ if (KR_MODE) e_dev_info("Read the link partner AN73 Base Page Ability Registers...\n"); rdata = 0; - rdata = txgbe_rd32_epcs(hw, 0x70013); + rdata = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_LP_ABL1); if (KR_MODE) e_dev_info("SR AN MMD LP Base Page Ability Register 1: 0x%x\n", rdata); - ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01; + pt_bkp_an73_ability->nextPage = (rdata >> 15) & 0x01; if (KR_MODE) - e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage); + e_dev_info(" Next Page (bit15): %d\n", pt_bkp_an73_ability->nextPage); rdata = 0; rdata = txgbe_rd32_epcs(hw, 0x70014); if (KR_MODE) e_dev_info("SR AN MMD LP Base Page Ability Register 2: 0x%x\n", rdata); - ptBkpAn73Ability->linkAbility = rdata & 0xE0; + pt_bkp_an73_ability->linkAbility = rdata & 0xE0; if (KR_MODE) { - e_dev_info(" Link Ability (bit15:0): 0x%x\n", ptBkpAn73Ability->linkAbility); + e_dev_info(" Link Ability (bit15:0): 0x%x\n", pt_bkp_an73_ability->linkAbility); e_dev_info(" (0x20- KX_ONLY, 0x40- KX4_ONLY, 0x60- KX4_KX\n"); e_dev_info(" 0x80- KR_ONLY, 0xA0- KR_KX, 0xC0- KR_KX4, 0xE0- KR_KX4_KX)\n"); } @@ -313,7 +280,7 @@ e_dev_info(" FEC Request (bit15): %d\n", ((rdata >> 15) & 0x01)); e_dev_info(" FEC Enable (bit14): %d\n", ((rdata >> 14) & 0x01)); } - ptBkpAn73Ability->fecAbility = (rdata >> 14) & 0x03; + pt_bkp_an73_ability->fecAbility = (rdata >> 14) & 0x03; } else if (byLinkPartner == 2) {/*Link Partner Next Page*/ /*Read the link partner AN73 Next Page Ability Registers*/ if (KR_MODE) @@ -322,28 +289,28 @@ rdata = txgbe_rd32_epcs(hw, 0x70019); if (KR_MODE) e_dev_info(" SR AN MMD LP XNP Ability Register 1: 0x%x\n", rdata); - ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01; + pt_bkp_an73_ability->nextPage = (rdata >> 15) & 0x01; if (KR_MODE) - e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage); + e_dev_info(" Next Page (bit15): %d\n", pt_bkp_an73_ability->nextPage); } else { /*Read the local AN73 Base Page Ability Registers*/ if (KR_MODE) e_dev_info("\nRead the local AN73 Base Page Ability Registers...\n"); rdata = 0; - rdata = txgbe_rd32_epcs(hw, 0x70010); + rdata = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG1); if (KR_MODE) e_dev_info("SR AN MMD Advertisement Register 1: 0x%x\n", rdata); - ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01; + pt_bkp_an73_ability->nextPage = (rdata >> 15) & 0x01; if (KR_MODE) - e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage); + e_dev_info(" Next Page (bit15): %d\n", pt_bkp_an73_ability->nextPage); rdata = 0; - rdata = txgbe_rd32_epcs(hw, 0x70011); + rdata = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG2); if (KR_MODE) e_dev_info("SR AN MMD Advertisement Register 2: 0x%x\n", rdata); - ptBkpAn73Ability->linkAbility = rdata & 0xE0; + pt_bkp_an73_ability->linkAbility = rdata & 0xE0; if (KR_MODE) { - e_dev_info(" Link Ability (bit15:0): 0x%x\n", ptBkpAn73Ability->linkAbility); + e_dev_info(" Link Ability (bit15:0): 0x%x\n", pt_bkp_an73_ability->linkAbility); e_dev_info(" (0x20- KX_ONLY, 0x40- KX4_ONLY, 0x60- KX4_KX\n"); e_dev_info(" 0x80- KR_ONLY, 0xA0- KR_KX, 0xC0- KR_KX4, 0xE0- KR_KX4_KX)\n"); } @@ -354,152 
+321,53 @@ e_dev_info(" FEC Request (bit15): %d\n", ((rdata >> 15) & 0x01)); e_dev_info(" FEC Enable (bit14): %d\n", ((rdata >> 14) & 0x01)); } - ptBkpAn73Ability->fecAbility = (rdata >> 14) & 0x03; + pt_bkp_an73_ability->fecAbility = (rdata >> 14) & 0x03; } /*if (byLinkPartner == 1) Link Partner Base Page*/ if (KR_MODE) - e_dev_info("GetBkpAn73Ability() done.\n"); - - return status; -} - - -/*Get Ethernet Backplane AN73 Base Page Ability -**byLinkPartner: -**- 1: Get Link Partner Base Page -**- 2: Get Link Partner Next Page (only get NXP Ability Register 1 at the moment) -**- 0: Get Local Device Base Page -*/ -int GetBkpAn73Ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner, - struct txgbe_adapter *adapter) -{ - int status = 0; - unsigned int rdata; - struct txgbe_hw *hw = &adapter->hw; - - if (KR_MODE) { - e_dev_info("GetBkpAn73Ability(): byLinkPartner = %d\n", byLinkPartner); - e_dev_info("----------------------------------------\n"); - } - - if (byLinkPartner == 1) { //Link Partner Base Page - //Read the link partner AN73 Base Page Ability Registers - if (KR_MODE) - e_dev_info("Read the link partner AN73 Base Page Ability Registers...\n"); - rdata = 0; - rdata = txgbe_rd32_epcs(hw, 0x70013); - if (KR_MODE) - e_dev_info("SR AN MMD LP Base Page Ability Register 1: 0x%x\n", rdata); - ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01; - if (KR_MODE) - e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage); - - rdata = 0; - rdata = txgbe_rd32_epcs(hw, 0x70014); - if (KR_MODE) - e_dev_info("SR AN MMD LP Base Page Ability Register 2: 0x%x\n", rdata); - ptBkpAn73Ability->linkAbility = rdata & 0xE0; - if (KR_MODE) { - e_dev_info(" Link Ability (bit15:0): 0x%x\n", ptBkpAn73Ability->linkAbility); - e_dev_info(" (0x20- KX_ONLY, 0x40- KX4_ONLY, 0x60- KX4_KX\n"); - e_dev_info(" 0x80- KR_ONLY, 0xA0- KR_KX, 0xC0- KR_KX4, 0xE0- KR_KX4_KX)\n"); - } - - rdata = 0; - rdata = txgbe_rd32_epcs(hw, 0x70015); - printk("SR AN MMD LP Base Page Ability Register 3: 0x%x\n", rdata); - printk(" FEC Request (bit15): %d\n", ((rdata >> 15) & 0x01)); - printk(" FEC Enable (bit14): %d\n", ((rdata >> 14) & 0x01)); - ptBkpAn73Ability->fecAbility = (rdata >> 14) & 0x03; - } else if (byLinkPartner == 2) { //Link Partner Next Page - //Read the link partner AN73 Next Page Ability Registers - if (KR_MODE) - e_dev_info("Read the link partner AN73 Next Page Ability Registers...\n"); - rdata = 0; - rdata = txgbe_rd32_epcs(hw, 0x70019); - if (KR_MODE) - e_dev_info(" SR AN MMD LP XNP Ability Register 1: 0x%x\n", rdata); - ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01; - if (KR_MODE) - e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage); - } else { - //Read the local AN73 Base Page Ability Registers - if (KR_MODE) - e_dev_info("Read the local AN73 Base Page Ability Registers...\n"); - rdata = 0; - rdata = txgbe_rd32_epcs(hw, 0x70010); - if (KR_MODE) - e_dev_info("SR AN MMD Advertisement Register 1: 0x%x\n", rdata); - ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01; - if (KR_MODE) - e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage); - - rdata = 0; - rdata = txgbe_rd32_epcs(hw, 0x70011); - if (KR_MODE) - e_dev_info("SR AN MMD Advertisement Register 2: 0x%x\n", rdata); - ptBkpAn73Ability->linkAbility = rdata & 0xE0; - if (KR_MODE) { - e_dev_info(" Link Ability (bit15:0): 0x%x\n", ptBkpAn73Ability->linkAbility); - e_dev_info(" (0x20- KX_ONLY, 0x40- KX4_ONLY, 0x60- KX4_KX\n"); - e_dev_info(" 0x80- KR_ONLY, 0xA0- KR_KX, 0xC0- KR_KX4, 0xE0- KR_KX4_KX)\n"); - } - - rdata = 
0; - rdata = txgbe_rd32_epcs(hw, 0x70012); - if (KR_MODE) { - e_dev_info("SR AN MMD Advertisement Register 3: 0x%x\n", rdata); - e_dev_info(" FEC Request (bit15): %d\n", ((rdata >> 15) & 0x01)); - e_dev_info(" FEC Enable (bit14): %d\n", ((rdata >> 14) & 0x01)); - } - ptBkpAn73Ability->fecAbility = (rdata >> 14) & 0x03; - } - - if (KR_MODE) - e_dev_info("GetBkpAn73Ability() done.\n"); + e_dev_info("get_bkp_an73_ability() done.\n"); return status; } /* DESCRIPTION: Set the source data fieldsbitHigh:bitLow with setValue -** INPUTS: *pSrcData: Source data pointer +** INPUTS: *src_data: Source data pointer ** bitHigh: High bit position of the fields ** bitLow : Low bit position of the fields ** setValue: Set value of the fields ** OUTPUTS: return the updated source data */ -static void SetFields( - unsigned int *pSrcData, - unsigned int bitHigh, - unsigned int bitLow, - unsigned int setValue) +static void set_fields(unsigned int *src_data, + unsigned int bitHigh, + unsigned int bitLow, + unsigned int setValue) { int i; if (bitHigh == bitLow) { if (setValue == 0) { - *pSrcData &= ~(1 << bitLow); + *src_data &= ~(1 << bitLow); } else { - *pSrcData |= (1 << bitLow); + *src_data |= (1 << bitLow); } } else { for (i = bitLow; i <= bitHigh; i++) { - *pSrcData &= ~(1 << i); + *src_data &= ~(1 << i); } - *pSrcData |= (setValue << bitLow); + *src_data |= (setValue << bitLow); } } -/*Check Ethernet Backplane AN73 Interrupt status +/* Check Ethernet Backplane AN73 Interrupt status **- return the value of select interrupt index */ -int CheckBkpAn73Interrupt(unsigned int intIndex, struct txgbe_adapter *adapter) +int chk_bkp_an73_Int(unsigned int intIndex, struct txgbe_adapter *adapter) { unsigned int rdata; struct txgbe_hw *hw = &adapter->hw; if (KR_MODE) { - e_dev_info("CheckBkpAn73Interrupt(): intIndex = %d\n", intIndex); + e_dev_info("%s: intIndex = %d\n", __func__, intIndex); e_dev_info("----------------------------------------\n"); } @@ -513,11 +381,11 @@ return ((rdata >> intIndex) & 0x01); } -/*Clear Ethernet Backplane AN73 Interrupt status +/* Clear Ethernet Backplane AN73 Interrupt status **- intIndexHi =0, only intIndex bit will be cleared **- intIndexHi !=0, the intIndexHi, intIndex range will be cleared */ -int ClearBkpAn73Interrupt(unsigned int intIndex, unsigned int intIndexHi, struct txgbe_adapter *adapter) +int clr_bkp_an73_int(unsigned int intIndex, unsigned int intIndexHi, struct txgbe_adapter *adapter) { int status = 0; unsigned int rdata, wdata; @@ -535,9 +403,9 @@ wdata = rdata; if (intIndexHi) { - SetFields(&wdata, intIndexHi, intIndex, 0); + set_fields(&wdata, intIndexHi, intIndex, 0); } else { - SetFields(&wdata, intIndex, intIndex, 0); + set_fields(&wdata, intIndex, intIndex, 0); } txgbe_wr32_epcs(hw, 0x78002, wdata); @@ -551,7 +419,7 @@ return status; } -int WaitBkpAn73XnpDone(struct txgbe_adapter *adapter) +int wait_bkp_an73_xnp_done(struct txgbe_adapter *adapter) { int status = 0; unsigned int timer = 0; @@ -559,12 +427,12 @@ /*while(timer++ < BKPAN73_TIMEOUT)*/ while (timer++ < 20) { - if (CheckBkpAn73Interrupt(2, adapter)) { + if (chk_bkp_an73_Int(2, adapter)) { /*Clear the AN_PG_RCV interrupt*/ - ClearBkpAn73Interrupt(2, 0, adapter); + clr_bkp_an73_int(2, 0, adapter); /*Get the link partner AN73 Next Page Ability*/ - Get_bkp_an73_ability(&tLpBkpAn73Ability, 2, adapter); + get_bkp_an73_ability(&tLpBkpAn73Ability, 2, adapter); /*Return when AN_LP_XNP_NP == 0, (bit15: Next Page)*/ if (tLpBkpAn73Ability.nextPage == 0) { @@ -579,7 +447,7 @@ return -1; } -int ReadPhyLaneTxEq(unsigned 
short lane, struct txgbe_adapter *adapter, int post_t, int mode) +int read_phy_lane_txeq(unsigned short lane, struct txgbe_adapter *adapter, int post_t, int mode) { int status = 0; unsigned int addr, rdata; @@ -642,7 +510,7 @@ **- bits1:0 =2'b11: Enable the CL72 KR training **- bits1:0 =2'b01: Disable the CL72 KR training */ -int EnableCl72KrTr(unsigned int enable, struct txgbe_adapter *adapter) +int en_cl72_krtr(unsigned int enable, struct txgbe_adapter *adapter) { int status = 0; unsigned int wdata = 0; @@ -651,15 +519,15 @@ if (enable == 1) { if (KR_MODE) e_dev_info("\nDisable Clause 72 KR Training ...\n"); - status |= ReadPhyLaneTxEq(0, adapter, 0, 0); + status |= read_phy_lane_txeq(0, adapter, 0, 0); } else if (enable == 4) { - status |= ReadPhyLaneTxEq(0, adapter, 20, 1); + status |= read_phy_lane_txeq(0, adapter, 20, 1); } else if (enable == 8) { - status |= ReadPhyLaneTxEq(0, adapter, 16, 1); + status |= read_phy_lane_txeq(0, adapter, 16, 1); } else if (enable == 12) { - status |= ReadPhyLaneTxEq(0, adapter, 24, 1); + status |= read_phy_lane_txeq(0, adapter, 24, 1); } else if (enable == 5) { - status |= ReadPhyLaneTxEq(0, adapter, 0, 1); + status |= read_phy_lane_txeq(0, adapter, 0, 1); } else if (enable == 3) { if (KR_MODE) e_dev_info("\nEnable Clause 72 KR Training ...\n"); @@ -674,15 +542,15 @@ /*Enable PRBS Mode to determine KR Training Status by setting Bit 0 of VR_PMA_KRTR_PRBS_CTRL0 Register*/ wdata = 0; - SetFields(&wdata, 0, 0, 1); + set_fields(&wdata, 0, 0, 1); } #ifdef CL72_KRTR_PRBS31_EN /*Enable PRBS31 as the KR Training Pattern by setting Bit 1 of VR_PMA_KRTR_PRBS_CTRL0 Register*/ - SetFields(&wdata, 1, 1, 1); + set_fields(&wdata, 1, 1, 1); #endif /*#ifdef CL72_KRTR_PRBS31_EN*/ txgbe_wr32_epcs(hw, 0x18003, wdata); - status |= ReadPhyLaneTxEq(0, adapter, 0, 0); + status |= read_phy_lane_txeq(0, adapter, 0, 0); } else { if (KR_MODE) e_dev_info("\nInvalid setting for Clause 72 KR Training!!!\n"); @@ -696,7 +564,7 @@ return status; } -int CheckCl72KrTrStatus(struct txgbe_adapter *adapter) +int chk_cl72_krtr_status(struct txgbe_adapter *adapter) { int status = 0; unsigned int addr, rdata, rdata1; @@ -753,7 +621,7 @@ if ((rdata >> 3) & 0x01) { if (KR_MODE) e_dev_info("Training is completed with failure!!!\n"); - status |= ReadPhyLaneTxEq(0, adapter, 0, 0); + status |= read_phy_lane_txeq(0, adapter, 0, 0); return status; } @@ -761,7 +629,7 @@ if ((rdata >> 0) & 0x01) { if (KR_MODE) e_dev_info("Receiver trained and ready to receive data ^_^\n"); - status |= ReadPhyLaneTxEq(0, adapter, 0, 0); + status |= read_phy_lane_txeq(0, adapter, 0, 0); return status; } @@ -774,7 +642,7 @@ return status; } -int Handle_bkp_an73_flow(unsigned char byLinkMode, struct txgbe_adapter *adapter) +int handle_bkp_an73_flow(unsigned char bp_link_mode, struct txgbe_adapter *adapter) { int status = 0; unsigned int timer = 0; @@ -784,92 +652,90 @@ u32 rdata = 0; u32 rdata1 = 0; struct txgbe_hw *hw = &adapter->hw; - tBkpAn73Ability.currentLinkMode = byLinkMode; + tBkpAn73Ability.currentLinkMode = bp_link_mode; if (KR_MODE) { e_dev_info("HandleBkpAn73Flow() \n"); e_dev_info("---------------------------------\n"); } - txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0); - txgbe_wr32_epcs(hw, 0x78003, 0x0); + if (adapter->an73_mode == 0) { + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0); + txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, 0x0); + } /*Check the FEC and KR Training for KR mode*/ - if (1) { - //FEC handling + if (KR_MODE) + e_dev_info("<3.3>. 
Check the FEC for KR mode ...\n"); + tBkpAn73Ability.fecAbility = 0x03; + tLpBkpAn73Ability.fecAbility = 0x3; + if ((tBkpAn73Ability.fecAbility & tLpBkpAn73Ability.fecAbility) == 0x03) { if (KR_MODE) - e_dev_info("<3.3>. Check the FEC for KR mode ...\n"); - tBkpAn73Ability.fecAbility = 0x03; - tLpBkpAn73Ability.fecAbility = 0x0; - if ((tBkpAn73Ability.fecAbility & tLpBkpAn73Ability.fecAbility) == 0x03) { - if (KR_MODE) - e_dev_info("Enable the Backplane KR FEC ...\n"); - //Write 1 to SR_PMA_KR_FEC_CTRL bit0 to enable the FEC - data = 1; - addr = 0x100ab; //SR_PMA_KR_FEC_CTRL - txgbe_wr32_epcs(hw, addr, data); - } else { - if (KR_MODE) - e_dev_info("Backplane KR FEC is disabled.\n"); + e_dev_info("Enable the Backplane KR FEC ...\n"); + //Write 1 to SR_PMA_KR_FEC_CTRL bit0 to enable the FEC + data = 1; + addr = 0x100ab; //SR_PMA_KR_FEC_CTRL + txgbe_wr32_epcs(hw, addr, data); + } else { + if (KR_MODE) + e_dev_info("Backplane KR FEC is disabled.\n"); + } + + for (i = 0; i < 2; i++) { + if (KR_MODE) { + e_dev_info("\n<3.4>. Check the CL72 KR Training for KR mode ...\n"); + e_dev_info("===================%d=======================\n", i); } -#ifdef CL72_KR_TRAINING_ON - for (i = 0; i < 2; i++) { - if (KR_MODE) { - e_dev_info("\n<3.4>. Check the CL72 KR Training for KR mode ...\n"); - printk("===================%d=======================\n", i); - } - status |= EnableCl72KrTr(3, adapter); + status |= en_cl72_krtr(3, adapter); - if (KR_MODE) - e_dev_info("\nCheck the Clause 72 KR Training status ...\n"); - status |= CheckCl72KrTrStatus(adapter); + if (KR_MODE) + e_dev_info("\nCheck the Clause 72 KR Training status ...\n"); + status |= chk_cl72_krtr_status(adapter); - rdata = txgbe_rd32_epcs(hw, 0x10099) & 0x8000; - if (KR_MODE) - e_dev_info("SR PMA MMD 10GBASE-KR LP Coefficient Status Register: 0x%x\n", rdata); - rdata1 = txgbe_rd32_epcs(hw, 0x1009b) & 0x8000; - if (KR_MODE) - e_dev_info("SR PMA MMD 10GBASE-KR LP Coefficient Status Register: 0x%x\n", rdata1); - if (KR_POLLING == 0) { - if (adapter->flags2 & KR) { - rdata = 0x8000; - adapter->flags2 &= ~KR; - } + rdata = txgbe_rd32_epcs(hw, 0x10099) & 0x8000; + if (KR_MODE) + e_dev_info("SR PMA MMD 10GBASE-KR LP Coefficient Status Register: 0x%x\n", rdata); + rdata1 = txgbe_rd32_epcs(hw, 0x1009b) & 0x8000; + if (KR_MODE) + e_dev_info("SR PMA MMD 10GBASE-KR LP Coefficient Status Register: 0x%x\n", rdata1); + if (KR_POLLING == 0) { + if (adapter->flags2 & KR) { + rdata = 0x8000; + adapter->flags2 &= ~KR; } - if ((rdata == 0x8000) & (rdata1 == 0x8000)) { - if (KR_MODE) - e_dev_info("====================out===========================\n"); - status |= EnableCl72KrTr(1, adapter); - txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000); - ClearBkpAn73Interrupt(2, 0, adapter); - ClearBkpAn73Interrupt(1, 0, adapter); - ClearBkpAn73Interrupt(0, 0, adapter); - while (timer++ < 10) { - rdata = txgbe_rd32_epcs(hw, 0x30020); - rdata = rdata & 0x1000; - if (rdata == 0x1000) { - if (KR_MODE) - e_dev_info("\nINT_AN_INT_CMPLT =1, AN73 Done Success.\n"); - e_dev_info("AN73 Done Success.\n"); - txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000); - return 0; - } - msleep(10); + } + if ((rdata == 0x8000) & (rdata1 == 0x8000)) { + if (KR_MODE) + e_dev_info("====================out===========================\n"); + status |= en_cl72_krtr(1, adapter); + clr_bkp_an73_int(2, 0, adapter); + clr_bkp_an73_int(1, 0, adapter); + clr_bkp_an73_int(0, 0, adapter); + + while (timer++ < 10) { + rdata = txgbe_rd32_epcs(hw, 0x30020); + rdata = rdata & 0x1000; + if (rdata == 0x1000) { + if 
(KR_MODE) + e_dev_info("\nINT_AN_INT_CMPLT =1, AN73 Done Success.\n"); + e_dev_info("AN73 Done Success.\n"); + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000); + txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, 0x0); + return 0; } - msleep(1000); - txgbe_set_link_to_kr(hw, 1); - - return 0; + mdelay(10); } - status |= EnableCl72KrTr(1, adapter); + return 0; } -#endif + + status |= en_cl72_krtr(1, adapter); } - ClearBkpAn73Interrupt(0, 0, adapter); - ClearBkpAn73Interrupt(1, 0, adapter); - ClearBkpAn73Interrupt(2, 0, adapter); + + clr_bkp_an73_int(0, 0, adapter); + clr_bkp_an73_int(1, 0, adapter); + clr_bkp_an73_int(2, 0, adapter); return status; }
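Editor's note: set_fields() above is the one small reusable helper in this file; it clears bits [bitHigh:bitLow] of *src_data and ORs setValue in at bitLow. A runnable check, with the helper reproduced exactly as it appears in the hunk:

#include <stdio.h>

static void set_fields(unsigned int *src_data, unsigned int bitHigh,
		       unsigned int bitLow, unsigned int setValue)
{
	int i;

	if (bitHigh == bitLow) {
		if (setValue == 0)
			*src_data &= ~(1 << bitLow);
		else
			*src_data |= (1 << bitLow);
	} else {
		for (i = bitLow; i <= bitHigh; i++)
			*src_data &= ~(1 << i);
		*src_data |= (setValue << bitLow);
	}
}

int main(void)
{
	unsigned int reg = 0xFFFF;

	set_fields(&reg, 7, 4, 0x3);	/* bits [7:4] := 0x3 */
	printf("0x%X\n", reg);		/* prints 0xFF3F */
	return 0;
}

Note the helper does not mask setValue to the field width; the callers in this file only pass in-range values.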
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_debugfs.c
Added
@@ -0,0 +1,724 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd. */ + +#include "txgbe.h" +#include <linux/debugfs.h> +#include <linux/module.h> + +static struct dentry *txgbe_dbg_root; +static int txgbe_data_mode; + +#define TXGBE_DATA_FUNC(dm) ((dm) & ~0xFFFF) +#define TXGBE_DATA_ARGS(dm) ((dm) & 0xFFFF) +enum txgbe_data_func { + TXGBE_FUNC_NONE = (0 << 16), + TXGBE_FUNC_DUMP_BAR = (1 << 16), + TXGBE_FUNC_DUMP_RDESC = (2 << 16), + TXGBE_FUNC_DUMP_TDESC = (3 << 16), + TXGBE_FUNC_FLASH_READ = (4 << 16), + TXGBE_FUNC_FLASH_WRITE = (5 << 16), +}; + +/** + * data operation + **/ +ssize_t +txgbe_simple_read_from_pcibar(struct txgbe_adapter *adapter, int res, + void __user *buf, size_t size, loff_t *ppos) +{ + loff_t pos = *ppos; + u32 miss, len, limit = pci_resource_len(adapter->pdev, res); + + if (pos < 0) + return 0; + + limit = (pos + size <= limit ? pos + size : limit); + for (miss = 0; pos < limit && !miss; buf += len, pos += len) { + u32 val = 0, reg = round_down(pos, 4); + u32 off = pos - reg; + + len = (reg + 4 <= limit ? 4 - off : 4 - off - (limit - reg - 4)); + val = txgbe_rd32(adapter->io_addr + reg); + miss = copy_to_user(buf, &val + off, len); + } + + size = pos - *ppos - miss; + *ppos += size; + + return size; +} + +ssize_t +txgbe_simple_read_from_flash(struct txgbe_adapter *adapter, + void __user *buf, size_t size, loff_t *ppos) +{ + struct txgbe_hw *hw = &adapter->hw; + loff_t pos = *ppos; + size_t ret = 0; + loff_t rpos, rtail; + void __user *to = buf; + size_t available = adapter->hw.flash.dword_size << 2; + + if (pos < 0) + return -EINVAL; + if (pos >= available || !size) + return 0; + if (size > available - pos) + size = available - pos; + + rpos = round_up(pos, 4); + rtail = round_down(pos + size, 4); + if (rtail < rpos) + return 0; + + to += rpos - pos; + while (rpos <= rtail) { + u32 value = txgbe_rd32(adapter->io_addr + rpos); + + if (TCALL(hw, flash.ops.write_buffer, rpos>>2, 1, &value)) { + ret = size; + break; + } + if (copy_to_user(to, &value, 4) == 4) { + ret = size; + break; + } + to += 4; + rpos += 4; + } + + if (ret == size) + return -EFAULT; + size -= ret; + *ppos = pos + size; + return size; +} + +ssize_t +txgbe_simple_write_to_flash(struct txgbe_adapter *adapter, + const void __user *from, size_t size, loff_t *ppos, size_t available) +{ + return size; +} + +static ssize_t +txgbe_dbg_data_ops_read(struct file *filp, char __user *buffer, + size_t size, loff_t *ppos) +{ + struct txgbe_adapter *adapter = filp->private_data; + u32 func = TXGBE_DATA_FUNC(txgbe_data_mode); + + /* Ensure all reads are done */ + rmb(); + + switch (func) { + case TXGBE_FUNC_DUMP_BAR: { + u32 bar = TXGBE_DATA_ARGS(txgbe_data_mode); + + return txgbe_simple_read_from_pcibar(adapter, bar, buffer, size, + ppos); + } + case TXGBE_FUNC_FLASH_READ: { + return txgbe_simple_read_from_flash(adapter, buffer, size, ppos); + } + case TXGBE_FUNC_DUMP_RDESC: { + struct txgbe_ring *ring; + u32 queue = TXGBE_DATA_ARGS(txgbe_data_mode); + + if (queue >= adapter->num_rx_queues) + return 0; + queue += VMDQ_P(0) * adapter->queues_per_pool; + ring = adapter->rx_ringqueue; + + return simple_read_from_buffer(buffer, size, ppos, + ring->desc, ring->size); + } + case TXGBE_FUNC_DUMP_TDESC: { + struct txgbe_ring *ring; + u32 queue = TXGBE_DATA_ARGS(txgbe_data_mode); + + if (queue >= adapter->num_tx_queues) + return 0; + queue += VMDQ_P(0) * adapter->queues_per_pool; + ring = adapter->tx_ringqueue; + + return simple_read_from_buffer(buffer, size, ppos, + 
ring->desc, ring->size); + } + default: + break; + } + + return 0; +} + +static ssize_t +txgbe_dbg_data_ops_write(struct file *filp, + const char __user *buffer, + size_t size, loff_t *ppos) +{ + struct txgbe_adapter *adapter = filp->private_data; + u32 func = TXGBE_DATA_FUNC(txgbe_data_mode); + + /* Ensure all reads are done */ + rmb(); + + switch (func) { + case TXGBE_FUNC_FLASH_WRITE: { + u32 size = TXGBE_DATA_ARGS(txgbe_data_mode); + + if (size > adapter->hw.flash.dword_size << 2) + size = adapter->hw.flash.dword_size << 2; + + return txgbe_simple_write_to_flash(adapter, buffer, size, ppos, size); + } + default: + break; + } + + return size; +} + +static const struct file_operations txgbe_dbg_data_ops_fops = { + .owner = THIS_MODULE, + .open = simple_open, + .read = txgbe_dbg_data_ops_read, + .write = txgbe_dbg_data_ops_write, +}; + +/** + * reg_ops operation + **/ +static char txgbe_dbg_reg_ops_buf256 = ""; +static ssize_t +txgbe_dbg_reg_ops_read(struct file *filp, char __user *buffer, + size_t count, loff_t *ppos) +{ + struct txgbe_adapter *adapter = filp->private_data; + char *buf; + int len; + + /* don't allow partial reads */ + if (*ppos != 0) + return 0; + + buf = kasprintf(GFP_KERNEL, "%s: mode=0x%08x\n%s\n", + adapter->netdev->name, txgbe_data_mode, + txgbe_dbg_reg_ops_buf); + if (!buf) + return -ENOMEM; + + if (count < strlen(buf)) { + kfree(buf); + return -ENOSPC; + } + + len = simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf)); + + kfree(buf); + return len; +} + +static ssize_t +txgbe_dbg_reg_ops_write(struct file *filp, + const char __user *buffer, + size_t count, loff_t *ppos) +{ + struct txgbe_adapter *adapter = filp->private_data; + char *pc = txgbe_dbg_reg_ops_buf; + int len; + + /* don't allow partial writes */ + if (*ppos != 0) + return 0; + if (count >= sizeof(txgbe_dbg_reg_ops_buf)) + return -ENOSPC; + + len = simple_write_to_buffer(txgbe_dbg_reg_ops_buf, + sizeof(txgbe_dbg_reg_ops_buf) - 1, + ppos, + buffer, + count); + if (len < 0) + return len; + + pclen = '\0'; + + if (strncmp(pc, "dump", 4) == 0) { + u32 mode = 0; + u16 args; + + pc += 4; + pc += strspn(pc, " \t"); + + if (!strncmp(pc, "bar", 3)) { + pc += 3; + mode = TXGBE_FUNC_DUMP_BAR; + } else if (!strncmp(pc, "rdesc", 5)) { + pc += 5; + mode = TXGBE_FUNC_DUMP_RDESC; + } else if (!strncmp(pc, "tdesc", 5)) { + pc += 5; + mode = TXGBE_FUNC_DUMP_TDESC; + } else { + txgbe_dump(adapter); + } + + if (mode && 1 == sscanf(pc, "%hu", &args)) + mode |= args; + + txgbe_data_mode = mode; + } else if (strncmp(pc, "flash", 4) == 0) { + u32 mode = 0; + u16 args; + + pc += 5; + pc += strspn(pc, " \t"); + if (!strncmp(pc, "read", 3)) { + pc += 4; + mode = TXGBE_FUNC_FLASH_READ; + } else if (!strncmp(pc, "write", 5)) { + pc += 5; + mode = TXGBE_FUNC_FLASH_WRITE; + } + + if (mode && 1 == sscanf(pc, "%hu", &args)) + mode |= args; + + txgbe_data_mode = mode; + } else if (strncmp(txgbe_dbg_reg_ops_buf, "write", 5) == 0) { + u32 reg, value; + int cnt; + + cnt = sscanf(&txgbe_dbg_reg_ops_buf5, "%x %x", ®, &value); + if (cnt == 2) { + wr32(&adapter->hw, reg, value); + e_dev_info("write: 0x%08x = 0x%08x\n", reg, value); + } else { + e_dev_info("write <reg> <value>\n"); + } + } else if (strncmp(txgbe_dbg_reg_ops_buf, "read", 4) == 0) { + u32 reg, value; + int cnt; + + cnt = sscanf(&txgbe_dbg_reg_ops_buf4, "%x", ®); + if (cnt == 1) { + value = rd32(&adapter->hw, reg); + e_dev_info("read 0x%08x = 0x%08x\n", reg, value); + } else { + e_dev_info("read <reg>\n"); + } + } else { + e_dev_info("Unknown command %s\n", 
txgbe_dbg_reg_ops_buf); + e_dev_info("Available commands:\n"); + e_dev_info(" read <reg>\n"); + e_dev_info(" write <reg> <value>\n"); + } + return count; +} + +static const struct file_operations txgbe_dbg_reg_ops_fops = { + .owner = THIS_MODULE, + .open = simple_open, + .read = txgbe_dbg_reg_ops_read, + .write = txgbe_dbg_reg_ops_write, +}; + +/** + * netdev_ops operation + **/ +static char txgbe_dbg_netdev_ops_buf256 = ""; +static ssize_t +txgbe_dbg_netdev_ops_read(struct file *filp, + char __user *buffer, + size_t count, loff_t *ppos) +{ + struct txgbe_adapter *adapter = filp->private_data; + char *buf; + int len; + + /* don't allow partial reads */ + if (*ppos != 0) + return 0; + + buf = kasprintf(GFP_KERNEL, "%s: mode=0x%08x\n%s\n", + adapter->netdev->name, txgbe_data_mode, + txgbe_dbg_netdev_ops_buf); + if (!buf) + return -ENOMEM; + + if (count < strlen(buf)) { + kfree(buf); + return -ENOSPC; + } + + len = simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf)); + + kfree(buf); + return len; +} + +static ssize_t +txgbe_dbg_netdev_ops_write(struct file *filp, + const char __user *buffer, + size_t count, loff_t *ppos) +{ + struct txgbe_adapter *adapter = filp->private_data; + int len; + + /* don't allow partial writes */ + if (*ppos != 0) + return 0; + if (count >= sizeof(txgbe_dbg_netdev_ops_buf)) + return -ENOSPC; + + len = simple_write_to_buffer(txgbe_dbg_netdev_ops_buf, + sizeof(txgbe_dbg_netdev_ops_buf)-1, + ppos, + buffer, + count); + if (len < 0) + return len; + + txgbe_dbg_netdev_ops_buflen = '\0'; + + if (strncmp(txgbe_dbg_netdev_ops_buf, "tx_timeout", 10) == 0) { + adapter->netdev->netdev_ops->ndo_tx_timeout(adapter->netdev, UINT_MAX); + e_dev_info("tx_timeout called\n"); + } else { + e_dev_info("Unknown command: %s\n", txgbe_dbg_netdev_ops_buf); + e_dev_info("Available commands:\n"); + e_dev_info(" tx_timeout\n"); + } + return count; +} + +static const struct file_operations txgbe_dbg_netdev_ops_fops = { + .owner = THIS_MODULE, + .open = simple_open, + .read = txgbe_dbg_netdev_ops_read, + .write = txgbe_dbg_netdev_ops_write, +}; + +/** + * txgbe_dbg_adapter_init - setup the debugfs directory for the adapter + * @adapter: the adapter that is starting up + **/ +void txgbe_dbg_adapter_init(struct txgbe_adapter *adapter) +{ + const char *name = pci_name(adapter->pdev); + struct dentry *pfile; + + adapter->txgbe_dbg_adapter = debugfs_create_dir(name, txgbe_dbg_root); + if (!adapter->txgbe_dbg_adapter) { + e_dev_err("debugfs entry for %s failed\n", name); + return; + } + + pfile = debugfs_create_file("data", 0600, + adapter->txgbe_dbg_adapter, adapter, + &txgbe_dbg_data_ops_fops); + if (!pfile) + e_dev_err("debugfs netdev_ops for %s failed\n", name); + + pfile = debugfs_create_file("reg_ops", 0600, + adapter->txgbe_dbg_adapter, adapter, + &txgbe_dbg_reg_ops_fops); + if (!pfile) + e_dev_err("debugfs reg_ops for %s failed\n", name); + + pfile = debugfs_create_file("netdev_ops", 0600, + adapter->txgbe_dbg_adapter, adapter, + &txgbe_dbg_netdev_ops_fops); + if (!pfile) + e_dev_err("debugfs netdev_ops for %s failed\n", name); +} + +/** + * txgbe_dbg_adapter_exit - clear out the adapter's debugfs entries + * @pf: the pf that is stopping + **/ +void txgbe_dbg_adapter_exit(struct txgbe_adapter *adapter) +{ + debugfs_remove_recursive(adapter->txgbe_dbg_adapter); + adapter->txgbe_dbg_adapter = NULL; +} + +/** + * txgbe_dbg_init - start up debugfs for the driver + **/ +void txgbe_dbg_init(void) +{ + txgbe_dbg_root = debugfs_create_dir(txgbe_driver_name, NULL); + if (!txgbe_dbg_root) + 
pr_err("init of debugfs failed\n"); +} + +/** + * txgbe_dbg_exit - clean out the driver's debugfs entries + **/ +void txgbe_dbg_exit(void) +{ + debugfs_remove_recursive(txgbe_dbg_root); +} + +struct txgbe_reg_info { + u32 offset; + u32 length; + char *name; +}; + +static struct txgbe_reg_info txgbe_reg_info_tbl = { + /* General Registers */ + {TXGBE_CFG_PORT_CTL, 1, "CTRL"}, + {TXGBE_CFG_PORT_ST, 1, "STATUS"}, + + /* RX Registers */ + {TXGBE_PX_RR_CFG(0), 1, "SRRCTL"}, + {TXGBE_PX_RR_RP(0), 1, "RDH"}, + {TXGBE_PX_RR_WP(0), 1, "RDT"}, + {TXGBE_PX_RR_CFG(0), 1, "RXDCTL"}, + {TXGBE_PX_RR_BAL(0), 1, "RDBAL"}, + {TXGBE_PX_RR_BAH(0), 1, "RDBAH"}, + + /* TX Registers */ + {TXGBE_PX_TR_BAL(0), 1, "TDBAL"}, + {TXGBE_PX_TR_BAH(0), 1, "TDBAH"}, + {TXGBE_PX_TR_RP(0), 1, "TDH"}, + {TXGBE_PX_TR_WP(0), 1, "TDT"}, + {TXGBE_PX_TR_CFG(0), 1, "TXDCTL"}, + + /* MACVLAN */ + {TXGBE_PSR_MAC_SWC_VM_H, 128, "PSR_MAC_SWC_VM"}, + {TXGBE_PSR_MAC_SWC_AD_L, 128, "PSR_MAC_SWC_AD"}, + {TXGBE_PSR_VLAN_TBL(0), 128, "PSR_VLAN_TBL"}, + + /* QoS */ + {TXGBE_TDM_RP_RATE, 128, "TDM_RP_RATE"}, + + /* List Terminator */ + { .name = NULL } +}; + +/** + * txgbe_regdump - register printout routine + **/ +static void +txgbe_regdump(struct txgbe_hw *hw, struct txgbe_reg_info *reg_info) +{ + int i, n = 0; + u32 buffer256; + + switch (reg_info->offset) { + case TXGBE_PSR_MAC_SWC_VM_H: + for (i = 0; i < reg_info->length; i++) { + wr32(hw, TXGBE_PSR_MAC_SWC_IDX, i); + buffern++ = + rd32(hw, TXGBE_PSR_MAC_SWC_VM_H); + buffern++ = + rd32(hw, TXGBE_PSR_MAC_SWC_VM_L); + } + break; + case TXGBE_PSR_MAC_SWC_AD_L: + for (i = 0; i < reg_info->length; i++) { + wr32(hw, TXGBE_PSR_MAC_SWC_IDX, i); + buffern++ = + rd32(hw, TXGBE_PSR_MAC_SWC_AD_H); + buffern++ = + rd32(hw, TXGBE_PSR_MAC_SWC_AD_L); + } + break; + case TXGBE_TDM_RP_RATE: + for (i = 0; i < reg_info->length; i++) { + wr32(hw, TXGBE_TDM_RP_IDX, i); + buffern++ = rd32(hw, TXGBE_TDM_RP_RATE); + } + break; + default: + for (i = 0; i < reg_info->length; i++) { + buffern++ = rd32(hw, + reg_info->offset + 4 * i); + } + break; + } + WARN_ON(n); +} + +/** + * txgbe_dump - Print registers, tx-rings and rx-rings + **/ +void txgbe_dump(struct txgbe_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + struct txgbe_hw *hw = &adapter->hw; + struct txgbe_reg_info *reg_info; + int n = 0; + struct txgbe_ring *tx_ring; + struct txgbe_tx_buffer *tx_buffer; + union txgbe_tx_desc *tx_desc; + struct my_u0 { u64 a; u64 b; } *u0; + struct txgbe_ring *rx_ring; + union txgbe_rx_desc *rx_desc; + struct txgbe_rx_buffer *rx_buffer_info; + u32 staterr; + int i = 0; + + if (!netif_msg_hw(adapter)) + return; + + /* Print Registers */ + dev_info(&adapter->pdev->dev, "Register Dump\n"); + pr_info(" Register Name Value\n"); + for (reg_info = txgbe_reg_info_tbl; reg_info->name; reg_info++) + txgbe_regdump(hw, reg_info); + + /* Print TX Ring Summary */ + if (!netdev || !netif_running(netdev)) + return; + + dev_info(&adapter->pdev->dev, "TX Rings Summary\n"); + + for (n = 0; n < adapter->num_tx_queues; n++) { + tx_ring = adapter->tx_ringn; + tx_buffer = &tx_ring->tx_buffer_infotx_ring->next_to_clean; + pr_info(" %5d %5X %5X %016llX %08X %p %016llX\n", + n, tx_ring->next_to_use, tx_ring->next_to_clean, + (u64)dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + tx_buffer->next_to_watch, + (u64)tx_buffer->time_stamp); + } + + /* Print TX Rings */ + if (!netif_msg_tx_done(adapter)) + goto rx_ring_summary; + + dev_info(&adapter->pdev->dev, "TX Rings Dump\n"); + + /* Transmit Descriptor Formats + * + 
* Transmit Descriptor (Read) + * +--------------------------------------------------------------+ + * 0 | Buffer Address 63:0 | + * +--------------------------------------------------------------+ + * 8 |PAYLEN |POPTS|CC|IDX |STA |DCMD |DTYP |MAC |RSV |DTALEN | + * +--------------------------------------------------------------+ + * 63 46 45 40 39 38 36 35 32 31 24 23 20 19 18 17 16 15 0 + * + * Transmit Descriptor (Write-Back) + * +--------------------------------------------------------------+ + * 0 | RSV 63:0 | + * +--------------------------------------------------------------+ + * 8 | RSV | STA | RSV | + * +--------------------------------------------------------------+ + * 63 36 35 32 31 0 + */ + + for (n = 0; n < adapter->num_tx_queues; n++) { + tx_ring = adapter->tx_ringn; + pr_info("------------------------------------\n"); + pr_info("TX QUEUE INDEX = %d\n", tx_ring->queue_index); + pr_info("------------------------------------\n"); + + for (i = 0; tx_ring->desc && (i < tx_ring->count); i++) { + tx_desc = TXGBE_TX_DESC(tx_ring, i); + tx_buffer = &tx_ring->tx_buffer_infoi; + u0 = (struct my_u0 *)tx_desc; + if (dma_unmap_len(tx_buffer, len) > 0) { + pr_info("T 0x%03X %016llX %016llX %016llX %08X %p %016llX %p", + i, + le64_to_cpu(u0->a), + le64_to_cpu(u0->b), + (u64)dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + tx_buffer->next_to_watch, + (u64)tx_buffer->time_stamp, + tx_buffer->skb); + + if (netif_msg_pktdata(adapter) && + tx_buffer->skb) + print_hex_dump(KERN_INFO, "", + DUMP_PREFIX_ADDRESS, 16, 1, + tx_buffer->skb->data, + dma_unmap_len(tx_buffer, len), + true); + } + } + } + + /* Print RX Rings Summary */ +rx_ring_summary: + dev_info(&adapter->pdev->dev, "RX Rings Summary\n"); + pr_info("Queue NTU NTC\n"); + for (n = 0; n < adapter->num_rx_queues; n++) { + rx_ring = adapter->rx_ringn; + pr_info("%5d %5X %5X\n", + n, rx_ring->next_to_use, rx_ring->next_to_clean); + } + + /* Print RX Rings */ + if (!netif_msg_rx_status(adapter)) + return; + + dev_info(&adapter->pdev->dev, "RX Rings Dump\n"); + + /* Receive Descriptor Formats + * + * Receive Descriptor (Read) + * 63 1 0 + * +-----------------------------------------------------+ + * 0 | Packet Buffer Address 63:1 |A0/NSE| + * +----------------------------------------------+------+ + * 8 | Header Buffer Address 63:1 | DD | + * +-----------------------------------------------------+ + * + * + * Receive Descriptor (Write-Back) + * + * 63 48 47 32 31 30 21 20 17 16 4 3 0 + * +------------------------------------------------------+ + * 0 |RSS / Frag Checksum|SPH| HDR_LEN |RSC- |Packet| RSS | + * |/ RTT / PCoE_PARAM | | | CNT | Type | Type | + * |/ Flow Dir Flt ID | | | | | | + * +------------------------------------------------------+ + * 8 | VLAN Tag | Length |Extended Error| Xtnd Status/NEXTP | + * +------------------------------------------------------+ + * 63 48 47 32 31 20 19 0 + */ + + for (n = 0; n < adapter->num_rx_queues; n++) { + rx_ring = adapter->rx_ringn; + pr_info("------------------------------------\n"); + pr_info("RX QUEUE INDEX = %d\n", rx_ring->queue_index); + pr_info("------------------------------------\n"); + + for (i = 0; i < rx_ring->count; i++) { + rx_buffer_info = &rx_ring->rx_buffer_infoi; + rx_desc = TXGBE_RX_DESC(rx_ring, i); + u0 = (struct my_u0 *)rx_desc; + staterr = le32_to_cpu(rx_desc->wb.upper.status_error); + if (staterr & TXGBE_RXD_STAT_DD) { + /* Descriptor Done */ + pr_info("RWB0x%03X %016llX %016llX ---------------- %p", i, + le64_to_cpu(u0->a), + le64_to_cpu(u0->b), + 
rx_buffer_info->skb); + } else { + pr_info("R 0x%03X %016llX %016llX %016llX %p", i, + le64_to_cpu(u0->a), + le64_to_cpu(u0->b), + (u64)rx_buffer_info->page_dma, + rx_buffer_info->skb); + + if (netif_msg_pktdata(adapter) && + rx_buffer_info->page_dma) { + print_hex_dump(KERN_INFO, "", + DUMP_PREFIX_ADDRESS, 16, 1, + page_address(rx_buffer_info->page) + + rx_buffer_info->page_offset, + txgbe_rx_bufsz(rx_ring), true); + } + } + } + } +}
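A note for readers of the dump code above: both txgbe_regdump() and txgbe_dump() walk txgbe_reg_info_tbl until they hit the entry whose .name is NULL. Below is a minimal, self-contained sketch of that sentinel-terminated-table pattern in user-space C; the offsets, names, and rd32_stub() are invented for illustration and do not correspond to real txgbe registers.

#include <stdint.h>
#include <stdio.h>

/* Shape of the driver's table entries (see struct txgbe_reg_info above). */
struct reg_info {
	uint32_t offset;
	uint32_t length;
	const char *name; /* NULL name marks the list terminator */
};

/* Stand-in for the driver's rd32() MMIO read; values are fabricated. */
static uint32_t rd32_stub(uint32_t offset)
{
	return offset ^ 0xdeadbeefu;
}

static const struct reg_info tbl[] = {
	{ 0x10000, 1, "CTRL" },
	{ 0x10004, 2, "STATUS" },
	{ .name = NULL }, /* terminator, exactly as in txgbe_reg_info_tbl */
};

int main(void)
{
	/* Same walk as txgbe_dump(): advance until the NULL-name sentinel. */
	for (const struct reg_info *r = tbl; r->name; r++)
		for (uint32_t i = 0; i < r->length; i++)
			printf("%-8s 0x%05x = 0x%08x\n", r->name,
			       (unsigned)(r->offset + 4 * i),
			       (unsigned)rd32_stub(r->offset + 4 * i));
	return 0;
}

The sentinel keeps the table self-describing: new registers can be added without maintaining a separate count.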
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c
Changed
@@ -212,6 +212,7 @@ /* set the advertised speeds */ if (hw->phy.autoneg_advertised) { + advertising = 0; if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100_FULL) advertising |= ADVERTISED_100baseT_Full; if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL) @@ -2194,6 +2195,7 @@ { struct txgbe_adapter *adapter = netdev_priv(netdev); struct txgbe_hw *hw = &adapter->hw; + u16 value = 0; switch (state) { case ETHTOOL_ID_ACTIVE: @@ -2201,17 +2203,58 @@ return 2; case ETHTOOL_ID_ON: - TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_UP); + if (hw->oem_ssid == 0x0075 && hw->oem_svid == 0x1bd4) { + if (adapter->link_up) { + switch (adapter->link_speed) { + case TXGBE_LINK_SPEED_10GB_FULL: + TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_10G); + break; + case TXGBE_LINK_SPEED_1GB_FULL: + TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_1G); + break; + case TXGBE_LINK_SPEED_100_FULL: + TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_100M); + break; + default: + break; + } + } else + TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_10G); + } else + TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_UP); break; case ETHTOOL_ID_OFF: - TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_UP); + if (hw->oem_ssid == 0x0075 && hw->oem_svid == 0x1bd4) { + if (adapter->link_up) { + switch (adapter->link_speed) { + case TXGBE_LINK_SPEED_10GB_FULL: + TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_10G); + break; + case TXGBE_LINK_SPEED_1GB_FULL: + TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_1G); + break; + case TXGBE_LINK_SPEED_100_FULL: + TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_100M); + break; + default: + break; + } + } else + TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_10G); + } else + TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_UP); break; case ETHTOOL_ID_INACTIVE: /* Restore LED settings */ wr32(&adapter->hw, TXGBE_CFG_LED_CTL, adapter->led_reg); + if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI) { + txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, &value); + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, + (value & 0xFFFC) | 0x0); + } break; } @@ -3319,16 +3362,21 @@ if (ret < 0) return ret; - if (txgbe_mng_present(&adapter->hw)) { + if (ef->region == 0) { + ret = txgbe_upgrade_flash(&adapter->hw, ef->region, + fw->data, fw->size); + } else { + if (txgbe_mng_present(&adapter->hw)) ret = txgbe_upgrade_flash_hostif(&adapter->hw, ef->region, fw->data, fw->size); - } else - ret = -EOPNOTSUPP; + else + ret = -EOPNOTSUPP; + } release_firmware(fw); if (!ret) dev_info(&netdev->dev, - "loaded firmware %s, reload txgbe driver\n", ef->data); + "loaded firmware %s, reboot to make firmware work\n", ef->data); return ret; }
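For context on the set_phys_id hunk above: on the OEM board identified by subsystem vendor 0x1bd4 and subsystem device 0x0075, the patch blinks whichever LED matches the negotiated link speed instead of the generic link-up LED. The helper below restates that selection as a standalone sketch; the LED_* masks and the speed-in-Mbps interface are placeholders, not the driver's real TXGBE_LED_LINK_* values, which take the link-speed constants directly.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder LED masks; the real bits are the TXGBE_LED_LINK_* defines. */
#define LED_LINK_10G   0x1u
#define LED_LINK_1G    0x2u
#define LED_LINK_100M  0x4u

/* Mirror of the patch's logic: pick the LED for the negotiated speed,
 * falling back to the 10G LED while the link is down. */
static uint32_t led_for_identify(bool link_up, unsigned speed_mbps)
{
	if (!link_up)
		return LED_LINK_10G;
	switch (speed_mbps) {
	case 10000: return LED_LINK_10G;
	case 1000:  return LED_LINK_1G;
	case 100:   return LED_LINK_100M;
	default:    return 0; /* the patch leaves the LED untouched here */
	}
}

int main(void)
{
	printf("link down -> mask 0x%x\n", led_for_identify(false, 0));
	printf("1G link   -> mask 0x%x\n", led_for_identify(true, 1000));
	return 0;
}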
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_hw.c
Changed
@@ -123,8 +123,6 @@ u16 max_msix_count; u32 pos; - DEBUGFUNC("\n"); - max_msix_count = TXGBE_MAX_MSIX_VECTORS_SAPPHIRE; pos = pci_find_capability(((struct txgbe_adapter *)hw->back)->pdev, PCI_CAP_ID_MSIX); if (!pos) @@ -159,8 +157,6 @@ { s32 status; - DEBUGFUNC("\n"); - /* Reset the hardware */ status = TCALL(hw, mac.ops.reset_hw); @@ -184,8 +180,6 @@ { u16 i = 0; - DEBUGFUNC("\n"); - rd32(hw, TXGBE_RX_CRC_ERROR_FRAMES_LOW); for (i = 0; i < 8; i++) rd32(hw, TXGBE_RDB_MPCNT(i)); @@ -239,9 +233,7 @@ bool supported = false; u32 speed; bool link_up; - u8 device_type = hw->subsystem_id & 0xF0; - - DEBUGFUNC("\n"); + u8 device_type = hw->subsystem_device_id & 0xF0; switch (hw->phy.media_type) { case txgbe_media_type_fiber: @@ -284,8 +276,6 @@ u32 value = 0; u32 pcap_backplane = 0; - DEBUGFUNC("\n"); - /* Validate the requested mode */ if (hw->fc.strict_ieee && hw->fc.requested_mode == txgbe_fc_rx_pause) { ERROR_REPORT1(TXGBE_ERROR_UNSUPPORTED, @@ -399,8 +389,6 @@ u16 offset; u16 length; - DEBUGFUNC("\n"); - if (pba_num == NULL) { DEBUGOUT("PBA string buffer was null\n"); return TXGBE_ERR_INVALID_ARGUMENT; @@ -512,8 +500,6 @@ u32 rar_low; u16 i; - DEBUGFUNC("\n"); - wr32(hw, TXGBE_PSR_MAC_SWC_IDX, 0); rar_high = rd32(hw, TXGBE_PSR_MAC_SWC_AD_H); rar_low = rd32(hw, TXGBE_PSR_MAC_SWC_AD_L); @@ -585,8 +571,6 @@ { u16 link_status; - DEBUGFUNC("\n"); - /* Get the negotiated link width and speed from PCI config space */ link_status = txgbe_read_pci_cfg_word(hw, TXGBE_PCI_LINK_STATUS); @@ -607,8 +591,6 @@ struct txgbe_bus_info *bus = &hw->bus; u32 reg; - DEBUGFUNC("\n"); - reg = rd32(hw, TXGBE_CFG_PORT_ST); bus->lan_id = TXGBE_CFG_PORT_ST_LAN_ID(reg); @@ -633,8 +615,6 @@ { u16 i; - DEBUGFUNC("\n"); - /* * Set the adapter_stopped flag so other driver functions stop touching * the hardware @@ -683,15 +663,10 @@ { u32 led_reg = rd32(hw, TXGBE_CFG_LED_CTL); u16 value = 0; - DEBUGFUNC("\n"); if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI) { txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, &value); - txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, value | 0x3); - txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, &value); - txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, value | 0x3); - txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, &value); - txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, value | 0x3); + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, (value & 0xFFFC) | 0x0); } /* To turn on the LED, set mode to ON. */ led_reg |= index | (index << TXGBE_CFG_LED_CTL_LINK_OD_SHIFT); @@ -710,15 +685,10 @@ { u32 led_reg = rd32(hw, TXGBE_CFG_LED_CTL); u16 value = 0; - DEBUGFUNC("\n"); if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI) { txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, &value); - txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, value & 0xFFFC); - txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, &value); - txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, value & 0xFFFC); - txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, &value); - txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, value & 0xFFFC); + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, (value & 0xFFFC) | 0x1); } /* To turn off the LED, set mode to OFF. 
*/ @@ -843,8 +813,6 @@ { s32 status = 0; - DEBUGFUNC("\n"); - /* Make sure it is not a multicast address */ if (TXGBE_IS_MULTICAST(mac_addr)) { DEBUGOUT("MAC address is multicast\n"); @@ -878,8 +846,6 @@ u32 rar_low, rar_high; u32 rar_entries = hw->mac.num_rar_entries; - DEBUGFUNC("\n"); - /* Make sure we are using a valid rar index range */ if (index >= rar_entries) { ERROR_REPORT2(TXGBE_ERROR_ARGUMENT, @@ -932,8 +898,6 @@ { u32 rar_entries = hw->mac.num_rar_entries; - DEBUGFUNC("\n"); - /* Make sure we are using a valid rar index range */ if (index >= rar_entries) { ERROR_REPORT2(TXGBE_ERROR_ARGUMENT, @@ -975,8 +939,6 @@ u32 rar_entries = hw->mac.num_rar_entries; u32 psrctl; - DEBUGFUNC("\n"); - /* * If the current mac address is valid, assume it is a software override * to the permanent address. @@ -1044,13 +1006,7 @@ u32 rar_entries = hw->mac.num_rar_entries; u32 rar; - DEBUGFUNC("\n"); - - DEBUGOUT6(" UC Addr = %.2X %.2X %.2X %.2X %.2X %.2X\n", - addr0, addr1, addr2, addr3, addr4, addr5); - - /* - * Place this address in the RAR if there is room, + /* Place this address in the RAR if there is room, * else put the controller into promiscuous mode */ if (hw->addr_ctrl.rar_used_count < rar_entries) { @@ -1089,8 +1045,6 @@ u32 uc_addr_in_use; u32 vmdq; - DEBUGFUNC("\n"); - /* * Clear accounting of old secondary address list, * don't count RAR0 @@ -1150,8 +1104,6 @@ { u32 vector = 0; - DEBUGFUNC("\n"); - switch (hw->mac.mc_filter_type) { case 0: /* use bits 47:36 of the address */ vector = ((mc_addr4 >> 4) | (((u16)mc_addr5) << 4)); @@ -1189,8 +1141,6 @@ u32 vector_bit; u32 vector_reg; - DEBUGFUNC("\n"); - hw->addr_ctrl.mta_in_use++; vector = txgbe_mta_vector(hw, mc_addr); @@ -1229,10 +1179,7 @@ u32 vmdq; u32 psrctl; - DEBUGFUNC("\n"); - - /* - * Set the new number of MC addresses that we are being requested to + /* Set the new number of MC addresses that we are being requested to * use. */ hw->addr_ctrl.num_mc_addrs = mc_addr_count; @@ -1278,8 +1225,6 @@ struct txgbe_addr_filter_info *a = &hw->addr_ctrl; u32 psrctl; - DEBUGFUNC("\n"); - if (a->mta_in_use > 0) { psrctl = rd32(hw, TXGBE_PSR_CTL); psrctl &= ~(TXGBE_PSR_CTL_MO | TXGBE_PSR_CTL_MFE); @@ -1301,7 +1246,6 @@ { struct txgbe_addr_filter_info *a = &hw->addr_ctrl; u32 psrctl; - DEBUGFUNC("\n"); if (a->mta_in_use > 0) { psrctl = rd32(hw, TXGBE_PSR_CTL); @@ -1327,8 +1271,6 @@ u32 fcrtl, fcrth; int i; - DEBUGFUNC("\n"); - /* Validate the water mark configuration */ if (!hw->fc.pause_time) { ret_val = TXGBE_ERR_INVALID_LINK_SETTINGS; @@ -1586,10 +1528,7 @@ u32 speed; bool link_up; - DEBUGFUNC("\n"); - - /* - * AN should have completed when the cable was plugged in. + /* AN should have completed when the cable was plugged in. * Look for reasons to bail out. Bail out if: * - FC autoneg is disabled, or if * - link is not up. @@ -1655,8 +1594,6 @@ u16 dev_ctl; u32 vf_bme_clear = 0; - DEBUGFUNC("\n"); - /* Always set this bit to ensure any future transactions are blocked */ pci_clear_master(((struct txgbe_adapter *)hw->back)->pdev); @@ -1773,8 +1710,6 @@ int i; int secrxreg; - DEBUGFUNC("\n"); - wr32m(hw, TXGBE_RSC_CTL, TXGBE_RSC_CTL_RX_DIS, TXGBE_RSC_CTL_RX_DIS); for (i = 0; i < TXGBE_MAX_SECRX_POLL; i++) { @@ -1802,8 +1737,6 @@ **/ s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw) { - DEBUGFUNC("\n"); - wr32m(hw, TXGBE_RSC_CTL, TXGBE_RSC_CTL_RX_DIS, 0); TXGBE_WRITE_FLUSH(hw); @@ -1825,8 +1758,6 @@ { s32 ret_val; - DEBUGFUNC("\n"); - /* * First read the EEPROM pointer to see if the MAC addresses are * available. 
@@ -1859,10 +1790,7 @@ u8 i; s32 ret_val; - DEBUGFUNC("\n"); - - /* - * First read the EEPROM pointer to see if the MAC addresses are + /* First read the EEPROM pointer to see if the MAC addresses are * available. If they're not, no point in calling set_lan_id() here. */ ret_val = txgbe_get_san_mac_addr_offset(hw, &san_mac_offset); @@ -1910,8 +1838,6 @@ u16 san_mac_data, san_mac_offset; u8 i; - DEBUGFUNC("\n"); - /* Look for SAN mac address pointer. If not defined, return */ ret_val = txgbe_get_san_mac_addr_offset(hw, &san_mac_offset); if (ret_val || san_mac_offset == 0 || san_mac_offset == 0xFFFF) @@ -1948,8 +1874,6 @@ u32 rar_low, rar_high; u32 addr_low, addr_high; - DEBUGFUNC("\n"); - /* swap bytes for HW little endian */ addr_low = addr[5] | (addr[4] << 8) | (addr[3] << 16) @@ -2014,8 +1938,6 @@ u32 mpsar_lo, mpsar_hi; u32 rar_entries = hw->mac.num_rar_entries; - DEBUGFUNC("\n"); - /* Make sure we are using a valid rar index range */ if (rar >= rar_entries) { ERROR_REPORT2(TXGBE_ERROR_ARGUMENT, @@ -2046,12 +1968,10 @@ * @rar: receive address register index to associate with a VMDq index * @vmdq: VMDq pool index **/ -s32 txgbe_set_vmdq(struct txgbe_hw *hw, u32 rar, u32 __maybe_unused pool) +s32 txgbe_set_vmdq(struct txgbe_hw *hw, u32 rar, u32 __always_unused pool) { u32 rar_entries = hw->mac.num_rar_entries; - DEBUGFUNC("\n"); - /* Make sure we are using a valid rar index range */ if (rar >= rar_entries) { ERROR_REPORT2(TXGBE_ERROR_ARGUMENT, @@ -2076,8 +1996,6 @@ { u32 rar = hw->mac.san_mac_rar_index; - DEBUGFUNC("\n"); - wr32(hw, TXGBE_PSR_MAC_SWC_IDX, rar); if (vmdq < 32) { wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, 1 << vmdq); @@ -2098,9 +2016,6 @@ { int i; - DEBUGFUNC("\n"); - DEBUGOUT(" Clearing UTA\n"); - for (i = 0; i < 128; i++) wr32(hw, TXGBE_PSR_UC_TBL(i), 0); @@ -2175,8 +2090,6 @@ s32 ret_val = 0; bool vfta_changed = false; - DEBUGFUNC("\n"); - if (vlan > 4095) return TXGBE_ERR_PARAM; @@ -2240,8 +2153,6 @@ { u32 vt; - DEBUGFUNC("\n"); - if (vlan > 4095) return TXGBE_ERR_PARAM; @@ -2343,8 +2254,6 @@ { u32 offset; - DEBUGFUNC("\n"); - for (offset = 0; offset < hw->mac.vft_size; offset++) { wr32(hw, TXGBE_PSR_VLAN_TBL(offset), 0); /* errata 5 */ @@ -2377,8 +2286,6 @@ u16 offset, caps; u16 alt_san_mac_blk_offset; - DEBUGFUNC("\n"); - /* clear output first */ *wwnn_prefix = 0xFFFF; *wwpn_prefix = 0xFFFF; @@ -2431,8 +2338,6 @@ { u64 pfvfspoof = 0; - DEBUGFUNC("\n"); - if (enable) { /* * The PF should be allowed to spoof so that it can support @@ -2461,8 +2366,6 @@ { u32 pfvfspoof; - DEBUGFUNC("\n"); - if (vf < 32) { pfvfspoof = rd32(hw, TXGBE_TDM_VLAN_AS_L); if (enable) @@ -2492,8 +2395,6 @@ { u32 pfvfspoof; - DEBUGFUNC("\n"); - if (vf < 32) { pfvfspoof = rd32(hw, TXGBE_TDM_ETYPE_AS_L); if (enable) @@ -2521,8 +2422,6 @@ **/ s32 txgbe_get_device_caps(struct txgbe_hw *hw, u16 *device_caps) { - DEBUGFUNC("\n"); - TCALL(hw, eeprom.ops.read, hw->eeprom.sw_region_offset + TXGBE_DEVICE_CAPS, device_caps); @@ -2541,8 +2440,6 @@ u32 i; u8 sum = 0; - DEBUGFUNC("\n"); - if (!buffer) return 0; @@ -2579,8 +2476,6 @@ s32 status = 0; u32 buf[64] = {}; - DEBUGFUNC("\n"); - if (length == 0 || length > TXGBE_HI_MAX_BLOCK_BYTE_LENGTH) { DEBUGOUT1("Buffer length failure buffersize=%d.\n", length); return TXGBE_ERR_HOST_INTERFACE_COMMAND; } @@ -2633,6 +2528,14 @@ msec_delay(1); } + buf[0] = rd32(hw, TXGBE_MNG_MBOX); + + if ((buf[0] & 0xff0000) >> 16 == 0x80) { + DEBUGOUT("It's unknown cmd.\n"); + status = TXGBE_ERR_MNG_ACCESS_FAILED; + goto rel_out; + } + /* Check command completion */ if (timeout != 0 && i == timeout) { ERROR_REPORT1(TXGBE_ERROR_CAUTION, @@ -2647,8 +2550,10 @@ ERROR_REPORT1(TXGBE_ERROR_CAUTION, "%x ", buf[i]); } - status = TXGBE_ERR_HOST_INTERFACE_COMMAND; - goto rel_out; + if ((buffer[0] & 0xff) != (~buf[0] >> 24)) { + status = TXGBE_ERR_HOST_INTERFACE_COMMAND; + goto rel_out; + } } if (!return_data) @@ -2720,8 +2625,6 @@ int i; s32 ret_val = 0; - DEBUGFUNC("\n"); - fw_cmd.hdr.cmd = FW_CEM_CMD_DRIVER_INFO; fw_cmd.hdr.buf_len = FW_CEM_CMD_DRIVER_INFO_LEN; fw_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED; @@ -2771,8 +2674,6 @@ int i; s32 status = 0; - DEBUGFUNC("\n"); - reset_cmd.hdr.cmd = FW_RESET_CMD; reset_cmd.hdr.buf_len = FW_RESET_LEN; reset_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED; @@ -2809,8 +2710,6 @@ int i; s32 status = 0; - DEBUGFUNC("\n"); - cmd.hdr.cmd = FW_SETUP_MAC_LINK_CMD; cmd.hdr.buf_len = FW_SETUP_MAC_LINK_LEN; cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED; @@ -2867,8 +2766,6 @@ u32 offset; s32 status = 0; - DEBUGFUNC("\n"); - start_cmd.hdr.cmd = FW_FLASH_UPGRADE_START_CMD; start_cmd.hdr.buf_len = FW_FLASH_UPGRADE_START_LEN; start_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED; @@ -2953,6 +2850,275 @@ return status; } +u8 fmgr_cmd_op(struct txgbe_hw *hw, u32 cmd, u32 cmd_addr) +{ + u32 cmd_val = 0; + u32 time_out = 0; + + cmd_val = (cmd << SPI_CLK_CMD_OFFSET) | (SPI_CLK_DIV << SPI_CLK_DIV_OFFSET) | cmd_addr; + wr32(hw, SPI_H_CMD_REG_ADDR, cmd_val); + while (1) { + if (rd32(hw, SPI_H_STA_REG_ADDR) & 0x1) + break; + + if (time_out == SPI_TIME_OUT_VALUE) + return 1; + + time_out = time_out + 1; + udelay(10); + } + + return 0; +} + +u8 fmgr_usr_cmd_op(struct txgbe_hw *hw, u32 usr_cmd) +{ + u8 status = 0; + + wr32(hw, SPI_H_USR_CMD_REG_ADDR, usr_cmd); + status = fmgr_cmd_op(hw, SPI_CMD_USER_CMD, 0); + + return status; +} + +u8 flash_erase_chip(struct txgbe_hw *hw) +{ + u8 status = fmgr_cmd_op(hw, SPI_CMD_ERASE_CHIP, 0); + return status; +} + +u8 flash_erase_sector(struct txgbe_hw *hw, u32 sec_addr) +{ + u8 status = fmgr_cmd_op(hw, SPI_CMD_ERASE_SECTOR, sec_addr); + + return status; +} + +u32 flash_read_dword(struct txgbe_hw *hw, u32 addr) +{ + u8 status = fmgr_cmd_op(hw, SPI_CMD_READ_DWORD, addr); + + if (status) + return (u32)status; + + return rd32(hw, SPI_H_DAT_REG_ADDR); +} + +u8 flash_write_dword(struct txgbe_hw *hw, u32 addr, u32 dword) +{ + u8 status = 0; + + wr32(hw, SPI_H_DAT_REG_ADDR, dword); + status = fmgr_cmd_op(hw, SPI_CMD_WRITE_DWORD, addr); + if (status) + return status; + + if (dword != flash_read_dword(hw, addr)) + return 1; + + return 0; +} + +int txgbe_flash_write_cab(struct txgbe_hw *hw, u32 addr, u32 value, u16 lan_id) +{ + int status; + struct txgbe_hic_read_cab buffer; + + buffer.hdr.req.cmd = 0xE2; + buffer.hdr.req.buf_lenh = 0x6; + buffer.hdr.req.buf_lenl = 0x0; + buffer.hdr.req.checksum = 0xFF; + + /* convert offset from words to bytes */ + buffer.dbuf.d16[0] = cpu_to_le16(lan_id); + /* one word */ + buffer.dbuf.d32[0] = htonl(addr); + buffer.dbuf.d32[1] = htonl(value); + + status = txgbe_host_interface_command(hw, (u32 *)&buffer, + sizeof(buffer), 5000, true); + + return status; +} + +int txgbe_flash_read_cab(struct txgbe_hw *hw, u32 addr, u16 lan_id) +{ + int status; + struct txgbe_hic_read_cab buffer; + u16 *data = NULL; + + buffer.hdr.req.cmd = 0xE1; + buffer.hdr.req.buf_lenh = 0xaa; + buffer.hdr.req.buf_lenl = 0; + buffer.hdr.req.checksum = 0xFF; + + /* convert offset from words to bytes */ + buffer.dbuf.d16[0] = cpu_to_le16(lan_id); + /* one word */ + buffer.dbuf.d32[0] = htonl(addr); + + status = txgbe_host_interface_command(hw, (u32 *)&buffer, + sizeof(buffer), 5000, true); + + if (status) + return status; + if (txgbe_check_mng_access(hw)) { + *data = (u16)rd32a(hw, 0x1e100, 3); + } else { + status = -147; + return status; + } + + return rd32(hw, 0x1e108); +} + +int txgbe_flash_write_unlock(struct txgbe_hw *hw) +{ + int status; + struct txgbe_hic_read_shadow_ram buffer; + + buffer.hdr.req.cmd = 0x40; + buffer.hdr.req.buf_lenh = 0; + buffer.hdr.req.buf_lenl = 0; + buffer.hdr.req.checksum = 0xFF; + + /* convert offset from words to bytes */ + buffer.address = 0; + /* one word */ + buffer.length = 0; + + status = txgbe_host_interface_command(hw, (u32 *)&buffer, + sizeof(buffer), 5000, false); + if (status) + return status; + + return status; +} + +int txgbe_flash_write_lock(struct txgbe_hw *hw) +{ + int status; + struct txgbe_hic_read_shadow_ram buffer; + + buffer.hdr.req.cmd = 0x39; + buffer.hdr.req.buf_lenh = 0; + buffer.hdr.req.buf_lenl = 0; + buffer.hdr.req.checksum = 0xFF; + + /* convert offset from words to bytes */ + buffer.address = 0; + /* one word */ + buffer.length = 0; + + status = txgbe_host_interface_command(hw, (u32 *)&buffer, + sizeof(buffer), 5000, false); + if (status) + return status; + + return status; +} + +int txgbe_upgrade_flash(struct txgbe_hw *hw, u32 region, + const u8 *data, u32 size) +{ + u32 sector_num = 0; + u32 read_data = 0; + u8 status = 0; + u8 skip = 0; + u32 i = 0; + u8 flash_vendor = 0; + u32 mac_addr0_dword0_t; + u32 mac_addr0_dword1_t; + u32 mac_addr1_dword0_t; + u32 mac_addr1_dword1_t; + u32 serial_num_dword0_t; + u32 serial_num_dword1_t; + u32 serial_num_dword2_t; + + /* check sub_id, don't care value of 15b ~ 12b*/; + if ((hw->subsystem_device_id & 0xfff) != + ((data[0xfffdc] << 8 | data[0xfffdd]) & 0xfff)) { + return -EOPNOTSUPP; + } + + /*check dev_id*/ + if (!((hw->device_id & 0xfff0) == ((data[0xfffde] << 8 | data[0xfffdf]) & 0xfff0)) && + !(hw->device_id == 0xffff)) { + return -EOPNOTSUPP; + } + + /* unlock flash write protect*/ + wr32(hw, TXGBE_SPI_CMDCFG0, 0x9f050206); + wr32(hw, 0x10194, 0x9f050206); + + msleep(1000); + + mac_addr0_dword0_t = flash_read_dword(hw, MAC_ADDR0_WORD0_OFFSET_1G); + mac_addr0_dword1_t = flash_read_dword(hw, MAC_ADDR0_WORD1_OFFSET_1G) & 0xffff; + mac_addr1_dword0_t = flash_read_dword(hw, MAC_ADDR1_WORD0_OFFSET_1G); + mac_addr1_dword1_t = flash_read_dword(hw, MAC_ADDR1_WORD1_OFFSET_1G) & 0xffff; + + serial_num_dword0_t = flash_read_dword(hw, PRODUCT_SERIAL_NUM_OFFSET_1G); + serial_num_dword1_t = flash_read_dword(hw, PRODUCT_SERIAL_NUM_OFFSET_1G + 4); + serial_num_dword2_t = flash_read_dword(hw, PRODUCT_SERIAL_NUM_OFFSET_1G + 8); + + status = fmgr_usr_cmd_op(hw, 0x6); /* write enable*/ + status = fmgr_usr_cmd_op(hw, 0x98); /* global protection un-lock*/ + txgbe_flash_write_unlock(hw); + msleep(1000); + + /*Note: for Spansion FLASH, first 8 sectors (4KB) in sector0 (64KB) + *need to use a special erase command (4K sector erase) + */ + if (flash_vendor == 1) { + wr32(hw, SPI_CMD_CFG1_ADDR, 0x0103c720); + for (i = 0; i < 8; i++) { + flash_erase_sector(hw, i * 128); + msleep(20); // 20 ms + } + wr32(hw, SPI_CMD_CFG1_ADDR, 0x0103c7d8); + } + + sector_num = size / SPI_SECTOR_SIZE; + /* Winbond Flash, erase chip command is okay, but erase sector doesn't work*/ + if (flash_vendor == 2) { + status = flash_erase_chip(hw); + msleep(1000); + } else { + wr32(hw, SPI_CMD_CFG1_ADDR, 0x0103c720); + for (i = 0; i < sector_num; i++) { + status = flash_erase_sector(hw, i * SPI_SECTOR_SIZE); + msleep(50); + } + wr32(hw, SPI_CMD_CFG1_ADDR, 0x0103c7d8); + } + + /* Program Image file in dword*/ + for (i = 0; i < size / 4; i++) { + read_data = data[4 * i + 3] << 24 | data[4 * i + 2] << 16 | data[4 * i + 1] << 8 | data[4 * i]; + read_data = __le32_to_cpu(read_data); + skip = ((i * 4 == MAC_ADDR0_WORD0_OFFSET_1G) || (i * 4 == MAC_ADDR0_WORD1_OFFSET_1G) || + (i * 4 == MAC_ADDR1_WORD0_OFFSET_1G) || (i * 4 == MAC_ADDR1_WORD1_OFFSET_1G) || + (i * 4 >= PRODUCT_SERIAL_NUM_OFFSET_1G && i * 4 <= PRODUCT_SERIAL_NUM_OFFSET_1G + 8)); + if (read_data != 0xffffffff && !skip) { + status = flash_write_dword(hw, i * 4, read_data); + if (status) { + read_data = flash_read_dword(hw, i * 4); + return 1; + } + } + } + + flash_write_dword(hw, MAC_ADDR0_WORD0_OFFSET_1G, mac_addr0_dword0_t); + flash_write_dword(hw, MAC_ADDR0_WORD1_OFFSET_1G, (mac_addr0_dword1_t | 0x80000000));//lan0 + flash_write_dword(hw, MAC_ADDR1_WORD0_OFFSET_1G, mac_addr1_dword0_t); + flash_write_dword(hw, MAC_ADDR1_WORD1_OFFSET_1G, (mac_addr1_dword1_t | 0x80000000));//lan1 + flash_write_dword(hw, PRODUCT_SERIAL_NUM_OFFSET_1G, serial_num_dword0_t); + flash_write_dword(hw, PRODUCT_SERIAL_NUM_OFFSET_1G + 4, serial_num_dword1_t); + flash_write_dword(hw, PRODUCT_SERIAL_NUM_OFFSET_1G + 8, serial_num_dword2_t); + + return 0; +} /** * txgbe_set_rxpba - Initialize Rx packet buffer * @hw: pointer to hardware structure @@ -2967,8 +3133,6 @@ int i = 0; u32 rxpktsize, txpktsize, txpbthresh; - DEBUGFUNC("\n"); - /* Reserve headroom */ pbsize -= headroom; @@ -3047,8 +3211,6 @@ int i = 0; struct txgbe_thermal_sensor_data *data = &hw->mac.thermal_sensor_data; - DEBUGFUNC("\n"); - /* Only support thermal sensors attached to physical port 0 */ if (hw->bus.lan_id) return TXGBE_NOT_IMPLEMENTED; @@ -3058,10 +3220,10 @@ tsv = tsv < 1200 ? tsv : 1200; tsv = -(48380 << 8) / 1000 - + tsv * (31020 << 8) / 100000 - - tsv * tsv * (18201 << 8) / 100000000 - + tsv * tsv * tsv * (81542 << 8) / 1000000000000 - - tsv * tsv * tsv * tsv * (16743 << 8) / 1000000000000000; + + div64_s64(tsv * (31020 << 8), 100000) + - div64_s64(tsv * tsv * (18201 << 8), 100000000) + + div64_s64(tsv * tsv * tsv * (81542 << 8), 1000000000000) + - div64_s64(tsv * tsv * tsv * tsv * (16743 << 8), 1000000000000000); tsv >>= 8; data->sensor.temp = (s16)tsv; @@ -3072,10 +3234,10 @@ tsv = tsv & TXGBE_TS_ST_DATA_OUT_MASK; tsv = tsv < 1200 ?
tsv : 1200; tsv = -(48380 << 8) / 1000 - + tsv * (31020 << 8) / 100000 - - tsv * tsv * (18201 << 8) / 100000000 - + tsv * tsv * tsv * (81542 << 8) / 1000000000000 - - tsv * tsv * tsv * tsv * (16743 << 8) / 1000000000000000; + + div64_s64(tsv * (31020 << 8), 100000) + - div64_s64(tsv * tsv * (18201 << 8), 100000000) + + div64_s64(tsv * tsv * tsv * (81542 << 8), 1000000000000) + - div64_s64(tsv * tsv * tsv * tsv * (16743 << 8), 1000000000000000); tsv >>= 8; data->sensor.temp = (s16)tsv; @@ -3102,8 +3264,6 @@ struct txgbe_thermal_sensor_data *data = &hw->mac.thermal_sensor_data; - DEBUGFUNC("\n"); - memset(data, 0, sizeof(struct txgbe_thermal_sensor_data)); /* Only support thermal sensors attached to SP physical port 0 */ @@ -3129,8 +3289,6 @@ u32 pfdtxgswc; u32 rxctrl; - DEBUGFUNC("\n"); - rxctrl = rd32(hw, TXGBE_RDB_PB_CTL); if (rxctrl & TXGBE_RDB_PB_CTL_RXEN) { pfdtxgswc = rd32(hw, TXGBE_PSR_CTL); @@ -3183,8 +3341,6 @@ { u32 pfdtxgswc; - DEBUGFUNC("\n"); - /* enable mac receiver */ wr32m(hw, TXGBE_MAC_RX_CFG, TXGBE_MAC_RX_CFG_RE, TXGBE_MAC_RX_CFG_RE); @@ -3258,8 +3414,6 @@ u32 i = 0; bool autoneg, link_up = false; - DEBUGFUNC("\n"); - /* Mask off requested but non-supported speeds */ status = TCALL(hw, mac.ops.get_link_capabilities, &link_speed, &autoneg); @@ -3728,8 +3882,6 @@ { struct txgbe_mac_info *mac = &hw->mac; - DEBUGFUNC("\n"); - /* * enable the laser control functions for SFP+ fiber * and MNG not enabled @@ -3777,8 +3929,6 @@ struct txgbe_mac_info *mac = &hw->mac; s32 ret_val = 0; - DEBUGFUNC("\n"); - txgbe_init_i2c(hw); /* Identify the PHY or SFP module */ ret_val = TCALL(hw, phy.ops.identify); @@ -3793,7 +3943,7 @@ /* If copper media, overwrite with copper function pointers */ if (TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_copper) { hw->phy.type = txgbe_phy_xaui; - if ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI) { + if ((hw->subsystem_device_id & 0xF0) != TXGBE_ID_SFI_XAUI) { mac->ops.setup_link = txgbe_setup_copper_link; mac->ops.get_link_capabilities = txgbe_get_copper_link_capabilities; @@ -3821,8 +3971,6 @@ struct txgbe_flash_info *flash = &hw->flash; s32 ret_val = 0; - DEBUGFUNC("\n"); - /* PHY */ phy->ops.reset = txgbe_reset_phy; phy->ops.read_reg = txgbe_read_phy_reg; @@ -3831,6 +3979,7 @@ phy->ops.write_reg_mdi = txgbe_write_phy_reg_mdi; phy->ops.setup_link = txgbe_setup_phy_link; phy->ops.setup_link_speed = txgbe_setup_phy_link_speed; + phy->ops.get_firmware_version = txgbe_get_phy_firmware_version; phy->ops.read_i2c_byte = txgbe_read_i2c_byte; phy->ops.write_i2c_byte = txgbe_write_i2c_byte; phy->ops.read_i2c_sff8472 = txgbe_read_i2c_sff8472; @@ -3952,8 +4101,6 @@ u32 sr_pcs_ctl, sr_pma_mmd_ctl1, sr_an_mmd_ctl; u32 sr_an_mmd_adv_reg2; - DEBUGFUNC("\n"); - /* Check if 1G SFP module. 
*/ if (hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core0 || hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core1 || @@ -3975,14 +4122,14 @@ } /* XAUI */ else if ((txgbe_get_media_type(hw) == txgbe_media_type_copper) && - ((hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI || - (hw->subsystem_id & 0xF0) == TXGBE_ID_SFI_XAUI)) { + ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI || + (hw->subsystem_device_id & 0xF0) == TXGBE_ID_SFI_XAUI)) { *speed = TXGBE_LINK_SPEED_10GB_FULL; *autoneg = false; hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_T; } /* SGMII */ - else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII) { + else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_SGMII) { *speed = TXGBE_LINK_SPEED_1GB_FULL | TXGBE_LINK_SPEED_100_FULL | TXGBE_LINK_SPEED_10_FULL; @@ -3990,12 +4137,12 @@ hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_T | TXGBE_PHYSICAL_LAYER_100BASE_TX; /* MAC XAUI */ - } else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_XAUI) { + } else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_XAUI) { *speed = TXGBE_LINK_SPEED_10GB_FULL; *autoneg = false; hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KX4; /* MAC SGMII */ - } else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_SGMII) { + } else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII) { *speed = TXGBE_LINK_SPEED_1GB_FULL; *autoneg = false; hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_KX; @@ -4081,9 +4228,7 @@ enum txgbe_media_type txgbe_get_media_type(struct txgbe_hw *hw) { enum txgbe_media_type media_type; - u8 device_type = hw->subsystem_id & 0xF0; - - DEBUGFUNC("\n"); + u8 device_type = hw->subsystem_device_id & 0xF0; /* Detect if there is a copper PHY attached. */ switch (hw->phy.type) { @@ -4152,6 +4297,11 @@ /* Blocked by MNG FW so bail */ txgbe_check_reset_blocked(hw); + /* overwrite led when ifdown */ + if (txgbe_close_notify(hw)) + TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_UP | TXGBE_LED_LINK_10G | + TXGBE_LED_LINK_1G | TXGBE_LED_LINK_ACTIVE); + /* Disable Tx laser; allow 100us to go dark per spec */ esdp_reg |= TXGBE_GPIO_DR_1 | TXGBE_GPIO_DR_0; wr32(hw, TXGBE_GPIO_DR, esdp_reg); @@ -4169,6 +4319,13 @@ **/ void txgbe_enable_tx_laser_multispeed_fiber(struct txgbe_hw *hw) { + if (!(TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_fiber)) + return; + + /* recover led configure when ifup */ + if (txgbe_open_notify(hw)) + wr32(hw, TXGBE_CFG_LED_CTL, 0); + /* Enable Tx laser; allow 100ms to light up */ wr32m(hw, TXGBE_GPIO_DR, TXGBE_GPIO_DR_0 | TXGBE_GPIO_DR_1, 0); @@ -4190,7 +4347,8 @@ **/ void txgbe_flap_tx_laser_multispeed_fiber(struct txgbe_hw *hw) { - DEBUGFUNC("\n"); + if (!(TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_fiber)) + return; /* Blocked by MNG FW so bail */ txgbe_check_reset_blocked(hw); @@ -4261,13 +4419,15 @@ u32 value; txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0x3002); - /* for sgmii + external phy, set to 0x0105 (mac sgmii mode) */ - if ((hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII) { - txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x0105); - } - /* for sgmii direct link, set to 0x010c (phy sgmii mode) */ - if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_SGMII) { + /* for sgmii + external phy, set to 0x0105 (mac sgmii mode) + * for sgmii direct link, set to 0x010c (phy sgmii mode) + */ + if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII || + txgbe_get_media_type(hw) == txgbe_media_type_fiber) { txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x010c); + } else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_SGMII || + (hw->subsystem_device_id & 0xF0) == 
TXGBE_ID_XAUI) { + txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x0105); } txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_DIGI_CTL, 0x0200); value = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_CTL); @@ -4299,32 +4459,20 @@ e_dev_info("It is set to kr.\n"); txgbe_wr32_epcs(hw, 0x78001, 0x7); - txgbe_wr32_epcs(hw, 0x18035, 0x00FC); - txgbe_wr32_epcs(hw, 0x18055, 0x00FC); - if (1) { - /* 2. Disable xpcs AN-73 */ + /* 2. Disable xpcs AN-73 */ + if (adapter->backplane_an == 1) { txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x3000); - txgbe_wr32_epcs(hw, 0x78003, 0x1); - if (!(adapter->backplane_an == 1)) { - txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000); - txgbe_wr32_epcs(hw, 0x78003, 0x0); - } - - if (KR_SET == 1 || adapter->ffe_set == TXGBE_BP_M_KR) { - e_dev_info("Set KR TX_EQ MAIN:%d PRE:%d POST:%d\n", - adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); - value = (0x1804 & ~0x3F3F); - value |= adapter->ffe_main << 8 | adapter->ffe_pre; - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); - - value = (0x50 & ~0x7F) | (1 << 6)| adapter->ffe_post; - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); - } + txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, 0x1); + } else { + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0); + txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, 0x0); + } - if (KR_AN73_PRESET == 1) { - txgbe_wr32_epcs(hw, 0x18037, 0x80); - } + txgbe_wr32_epcs(hw, 0x70012, 0xc000 | txgbe_rd32_epcs(hw, 0x70012)); + if (KR_AN73_PRESET == 1) { + txgbe_wr32_epcs(hw, 0x18037, 0x80 | txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1)); + } if (KR_POLLING == 1) { txgbe_wr32_epcs(hw, 0x18006, 0xffff); @@ -4368,9 +4516,16 @@ status = TXGBE_ERR_PHY_INIT_NOT_DONE; goto out; } - } else { - txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, - 0x1); + + if ((KR_SET == 1) || (adapter->ffe_set == TXGBE_BP_M_KR)) { + e_dev_info("Set KR TX_EQ MAIN:%d PRE:%d POST:%d\n", + adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); + value = (0x1804 & ~0x3F3F); + value |= adapter->ffe_main << 8 | adapter->ffe_pre; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + + value = (0x50 & ~0x7F) | (1 << 6)| adapter->ffe_post; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); } out: return status; @@ -4446,29 +4601,11 @@ value = (0xf5f0 & ~0x7F0) | (0x5 << 8) | (0x7 << 5) | 0xF0; txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value); - if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_XAUI) + if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_XAUI) txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00); else txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0x4F00); - if (KX4_SET == 1 || adapter->ffe_set) { - e_dev_info("Set KX4 TX_EQ MAIN:%d PRE:%d POST:%d\n", - adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); - value = (0x1804 & ~0x3F3F); - value |= adapter->ffe_main << 8 | adapter->ffe_pre; - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); - - value = (0x50 & ~0x7F) | (1 << 6)| adapter->ffe_post; - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); - } else { - value = (0x1804 & ~0x3F3F); - value |= 40 << 8 ; - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); - - value = (0x50 & ~0x7F) | (1 << 6); - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); - - } for (i = 0; i < 4; i++) { if (i == 0) value = (0x45 & ~0xFFFF) | (0x7 << 12) | (0x7 << 8) | 0x6; @@ -4589,6 +4726,17 @@ goto out; } + if ((KX4_SET == 1) || (adapter->ffe_set == TXGBE_BP_M_KX4)) { + e_dev_info("Set KX4 TX_EQ MAIN:%d PRE:%d POST:%d\n", + adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); + value = (0x1804 & ~0x3F3F); + value |= adapter->ffe_main << 8 | adapter->ffe_pre; + 
txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + + value = (0x50 & ~0x7F) | (1 << 6)| adapter->ffe_post; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + } + out: return status; } @@ -4610,9 +4758,6 @@ } e_dev_info("It is set to kx. speed =0x%x\n", speed); - txgbe_wr32_epcs(hw, 0x18035, 0x00FC); - txgbe_wr32_epcs(hw, 0x18055, 0x00FC); - /* 1. Wait xpcs power-up good */ for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) { if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) & @@ -4684,29 +4829,6 @@ else txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00); - if (KX_SET == 1 || adapter->ffe_set == TXGBE_BP_M_KX) { - e_dev_info("Set KX TX_EQ MAIN:%d PRE:%d POST:%d\n", - adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); - /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit13:8(TX_EQ_MAIN) - * = 6'd30, Bit5:0(TX_EQ_PRE) = 6'd4 - */ - value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0); - value = (value & ~0x3F3F) | (adapter->ffe_main << 8) | adapter->ffe_pre; - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); - /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit6(TX_EQ_OVR_RIDE) - * = 1'b1, Bit5:0(TX_EQ_POST) = 6'd36 - */ - value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1); - value = (value & ~0x7F) | adapter->ffe_post | (1 << 6); - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); - } else { - value = (0x1804 & ~0x3F3F) | (24 << 8) | 4; - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); - - value = (0x50 & ~0x7F) | 16 | (1 << 6); - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); - } - for (i = 0; i < 4; i++) { if (i) { value = 0xff06; @@ -4817,6 +4939,23 @@ goto out; } + if ((KX_SET == 1) || (adapter->ffe_set == TXGBE_BP_M_KX)) { + e_dev_info("Set KX TX_EQ MAIN:%d PRE:%d POST:%d\n", + adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); + /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit13:8(TX_EQ_MAIN) + * = 6'd30, Bit5:0(TX_EQ_PRE) = 6'd4 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0); + value = (value & ~0x3F3F) | (adapter->ffe_main << 8) | adapter->ffe_pre; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit6(TX_EQ_OVR_RIDE) + * = 1'b1, Bit5:0(TX_EQ_POST) = 6'd36 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1); + value = (value & ~0x7F) | adapter->ffe_post | (1 << 6); + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + } + out: return status; } @@ -4917,35 +5056,7 @@ * MPLLA_DIV16P5_CLK_EN=1, MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0 */ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2, 0x0600); - if (SFI_SET == 1 || adapter->ffe_set) { - e_dev_info("Set SFI TX_EQ MAIN:%d PRE:%d POST:%d\n", - adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); - /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit13:8(TX_EQ_MAIN) - * = 6'd30, Bit5:0(TX_EQ_PRE) = 6'd4 - */ - value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0); - value = (value & ~0x3F3F) | (adapter->ffe_main << 8) | adapter->ffe_pre; - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); - /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit6(TX_EQ_OVR_RIDE) - * = 1'b1, Bit5:0(TX_EQ_POST) = 6'd36 - */ - value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1); - value = (value & ~0x7F) | adapter->ffe_post | (1 << 6); - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); - } else { - /* 5. 
Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit13:8(TX_EQ_MAIN) - * = 6'd30, Bit5:0(TX_EQ_PRE) = 6'd4 - */ - value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0); - value = (value & ~0x3F3F) | (24 << 8) | 4; - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); - /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit6(TX_EQ_OVR_RIDE) - * = 1'b1, Bit5:0(TX_EQ_POST) = 6'd36 - */ - value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1); - value = (value & ~0x7F) | 16 | (1 << 6); - txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); - } + if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 || hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) { /* 7. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register @@ -5111,6 +5222,23 @@ goto out; } + if ((SFI_SET == 1) || (adapter->ffe_set == TXGBE_BP_M_SFI)) { + e_dev_info("Set SFI TX_EQ MAIN:%d PRE:%d POST:%d\n", + adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); + /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit13:8(TX_EQ_MAIN) + * = 6'd30, Bit5:0(TX_EQ_PRE) = 6'd4 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0); + value = (value & ~0x3F3F) | (adapter->ffe_main << 8) | adapter->ffe_pre; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit6(TX_EQ_OVR_RIDE) + * = 1'b1, Bit5:0(TX_EQ_POST) = 6'd36 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1); + value = (value & ~0x7F) | adapter->ffe_post | (1 << 6); + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + } + out: return status; } @@ -5135,8 +5263,6 @@ u32 link_speed = TXGBE_LINK_SPEED_UNKNOWN; bool link_up = false; - DEBUGFUNC("\n"); - /* Check to see if speed passed in is supported. */ status = TCALL(hw, mac.ops.get_link_capabilities, &link_capabilities, &autoneg); @@ -5150,9 +5276,9 @@ goto out; } - if (!(((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4) || - ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_XAUI) || - ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII))) { + if (!(((hw->subsystem_device_id & TXGBE_DEV_MASK) == TXGBE_ID_KR_KX_KX4) || + ((hw->subsystem_device_id & TXGBE_DEV_MASK) == TXGBE_ID_MAC_XAUI) || + ((hw->subsystem_device_id & TXGBE_DEV_MASK) == TXGBE_ID_MAC_SGMII))) { status = TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false); if (status != 0) @@ -5161,45 +5287,27 @@ goto out; } - if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) - goto out; - - if ((hw->subsystem_id & 0xF0) == TXGBE_ID_KR_KX_KX4) { - if (!autoneg) { - switch (hw->phy.link_mode) { - case TXGBE_PHYSICAL_LAYER_10GBASE_KR: - txgbe_set_link_to_kr(hw, autoneg); - break; - case TXGBE_PHYSICAL_LAYER_10GBASE_KX4: - txgbe_set_link_to_kx4(hw, autoneg); - break; - case TXGBE_PHYSICAL_LAYER_1000BASE_KX: - txgbe_set_link_to_kx(hw, speed, autoneg); - break; - default: - status = TXGBE_ERR_PHY; - goto out; - } - } else { - txgbe_set_link_to_kr(hw, autoneg); - } - } else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI || - ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_XAUI) || - (hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII || - ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_SGMII) || - (txgbe_get_media_type(hw) == txgbe_media_type_copper && - (hw->subsystem_id & 0xF0) == TXGBE_ID_SFI_XAUI)) { - if (speed == TXGBE_LINK_SPEED_10GB_FULL) { - txgbe_set_link_to_kx4(hw, autoneg); - } else { - txgbe_set_link_to_kx(hw, speed, 0); - if (adapter->an37 || - (hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII || - (hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI) - txgbe_set_sgmii_an37_ability(hw); + if ((hw->subsystem_device_id & TXGBE_DEV_MASK) == 
TXGBE_ID_KR_KX_KX4) { + txgbe_set_link_to_kr(hw, autoneg); + } else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI || + ((hw->subsystem_device_id & TXGBE_DEV_MASK) == TXGBE_ID_MAC_XAUI) || + (hw->subsystem_device_id & TXGBE_DEV_MASK) == TXGBE_ID_SGMII || + ((hw->subsystem_device_id & TXGBE_DEV_MASK) == TXGBE_ID_MAC_SGMII) || + (txgbe_get_media_type(hw) == txgbe_media_type_copper && + (hw->subsystem_device_id & TXGBE_DEV_MASK) == TXGBE_ID_SFI_XAUI)) { + if (speed == TXGBE_LINK_SPEED_10GB_FULL) { + txgbe_set_link_to_kx4(hw, 0); + } else { + txgbe_set_link_to_kx(hw, speed, 0); + if (adapter->an37) + txgbe_set_sgmii_an37_ability(hw); } } else if (txgbe_get_media_type(hw) == txgbe_media_type_fiber) { txgbe_set_link_to_sfi(hw, speed); + if (speed == TXGBE_LINK_SPEED_1GB_FULL) { + txgbe_setup_fc(hw); + txgbe_set_sgmii_an37_ability(hw); + } } out: @@ -5221,8 +5329,6 @@ s32 status; u32 link_speed; - DEBUGFUNC("\n"); - /* Setup the PHY according to input speed */ link_speed = TCALL(hw, phy.ops.setup_link_speed, speed, autoneg_wait_to_complete); @@ -5312,8 +5418,6 @@ struct txgbe_adapter *adapter = hw->back; u32 value; - DEBUGFUNC("\n"); - /* Call adapter stop to disable tx/rx and clear interrupts */ status = TCALL(hw, mac.ops.stop_adapter); if (status != 0) @@ -5377,14 +5481,12 @@ ~TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED; } } else if (hw->reset_type == TXGBE_GLOBAL_RESET) { -#ifndef _WIN32 struct txgbe_adapter *adapter = (struct txgbe_adapter *)hw->back; msleep(100 * rst_delay + 2000); pci_restore_state(adapter->pdev); pci_save_state(adapter->pdev); pci_wake_from_d3(adapter->pdev, false); -#endif /*_WIN32*/ } } else { if (txgbe_mng_present(hw)) { @@ -5460,6 +5562,9 @@ } + /* wait to make sure phy power is up */ + msleep(100); + /*A temporary solution for set to sfi*/ if (SFI_SET == 1 || adapter->ffe_set == TXGBE_BP_M_SFI) { e_dev_info("Set SFI TX_EQ MAIN:%d PRE:%d POST:%d\n", @@ -5487,8 +5592,6 @@ value = (0x50 & ~0x7F) | (1 << 6)| adapter->ffe_post; txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); - txgbe_wr32_epcs(hw, 0x18035, 0x00FF); - txgbe_wr32_epcs(hw, 0x18055, 0x00FF); } if (KX_SET == 1 || adapter->ffe_set == TXGBE_BP_M_KX) { @@ -5506,9 +5609,6 @@ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1); value = (value & ~0x7F) | adapter->ffe_post | (1 << 6); txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); - - txgbe_wr32_epcs(hw, 0x18035, 0x00FF); - txgbe_wr32_epcs(hw, 0x18055, 0x00FF); } /* Store the permanent mac address */ @@ -5578,8 +5678,6 @@ u32 fdircmd; fdirctrl &= ~TXGBE_RDB_FDIR_CTL_INIT_DONE; - DEBUGFUNC("\n"); - /* * Before starting reinitialization process, * FDIRCMD.CMD must be zero. 
@@ -5647,8 +5745,6 @@ { int i; - DEBUGFUNC("\n"); - /* Prime the keys for hashing */ wr32(hw, TXGBE_RDB_FDIR_HKEY, TXGBE_ATR_BUCKET_HASH_KEY); wr32(hw, TXGBE_RDB_FDIR_SKEY, TXGBE_ATR_SIGNATURE_HASH_KEY); @@ -5687,6 +5783,7 @@ **/ s32 txgbe_init_fdir_signature(struct txgbe_hw *hw, u32 fdirctrl) { + struct txgbe_adapter __always_unused *adapter = (struct txgbe_adapter *)hw->back; int i = VMDQ_P(0) / 4; int j = VMDQ_P(0) % 4; u32 flex = rd32m(hw, TXGBE_RDB_FDIR_FLEX_CFG(i), @@ -5731,8 +5828,6 @@ s32 txgbe_init_fdir_perfect(struct txgbe_hw *hw, u32 fdirctrl, bool __maybe_unused cloud_mode) { - DEBUGFUNC("\n"); - /* * Continue setup of fdirctrl register bits: * Turn perfect match filtering on @@ -5870,8 +5965,6 @@ u32 fdircmd; s32 err; - DEBUGFUNC("\n"); - /* * Get the flow_type in order to program FDIRCMD properly * lowest 2 bits are FDIRCMD.L4TYPE, third lowest bit is FDIRCMD.IPV6 @@ -5996,10 +6089,7 @@ u32 mask = TXGBE_NTOHS(input_mask->formatted.dst_port); mask <<= TXGBE_RDB_FDIR_TCP_MSK_DPORTM_SHIFT; mask |= TXGBE_NTOHS(input_mask->formatted.src_port); - mask = ((mask & 0x55555555) << 1) | ((mask & 0xAAAAAAAA) >> 1); - mask = ((mask & 0x33333333) << 2) | ((mask & 0xCCCCCCCC) >> 2); - mask = ((mask & 0x0F0F0F0F) << 4) | ((mask & 0xF0F0F0F0) >> 4); - return ((mask & 0x00FF00FF) << 8) | ((mask & 0xFF00FF00) >> 8); + return mask; } /* @@ -6028,8 +6118,7 @@ u32 fdirtcpm; u32 flex = 0; int i, j; - - DEBUGFUNC("\n"); + struct txgbe_adapter __always_unused *adapter = (struct txgbe_adapter *)hw->back; /* * Program the relevant mask registers. If src/dst_port or src/dst_addr @@ -6123,7 +6212,6 @@ u32 fdirport, fdirvlan, fdirhash, fdircmd; s32 err; - DEBUGFUNC("\n"); if (!cloud_mode) { /* currently IPv6 is not supported, must be programmed with 0 */ wr32(hw, TXGBE_RDB_FDIR_IP6(2), @@ -6242,8 +6330,6 @@ int ret_val = 0; u32 i; - DEBUGFUNC("\n"); - /* Set the media type */ hw->phy.media_type = TCALL(hw, mac.ops.get_media_type); @@ -6290,8 +6376,6 @@ s32 status = TXGBE_ERR_PHY_ADDR_INVALID; enum txgbe_media_type media_type; - DEBUGFUNC("\n"); - if (!hw->phy.phy_semaphore_mask) { hw->phy.phy_semaphore_mask = TXGBE_MNG_SWFW_SYNC_SW_PHY; } @@ -6329,9 +6413,6 @@ **/ s32 txgbe_enable_rx_dma(struct txgbe_hw *hw, u32 regval) { - - DEBUGFUNC("\n"); - /* * Workaround for sapphire silicon errata when enabling the Rx datapath. 
* If traffic is incoming before we enable the Rx unit, it could hang @@ -6363,8 +6444,6 @@ struct txgbe_flash_info *flash = &hw->flash; u32 eec; - DEBUGFUNC("\n"); - eec = 0x1000000; flash->semaphore_delay = 10; flash->dword_size = (eec >> 2); @@ -6393,8 +6472,6 @@ s32 status = 0; u32 i; - DEBUGFUNC("\n"); - TCALL(hw, eeprom.ops.init_params); if (!dwords || offset + dwords >= hw->flash.dword_size) { @@ -6437,8 +6514,6 @@ s32 status = 0; u32 i; - DEBUGFUNC("\n"); - TCALL(hw, eeprom.ops.init_params); if (!dwords || offset + dwords >= hw->flash.dword_size) { @@ -6479,8 +6554,6 @@ s32 status = 0; u16 data; - DEBUGFUNC("\n"); - if (eeprom->type == txgbe_eeprom_uninitialized) { eeprom->semaphore_delay = 10; eeprom->type = txgbe_eeprom_none; @@ -6523,7 +6596,6 @@ s32 status; struct txgbe_hic_read_shadow_ram buffer; - DEBUGFUNC("\n"); buffer.hdr.req.cmd = FW_READ_SHADOW_RAM_CMD; buffer.hdr.req.buf_lenh = 0; buffer.hdr.req.buf_lenl = FW_READ_SHADOW_RAM_LEN; @@ -6564,8 +6636,6 @@ { s32 status = 0; - DEBUGFUNC("\n"); - if (TCALL(hw, mac.ops.acquire_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_FLASH) == 0) { status = txgbe_read_ee_hostif_data(hw, offset, data); @@ -6597,8 +6667,6 @@ u32 i; u32 value = 0; - DEBUGFUNC("\n"); - /* Take semaphore for the entire operation. */ status = TCALL(hw, mac.ops.acquire_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_FLASH); @@ -6672,8 +6740,6 @@ s32 status; struct txgbe_hic_write_shadow_ram buffer; - DEBUGFUNC("\n"); - buffer.hdr.req.cmd = FW_WRITE_SHADOW_RAM_CMD; buffer.hdr.req.buf_lenh = 0; buffer.hdr.req.buf_lenl = FW_WRITE_SHADOW_RAM_LEN; @@ -6691,6 +6757,77 @@ return status; } +s32 txgbe_close_notify(struct txgbe_hw *hw) +{ + int tmp; + s32 status; + struct txgbe_hic_write_shadow_ram buffer; + + buffer.hdr.req.cmd = FW_DW_CLOSE_NOTIFY; + buffer.hdr.req.buf_lenh = 0; + buffer.hdr.req.buf_lenl = 0; + buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM; + + /* one word */ + buffer.length = 0; + buffer.address = 0; + + status = txgbe_host_interface_command(hw, (u32 *)&buffer, + sizeof(buffer), + TXGBE_HI_COMMAND_TIMEOUT, false); + if (status) + return status; + + if (txgbe_check_mng_access(hw)) { + tmp = (u32)rd32(hw, TXGBE_MNG_SW_SM); + if (tmp == TXGBE_CHECKSUM_CAP_ST_PASS) + status = 0; + else + status = TXGBE_ERR_EEPROM_CHECKSUM; + } else { + status = TXGBE_ERR_MNG_ACCESS_FAILED; + return status; + } + + return status; +} + +s32 txgbe_open_notify(struct txgbe_hw *hw) +{ + int tmp; + s32 status; + struct txgbe_hic_write_shadow_ram buffer; + + buffer.hdr.req.cmd = FW_DW_OPEN_NOTIFY; + buffer.hdr.req.buf_lenh = 0; + buffer.hdr.req.buf_lenl = 0; + buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM; + + /* one word */ + buffer.length = 0; + buffer.address = 0; + + status = txgbe_host_interface_command(hw, (u32 *)&buffer, + sizeof(buffer), + TXGBE_HI_COMMAND_TIMEOUT, false); + if (status) + return status; + + if (txgbe_check_mng_access(hw)) { + tmp = (u32)rd32(hw, TXGBE_MNG_SW_SM); + + if (tmp == TXGBE_CHECKSUM_CAP_ST_PASS) + status = 0; + else + status = TXGBE_ERR_EEPROM_CHECKSUM; + } else { + status = TXGBE_ERR_MNG_ACCESS_FAILED; + return status; + } + + return status; +} + /** * txgbe_write_ee_hostif - Write EEPROM word using hostif * @hw: pointer to hardware structure @@ -6704,8 +6841,6 @@ { s32 status = 0; - DEBUGFUNC("\n"); - if (TCALL(hw, mac.ops.acquire_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_FLASH) == 0) { status = txgbe_write_ee_hostif_data(hw, offset, data); @@ -6734,8 +6869,6 @@ s32 status = 0; u16 i = 0; - DEBUGFUNC("\n"); - /* Take semaphore for the entire operation. 
*/ status = TCALL(hw, mac.ops.acquire_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_FLASH); @@ -6779,8 +6912,6 @@ u16 checksum = 0; u16 i; - DEBUGFUNC("\n"); - TCALL(hw, eeprom.ops.init_params); if (!buffer) { @@ -6827,8 +6958,6 @@ s32 status; u16 checksum = 0; - DEBUGFUNC("\n"); - /* Read the first word from the EEPROM. If this times out or fails, do * not continue or we could be in for a very long wait while every * EEPROM read fails @@ -6868,8 +6997,6 @@ u16 checksum; u16 read_checksum = 0; - DEBUGFUNC("\n"); - /* Read the first word from the EEPROM. If this times out or fails, do * not continue or we could be in for a very long wait while every * EEPROM read fails @@ -6908,6 +7035,17 @@ return status; } +u32 txgbe_flash_read_dword(struct txgbe_hw *hw, u32 addr) +{ + u8 status = fmgr_cmd_op(hw, SPI_CMD_READ_DWORD, addr); + + if (status) + return (u32)status; + + return rd32(hw, SPI_H_DAT_REG_ADDR); +} + + /** * txgbe_update_flash - Instruct HW to copy EEPROM to Flash device * @hw: pointer to hardware structure @@ -6950,12 +7088,10 @@ u32 i; u16 value; - DEBUGFUNC("\n"); - if (link_up_wait_to_complete) { for (i = 0; i < TXGBE_LINK_UP_TIME; i++) { if (TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_copper && - ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI)) { + ((hw->subsystem_device_id & 0xF0) != TXGBE_ID_SFI_XAUI)) { /* read ext phy link status */ txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8008, &value); if (value & 0x400) { @@ -6980,7 +7116,7 @@ } } else { if (TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_copper && - ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI)) { + ((hw->subsystem_device_id & 0xF0) != TXGBE_ID_SFI_XAUI)) { /* read ext phy link status */ txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8008, &value); if (value & 0x400) { @@ -7003,7 +7139,7 @@ if (*link_up) { if (TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_copper && - ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI)) { + ((hw->subsystem_device_id & 0xF0) != TXGBE_ID_SFI_XAUI)) { if ((value & 0xc000) == 0xc000) { *speed = TXGBE_LINK_SPEED_10GB_FULL; } else if ((value & 0xc000) == 0x8000) { @@ -7045,7 +7181,5 @@ s32 txgbe_setup_eee(struct txgbe_hw __maybe_unused *hw, bool __maybe_unused enable_eee) { /* fix eee */ - DEBUGFUNC("\n"); - return 0; }
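All of the flash helpers added above (flash_erase_sector(), flash_read_dword(), flash_write_dword(), and friends) funnel through fmgr_cmd_op(), which packs an opcode into bits 31:28, a fixed clock divider into bits 27:25, and the flash byte address into the low bits of a single write to the SPI command register, then polls the status register for completion. Here is a small user-space sketch of just the packing step, reusing the SPI_* constants from the txgbe_hw.h hunk below; the decode printed by main() is illustrative only.

#include <stdint.h>
#include <stdio.h>

/* Constants copied from the txgbe_hw.h diff that follows. */
#define SPI_CMD_READ_DWORD  1
#define SPI_CLK_DIV         2
#define SPI_CLK_CMD_OFFSET  28
#define SPI_CLK_DIV_OFFSET  25

/* Same composition as fmgr_cmd_op(): command | clock divider | address. */
static uint32_t spi_cmd_word(uint32_t cmd, uint32_t addr)
{
	return (cmd << SPI_CLK_CMD_OFFSET) |
	       (SPI_CLK_DIV << SPI_CLK_DIV_OFFSET) | addr;
}

int main(void)
{
	/* 0xfffdc is the subsystem-id offset probed by txgbe_upgrade_flash(). */
	uint32_t w = spi_cmd_word(SPI_CMD_READ_DWORD, 0xfffdc);
	printf("READ_DWORD @ 0xfffdc -> command word 0x%08x\n", (unsigned)w);
	return 0;
}

Keeping the opcode and divider in the top bits leaves the rest of the word free for the flash address, so one register write fully describes an operation.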
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_hw.h
Changed
@@ -27,6 +27,43 @@ #define TXGBE_EMC_DIODE3_DATA 0x2A #define TXGBE_EMC_DIODE3_THERM_LIMIT 0x30 +#define SPI_CLK_DIV 2 + +#define SPI_CMD_ERASE_CHIP 4 // SPI erase chip command +#define SPI_CMD_ERASE_SECTOR 3 // SPI erase sector command +#define SPI_CMD_WRITE_DWORD 0 // SPI write a dword command +#define SPI_CMD_READ_DWORD 1 // SPI read a dword command +#define SPI_CMD_USER_CMD 5 // SPI user command + +#define SPI_CLK_CMD_OFFSET 28 // SPI command field offset in Command register +#define SPI_CLK_DIV_OFFSET 25 // SPI clock divide field offset in Command register + +#define SPI_TIME_OUT_VALUE 10000 +#define SPI_SECTOR_SIZE (4 * 1024) // FLASH sector size is 4KB +#define SPI_H_CMD_REG_ADDR 0x10104 // SPI Command register address +#define SPI_H_DAT_REG_ADDR 0x10108 // SPI Data register address +#define SPI_H_STA_REG_ADDR 0x1010c // SPI Status register address +#define SPI_H_USR_CMD_REG_ADDR 0x10110 // SPI User Command register address +#define SPI_CMD_CFG1_ADDR 0x10118 // Flash command configuration register 1 +#define MISC_RST_REG_ADDR 0x1000c // Misc reset register address +#define MGR_FLASH_RELOAD_REG_ADDR 0x101a0 // MGR reload flash read + +#define MAC_ADDR0_WORD0_OFFSET_1G 0x006000c // MAC Address for LAN0, stored in external FLASH +#define MAC_ADDR0_WORD1_OFFSET_1G 0x0060014 +#define MAC_ADDR1_WORD0_OFFSET_1G 0x007000c // MAC Address for LAN1, stored in external FLASH +#define MAC_ADDR1_WORD1_OFFSET_1G 0x0070014 +/* Product Serial Number, stored in external FLASH last sector */ +#define PRODUCT_SERIAL_NUM_OFFSET_1G 0x00f0000 + +struct txgbe_hic_read_cab { + union txgbe_hic_hdr2 hdr; + union { + u8 d8[252]; + u16 d16[126]; + u32 d32[63]; + } dbuf; +}; + /** * Packet Type decoding **/ @@ -238,6 +275,9 @@ s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val); s32 txgbe_update_flash(struct txgbe_hw *hw); +int txgbe_upgrade_flash(struct txgbe_hw *hw, u32 region, + const u8 *data, u32 size); + s32 txgbe_write_ee_hostif_buffer(struct txgbe_hw *hw, u16 offset, u16 words, u16 *data); s32 txgbe_write_ee_hostif(struct txgbe_hw *hw, u16 offset, @@ -250,9 +290,13 @@ void txgbe_wr32_ephy(struct txgbe_hw *hw, u32 addr, u32 data); u32 rd32_ephy(struct txgbe_hw *hw, u32 addr); +u32 txgbe_flash_read_dword(struct txgbe_hw *hw, u32 addr); s32 txgbe_upgrade_flash_hostif(struct txgbe_hw *hw, u32 region, const u8 *data, u32 size); +s32 txgbe_close_notify(struct txgbe_hw *hw); +s32 txgbe_open_notify(struct txgbe_hw *hw); + s32 txgbe_set_link_to_kr(struct txgbe_hw *hw, bool autoneg); s32 txgbe_set_link_to_kx4(struct txgbe_hw *hw, bool autoneg);
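One detail worth calling out in the header hunk above: the dbuf member of struct txgbe_hic_read_cab is a union giving byte, half-word and word views over the same 252-byte mailbox payload (252 = 126 x 2 = 63 x 4), which is why txgbe_flash_read_cab() can mix d16[0] and d32[0]/d32[1] accesses on one buffer. Below is a standalone compile-time check of that overlay, using stdint types in place of the kernel's u8/u16/u32:

#include <assert.h>
#include <stdint.h>

/* User-space reconstruction of the dbuf union from txgbe_hic_read_cab. */
union hic_dbuf {
	uint8_t  d8[252];
	uint16_t d16[126];
	uint32_t d32[63];
};

/* All three views must cover exactly the same 252 bytes. */
static_assert(sizeof(union hic_dbuf) == 252, "dbuf views must overlay");

int main(void)
{
	return 0;
}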
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_lib.c
Changed
@@ -472,6 +472,7 @@ adapter->num_rx_queues = 1; adapter->num_tx_queues = 1; adapter->queues_per_pool = 1; + adapter->num_xdp_queues = 0; if (txgbe_set_dcb_vmdq_queues(adapter)) return;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_main.c
Changed
@@ -40,7 +40,14 @@ #include <linux/if_macvlan.h> #include <linux/ethtool.h> #include <linux/if_bridge.h> +#include <linux/bpf.h> +#include <linux/bpf_trace.h> +#include <linux/atomic.h> #include <net/vxlan.h> +#include <net/udp_tunnel.h> +#include <net/pkt_cls.h> +#include <net/tc_act/tc_gact.h> +#include <net/tc_act/tc_mirred.h> #include "txgbe.h" #include "txgbe_hw.h" @@ -61,7 +68,7 @@ #define RELEASE_TAG -#define DRV_VERSION __stringify(1.1.17oe) +#define DRV_VERSION __stringify(1.3.2oe) const char txgbe_driver_version[32] = DRV_VERSION; static const char txgbe_copyright[] = @@ -472,8 +479,8 @@ wr32(&adapter->hw, TXGBE_PX_IMC(1), value3); } + ERROR_REPORT1(TXGBE_ERROR_POLLING, "tx timeout. do pcie recovery.\n"); if (adapter->hw.bus.lan_id == 0) { - ERROR_REPORT1(TXGBE_ERROR_POLLING, "tx timeout. do pcie recovery.\n"); adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; txgbe_service_event_schedule(adapter); } else @@ -617,8 +624,11 @@ /* schedule immediate reset if we believe we hung */ e_info(hw, "real tx hang. do pcie recovery.\n"); - adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; - txgbe_service_event_schedule(adapter); + if (adapter->hw.bus.lan_id == 0) { + adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; + txgbe_service_event_schedule(adapter); + } else + wr32(&adapter->hw, TXGBE_MIS_PF_SM, 1); /* the adapter is about to reset, no point in enabling stuff */ return true; @@ -1800,7 +1810,7 @@ { u32 mask = 0; struct txgbe_hw *hw = &adapter->hw; - u8 device_type = hw->subsystem_id & 0xF0; + u8 device_type = hw->subsystem_device_id & 0xF0; /* enable gpio interrupt */ if (device_type != TXGBE_ID_MAC_XAUI && @@ -2097,27 +2107,32 @@ struct txgbe_adapter *adapter = data; struct txgbe_q_vector *q_vector = adapter->q_vector[0]; struct txgbe_hw *hw = &adapter->hw; - u32 eicr; u32 eicr_misc; u32 value; + u16 pci_value; - eicr = txgbe_misc_isb(adapter, TXGBE_ISB_VEC0); - if (!eicr) { - /* - * shared interrupt alert! - * the interrupt that we masked before the EICR read. - */ - if (!test_bit(__TXGBE_DOWN, &adapter->state)) - txgbe_irq_enable(adapter, true, true); - return IRQ_NONE; /* Not our interrupt */ - } - adapter->isb_mem[TXGBE_ISB_VEC0] = 0; - if (!(adapter->flags & TXGBE_FLAG_MSI_ENABLED)) + if (!(adapter->flags & TXGBE_FLAG_MSI_ENABLED)) { + pci_read_config_word(adapter->pdev, PCI_STATUS, &pci_value); + if (!(pci_value & PCI_STATUS_INTERRUPT)) + return IRQ_HANDLED; /* Not our interrupt */ wr32(&(adapter->hw), TXGBE_PX_INTA, 1); + } eicr_misc = txgbe_misc_isb(adapter, TXGBE_ISB_MISC); - if (eicr_misc & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN)) - txgbe_check_lsc(adapter); + if (BOND_CHECK_LINK_MODE == 1) { + if (eicr_misc & (TXGBE_PX_MISC_IC_ETH_LKDN)) { + value = rd32(hw, 0x14404); + value = value & 0x1; + if (value == 0) { + adapter->link_up = false; + adapter->flags2 |= TXGBE_FLAG2_LINK_DOWN; + txgbe_service_event_schedule(adapter); + } + } + } else { + if (eicr_misc & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN)) + txgbe_check_lsc(adapter); + } if (eicr_misc & TXGBE_PX_MISC_IC_ETH_AN) { if (adapter->backplane_an == 1 && (KR_POLLING == 0)) { @@ -3215,12 +3230,23 @@ for (i = 0; i < hw->mac.num_rar_entries; i++) { if (adapter->mac_table[i].state & TXGBE_MAC_STATE_IN_USE) { + if (ether_addr_equal(addr, adapter->mac_table[i].addr)) { + if (adapter->mac_table[i].pools != (1ULL << pool)) { + memcpy(adapter->mac_table[i].addr, addr, ETH_ALEN); + adapter->mac_table[i].pools |= (1ULL << pool); + txgbe_sync_mac_table(adapter); + return i; + } + } + } + + if (adapter->mac_table[i].state & TXGBE_MAC_STATE_IN_USE) { continue; } adapter->mac_table[i].state |= (TXGBE_MAC_STATE_MODIFIED | TXGBE_MAC_STATE_IN_USE); memcpy(adapter->mac_table[i].addr, addr, ETH_ALEN); - adapter->mac_table[i].pools = (1ULL << pool); + adapter->mac_table[i].pools |= (1ULL << pool); txgbe_sync_mac_table(adapter); return i; } @@ -3251,16 +3277,29 @@ return -EINVAL; for (i = 0; i < hw->mac.num_rar_entries; i++) { - if (ether_addr_equal(addr, adapter->mac_table[i].addr) && - adapter->mac_table[i].pools | (1ULL << pool)) { - adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED; - adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE; - memset(adapter->mac_table[i].addr, 0, ETH_ALEN); - adapter->mac_table[i].pools = 0; - txgbe_sync_mac_table(adapter); + if (ether_addr_equal(addr, adapter->mac_table[i].addr)) { + if (adapter->mac_table[i].pools & (1ULL << pool)) { + adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED; + adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE; + adapter->mac_table[i].pools &= ~(1ULL << pool); + txgbe_sync_mac_table(adapter); + } return 0; } + + if (adapter->mac_table[i].pools != (1 << pool)) + continue; + if (!ether_addr_equal(addr, adapter->mac_table[i].addr)) + continue; + + adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED; + adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE; + memset(adapter->mac_table[i].addr, 0, ETH_ALEN); + adapter->mac_table[i].pools = 0; + txgbe_sync_mac_table(adapter); + return 0; } + return -ENOMEM; } @@ -3666,7 +3705,11 @@ wr32(hw, TXGBE_PX_ISB_ADDR_L, adapter->isb_dma & DMA_BIT_MASK(32)); +#ifdef CONFIG_64BIT wr32(hw, TXGBE_PX_ISB_ADDR_H, adapter->isb_dma >> 32); +#else + wr32(hw, TXGBE_PX_ISB_ADDR_H, 0); +#endif } void txgbe_configure_port(struct txgbe_adapter *adapter) @@ -3817,7 +3860,7 @@ if (link_up) return 0; - if ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI) { + if ((hw->subsystem_device_id & 0xF0) != TXGBE_ID_SFI_XAUI) { /* setup external PHY Mac Interface */ mtdSetMacInterfaceControl(&hw->phy_dev, hw->phy.addr, MTD_MAC_TYPE_XAUI,
MTD_FALSE, MTD_MAC_SNOOP_OFF, @@ -3836,7 +3879,7 @@ autoneg = false; } - ret = TCALL(hw, mac.ops.setup_link, speed, autoneg); + ret = TCALL(hw, mac.ops.setup_link, speed, false); link_cfg_out: return ret; @@ -3956,10 +3999,12 @@ rd32(hw, TXGBE_PX_IC(0)); rd32(hw, TXGBE_PX_IC(1)); rd32(hw, TXGBE_PX_MISC_IC); + if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI) + wr32(hw, TXGBE_GPIO_EOI, TXGBE_GPIO_EOI_6); txgbe_irq_enable(adapter, true, true); /* enable external PHY interrupt */ - if ((hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI) { + if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI) { txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8011, &value); /* only enable T unit int */ txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xf043, 0x1); @@ -3973,11 +4018,25 @@ netif_tx_start_all_queues(adapter->netdev); /* bring the link up in the watchdog, this could race with our first - * link up interrupt but shouldn't be a problem */ + * link up interrupt but shouldn't be a problem + */ adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE; adapter->link_check_timeout = jiffies; +#ifdef CONFIG_TXGBE_POLL_LINK_STATUS + mod_timer(&adapter->link_check_timer, jiffies); +#endif mod_timer(&adapter->service_timer, jiffies); + + if (hw->bus.lan_id == 0) + wr32m(hw, TXGBE_MIS_PRB_CTL, + TXGBE_MIS_PRB_CTL_LAN0_UP, TXGBE_MIS_PRB_CTL_LAN0_UP); + else if (hw->bus.lan_id == 1) + wr32m(hw, TXGBE_MIS_PRB_CTL, + TXGBE_MIS_PRB_CTL_LAN1_UP, TXGBE_MIS_PRB_CTL_LAN1_UP); + else + e_err(probe, "%s:invalid bus lan id %d\n", __func__, hw->bus.lan_id); + txgbe_clear_vf_stats_counters(adapter); /* Set PF Reset Done bit so PF/VF Mail Ops can work */ @@ -3987,6 +4046,10 @@ void txgbe_reinit_locked(struct txgbe_adapter *adapter) { + if (adapter->flags2 & TXGBE_FLAG2_KR_PRO_REINIT) + return; + + adapter->flags2 |= TXGBE_FLAG2_KR_PRO_REINIT; WARN_ON(in_interrupt()); /* put off any impending NetWatchDogTimeout */ netif_trans_update(adapter->netdev); @@ -4004,6 +4067,7 @@ msleep(2000); txgbe_up(adapter); clear_bit(__TXGBE_RESETTING, &adapter->state); + adapter->flags2 &= ~TXGBE_FLAG2_KR_PRO_REINIT; } void txgbe_up(struct txgbe_adapter *adapter) @@ -4247,8 +4311,20 @@ TXGBE_FLAG2_GLOBAL_RESET_REQUESTED); adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE; +#ifdef CONFIG_TXGBE_POLL_LINK_STATUS + del_timer_sync(&adapter->link_check_timer); +#endif del_timer_sync(&adapter->service_timer); + if (hw->bus.lan_id == 0) + wr32m(hw, TXGBE_MIS_PRB_CTL, + TXGBE_MIS_PRB_CTL_LAN0_UP, 0); + else if (hw->bus.lan_id == 1) + wr32m(hw, TXGBE_MIS_PRB_CTL, + TXGBE_MIS_PRB_CTL_LAN1_UP, 0); + else + e_dev_err("%s:invalid bus lan id %d\n", __func__, hw->bus.lan_id); + if (adapter->num_vfs) { /* Clear EITR Select mapping */ wr32(&adapter->hw, TXGBE_PX_ITRSEL, 0); @@ -4337,6 +4413,7 @@ struct pci_dev *pdev = adapter->pdev; int err; unsigned int fdir; + u32 ssid = 0; /* PCI config space info */ hw->vendor_id = pdev->vendor; @@ -4348,14 +4425,20 @@ err = -ENODEV; goto out; } - hw->subsystem_vendor_id = pdev->subsystem_vendor; - hw->subsystem_device_id = pdev->subsystem_device; - - pci_read_config_word(pdev, PCI_SUBSYSTEM_ID, &hw->subsystem_id); - if (hw->subsystem_id == TXGBE_FAILED_READ_CFG_WORD) { - e_err(probe, "read of subsystem id failed\n"); - err = -ENODEV; - goto out; + hw->oem_svid = pdev->subsystem_vendor; + hw->oem_ssid = pdev->subsystem_device; + if (pdev->subsystem_vendor == 0x8088) { + hw->subsystem_vendor_id = pdev->subsystem_vendor; + hw->subsystem_device_id = pdev->subsystem_device; + } else { + ssid = txgbe_flash_read_dword(hw, 0xfffdc); + if 
(ssid == 0x1) { + e_err(probe, "read of internasl subsystem device id failed\n"); + err = -ENODEV; + } + hw->subsystem_device_id = (u16)ssid; + hw->subsystem_device_id = hw->subsystem_device_id >> 8 | + hw->subsystem_device_id << 8; } err = txgbe_init_shared_code(hw); @@ -4375,6 +4458,7 @@ memcpy(adapter->rss_key, def_rss_key, sizeof(def_rss_key)); /* Set common capability flags and settings */ + adapter->flags |= TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE; adapter->flags2 |= TXGBE_FLAG2_RSC_CAPABLE; fdir = min_t(int, TXGBE_MAX_FDIR_INDICES, num_online_cpus()); adapter->ring_featureRING_F_FDIR.limit = fdir; @@ -5303,17 +5387,19 @@ struct txgbe_hw *hw = &adapter->hw; u32 link_speed = adapter->link_speed; bool link_up = adapter->link_up; -// bool pfc_en = adapter->dcb_cfg.pfc_mode_enable; u32 reg; - u32 i = 1; + u32 __maybe_unused i = 1; +#ifndef CONFIG_TXGBE_POLL_LINK_STATUS if (!(adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE)) return; +#endif link_speed = TXGBE_LINK_SPEED_10GB_FULL; link_up = true; TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false); +#ifndef CONFIG_TXGBE_POLL_LINK_STATUS if (link_up || time_after(jiffies, (adapter->link_check_timeout + TXGBE_TRY_LINK_TIMEOUT))) { adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE; @@ -5323,8 +5409,12 @@ TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false); msleep(1); } +#endif + + adapter->link_up = link_up; + adapter->link_speed = link_speed; - if (link_up && !((adapter->flags & TXGBE_FLAG_DCB_ENABLED))) { + if (link_up && !(adapter->flags & TXGBE_FLAG_DCB_ENABLED)) { TCALL(hw, mac.ops.fc_enable); txgbe_set_rx_drop_en(adapter); } @@ -5419,8 +5509,6 @@ /* update the default user priority for VFs */ txgbe_update_default_up(adapter); - - /* ping all the active vfs to let them know link has changed */ } /** @@ -5435,24 +5523,21 @@ adapter->link_up = false; adapter->link_speed = 0; - /* only continue if link was up previously */ - if (!netif_carrier_ok(netdev)) - return; - if (hw->subsystem_device_id == TXGBE_ID_WX1820_KR_KX_KX4 || hw->subsystem_device_id == TXGBE_ID_SP1000_KR_KX_KX4) { txgbe_bp_down_event(adapter); } + /* only continue if link was up previously */ + if (!netif_carrier_ok(netdev)) + return; + if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state)) txgbe_ptp_start_cyclecounter(adapter); e_info(drv, "NIC Link is Down\n"); netif_carrier_off(netdev); netif_tx_stop_all_queues(netdev); - - /* ping all the active vfs to let them know link has changed */ - } static bool txgbe_ring_tx_pending(struct txgbe_adapter *adapter) @@ -5524,7 +5609,7 @@ **/ static void txgbe_watchdog_subtask(struct txgbe_adapter *adapter) { - u32 value = 0; + u32 __maybe_unused value = 0; struct txgbe_hw *hw = &adapter->hw; /* if interface is down do nothing */ @@ -5538,6 +5623,7 @@ txgbe_bp_watchdog_event(adapter); } +#ifndef CONFIG_TXGBE_POLL_LINK_STATUS if (BOND_CHECK_LINK_MODE == 1) { value = rd32(hw, 0x14404); value = value & 0x1; @@ -5551,6 +5637,7 @@ txgbe_watchdog_link_is_up(adapter); else txgbe_watchdog_link_is_down(adapter); +#endif txgbe_update_stats(adapter); @@ -5638,7 +5725,7 @@ u32 speed; bool autoneg = false; u16 value; - u8 device_type = hw->subsystem_id & 0xF0; + u8 device_type = hw->subsystem_device_id & 0xF0; if (!(adapter->flags & TXGBE_FLAG_NEED_LINK_CONFIG)) return; @@ -5678,7 +5765,7 @@ } } - TCALL(hw, mac.ops.setup_link, speed, txgbe_is_sfp(hw)); + TCALL(hw, mac.ops.setup_link, speed, false); adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE; adapter->link_check_timeout = jiffies; @@ -5721,6 +5808,7 @@ struct txgbe_adapter *adapter = 
from_timer(adapter, t, service_timer); unsigned long next_event_offset; struct txgbe_hw *hw = &adapter->hw; + u32 val = 0; /* poll faster when waiting for link */ if (adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE) { @@ -5733,8 +5821,20 @@ } else next_event_offset = HZ * 2; - if ((rd32(&adapter->hw, TXGBE_MIS_PF_SM) == 1) && (hw->bus.lan_id)) { - adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; + if (rd32(&adapter->hw, TXGBE_MIS_PF_SM) == 1) { + val = rd32m(&adapter->hw, TXGBE_MIS_PRB_CTL, TXGBE_MIS_PRB_CTL_LAN0_UP | + TXGBE_MIS_PRB_CTL_LAN1_UP); + if (val & TXGBE_MIS_PRB_CTL_LAN0_UP) { + if (hw->bus.lan_id == 0) { + adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; + e_info(probe, "%s: set recover on Lan0\n", __func__); + } + } else if (val & TXGBE_MIS_PRB_CTL_LAN1_UP) { + if (hw->bus.lan_id == 1) { + adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; + e_info(probe, "%s: set recover on Lan1\n", __func__); + } + } } /* Reset the timer */ @@ -5743,6 +5843,31 @@ txgbe_service_event_schedule(adapter); } +#ifdef CONFIG_TXGBE_POLL_LINK_STATUS +/** + * txgbe_service_timer - Timer Call-back + * @data: pointer to adapter cast into an unsigned long + **/ +static void txgbe_link_check_timer(struct timer_list *t) +{ + struct txgbe_adapter *adapter = from_timer(adapter, t, link_check_timer); + unsigned long next_event_offset = HZ / 100; + + mod_timer(&adapter->link_check_timer, next_event_offset + jiffies); + if (test_bit(__TXGBE_DOWN, &adapter->state) || + test_bit(__TXGBE_REMOVING, &adapter->state) || + test_bit(__TXGBE_RESETTING, &adapter->state)) + return; + + txgbe_watchdog_update_link(adapter); + + if (adapter->link_up) + txgbe_watchdog_link_is_up(adapter); + else + txgbe_watchdog_link_is_down(adapter); +} +#endif + static void txgbe_reset_subtask(struct txgbe_adapter *adapter) { u32 reset_flag = 0; @@ -6724,6 +6849,7 @@ __be16 protocol = skb->protocol; u8 hdr_len = 0; txgbe_dptype dptype; + u8 vlan_addlen = 0; /* work around hw errata 3 */ u16 _llcLen, *llcLen; @@ -6773,6 +6899,18 @@ tx_flags |= TXGBE_TX_FLAGS_SW_VLAN; } + if (protocol == htons(ETH_P_8021Q) || protocol == htons(ETH_P_8021AD)) { + struct vlan_hdr *vhdr, _vhdr; + + vhdr = skb_header_pointer(skb, ETH_HLEN, sizeof(_vhdr), &_vhdr); + if (!vhdr) + goto out_drop; + + protocol = vhdr->h_vlan_encapsulated_proto; + tx_flags |= TXGBE_TX_FLAGS_SW_VLAN; + vlan_addlen += VLAN_HLEN; + } + if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) && adapter->ptp_clock) { if (!test_and_set_bit_lock(__TXGBE_PTP_TX_IN_PROGRESS, @@ -6984,26 +7122,6 @@ } } -/* txgbe_validate_rtr - verify 802.1Qp to Rx packet buffer mapping is valid. - * @adapter: pointer to txgbe_adapter - * @tc: number of traffic classes currently enabled - * - * Configure a valid 802.1Qp to Rx packet buffer mapping ie confirm - * 802.1Q priority maps to a packet buffer that exists. 
- */ -static void txgbe_validate_rtr(struct txgbe_adapter *adapter, u8 tc) -{ - struct txgbe_hw *hw = &adapter->hw; - u32 reg, rsave; - - reg = rd32(hw, TXGBE_RDB_UP2TC); - rsave = reg; - if (reg != rsave) - wr32(hw, TXGBE_RDB_UP2TC, reg); - - return; -} - /** * txgbe_set_prio_tc_map - Configure netdev prio tc map * @adapter: Pointer to adapter struct @@ -7046,8 +7164,6 @@ netdev_reset_tc(dev); } - txgbe_validate_rtr(adapter, tc); - txgbe_init_interrupt_scheme(adapter); if (netif_running(dev)) txgbe_open(dev); @@ -7168,13 +7284,9 @@ else txgbe_vlan_strip_disable(adapter); - if (adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE && - features & NETIF_F_RXCSUM) { - if (!need_reset) - adapter->flags2 |= TXGBE_FLAG2_VXLAN_REREG_NEEDED; - } else { + if (!(adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE && + features & NETIF_F_RXCSUM)) txgbe_clear_vxlan_port(adapter); - } if (features & NETIF_F_RXHASH) { if (!(adapter->flags2 & TXGBE_FLAG2_RSS_ENABLED)) { @@ -7206,7 +7318,7 @@ { struct txgbe_adapter *adapter = netdev_priv(dev); struct txgbe_hw *hw = &adapter->hw; - __be16 port = ti->port; + __be16 port = ntohs(ti->port); if (ti->sa_family != AF_INET) return; @@ -7492,10 +7604,7 @@ char *info_string, *i_s_var; u8 part_strTXGBE_PBANUM_LENGTH; unsigned int indices = MAX_TX_QUEUES; - bool disable_dev = false; -/* #ifndef NETIF_F_GSO_PARTIA */ - netdev_features_t hw_features; err = pci_enable_device_mem(pdev); if (err) @@ -7614,51 +7723,43 @@ netdev->features |= NETIF_F_IPV6_CSUM; #endif - netdev->features |= NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_CTAG_RX; + netdev->features = NETIF_F_SG | + NETIF_F_TSO | + NETIF_F_TSO6 | + NETIF_F_RXHASH | + NETIF_F_RXCSUM | + NETIF_F_HW_CSUM; + + netdev->gso_partial_features = TXGBE_GSO_PARTIAL_FEATURES; + netdev->features |= NETIF_F_GSO_PARTIAL | + TXGBE_GSO_PARTIAL_FEATURES; - netdev->features |= txgbe_tso_features(); + netdev->features |= NETIF_F_SCTP_CRC; - if (adapter->flags2 & TXGBE_FLAG2_RSS_ENABLED) - netdev->features |= NETIF_F_RXHASH; + /* copy netdev features into list of user selectable features */ + netdev->hw_features |= netdev->features | + NETIF_F_HW_VLAN_CTAG_FILTER | + NETIF_F_HW_VLAN_CTAG_RX | + NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_RXALL; - netdev->features |= NETIF_F_RXCSUM; + netdev->hw_features |= NETIF_F_NTUPLE | + NETIF_F_HW_TC; - /* copy netdev features into list of user selectable features */ - hw_features = netdev->hw_features; - hw_features |= netdev->features; + if (pci_using_dac) + netdev->features |= NETIF_F_HIGHDMA; - /* give us the option of enabling RSC/LRO later */ - if (adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) - hw_features |= NETIF_F_LRO; - - /* set this bit last since it cannot be part of hw_features */ - netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER; - - netdev->features |= NETIF_F_NTUPLE; - - adapter->flags |= TXGBE_FLAG_FDIR_PERFECT_CAPABLE; - hw_features |= NETIF_F_NTUPLE; - netdev->hw_features = hw_features; - - netdev->vlan_features |= NETIF_F_SG | - NETIF_F_IP_CSUM | - NETIF_F_IPV6_CSUM | - NETIF_F_TSO | - NETIF_F_TSO6; - - netdev->hw_enc_features |= NETIF_F_SG | NETIF_F_IP_CSUM | - TXGBE_GSO_PARTIAL_FEATURES | NETIF_F_TSO; - if (netdev->features & NETIF_F_LRO) { - if ((adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) && - ((adapter->rx_itr_setting == 1) || - (adapter->rx_itr_setting > TXGBE_MIN_RSC_ITR))) { - adapter->flags2 |= TXGBE_FLAG2_RSC_ENABLED; - } else if (adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) { - e_dev_info("InterruptThrottleRate set too high, " - "disabling RSC\n"); - } - } + netdev->vlan_features |= 
netdev->features | NETIF_F_TSO_MANGLEID; + netdev->hw_enc_features |= netdev->vlan_features; + netdev->mpls_features |= NETIF_F_HW_CSUM; + + /* set this bit last since it cannot be part of vlan_features */ + netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER | + NETIF_F_HW_VLAN_CTAG_RX | + NETIF_F_HW_VLAN_CTAG_TX; + + netdev->priv_flags |= IFF_UNICAST_FLT; + netdev->priv_flags |= IFF_SUPP_NOFCS; netdev->priv_flags |= IFF_UNICAST_FLT; netdev->priv_flags |= IFF_SUPP_NOFCS; @@ -7674,6 +7775,7 @@ /* make sure the EEPROM is good */ if (TCALL(hw, eeprom.ops.validate_checksum, NULL)) { e_dev_err("The EEPROM Checksum Is Not Valid\n"); + wr32(hw, TXGBE_MIS_RST, TXGBE_MIS_RST_SW_RST); err = -EIO; goto err_sw_init; } @@ -7689,6 +7791,9 @@ txgbe_mac_set_default_filter(adapter, hw->mac.perm_addr); timer_setup(&adapter->service_timer, txgbe_service_timer, 0); +#ifdef CONFIG_TXGBE_POLL_LINK_STATUS + timer_setup(&adapter->link_check_timer, txgbe_link_check_timer, 0); +#endif if (TXGBE_REMOVED(hw->hw_addr)) { err = -EIO; @@ -7780,6 +7885,10 @@ pci_set_drvdata(pdev, adapter); adapter->netdev_registered = true; + /* call save state here in standalone driver because it relies on + * adapter struct to exist, and needs to call netdev_priv + */ + pci_save_state(pdev); if (!((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP)) /* power down the optics for SFP+ fiber */ @@ -7861,6 +7970,20 @@ e_info(probe, "WangXun(R) 10 Gigabit Network Connection\n"); cards_found++; +#ifdef CONFIG_TXGBE_SYSFS + if (txgbe_sysfs_init(adapter)) + e_err(probe, "failed to allocate sysfs resources\n"); +#else +#ifdef CONFIG_TXGBE_PROCFS + if (txgbe_procfs_init(adapter)) + e_err(probe, "failed to allocate procfs resources\n"); +#endif /* CONFIG_TXGBE_PROCFS */ +#endif /* CONFIG_TXGBE_SYSFS */ + +#ifdef CONFIG_TXGBE_DEBUG_FS + txgbe_dbg_adapter_init(adapter); +#endif /* CONFIG_TXGBE_DEBUG_FS */ + /* setup link for SFP devices with MNG FW, else wait for TXGBE_UP */ if (txgbe_mng_present(hw) && txgbe_is_sfp(hw)) TCALL(hw, mac.ops.setup_link, @@ -7913,9 +8036,21 @@ return; netdev = adapter->netdev; +#ifdef CONFIG_TXGBE_DEBUG_FS + txgbe_dbg_adapter_exit(adapter); +#endif + set_bit(__TXGBE_REMOVING, &adapter->state); cancel_work_sync(&adapter->service_task); +#ifdef CONFIG_TXGBE_SYSFS + txgbe_sysfs_exit(adapter); +#else +#ifdef CONFIG_TXGBE_PROCFS + txgbe_procfs_exit(adapter); +#endif +#endif /* CONFIG_TXGBE_SYSFS */ + /* remove the added san mac */ txgbe_del_sanmac_netdev(netdev); @@ -8006,6 +8141,9 @@ return -ENOMEM; } +#ifdef CONFIG_TXGBE_DEBUG_FS + txgbe_dbg_init(); +#endif ret = pci_register_driver(&txgbe_driver); return ret; } @@ -8024,6 +8162,9 @@ if (txgbe_wq) { destroy_workqueue(txgbe_wq); } +#ifdef CONFIG_TXGBE_DEBUG_FS + txgbe_dbg_exit(); +#endif /* CONFIG_TXGBE_DEBUG_FS */ } module_exit(txgbe_exit_module);
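Editor's note on the probe-path hunk above: when the PCI subsystem vendor is not WangXun's 0x8088, the driver now derives the subsystem device ID from flash, and the pair of shifts applied to it is a plain 16-bit byte swap (the flash word is stored in the opposite byte order), equivalent to the kernel's swab16(). A minimal standalone sketch of that conversion; the sample value is hypothetical:

	#include <stdint.h>
	#include <stdio.h>

	/* Same operation as the two shifts in the txgbe probe path above;
	 * equivalent to the kernel's swab16(). */
	static uint16_t swap16(uint16_t v)
	{
		return (uint16_t)((v >> 8) | (v << 8));
	}

	int main(void)
	{
		uint16_t raw = 0x0150;	/* hypothetical word as read from flash */
		printf("raw 0x%04x -> subsystem id 0x%04x\n", raw, swap16(raw));
		return 0;
	}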
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.c
Changed
@@ -21,7 +21,7 @@ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 */ - +#include "txgbe_type.h" #include "txgbe.h" #include "txgbe_mbx.h" @@ -182,6 +182,304 @@ return countdown ? 0 : TXGBE_ERR_MBX; } +/** + * txgbe_read_posted_mbx - Wait for message notification and receive message + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @mbx_id: id of mailbox to write + * + * returns SUCCESS if it successfully received a message notification and + * copied it into the receive buffer. + **/ +int txgbe_read_posted_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + int err = TXGBE_ERR_MBX; + + if (!mbx->ops.read) + goto out; + + err = txgbe_poll_for_msg(hw, mbx_id); + + /* if ack received read message, otherwise we timed out */ + if (!err) + err = TCALL(hw, mbx.ops.read, msg, size, mbx_id); +out: + return err; +} + +/** + * txgbe_write_posted_mbx - Write a message to the mailbox, wait for ack + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @mbx_id: id of mailbox to write + * + * returns SUCCESS if it successfully copied message into the buffer and + * received an ack to that message within delay * timeout period + **/ +int txgbe_write_posted_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, + u16 mbx_id) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + int err; + + /* exit if either we can't write or there isn't a defined timeout */ + if (!mbx->timeout) + return TXGBE_ERR_MBX; + + /* send msg */ + err = TCALL(hw, mbx.ops.write, msg, size, mbx_id); + + /* if msg sent wait until we receive an ack */ + if (!err) + err = txgbe_poll_for_ack(hw, mbx_id); + + return err; +} + +/** + * txgbe_init_mbx_ops - Initialize MB function pointers + * @hw: pointer to the HW structure + * + * Setups up the mailbox read and write message function pointers + **/ +void txgbe_init_mbx_ops(struct txgbe_hw *hw) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + + mbx->ops.read_posted = txgbe_read_posted_mbx; + mbx->ops.write_posted = txgbe_write_posted_mbx; +} + +/** + * txgbe_read_v2p_mailbox - read v2p mailbox + * @hw: pointer to the HW structure + * + * This function is used to read the v2p mailbox without losing the read to + * clear status bits. + **/ +u32 txgbe_read_v2p_mailbox(struct txgbe_hw *hw) +{ + u32 v2p_mailbox = rd32(hw, TXGBE_VXMAILBOX); + + v2p_mailbox |= hw->mbx.v2p_mailbox; + /* read and clear mirrored mailbox flags */ + v2p_mailbox |= rd32a(hw, TXGBE_VXMBMEM, TXGBE_VXMAILBOX_SIZE); + wr32a(hw, TXGBE_VXMBMEM, TXGBE_VXMAILBOX_SIZE, 0); + hw->mbx.v2p_mailbox |= v2p_mailbox & TXGBE_VXMAILBOX_R2C_BITS; + + return v2p_mailbox; +} + +/** + * txgbe_check_for_bit_vf - Determine if a status bit was set + * @hw: pointer to the HW structure + * @mask: bitmask for bits to be tested and cleared + * + * This function is used to check for the read to clear bits within + * the V2P mailbox. + **/ +int txgbe_check_for_bit_vf(struct txgbe_hw *hw, u32 mask) +{ + u32 mailbox = txgbe_read_v2p_mailbox(hw); + + hw->mbx.v2p_mailbox &= ~mask; + + return (mailbox & mask ? 
0 : TXGBE_ERR_MBX); +} + +/** + * txgbe_check_for_msg_vf - checks to see if the PF has sent mail + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to check + * + * returns SUCCESS if the PF has set the Status bit or else ERR_MBX + **/ +int txgbe_check_for_msg_vf(struct txgbe_hw *hw, u16 __always_unused mbx_id) +{ + int err = TXGBE_ERR_MBX; + + /* read clear the pf sts bit */ + if (!txgbe_check_for_bit_vf(hw, TXGBE_VXMAILBOX_PFSTS)) { + err = 0; + hw->mbx.stats.reqs++; + } + + return err; +} + +/** + * txgbe_check_for_ack_vf - checks to see if the PF has ACK'd + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to check + * + * returns SUCCESS if the PF has set the ACK bit or else ERR_MBX + **/ +int txgbe_check_for_ack_vf(struct txgbe_hw *hw, u16 __always_unused mbx_id) +{ + int err = TXGBE_ERR_MBX; + + /* read clear the pf ack bit */ + if (!txgbe_check_for_bit_vf(hw, TXGBE_VXMAILBOX_PFACK)) { + err = 0; + hw->mbx.stats.acks++; + } + + return err; +} + +/** + * txgbe_check_for_rst_vf - checks to see if the PF has reset + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to check + * + * returns true if the PF has set the reset done bit or else false + **/ +int txgbe_check_for_rst_vf(struct txgbe_hw *hw, u16 __always_unused mbx_id) +{ + int err = TXGBE_ERR_MBX; + + if (!txgbe_check_for_bit_vf(hw, (TXGBE_VXMAILBOX_RSTD | + TXGBE_VXMAILBOX_RSTI))) { + err = 0; + hw->mbx.stats.rsts++; + } + + return err; +} + +/** + * txgbe_obtain_mbx_lock_vf - obtain mailbox lock + * @hw: pointer to the HW structure + * + * return SUCCESS if we obtained the mailbox lock + **/ +int txgbe_obtain_mbx_lock_vf(struct txgbe_hw *hw) +{ + int err = TXGBE_ERR_MBX; + u32 mailbox; + + /* Take ownership of the buffer */ + wr32(hw, TXGBE_VXMAILBOX, TXGBE_VXMAILBOX_VFU); + + /* reserve mailbox for vf use */ + mailbox = txgbe_read_v2p_mailbox(hw); + if (mailbox & TXGBE_VXMAILBOX_VFU) + err = 0; + else + ERROR_REPORT2(TXGBE_ERROR_POLLING, + "Failed to obtain mailbox lock for VF"); + + return err; +} + +/** + * txgbe_write_mbx_vf - Write a message to the mailbox + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @mbx_id: id of mailbox to write + * + * returns SUCCESS if it successfully copied message into the buffer + **/ +int txgbe_write_mbx_vf(struct txgbe_hw *hw, u32 *msg, u16 size, + u16 __always_unused mbx_id) +{ + int err; + u16 i; + + /* lock the mailbox to prevent pf/vf race condition */ + err = txgbe_obtain_mbx_lock_vf(hw); + if (err) + goto out_no_write; + + /* flush msg and acks as we are overwriting the message buffer */ + txgbe_check_for_msg_vf(hw, 0); + txgbe_check_for_ack_vf(hw, 0); + + /* copy the caller specified message to the mailbox memory buffer */ + for (i = 0; i < size; i++) + wr32a(hw, TXGBE_VXMBMEM, i, msgi); + + /* update stats */ + hw->mbx.stats.msgs_tx++; + + /* Drop VFU and interrupt the PF to tell it a message has been sent */ + wr32(hw, TXGBE_VXMAILBOX, TXGBE_VXMAILBOX_REQ); + +out_no_write: + return err; +} + +/** + * txgbe_read_mbx_vf - Reads a message from the inbox intended for vf + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @mbx_id: id of mailbox to read + * + * returns SUCCESS if it successfully read message from buffer + **/ +int txgbe_read_mbx_vf(struct txgbe_hw *hw, u32 *msg, u16 size, + u16 __always_unused mbx_id) +{ + int err = 0; + u16 i; + + /* lock the mailbox to prevent pf/vf race condition */ + err = txgbe_obtain_mbx_lock_vf(hw); + if (err) + 
goto out_no_read; + + /* copy the message from the mailbox memory buffer */ + for (i = 0; i < size; i++) + msg[i] = rd32a(hw, TXGBE_VXMBMEM, i); + + /* Acknowledge receipt and release mailbox, then we're done */ + wr32(hw, TXGBE_VXMAILBOX, TXGBE_VXMAILBOX_ACK); + + /* update stats */ + hw->mbx.stats.msgs_rx++; + +out_no_read: + return err; +} + +/** + * txgbe_init_mbx_params_vf - set initial values for vf mailbox + * @hw: pointer to the HW structure + * + * Initializes the hw->mbx struct to correct values for vf mailbox + */ +void txgbe_init_mbx_params_vf(struct txgbe_hw *hw) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + + /* start mailbox as timed out and let the reset_hw call set the timeout + * value to begin communications + */ + mbx->timeout = 0; + mbx->udelay = TXGBE_VF_MBX_INIT_DELAY; + + mbx->size = TXGBE_VXMAILBOX_SIZE; + + mbx->ops.read = txgbe_read_mbx_vf; + mbx->ops.write = txgbe_write_mbx_vf; + mbx->ops.read_posted = txgbe_read_posted_mbx; + mbx->ops.write_posted = txgbe_write_posted_mbx; + mbx->ops.check_for_msg = txgbe_check_for_msg_vf; + mbx->ops.check_for_ack = txgbe_check_for_ack_vf; + mbx->ops.check_for_rst = txgbe_check_for_rst_vf; + + mbx->stats.msgs_tx = 0; + mbx->stats.msgs_rx = 0; + mbx->stats.reqs = 0; + mbx->stats.acks = 0; + mbx->stats.rsts = 0; +} + int txgbe_check_for_bit_pf(struct txgbe_hw *hw, u32 mask, int index) { u32 mbvficr = rd32(hw, TXGBE_MBVFICR(index));
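Editor's note: the new "posted" helpers added above wrap the raw mailbox read/write ops in a wait-for-notification loop. A condensed, self-contained sketch of that contract; the op table and error code here are stand-ins, not the driver's real types:

	#include <stdint.h>

	#define ERR_MBX (-100)	/* stand-in for TXGBE_ERR_MBX */

	struct mbx_ops {
		int timeout;	/* 0 = posting disabled */
		int (*write)(const uint32_t *msg, uint16_t size);
		int (*poll_for_ack)(void);	/* spins until PFACK or timeout */
	};

	/* Shape of txgbe_write_posted_mbx(): place the message, then block
	 * until the peer acknowledges it or the poll times out. */
	static int write_posted(struct mbx_ops *ops, const uint32_t *msg,
				uint16_t size)
	{
		int err;

		if (!ops->timeout)
			return ERR_MBX;

		err = ops->write(msg, size);
		if (!err)
			err = ops->poll_for_ack();
		return err;
	}

txgbe_read_posted_mbx() is the mirror image: poll for the message notification first, then read.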
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c
Changed
@@ -765,12 +765,101 @@ return MTD_FAIL; break; } + } + } + + return MTD_OK; +} +/****************************************************************************/ +MTD_STATUS mtdIsBaseTUp( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + OUT MTD_U16 *speed, + OUT MTD_BOOL *linkUp) +{ + MTD_BOOL speedIsForced; + MTD_U16 forcedSpeed, cuSpeed, cuLinkStatus; + + *linkUp = MTD_FALSE; + *speed = MTD_ADV_NONE; + + /* first check if speed is forced to one of the speeds not requiring AN to train */ + ATTEMPT(mtdGetForcedSpeed(devPtr, port, &speedIsForced, &forcedSpeed)); + + if (speedIsForced) { + /* check if the link is up at the speed it's forced to */ + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 3, 0x8008, 14, 2, &cuSpeed)); + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 3, 0x8008, 10, 1, &cuLinkStatus)); + + switch (forcedSpeed) { + case MTD_SPEED_10M_HD_AN_DIS: + case MTD_SPEED_10M_FD_AN_DIS: + /* might want to add checking the duplex to make sure there + * is no duplex mismatch + */ + if (cuSpeed == MTD_CU_SPEED_10_MBPS) + *speed = forcedSpeed; + else + *speed = MTD_SPEED_MISMATCH; + + if (cuLinkStatus) + *linkUp = MTD_TRUE; + + break; + + case MTD_SPEED_100M_HD_AN_DIS: + case MTD_SPEED_100M_FD_AN_DIS: + /* might want to add checking the duplex to make sure there + * is no duplex mismatch + */ + if (cuSpeed == MTD_CU_SPEED_100_MBPS) + *speed = forcedSpeed; + else + *speed = MTD_SPEED_MISMATCH; + + if (cuLinkStatus) + *linkUp = MTD_TRUE; + + break; + + default: + return MTD_FAIL; } + } else { + /* must be going through AN */ + ATTEMPT(mtdGetAutonegSpeedDuplexResolution(devPtr, port, speed)); + + if (*speed != MTD_ADV_NONE) { + /* check if the link is up at the speed it's AN to */ + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 3, 0x8008, 10, 1, &cuLinkStatus)); + + switch (*speed) { + case MTD_SPEED_10M_HD: + case MTD_SPEED_10M_FD: + case MTD_SPEED_100M_HD: + case MTD_SPEED_100M_FD: + case MTD_SPEED_1GIG_HD: + case MTD_SPEED_1GIG_FD: + case MTD_SPEED_10GIG_FD: + case MTD_SPEED_2P5GIG_FD: + case MTD_SPEED_5GIG_FD: + if (cuLinkStatus) + *linkUp = MTD_TRUE; + break; + default: + return MTD_FAIL; + } + } + /* else link is down, and AN is in progress, */ } - return MTD_OK; + if (*speed == MTD_SPEED_MISMATCH) { + return MTD_FAIL; + } else { + return MTD_OK; + } } MTD_STATUS mtdSetPauseAdvertisement(
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h
Changed
@@ -1109,6 +1109,13 @@ OUT MTD_U16 *speedResolution ); +MTD_STATUS mtdIsBaseTUp( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + OUT MTD_U16 *speed, + OUT MTD_BOOL *linkUp +); + MTD_STATUS mtdAutonegIsSpeedDuplexResolutionDone ( IN MTD_DEV_PTR devPtr,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_param.c
Changed
@@ -46,6 +46,9 @@ module_param_array(X, int, &num_##X, 0); \ MODULE_PARM_DESC(X, desc); +TXGBE_PARAM(an73_train_mode, "an73_train_mode to different switch(0 to centc, 1 to other)"); +#define TXGBE_DEFAULT_FFE_AN73_TRAIN_MODE 0 + /* ffe_main (KR/KX4/KX/SFI) * * Valid Range: 0-60 @@ -213,7 +216,9 @@ * * Default Value: 1 */ -#define DEFAULT_ITR 1 +#define DEFAULT_ITR ((TXGBE_STATIC_ITR == 0) || \ + (TXGBE_STATIC_ITR == 1) ? TXGBE_STATIC_ITR : (u16)((1000000/TXGBE_STATIC_ITR) << 2)) + TXGBE_PARAM(InterruptThrottleRate, "Maximum interrupts per second, per vector, " "(0,1,980-500000), default 1"); @@ -477,6 +482,30 @@ "Warning: no configuration for board #%d\n", bd); txgbe_notice("Using defaults for all values\n"); } + + { /* an73_mode */ + u32 an73_mode; + static struct txgbe_option opt = { + .type = range_option, + .name = "an73_train_mode", + .err = + "using default of "__MODULE_STRING(TXGBE_DEFAULT_FFE_AN73_TRAIN_MODE), + .def = TXGBE_DEFAULT_FFE_AN73_TRAIN_MODE, + .arg = { .r = { .min = 0, + .max = 1} } + }; + + if (num_an73_train_mode > bd) { + an73_mode = an73_train_mode[bd]; + if (an73_mode == OPTION_UNSET) + an73_mode = an73_train_mode[bd]; + txgbe_validate_option(&an73_mode, &opt); + adapter->an73_mode = an73_mode; + } else { + adapter->an73_mode = 0; + } + } + { /* MAIN */ u32 ffe_main; static struct txgbe_option opt = { @@ -760,6 +789,7 @@ } else if (opt.def == 0) { rss = min_t(int, txgbe_max_rss_indices(adapter), num_online_cpus()); + feature[RING_F_FDIR].limit = (u16)rss; feature[RING_F_RSS].limit = rss; } /* Check Interoperability */
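Editor's note: the reworked DEFAULT_ITR above turns the compile-time TXGBE_STATIC_ITR rate into a register value: 0 and 1 pass through as mode selectors, anything else becomes the per-interrupt gap in microseconds (1000000/rate) shifted left by two to line up with the register field. The patch does not spell out the hardware meaning of the low two bits, so treat this as a sketch of the arithmetic only:

	#include <stdint.h>
	#include <stdio.h>

	/* Mirrors the DEFAULT_ITR macro: rate in interrupts/second ->
	 * microsecond interval, pre-shifted into the ITR field. */
	static uint16_t itr_from_rate(uint32_t rate)
	{
		if (rate == 0 || rate == 1)	/* mode selectors, not rates */
			return (uint16_t)rate;
		return (uint16_t)((1000000u / rate) << 2);
	}

	int main(void)
	{
		printf("ITR for 8000 ints/s: 0x%x\n", itr_from_rate(8000));
		return 0;
	}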
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_phy.c
Changed
@@ -37,8 +37,6 @@ { u32 mmngc; - DEBUGFUNC("\n"); - mmngc = rd32(hw, TXGBE_MIS_ST); if (mmngc & TXGBE_MIS_ST_MNG_VETO) { ERROR_REPORT1(TXGBE_ERROR_SOFTWARE, @@ -61,7 +59,6 @@ u16 phy_id_high = 0; u16 phy_id_low = 0; u8 numport, thisport; - DEBUGFUNC("\n"); status = mtdHwXmdioRead(&hw->phy_dev, hw->phy.addr, TXGBE_MDIO_PMA_PMD_DEV_TYPE, @@ -96,8 +93,6 @@ enum txgbe_phy_type phy_type; u16 ext_ability = 0; - DEBUGFUNC("\n"); - switch (hw->phy.id) { case TN1010_PHY_ID: phy_type = txgbe_phy_tn; @@ -134,9 +129,6 @@ { s32 status = 0; - DEBUGFUNC("\n"); - - if (status != 0 || hw->phy.type == txgbe_phy_none) goto out; @@ -208,8 +200,6 @@ s32 status; u32 gssr = hw->phy.phy_semaphore_mask; - DEBUGFUNC("\n"); - if (0 == TCALL(hw, mac.ops.acquire_swfw_sync, gssr)) { status = txgbe_read_phy_reg_mdi(hw, reg_addr, device_type, phy_data); @@ -272,8 +262,6 @@ s32 status; u32 gssr = hw->phy.phy_semaphore_mask; - DEBUGFUNC("\n"); - if (TCALL(hw, mac.ops.acquire_swfw_sync, gssr) == 0) { status = txgbe_write_phy_reg_mdi(hw, reg_addr, device_type, phy_data); @@ -325,31 +313,40 @@ { u16 speed = MTD_ADV_NONE; MTD_DEV_PTR devptr = &hw->phy_dev; - MTD_BOOL anDone = MTD_FALSE; u16 port = hw->phy.addr; - - DEBUGFUNC("\n"); - + int i = 0; + MTD_BOOL linkUp = MTD_FALSE; + u16 linkSpeed = MTD_ADV_NONE; + + if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL) + speed |= MTD_SPEED_10GIG_FD; + if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_1GB_FULL) + speed |= MTD_SPEED_1GIG_FD; + if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100_FULL) + speed |= MTD_SPEED_100M_FD; + if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10_FULL) + speed |= MTD_SPEED_10M_FD; if (!autoneg_wait_to_complete) { - mtdAutonegIsSpeedDuplexResolutionDone(devptr, port, &anDone); - if (anDone) { - mtdGetAutonegSpeedDuplexResolution(devptr, port, &speed); + mtdGetAutonegSpeedDuplexResolution(devptr, port, &linkSpeed); + if (linkSpeed & speed) { + speed = linkSpeed; + goto out; } - } else { - if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL) - speed |= MTD_SPEED_10GIG_FD; - if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_1GB_FULL) - speed |= MTD_SPEED_1GIG_FD; - if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100_FULL) - speed |= MTD_SPEED_100M_FD; - if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10_FULL) - speed |= MTD_SPEED_10M_FD; - mtdEnableSpeeds(devptr, port, speed, MTD_TRUE); - - /* wait autoneg to be done */ - speed = MTD_ADV_NONE; } + mtdEnableSpeeds(devptr, port, speed, MTD_TRUE); + mdelay(10); + + /* wait autoneg to be done */ + speed = MTD_ADV_NONE; + for (i = 0; i < 300; i++) { + mtdIsBaseTUp(devptr, port, &speed, &linkUp); + if (linkUp) + break; + mdelay(10); + } + +out: switch (speed) { case MTD_SPEED_10GIG_FD: return TXGBE_LINK_SPEED_10GB_FULL; @@ -374,9 +371,6 @@ u32 speed, bool autoneg_wait_to_complete) { - - DEBUGFUNC("\n"); - /* * Clear autoneg_advertised and set new values based on input link * speed. 
@@ -414,9 +408,6 @@ { s32 status; u16 speed_ability; - - DEBUGFUNC("\n"); - *speed = 0; *autoneg = true; @@ -439,6 +430,24 @@ } /** + * txgbe_get_phy_firmware_version - Gets the PHY Firmware Version + * @hw: pointer to hardware structure + * @firmware_version: pointer to the PHY Firmware Version + **/ +s32 txgbe_get_phy_firmware_version(struct txgbe_hw *hw, + u16 *firmware_version) +{ + s32 status; + u8 major, minor, inc, test; + + status = mtdGetFirmwareVersion(&hw->phy_dev, hw->phy.addr, + &major, &minor, &inc, &test); + if (status == 0) + *firmware_version = (major << 8) | minor; + return status; +} + +/** * txgbe_identify_module - Identifies module type * @hw: pointer to hardware structure * @@ -448,8 +457,6 @@ { s32 status = TXGBE_ERR_SFP_NOT_PRESENT; - DEBUGFUNC("\n"); - switch (TCALL(hw, mac.ops.get_media_type)) { case txgbe_media_type_fiber: status = txgbe_identify_sfp_module(hw); @@ -482,8 +489,6 @@ u8 cable_tech = 0; u8 cable_spec = 0; - DEBUGFUNC("\n"); - if (TCALL(hw, mac.ops.get_media_type) != txgbe_media_type_fiber) { hw->phy.sfp_type = txgbe_sfp_type_not_present; status = TXGBE_ERR_SFP_NOT_PRESENT; @@ -754,8 +759,6 @@ s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset, u8 *eeprom_data) { - DEBUGFUNC("\n"); - return TCALL(hw, phy.ops.read_i2c_byte, byte_offset, TXGBE_I2C_EEPROM_DEV_ADDR, eeprom_data); @@ -788,8 +791,6 @@ s32 txgbe_write_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset, u8 eeprom_data) { - DEBUGFUNC("\n"); - return TCALL(hw, phy.ops.write_i2c_byte, byte_offset, TXGBE_I2C_EEPROM_DEV_ADDR, eeprom_data); @@ -944,8 +945,6 @@ s32 status = 0; u32 ts_state; - DEBUGFUNC("\n"); - /* Check that the LASI temp alarm status was triggered */ ts_state = rd32(hw, TXGBE_TS_ALARM_ST);
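Editor's note: the txgbe_phy.c hunks above replace the old one-shot anDone check with an explicit bounded wait — enable the advertised speeds, then poll the new mtdIsBaseTUp() up to 300 times at 10 ms intervals (roughly 3 s) before giving up. A generic userspace rendition of that loop; check_link() stands in for the MTD call:

	#include <stdbool.h>
	#include <unistd.h>

	/* Bounded link wait: true as soon as the PHY reports link-up,
	 * false if autoneg is still running after ~3 seconds. */
	static bool wait_link_up(bool (*check_link)(unsigned int *speed),
				 unsigned int *speed)
	{
		int i;

		for (i = 0; i < 300; i++) {
			if (check_link(speed))
				return true;
			usleep(10 * 1000);	/* mdelay(10) in the driver */
		}
		return false;
	}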
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_phy.h
Changed
@@ -150,6 +150,8 @@ bool *autoneg); s32 txgbe_check_reset_blocked(struct txgbe_hw *hw); +s32 txgbe_get_phy_firmware_version(struct txgbe_hw *hw, + u16 *firmware_version); s32 txgbe_identify_module(struct txgbe_hw *hw); s32 txgbe_identify_sfp_module(struct txgbe_hw *hw); s32 txgbe_tn_check_overtemp(struct txgbe_hw *hw);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_ptp.c
Changed
@@ -670,8 +670,8 @@ *incval = TXGBE_INCVAL_100; break; case TXGBE_LINK_SPEED_1GB_FULL: - *shift = TXGBE_INCVAL_SHIFT_FPGA; - *incval = TXGBE_INCVAL_FPGA; + *shift = TXGBE_INCVAL_SHIFT_1GB; + *incval = TXGBE_INCVAL_1GB; break; case TXGBE_LINK_SPEED_10GB_FULL: default: /* TXGBE_LINK_SPEED_10GB_FULL */
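Editor's note: this one-hunk fix selects the 1 GbE incval/shift pair instead of the FPGA bring-up constants when the link trains at 1 Gb/s. Assuming the conventional cyclecounter relation (an assumption here, not stated in the patch) — nanoseconds = (cycles * incval) >> shift — a mismatched pair makes PTP time run fast or slow relative to the actual SYSTIME clock, which is the failure the wrong constants would cause. Sketch of the scaling, with hypothetical constants:

	#include <stdint.h>
	#include <stdio.h>

	static uint64_t cycles_to_ns(uint64_t cycles, uint32_t incval,
				     uint32_t shift)
	{
		return (cycles * incval) >> shift;
	}

	int main(void)
	{
		/* hypothetical pairs: same cycle count, different scaling */
		printf("%llu ns\n", (unsigned long long)cycles_to_ns(1000, 0x800000, 22));
		printf("%llu ns\n", (unsigned long long)cycles_to_ns(1000, 0xA00000, 22));
		return 0;
	}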
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_sysfs.c
Added
@@ -0,0 +1,205 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2019 - 2022 Beijing WangXun Technology Co., Ltd. */ + +#include "txgbe.h" +#include "txgbe_hw.h" +#include "txgbe_type.h" + +#ifdef CONFIG_TXGBE_SYSFS + +#include <linux/module.h> +#include <linux/types.h> +#include <linux/sysfs.h> +#include <linux/kobject.h> +#include <linux/device.h> +#include <linux/netdevice.h> +#include <linux/time.h> +#ifdef CONFIG_TXGBE_HWMON +#include <linux/hwmon.h> +#endif + +#ifdef CONFIG_TXGBE_HWMON +/* hwmon callback functions */ +static ssize_t txgbe_hwmon_show_temp(struct device __always_unused *dev, + struct device_attribute *attr, + char *buf) +{ + struct hwmon_attr *txgbe_attr = container_of(attr, struct hwmon_attr, + dev_attr); + unsigned int value; + + /* reset the temp field */ + TCALL(txgbe_attr->hw, mac.ops.get_thermal_sensor_data); + + value = txgbe_attr->sensor->temp; + + /* display millidegree */ + value *= 1000; + + return sprintf(buf, "%u\n", value); +} + +static ssize_t txgbe_hwmon_show_alarmthresh(struct device __always_unused *dev, + struct device_attribute *attr, + char *buf) +{ + struct hwmon_attr *txgbe_attr = container_of(attr, struct hwmon_attr, + dev_attr); + unsigned int value = txgbe_attr->sensor->alarm_thresh; + + /* display millidegree */ + value *= 1000; + + return sprintf(buf, "%u\n", value); +} + +static ssize_t txgbe_hwmon_show_dalarmthresh(struct device __always_unused *dev, + struct device_attribute *attr, + char *buf) +{ + struct hwmon_attr *txgbe_attr = container_of(attr, struct hwmon_attr, + dev_attr); + unsigned int value = txgbe_attr->sensor->dalarm_thresh; + + /* display millidegree */ + value *= 1000; + + return sprintf(buf, "%u\n", value); +} + +/** + * txgbe_add_hwmon_attr - Create hwmon attr table for a hwmon sysfs file. + * @adapter: pointer to the adapter structure + * @type: type of sensor data to display + * + * For each file we want in hwmon's sysfs interface we need a device_attribute + * This is included in our hwmon_attr struct that contains the references to + * the data structures we need to get the data to display. 
+ */ +static int txgbe_add_hwmon_attr(struct txgbe_adapter *adapter, int type) +{ + int rc; + unsigned int n_attr; + struct hwmon_attr *txgbe_attr; + + n_attr = adapter->txgbe_hwmon_buff.n_hwmon; + txgbe_attr = &adapter->txgbe_hwmon_buff.hwmon_list[n_attr]; + + switch (type) { + case TXGBE_HWMON_TYPE_TEMP: + txgbe_attr->dev_attr.show = txgbe_hwmon_show_temp; + snprintf(txgbe_attr->name, sizeof(txgbe_attr->name), + "temp%u_input", 0); + break; + case TXGBE_HWMON_TYPE_ALARMTHRESH: + txgbe_attr->dev_attr.show = txgbe_hwmon_show_alarmthresh; + snprintf(txgbe_attr->name, sizeof(txgbe_attr->name), + "temp%u_alarmthresh", 0); + break; + case TXGBE_HWMON_TYPE_DALARMTHRESH: + txgbe_attr->dev_attr.show = txgbe_hwmon_show_dalarmthresh; + snprintf(txgbe_attr->name, sizeof(txgbe_attr->name), + "temp%u_dalarmthresh", 0); + break; + default: + rc = -EPERM; + return rc; + } + + /* These are always the same regardless of type */ + txgbe_attr->sensor = + &adapter->hw.mac.thermal_sensor_data.sensor; + txgbe_attr->hw = &adapter->hw; + txgbe_attr->dev_attr.store = NULL; + txgbe_attr->dev_attr.attr.mode = 0444; + txgbe_attr->dev_attr.attr.name = txgbe_attr->name; + + rc = device_create_file(pci_dev_to_dev(adapter->pdev), + &txgbe_attr->dev_attr); + + if (rc == 0) + ++adapter->txgbe_hwmon_buff.n_hwmon; + + return rc; +} +#endif /* CONFIG_TXGBE_HWMON */ + +static void txgbe_sysfs_del_adapter(struct txgbe_adapter __maybe_unused *adapter) +{ +#ifdef CONFIG_TXGBE_HWMON + int i; + + if (!adapter) + return; + + for (i = 0; i < adapter->txgbe_hwmon_buff.n_hwmon; i++) { + device_remove_file(pci_dev_to_dev(adapter->pdev), + &adapter->txgbe_hwmon_buff.hwmon_list[i].dev_attr); + } + + kfree(adapter->txgbe_hwmon_buff.hwmon_list); + + if (adapter->txgbe_hwmon_buff.device) + hwmon_device_unregister(adapter->txgbe_hwmon_buff.device); +#endif /* CONFIG_TXGBE_HWMON */ +} + +/* called from txgbe_main.c */ +void txgbe_sysfs_exit(struct txgbe_adapter *adapter) +{ + txgbe_sysfs_del_adapter(adapter); +} + +/* called from txgbe_main.c */ +int txgbe_sysfs_init(struct txgbe_adapter *adapter) +{ + int rc = 0; +#ifdef CONFIG_TXGBE_HWMON + struct hwmon_buff *txgbe_hwmon = &adapter->txgbe_hwmon_buff; + int n_attrs; + +#endif /* CONFIG_TXGBE_HWMON */ + if (!adapter) + goto err; + +#ifdef CONFIG_TXGBE_HWMON + + /* Don't create thermal hwmon interface if no sensors present */ + if (TCALL(&adapter->hw, mac.ops.init_thermal_sensor_thresh)) + goto no_thermal; + + /* Allocate space for max attributes + * max num sensors * values (temp, alarmthresh, dalarmthresh) + */ + n_attrs = 3; + txgbe_hwmon->hwmon_list = kcalloc(n_attrs, sizeof(struct hwmon_attr), + GFP_KERNEL); + if (!txgbe_hwmon->hwmon_list) { + rc = -ENOMEM; + goto err; + } + + txgbe_hwmon->device = + hwmon_device_register(pci_dev_to_dev(adapter->pdev)); + if (IS_ERR(txgbe_hwmon->device)) { + rc = PTR_ERR(txgbe_hwmon->device); + goto err; + } + + /* Bail if any hwmon attr struct fails to initialize */ + rc = txgbe_add_hwmon_attr(adapter, TXGBE_HWMON_TYPE_TEMP); + rc |= txgbe_add_hwmon_attr(adapter, TXGBE_HWMON_TYPE_ALARMTHRESH); + rc |= txgbe_add_hwmon_attr(adapter, TXGBE_HWMON_TYPE_DALARMTHRESH); + if (rc) + goto err; + +no_thermal: +#endif /* CONFIG_TXGBE_HWMON */ + goto exit; + +err: + txgbe_sysfs_del_adapter(adapter); +exit: + return rc; +} +#endif /* CONFIG_TXGBE_SYSFS */
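Editor's note: the three hwmon show callbacks in this new file recover their hwmon_attr wrapper from the embedded device_attribute via container_of(). A userspace rendition of that idiom, with toy types in place of the kernel structures:

	#include <stddef.h>
	#include <stdio.h>

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	struct dev_attr { const char *name; };
	struct hwmon_attr { int sensor_id; struct dev_attr dev_attr; };

	/* The callback only receives the embedded member; container_of
	 * walks back to the enclosing wrapper. */
	static int show(struct dev_attr *attr)
	{
		struct hwmon_attr *h =
			container_of(attr, struct hwmon_attr, dev_attr);
		return h->sensor_id;
	}

	int main(void)
	{
		struct hwmon_attr a = { .sensor_id = 7 };
		printf("%d\n", show(&a.dev_attr));
		return 0;
	}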
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ethernet/netswift/txgbe/txgbe_type.h
Changed
@@ -128,6 +128,8 @@ #define TXGBE_WOL_SUP 0x4000 #define TXGBE_WOL_MASK 0x4000 +#define TXGBE_DEV_MASK 0xf0 + /* Combined interface*/ #define TXGBE_ID_SFI_XAUI 0x50 @@ -355,6 +357,7 @@ #define TXGBE_MIS_PWR 0x10000 #define TXGBE_MIS_CTL 0x10004 #define TXGBE_MIS_PF_SM 0x10008 +#define TXGBE_MIS_PRB_CTL 0x10010 #define TXGBE_MIS_ST 0x10028 #define TXGBE_MIS_SWSM 0x1002C #define TXGBE_MIS_RST_ST 0x10030 @@ -392,6 +395,8 @@ #define TXGBE_MIS_RST_ST_RST_INI_SHIFT 8 #define TXGBE_MIS_RST_ST_RST_TIM 0x000000FFU #define TXGBE_MIS_PF_SM_SM 1 +#define TXGBE_MIS_PRB_CTL_LAN0_UP 0x2 +#define TXGBE_MIS_PRB_CTL_LAN1_UP 0x1 /* Sensors for PVT(Process Voltage Temperature) */ #define TXGBE_TS_CTL 0x10300 @@ -2262,6 +2267,12 @@ #define FW_FLASH_UPGRADE_WRITE_CMD 0xE4 #define FW_FLASH_UPGRADE_VERIFY_CMD 0xE5 #define FW_FLASH_UPGRADE_VERIFY_LEN 0x4 +#define FW_DW_OPEN_NOTIFY 0xE9 +#define FW_DW_CLOSE_NOTIFY 0xEA + +#define TXGBE_CHECKSUM_CAP_ST_PASS 0x80658383 +#define TXGBE_CHECKSUM_CAP_ST_FAIL 0x70657376 + /* Host Interface Command Structures */ struct txgbe_hic_hdr { @@ -3028,8 +3039,9 @@ #endif MTD_DEV phy_dev; enum txgbe_link_status link_status; - u16 subsystem_id; u16 tpid[8]; + u16 oem_ssid; + u16 oem_svid; }; #define TCALL(hw, func, args...) (((hw)->func != NULL) \
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/ipvlan/ipvlan_main.c
Changed
@@ -14,7 +14,7 @@ int sysctl_ipvlan_loop_delay = 10; static int ipvlan_default_mode = IPVLAN_MODE_L3; module_param(ipvlan_default_mode, int, 0400); -MODULE_PARM_DESC(ipvlan_default_mode, "set ipvlan default mode: 0 for l2, 1 for l3, 2 for l2e, 3 for l3s, others invalid now"); +MODULE_PARM_DESC(ipvlan_default_mode, "set ipvlan default mode: 0 for l2, 1 for l3, 2 for l3s, 3 for l2e, others invalid now"); static struct ctl_table_header *ipvlan_table_hrd; static struct ctl_table ipvlan_table = {
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/virtio_net.c
Changed
@@ -3255,7 +3255,7 @@ return 0; free_unregister_netdev: - vi->vdev->config->reset(vdev); + virtio_reset_device(vdev); unregister_netdev(dev); free_failover: @@ -3271,7 +3271,7 @@ static void remove_vq_common(struct virtnet_info *vi) { - vi->vdev->config->reset(vi->vdev); + virtio_reset_device(vi->vdev); /* Free unused buffers in both send and recv, if any. */ free_unused_bufs(vi);
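Editor's note: this call site (and the ones in the following files) moves from poking the config op directly to the virtio_reset_device() helper backported from upstream ("virtio: wrap config->reset calls"). As merged upstream, the helper is essentially a thin wrapper; kernel context assumed:

	#include <linux/virtio.h>

	/* Rough shape of the backported helper: a named entry point around
	 * the same config op, so future reset-related fixes land in one
	 * place instead of at every caller. */
	void virtio_reset_device(struct virtio_device *dev)
	{
		dev->config->reset(dev);
	}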
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/wireless/mac80211_hwsim.c
Changed
@@ -4318,7 +4318,7 @@ { int i; - vdev->config->reset(vdev); + virtio_reset_device(vdev); for (i = 0; i < ARRAY_SIZE(hwsim_vqs); i++) { struct virtqueue *vq = hwsim_vqs[i];
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/wireless/microchip/wilc1000/cfg80211.c
Changed
@@ -939,30 +939,52 @@ return; while (index + sizeof(*e) <= len) { + u16 attr_size; + e = (struct wilc_attr_entry *)&buf[index]; - if (e->attr_type == IEEE80211_P2P_ATTR_CHANNEL_LIST) + attr_size = le16_to_cpu(e->attr_len); + + if (index + sizeof(*e) + attr_size > len) + return; + + if (e->attr_type == IEEE80211_P2P_ATTR_CHANNEL_LIST && + attr_size >= (sizeof(struct wilc_attr_ch_list) - sizeof(*e))) ch_list_idx = index; - else if (e->attr_type == IEEE80211_P2P_ATTR_OPER_CHANNEL) + else if (e->attr_type == IEEE80211_P2P_ATTR_OPER_CHANNEL && + attr_size == (sizeof(struct wilc_attr_oper_ch) - sizeof(*e))) op_ch_idx = index; + if (ch_list_idx && op_ch_idx) break; - index += le16_to_cpu(e->attr_len) + sizeof(*e); + + index += sizeof(*e) + attr_size; } if (ch_list_idx) { - u16 attr_size; - struct wilc_ch_list_elem *e; - int i; + unsigned int i; + u16 elem_size; ch_list = (struct wilc_attr_ch_list *)&buf[ch_list_idx]; - attr_size = le16_to_cpu(ch_list->attr_len); - for (i = 0; i < attr_size;) { + /* the number of bytes following the final 'elem' member */ + elem_size = le16_to_cpu(ch_list->attr_len) - + (sizeof(*ch_list) - sizeof(struct wilc_attr_entry)); + for (i = 0; i < elem_size;) { + struct wilc_ch_list_elem *e; + e = (struct wilc_ch_list_elem *)(ch_list->elem + i); + + i += sizeof(*e); + if (i > elem_size) + break; + + i += e->no_of_channels; + if (i > elem_size) + break; + if (e->op_class == WILC_WLAN_OPERATING_CLASS_2_4GHZ) { memset(e->ch_list, sta_ch, e->no_of_channels); break; } - i += e->no_of_channels; } }
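Editor's note: the hardening above is the classic bounds-checked TLV walk — every attribute header and payload length is validated against the remaining buffer before the payload is touched, so a forged attr_len can no longer walk past the end of the frame. A generic, self-contained form of the pattern (3-byte header: u8 type plus little-endian u16 length, matching the wilc attribute layout):

	#include <stdint.h>
	#include <stddef.h>

	static const uint8_t *find_attr(const uint8_t *buf, size_t buf_len,
					uint8_t want, size_t *payload_len)
	{
		size_t off = 0;

		while (off + 3 <= buf_len) {	/* room for the header */
			uint8_t type = buf[off];
			uint16_t len = buf[off + 1] | (buf[off + 2] << 8);

			if (off + 3 + len > buf_len)	/* payload overruns */
				return NULL;
			if (type == want) {
				*payload_len = len;
				return &buf[off + 3];
			}
			off += 3 + len;
		}
		return NULL;
	}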
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/wireless/microchip/wilc1000/hif.c
Changed
@@ -467,14 +467,25 @@ rsn_ie = cfg80211_find_ie(WLAN_EID_RSN, ies->data, ies->len); if (rsn_ie) { + int rsn_ie_len = sizeof(struct element) + rsn_ie[1]; int offset = 8; - param->mode_802_11i = 2; - param->rsn_found = true; /* extract RSN capabilities */ - offset += (rsn_ie[offset] * 4) + 2; - offset += (rsn_ie[offset] * 4) + 2; - memcpy(param->rsn_cap, &rsn_ie[offset], 2); + if (offset < rsn_ie_len) { + /* skip over pairwise suites */ + offset += (rsn_ie[offset] * 4) + 2; + + if (offset < rsn_ie_len) { + /* skip over authentication suites */ + offset += (rsn_ie[offset] * 4) + 2; + + if (offset + 1 < rsn_ie_len) { + param->mode_802_11i = 2; + param->rsn_found = true; + memcpy(param->rsn_cap, &rsn_ie[offset], 2); + } + } + } } if (param->rsn_found) {
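Editor's note: the same class of fix as the previous file, applied to the RSN IE. Each suite-count field advances the cursor by count * 4 + 2 bytes, and every hop is now checked against the element length before the next field is read. The walk, extracted into a standalone form (rsn_ie points at the raw element: id, len, then version(2) and group cipher(4), so the pairwise-suite count sits at offset 8):

	#include <stdint.h>

	/* Returns the offset of the 2-byte RSN capabilities, or -1 if any
	 * hop would leave the element. */
	static int rsn_caps_offset(const uint8_t *rsn_ie, int rsn_ie_len)
	{
		int offset = 8;

		if (offset >= rsn_ie_len)
			return -1;
		offset += rsn_ie[offset] * 4 + 2;	/* pairwise suites */
		if (offset >= rsn_ie_len)
			return -1;
		offset += rsn_ie[offset] * 4 + 2;	/* AKM suites */
		if (offset + 1 >= rsn_ie_len)
			return -1;			/* caps must fit */
		return offset;
	}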
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/xen-netback/common.h
Changed
@@ -395,7 +395,7 @@ bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread); void xenvif_rx_action(struct xenvif_queue *queue); -void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb); +bool xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb); void xenvif_carrier_on(struct xenvif *vif);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/xen-netback/interface.c
Changed
@@ -269,14 +269,16 @@ if (vif->hash.alg == XEN_NETIF_CTRL_HASH_ALGORITHM_NONE) skb_clear_hash(skb); - xenvif_rx_queue_tail(queue, skb); + if (!xenvif_rx_queue_tail(queue, skb)) + goto drop; + xenvif_kick_thread(queue); return NETDEV_TX_OK; drop: vif->dev->stats.tx_dropped++; - dev_kfree_skb(skb); + dev_kfree_skb_any(skb); return NETDEV_TX_OK; }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/xen-netback/netback.c
Changed
@@ -330,10 +330,13 @@ struct xenvif_tx_cb { - u16 pending_idx; + u16 copy_pending_idxXEN_NETBK_LEGACY_SLOTS_MAX + 1; + u8 copy_count; }; #define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb) +#define copy_pending_idx(skb, i) (XENVIF_TX_CB(skb)->copy_pending_idxi) +#define copy_count(skb) (XENVIF_TX_CB(skb)->copy_count) static inline void xenvif_tx_create_map_op(struct xenvif_queue *queue, u16 pending_idx, @@ -368,31 +371,93 @@ return skb; } -static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif_queue *queue, - struct sk_buff *skb, - struct xen_netif_tx_request *txp, - struct gnttab_map_grant_ref *gop, - unsigned int frag_overflow, - struct sk_buff *nskb) +static void xenvif_get_requests(struct xenvif_queue *queue, + struct sk_buff *skb, + struct xen_netif_tx_request *first, + struct xen_netif_tx_request *txfrags, + unsigned *copy_ops, + unsigned *map_ops, + unsigned int frag_overflow, + struct sk_buff *nskb, + unsigned int extra_count, + unsigned int data_len) { struct skb_shared_info *shinfo = skb_shinfo(skb); skb_frag_t *frags = shinfo->frags; - u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx; - int start; + u16 pending_idx; pending_ring_idx_t index; unsigned int nr_slots; + struct gnttab_copy *cop = queue->tx_copy_ops + *copy_ops; + struct gnttab_map_grant_ref *gop = queue->tx_map_ops + *map_ops; + struct xen_netif_tx_request *txp = first; + + nr_slots = shinfo->nr_frags + 1; + + copy_count(skb) = 0; + + /* Create copy ops for exactly data_len bytes into the skb head. */ + __skb_put(skb, data_len); + while (data_len > 0) { + int amount = data_len > txp->size ? txp->size : data_len; + + cop->source.u.ref = txp->gref; + cop->source.domid = queue->vif->domid; + cop->source.offset = txp->offset; + + cop->dest.domid = DOMID_SELF; + cop->dest.offset = (offset_in_page(skb->data + + skb_headlen(skb) - + data_len)) & ~XEN_PAGE_MASK; + cop->dest.u.gmfn = virt_to_gfn(skb->data + skb_headlen(skb) + - data_len); + + cop->len = amount; + cop->flags = GNTCOPY_source_gref; - nr_slots = shinfo->nr_frags; + index = pending_index(queue->pending_cons); + pending_idx = queue->pending_ringindex; + callback_param(queue, pending_idx).ctx = NULL; + copy_pending_idx(skb, copy_count(skb)) = pending_idx; + copy_count(skb)++; + + cop++; + data_len -= amount; - /* Skip first skb fragment if it is on same page as header fragment. */ - start = (frag_get_pending_idx(&shinfo->frags0) == pending_idx); + if (amount == txp->size) { + /* The copy op covered the full tx_request */ + + memcpy(&queue->pending_tx_infopending_idx.req, + txp, sizeof(*txp)); + queue->pending_tx_infopending_idx.extra_count = + (txp == first) ? extra_count : 0; + + if (txp == first) + txp = txfrags; + else + txp++; + queue->pending_cons++; + nr_slots--; + } else { + /* The copy op partially covered the tx_request. + * The remainder will be mapped. + */ + txp->offset += amount; + txp->size -= amount; + } + } - for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots; - shinfo->nr_frags++, txp++, gop++) { + for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots; + shinfo->nr_frags++, gop++) { index = pending_index(queue->pending_cons++); pending_idx = queue->pending_ringindex; - xenvif_tx_create_map_op(queue, pending_idx, txp, 0, gop); + xenvif_tx_create_map_op(queue, pending_idx, txp, + txp == first ? 
extra_count : 0, gop); frag_set_pending_idx(&fragsshinfo->nr_frags, pending_idx); + + if (txp == first) + txp = txfrags; + else + txp++; } if (frag_overflow) { @@ -413,7 +478,8 @@ skb_shinfo(skb)->frag_list = nskb; } - return gop; + (*copy_ops) = cop - queue->tx_copy_ops; + (*map_ops) = gop - queue->tx_map_ops; } static inline void xenvif_grant_handle_set(struct xenvif_queue *queue, @@ -449,7 +515,7 @@ struct gnttab_copy **gopp_copy) { struct gnttab_map_grant_ref *gop_map = *gopp_map; - u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx; + u16 pending_idx; /* This always points to the shinfo of the skb being checked, which * could be either the first or the one on the frag_list */ @@ -460,24 +526,37 @@ struct skb_shared_info *first_shinfo = NULL; int nr_frags = shinfo->nr_frags; const bool sharedslot = nr_frags && - frag_get_pending_idx(&shinfo->frags0) == pending_idx; - int i, err; + frag_get_pending_idx(&shinfo->frags0) == + copy_pending_idx(skb, copy_count(skb) - 1); + int i, err = 0; - /* Check status of header. */ - err = (*gopp_copy)->status; - if (unlikely(err)) { - if (net_ratelimit()) - netdev_dbg(queue->vif->dev, - "Grant copy of header failed! status: %d pending_idx: %u ref: %u\n", - (*gopp_copy)->status, - pending_idx, - (*gopp_copy)->source.u.ref); - /* The first frag might still have this slot mapped */ - if (!sharedslot) - xenvif_idx_release(queue, pending_idx, - XEN_NETIF_RSP_ERROR); + for (i = 0; i < copy_count(skb); i++) { + int newerr; + + /* Check status of header. */ + pending_idx = copy_pending_idx(skb, i); + + newerr = (*gopp_copy)->status; + if (likely(!newerr)) { + /* The first frag might still have this slot mapped */ + if (i < copy_count(skb) - 1 || !sharedslot) + xenvif_idx_release(queue, pending_idx, + XEN_NETIF_RSP_OKAY); + } else { + err = newerr; + if (net_ratelimit()) + netdev_dbg(queue->vif->dev, + "Grant copy of header failed! status: %d pending_idx: %u ref: %u\n", + (*gopp_copy)->status, + pending_idx, + (*gopp_copy)->source.u.ref); + /* The first frag might still have this slot mapped */ + if (i < copy_count(skb) - 1 || !sharedslot) + xenvif_idx_release(queue, pending_idx, + XEN_NETIF_RSP_ERROR); + } + (*gopp_copy)++; } - (*gopp_copy)++; check_frags: for (i = 0; i < nr_frags; i++, gop_map++) { @@ -524,14 +603,6 @@ if (err) continue; - /* First error: if the header haven't shared a slot with the - * first frag, release it as well. - */ - if (!sharedslot) - xenvif_idx_release(queue, - XENVIF_TX_CB(skb)->pending_idx, - XEN_NETIF_RSP_OKAY); - /* Invalidate preceding fragments of this skb. */ for (j = 0; j < i; j++) { pending_idx = frag_get_pending_idx(&shinfo->fragsj); @@ -801,7 +872,6 @@ unsigned *copy_ops, unsigned *map_ops) { - struct gnttab_map_grant_ref *gop = queue->tx_map_ops; struct sk_buff *skb, *nskb; int ret; unsigned int frag_overflow; @@ -883,8 +953,12 @@ continue; } + data_len = (txreq.size > XEN_NETBACK_TX_COPY_LEN) ? + XEN_NETBACK_TX_COPY_LEN : txreq.size; + ret = xenvif_count_requests(queue, &txreq, extra_count, txfrags, work_to_do); + if (unlikely(ret < 0)) break; @@ -910,9 +984,8 @@ index = pending_index(queue->pending_cons); pending_idx = queue->pending_ringindex; - data_len = (txreq.size > XEN_NETBACK_TX_COPY_LEN && - ret < XEN_NETBK_LEGACY_SLOTS_MAX) ? 
- XEN_NETBACK_TX_COPY_LEN : txreq.size; + if (ret >= XEN_NETBK_LEGACY_SLOTS_MAX - 1 && data_len < txreq.size) + data_len = txreq.size; skb = xenvif_alloc_skb(data_len); if (unlikely(skb == NULL)) { @@ -923,8 +996,6 @@ } skb_shinfo(skb)->nr_frags = ret; - if (data_len < txreq.size) - skb_shinfo(skb)->nr_frags++; /* At this point shinfo->nr_frags is in fact the number of * slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX. */ @@ -986,54 +1057,19 @@ type); } - XENVIF_TX_CB(skb)->pending_idx = pending_idx; - - __skb_put(skb, data_len); - queue->tx_copy_ops*copy_ops.source.u.ref = txreq.gref; - queue->tx_copy_ops*copy_ops.source.domid = queue->vif->domid; - queue->tx_copy_ops*copy_ops.source.offset = txreq.offset; - - queue->tx_copy_ops*copy_ops.dest.u.gmfn = - virt_to_gfn(skb->data); - queue->tx_copy_ops*copy_ops.dest.domid = DOMID_SELF; - queue->tx_copy_ops*copy_ops.dest.offset = - offset_in_page(skb->data) & ~XEN_PAGE_MASK; - - queue->tx_copy_ops*copy_ops.len = data_len; - queue->tx_copy_ops*copy_ops.flags = GNTCOPY_source_gref; - - (*copy_ops)++; - - if (data_len < txreq.size) { - frag_set_pending_idx(&skb_shinfo(skb)->frags0, - pending_idx); - xenvif_tx_create_map_op(queue, pending_idx, &txreq, - extra_count, gop); - gop++; - } else { - frag_set_pending_idx(&skb_shinfo(skb)->frags0, - INVALID_PENDING_IDX); - memcpy(&queue->pending_tx_infopending_idx.req, - &txreq, sizeof(txreq)); - queue->pending_tx_infopending_idx.extra_count = - extra_count; - } - - queue->pending_cons++; - - gop = xenvif_get_requests(queue, skb, txfrags, gop, - frag_overflow, nskb); + xenvif_get_requests(queue, skb, &txreq, txfrags, copy_ops, + map_ops, frag_overflow, nskb, extra_count, + data_len); __skb_queue_tail(&queue->tx_queue, skb); queue->tx.req_cons = idx; - if (((gop-queue->tx_map_ops) >= ARRAY_SIZE(queue->tx_map_ops)) || + if ((*map_ops >= ARRAY_SIZE(queue->tx_map_ops)) || (*copy_ops >= ARRAY_SIZE(queue->tx_copy_ops))) break; } - (*map_ops) = gop - queue->tx_map_ops; return; } @@ -1112,9 +1148,8 @@ while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) { struct xen_netif_tx_request *txp; u16 pending_idx; - unsigned data_len; - pending_idx = XENVIF_TX_CB(skb)->pending_idx; + pending_idx = copy_pending_idx(skb, 0); txp = &queue->pending_tx_infopending_idx.req; /* Check the remap error code. */ @@ -1133,18 +1168,6 @@ continue; } - data_len = skb->len; - callback_param(queue, pending_idx).ctx = NULL; - if (data_len < txp->size) { - /* Append the packet payload as a fragment. */ - txp->offset += data_len; - txp->size -= data_len; - } else { - /* Schedule a response immediately. */ - xenvif_idx_release(queue, pending_idx, - XEN_NETIF_RSP_OKAY); - } - if (txp->flags & XEN_NETTXF_csum_blank) skb->ip_summed = CHECKSUM_PARTIAL; else if (txp->flags & XEN_NETTXF_data_validated) @@ -1330,7 +1353,7 @@ /* Called after netfront has transmitted */ int xenvif_tx_action(struct xenvif_queue *queue, int budget) { - unsigned nr_mops, nr_cops = 0; + unsigned nr_mops = 0, nr_cops = 0; int work_done, ret; if (unlikely(!tx_work_todo(queue)))
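Editor's note: the rewritten xenvif_get_requests() above always grant-copies exactly data_len bytes into the skb head — possibly spanning several tx requests, one copy op per request consumed — and maps whatever remains as frags, replacing the old single-copy-plus-maps layout. A toy model of just the split decision; plain structs stand in for the Xen request and op types:

	#include <stddef.h>

	struct req { size_t offset, size; };

	/* Consume requests for the copied head; returns the index of the
	 * first request left for mapping, and the copy-op count in *ncopy. */
	static size_t split_copy_map(struct req *reqs, size_t n,
				     size_t data_len, size_t *ncopy)
	{
		size_t i = 0, c = 0;

		while (data_len > 0 && i < n) {
			size_t amount = data_len < reqs[i].size ?
					data_len : reqs[i].size;

			c++;			/* emit one grant-copy op */
			data_len -= amount;
			if (amount == reqs[i].size) {
				i++;		/* request fully copied */
			} else {
				reqs[i].offset += amount; /* rest is mapped */
				reqs[i].size -= amount;
			}
		}
		*ncopy = c;
		return i;
	}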
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/net/xen-netback/rx.c
Changed
@@ -82,9 +82,10 @@ return false; } -void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb) +bool xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb) { unsigned long flags; + bool ret = true; spin_lock_irqsave(&queue->rx_queue.lock, flags); @@ -92,8 +93,7 @@ struct net_device *dev = queue->vif->dev; netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id)); - kfree_skb(skb); - queue->vif->dev->stats.rx_dropped++; + ret = false; } else { if (skb_queue_empty(&queue->rx_queue)) xenvif_update_needed_slots(queue, skb); @@ -104,6 +104,8 @@ } spin_unlock_irqrestore(&queue->rx_queue.lock, flags); + + return ret; } static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
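Editor's note: together with the interface.c hunk earlier, this changes the ownership contract. xenvif_rx_queue_tail() now only reports whether the skb was queued; the caller counts the drop and frees with dev_kfree_skb_any(), which is legal from any context (the start_xmit path can run with IRQs disabled, where plain dev_kfree_skb is not safe). Schematic, with toy types:

	#include <stdbool.h>

	struct toy_queue { int space; };
	struct toy_stats { unsigned long tx_dropped; };

	static bool queue_tail(struct toy_queue *q)
	{
		if (!q->space)
			return false;	/* no room: caller keeps the skb */
		q->space--;
		return true;
	}

	static void start_xmit(struct toy_queue *q, struct toy_stats *stats)
	{
		if (!queue_tail(q)) {
			stats->tx_dropped++;	/* drop accounted here */
			/* dev_kfree_skb_any(skb) in the real code */
		}
	}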
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/nvdimm/virtio_pmem.c
Changed
@@ -105,7 +105,7 @@ nvdimm_bus_unregister(nvdimm_bus); vdev->config->del_vqs(vdev); - vdev->config->reset(vdev); + virtio_reset_device(vdev); } static struct virtio_driver virtio_pmem_driver = {
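This hunk (and the matching ones in virtio_rpmsg_bus.c and virtio_scsi.c below) replaces the raw config-op call with the virtio core helper. Assuming the upstream form of that helper in drivers/virtio/virtio.c, it is a thin wrapper, so behavior is unchanged; a sketch:

/* Sketch of the helper these hunks switch to, assuming its
 * upstream definition: it dispatches to the transport's reset op. */
void virtio_reset_device(struct virtio_device *dev)
{
        dev->config->reset(dev);
}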
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/perf/hisilicon/hisi_uncore_pmu.c
Changed
@@ -437,7 +437,6 @@ if (mt) { switch (read_cpuid_part_number()) { case HISI_CPU_PART_TSV110: - case HISI_CPU_PART_TSV200: case ARM_CPU_PART_CORTEX_A55: sccl = aff2 >> 3; ccl = aff2 & 0x7;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/Kconfig
Changed
@@ -16,6 +16,8 @@ if X86_PLATFORM_DEVICES +source "drivers/platform/x86/intel/ifs/Kconfig" + config ACPI_WMI tristate "WMI" depends on ACPI
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/Makefile
Changed
@@ -83,6 +83,7 @@ obj-$(CONFIG_INTEL_MENLOW) += intel_menlow.o obj-$(CONFIG_INTEL_OAKTRAIL) += intel_oaktrail.o obj-$(CONFIG_INTEL_VBTN) += intel-vbtn.o +obj-$(CONFIG_INTEL_IFS) += intel/ifs/ # Microsoft obj-$(CONFIG_SURFACE3_WMI) += surface3-wmi.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel
Added
+(directory)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs
Added
+(directory)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/Kconfig
Added
@@ -0,0 +1,13 @@ +config INTEL_IFS + tristate "Intel In Field Scan" + depends on X86 && CPU_SUP_INTEL && 64BIT && SMP + select INTEL_IFS_DEVICE + help + Enable support for the In Field Scan capability in select + CPUs. The capability allows for running low level tests via + a scan image distributed by Intel via Github to validate CPU + operation beyond baseline RAS capabilities. To compile this + support as a module, choose M here. The module will be called + intel_ifs. + + If unsure, say N.
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/Makefile
Added
@@ -0,0 +1,3 @@ +obj-$(CONFIG_INTEL_IFS) += intel_ifs.o + +intel_ifs-objs := core.o load.o runtest.o sysfs.o
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/core.c
Added
@@ -0,0 +1,73 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2022 Intel Corporation. */
+
+#include <linux/module.h>
+#include <linux/kdev_t.h>
+#include <linux/semaphore.h>
+
+#include <asm/cpu_device_id.h>
+
+#include "ifs.h"
+
+#define X86_MATCH(model) \
+ X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, \
+ INTEL_FAM6_##model, X86_FEATURE_CORE_CAPABILITIES, NULL)
+
+static const struct x86_cpu_id ifs_cpu_ids[] __initconst = {
+ X86_MATCH(SAPPHIRERAPIDS_X),
+ {}
+};
+MODULE_DEVICE_TABLE(x86cpu, ifs_cpu_ids);
+
+static struct ifs_device ifs_device = {
+ .data = {
+ .integrity_cap_bit = MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT,
+ },
+ .misc = {
+ .name = "intel_ifs_0",
+ .nodename = "intel_ifs/0",
+ .minor = MISC_DYNAMIC_MINOR,
+ },
+};
+
+static int __init ifs_init(void)
+{
+ const struct x86_cpu_id *m;
+ u64 msrval;
+
+ m = x86_match_cpu(ifs_cpu_ids);
+ if (!m)
+ return -ENODEV;
+
+ if (rdmsrl_safe(MSR_IA32_CORE_CAPS, &msrval))
+ return -ENODEV;
+
+ if (!(msrval & MSR_IA32_CORE_CAPS_INTEGRITY_CAPS))
+ return -ENODEV;
+
+ if (rdmsrl_safe(MSR_INTEGRITY_CAPS, &msrval))
+ return -ENODEV;
+
+ ifs_device.misc.groups = ifs_get_groups();
+
+ if ((msrval & BIT(ifs_device.data.integrity_cap_bit)) &&
+ !misc_register(&ifs_device.misc)) {
+ down(&ifs_sem);
+ ifs_load_firmware(ifs_device.misc.this_device);
+ up(&ifs_sem);
+ return 0;
+ }
+
+ return -ENODEV;
+}
+
+static void __exit ifs_exit(void)
+{
+ misc_deregister(&ifs_device.misc);
+}
+
+module_init(ifs_init);
+module_exit(ifs_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Intel In Field Scan (IFS) device");
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/ifs.h
Added
@@ -0,0 +1,234 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2022 Intel Corporation. */
+
+#ifndef _IFS_H_
+#define _IFS_H_
+
+/**
+ * DOC: In-Field Scan
+ *
+ * =============
+ * In-Field Scan
+ * =============
+ *
+ * Introduction
+ * ------------
+ *
+ * In Field Scan (IFS) is a hardware feature to run circuit level tests on
+ * a CPU core to detect problems that are not caught by parity or ECC checks.
+ * Future CPUs will support more than one type of test which will show up
+ * with a new platform-device instance-id; for now only .0 is exposed.
+ *
+ *
+ * IFS Image
+ * ---------
+ *
+ * Intel provides a firmware file containing the scan tests via
+ * github [#f1]_. Similar to microcode there is a separate file for each
+ * family-model-stepping.
+ *
+ * IFS Image Loading
+ * -----------------
+ *
+ * The driver loads the tests into BIOS-reserved memory local to each CPU
+ * socket in a two step process using writes to MSRs: first the SHA hashes
+ * for the tests are loaded, then the tests themselves. Status MSRs provide
+ * feedback on the success/failure of these steps. When a new test file
+ * is installed it can be loaded by writing to the driver reload file::
+ *
+ * # echo 1 > /sys/devices/virtual/misc/intel_ifs_0/reload
+ *
+ * Similar to microcode, the current version of the scan tests is stored
+ * in a fixed location: /lib/firmware/intel/ifs.0/family-model-stepping.scan
+ *
+ * Running tests
+ * -------------
+ *
+ * Tests are run by the driver synchronizing execution of all threads on a
+ * core and then writing to the ACTIVATE_SCAN MSR on all threads. Instruction
+ * execution continues when:
+ *
+ * 1) All tests have completed.
+ * 2) Execution was interrupted.
+ * 3) A test detected a problem.
+ *
+ * Note that ALL THREADS ON THE CORE ARE EFFECTIVELY OFFLINE FOR THE
+ * DURATION OF THE TEST. This can be up to 200 milliseconds. If the system
+ * is running latency sensitive applications that cannot tolerate an
+ * interruption of this magnitude, the system administrator must arrange
+ * to migrate those applications to other cores before running a core test.
+ * It may also be necessary to redirect interrupts to other CPUs.
+ *
+ * In all cases reading the SCAN_STATUS MSR provides details on what
+ * happened. The driver makes the value of this MSR visible to applications
+ * via the "details" file (see below). Interrupted tests may be restarted.
+ *
+ * The IFS driver provides sysfs interfaces via /sys/devices/virtual/misc/intel_ifs_0/
+ * to control execution:
+ *
+ * Test a specific core::
+ *
+ * # echo <cpu#> > /sys/devices/virtual/misc/intel_ifs_0/run_test
+ *
+ * When HT is enabled any of the sibling cpu# can be specified to test
+ * its corresponding physical core. Since the tests are per physical core,
+ * the result of testing any thread is the same. All siblings must be online
+ * to run a core test. It is only necessary to test one thread.
+ *
+ * For example, to test the core corresponding to cpu5::
+ *
+ * # echo 5 > /sys/devices/virtual/misc/intel_ifs_0/run_test
+ *
+ * Results of the last test are provided in /sys::
+ *
+ * $ cat /sys/devices/virtual/misc/intel_ifs_0/status
+ * pass
+ *
+ * Status can be one of pass, fail, untested.
+ *
+ * Additional details of the last test are provided by the details file::
+ *
+ * $ cat /sys/devices/virtual/misc/intel_ifs_0/details
+ * 0x8081
+ *
+ * The details file reports the hex value of the SCAN_STATUS MSR.
+ * Hardware defined error codes are documented in volume 4 of the Intel
+ * Software Developer's Manual but the error_code field may contain one of
+ * the following driver defined software codes:
+ *
+ * +------+--------------------+
+ * | 0xFD | Software timeout |
+ * +------+--------------------+
+ * | 0xFE | Partial completion |
+ * +------+--------------------+
+ *
+ * Driver design choices
+ * ---------------------
+ *
+ * 1) The ACTIVATE_SCAN MSR allows for running any consecutive subrange of
+ * available tests. But the driver always tries to run all tests and only
+ * uses the subrange feature to restart an interrupted test.
+ *
+ * 2) Hardware allows for some number of cores to be tested in parallel.
+ * The driver does not make use of this, it only tests one core at a time.
+ *
+ * .. [#f1] https://github.com/intel/TBD
+ */
+#include <linux/device.h>
+#include <linux/miscdevice.h>
+
+#define MSR_COPY_SCAN_HASHES 0x000002c2
+#define MSR_SCAN_HASHES_STATUS 0x000002c3
+#define MSR_AUTHENTICATE_AND_COPY_CHUNK 0x000002c4
+#define MSR_CHUNKS_AUTHENTICATION_STATUS 0x000002c5
+#define MSR_ACTIVATE_SCAN 0x000002c6
+#define MSR_SCAN_STATUS 0x000002c7
+#define SCAN_NOT_TESTED 0
+#define SCAN_TEST_PASS 1
+#define SCAN_TEST_FAIL 2
+
+/* MSR_SCAN_HASHES_STATUS bit fields */
+union ifs_scan_hashes_status {
+ u64 data;
+ struct {
+ u32 chunk_size :16;
+ u32 num_chunks :8;
+ u32 rsvd1 :8;
+ u32 error_code :8;
+ u32 rsvd2 :11;
+ u32 max_core_limit :12;
+ u32 valid :1;
+ };
+};
+
+/* MSR_CHUNKS_AUTH_STATUS bit fields */
+union ifs_chunks_auth_status {
+ u64 data;
+ struct {
+ u32 valid_chunks :8;
+ u32 total_chunks :8;
+ u32 rsvd1 :16;
+ u32 error_code :8;
+ u32 rsvd2 :24;
+ };
+};
+
+/* MSR_ACTIVATE_SCAN bit fields */
+union ifs_scan {
+ u64 data;
+ struct {
+ u32 start :8;
+ u32 stop :8;
+ u32 rsvd :16;
+ u32 delay :31;
+ u32 sigmce :1;
+ };
+};
+
+/* MSR_SCAN_STATUS bit fields */
+union ifs_status {
+ u64 data;
+ struct {
+ u32 chunk_num :8;
+ u32 chunk_stop_index :8;
+ u32 rsvd1 :16;
+ u32 error_code :8;
+ u32 rsvd2 :22;
+ u32 control_error :1;
+ u32 signature_error :1;
+ };
+};
+
+/*
+ * Driver populated error-codes
+ * 0xFD: Test timed out before completing all the chunks.
+ * 0xFE: not all scan chunks were executed. Maximum forward progress retries exceeded.
+ */
+#define IFS_SW_TIMEOUT 0xFD
+#define IFS_SW_PARTIAL_COMPLETION 0xFE
+
+/**
+ * struct ifs_data - attributes related to intel IFS driver
+ * @integrity_cap_bit: MSR_INTEGRITY_CAPS bit enumerating this test
+ * @loaded_version: stores the currently loaded ifs image version.
+ * @loaded: If a valid test binary has been loaded into the memory
+ * @loading_error: Error occurred on another CPU while loading image
+ * @valid_chunks: number of chunks which could be validated.
+ * @status: it holds simple status pass/fail/untested
+ * @scan_details: opaque scan status code from h/w
+ */
+struct ifs_data {
+ int integrity_cap_bit;
+ int loaded_version;
+ bool loaded;
+ bool loading_error;
+ int valid_chunks;
+ int status;
+ u64 scan_details;
+};
+
+struct ifs_work {
+ struct work_struct w;
+ struct device *dev;
+};
+
+struct ifs_device {
+ struct ifs_data data;
+ struct miscdevice misc;
+};
+
+static inline struct ifs_data *ifs_get_data(struct device *dev)
+{
+ struct miscdevice *m = dev_get_drvdata(dev);
+ struct ifs_device *d = container_of(m, struct ifs_device, misc);
+
+ return &d->data;
+}
+
+void ifs_load_firmware(struct device *dev);
+int do_core_test(int cpu, struct device *dev);
+const struct attribute_group **ifs_get_groups(void);
+
+extern struct semaphore ifs_sem;
+
+#endif
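Since the "details" file exposes the raw SCAN_STATUS value, a small user-space decode can be derived directly from the union ifs_status layout above. The bit positions below are computed from this header's bitfields on little-endian x86 (a sketch, not an official decoder; the example value 0x8081 is the one shown in the DOC comment):

/* Sketch: decode a "details" value using the ifs_status layout above. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t details = 0x8081;                     /* example from the doc */
        unsigned chunk_num  = details & 0xff;          /* bits 0-7   */
        unsigned chunk_stop = (details >> 8) & 0xff;   /* bits 8-15  */
        unsigned error_code = (details >> 32) & 0xff;  /* bits 32-39 */
        unsigned ctrl_err   = (details >> 62) & 1;     /* bit 62     */
        unsigned sig_err    = (details >> 63) & 1;     /* bit 63     */

        printf("chunk=%u stop=%u err=0x%x ctrl=%u sig=%u\n",
               chunk_num, chunk_stop, error_code, ctrl_err, sig_err);
        return 0;
}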
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/load.c
Added
@@ -0,0 +1,267 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2022 Intel Corporation. */
+
+#include <linux/firmware.h>
+#include <asm/cpu.h>
+#include <linux/slab.h>
+#include <asm/microcode_intel.h>
+
+#include "ifs.h"
+
+struct ifs_header {
+ u32 header_ver;
+ u32 blob_revision;
+ u32 date;
+ u32 processor_sig;
+ u32 check_sum;
+ u32 loader_rev;
+ u32 processor_flags;
+ u32 metadata_size;
+ u32 total_size;
+ u32 fusa_info;
+ u64 reserved;
+};
+
+#define IFS_HEADER_SIZE (sizeof(struct ifs_header))
+static struct ifs_header *ifs_header_ptr; /* pointer to the ifs image header */
+static u64 ifs_hash_ptr; /* Address of ifs metadata (hash) */
+static u64 ifs_test_image_ptr; /* 256B aligned address of test pattern */
+static DECLARE_COMPLETION(ifs_done);
+
+static const char * const scan_hash_status[] = {
+ [0] = "No error reported",
+ [1] = "Attempt to copy scan hashes when copy already in progress",
+ [2] = "Secure Memory not set up correctly",
+ [3] = "FuSaInfo.ProgramID does not match or ff-mm-ss does not match",
+ [4] = "Reserved",
+ [5] = "Integrity check failed",
+ [6] = "Scan reload or test is in progress"
+};
+
+static const char * const scan_authentication_status[] = {
+ [0] = "No error reported",
+ [1] = "Attempt to authenticate a chunk which is already marked as authentic",
+ [2] = "Chunk authentication error. The hash of chunk did not match expected value"
+};
+
+/*
+ * To copy scan hashes and authenticate test chunks, the initiating cpu must
+ * point EDX:EAX to the linear address of the test image.
+ * Use wrmsr(MSR_COPY_SCAN_HASHES) for the scan hash copy and
+ * wrmsr(MSR_AUTHENTICATE_AND_COPY_CHUNK) for test chunk authentication.
+ */
+static void copy_hashes_authenticate_chunks(struct work_struct *work)
+{
+ struct ifs_work *local_work = container_of(work, struct ifs_work, w);
+ union ifs_scan_hashes_status hashes_status;
+ union ifs_chunks_auth_status chunk_status;
+ struct device *dev = local_work->dev;
+ int i, num_chunks, chunk_size;
+ struct ifs_data *ifsd;
+ u64 linear_addr, base;
+ u32 err_code;
+
+ ifsd = ifs_get_data(dev);
+ /* run scan hash copy */
+ wrmsrl(MSR_COPY_SCAN_HASHES, ifs_hash_ptr);
+ rdmsrl(MSR_SCAN_HASHES_STATUS, hashes_status.data);
+
+ /* enumerate the scan image information */
+ num_chunks = hashes_status.num_chunks;
+ chunk_size = hashes_status.chunk_size * 1024;
+ err_code = hashes_status.error_code;
+
+ if (!hashes_status.valid) {
+ ifsd->loading_error = true;
+ if (err_code >= ARRAY_SIZE(scan_hash_status)) {
+ dev_err(dev, "invalid error code 0x%x for hash copy\n", err_code);
+ goto done;
+ }
+ dev_err(dev, "Hash copy error : %s", scan_hash_status[err_code]);
+ goto done;
+ }
+
+ /* base linear address to the scan data */
+ base = ifs_test_image_ptr;
+
+ /* scan data authentication and copy chunks to secured memory */
+ for (i = 0; i < num_chunks; i++) {
+ linear_addr = base + i * chunk_size;
+ linear_addr |= i;
+
+ wrmsrl(MSR_AUTHENTICATE_AND_COPY_CHUNK, linear_addr);
+ rdmsrl(MSR_CHUNKS_AUTHENTICATION_STATUS, chunk_status.data);
+
+ ifsd->valid_chunks = chunk_status.valid_chunks;
+ err_code = chunk_status.error_code;
+
+ if (err_code) {
+ ifsd->loading_error = true;
+ if (err_code >= ARRAY_SIZE(scan_authentication_status)) {
+ dev_err(dev,
+ "invalid error code 0x%x for authentication\n", err_code);
+ goto done;
+ }
+ dev_err(dev, "Chunk authentication error %s\n",
+ scan_authentication_status[err_code]);
+ goto done;
+ }
+ }
+done:
+ complete(&ifs_done);
+}
+
+/*
+ * IFS requires the scan chunks to be authenticated on each socket in the
+ * platform. Once a test chunk is authenticated, it is automatically copied
+ * to secured memory and authentication proceeds with the next chunk.
+ */
+static int scan_chunks_sanity_check(struct device *dev)
+{
+ int metadata_size, curr_pkg, cpu, ret = -ENOMEM;
+ struct ifs_data *ifsd = ifs_get_data(dev);
+ bool *package_authenticated;
+ struct ifs_work local_work;
+ char *test_ptr;
+
+ package_authenticated = kcalloc(topology_max_packages(), sizeof(bool), GFP_KERNEL);
+ if (!package_authenticated)
+ return ret;
+
+ metadata_size = ifs_header_ptr->metadata_size;
+
+ /* Spec says that if the Meta Data Size = 0 then it should be treated as 2000 */
+ if (metadata_size == 0)
+ metadata_size = 2000;
+
+ /* Scan chunk start must be 256 byte aligned */
+ if ((metadata_size + IFS_HEADER_SIZE) % 256) {
+ dev_err(dev, "Scan pattern offset within the binary is not 256 byte aligned\n");
+ return -EINVAL;
+ }
+
+ test_ptr = (char *)ifs_header_ptr + IFS_HEADER_SIZE + metadata_size;
+ ifsd->loading_error = false;
+
+ ifs_test_image_ptr = (u64)test_ptr;
+ ifsd->loaded_version = ifs_header_ptr->blob_revision;
+
+ /* copy the scan hash and authenticate per package */
+ cpus_read_lock();
+ for_each_online_cpu(cpu) {
+ curr_pkg = topology_physical_package_id(cpu);
+ if (package_authenticated[curr_pkg])
+ continue;
+ reinit_completion(&ifs_done);
+ local_work.dev = dev;
+ INIT_WORK(&local_work.w, copy_hashes_authenticate_chunks);
+ schedule_work_on(cpu, &local_work.w);
+ wait_for_completion(&ifs_done);
+ if (ifsd->loading_error)
+ goto out;
+ package_authenticated[curr_pkg] = 1;
+ }
+ ret = 0;
+out:
+ cpus_read_unlock();
+ kfree(package_authenticated);
+
+ return ret;
+}
+
+static int ifs_sanity_check(struct device *dev,
+ const struct microcode_header_intel *mc_header)
+{
+ unsigned long total_size, data_size;
+ u32 sum, *mc;
+ int i;
+
+ total_size = get_totalsize(mc_header);
+ data_size = get_datasize(mc_header);
+
+ if ((data_size + MC_HEADER_SIZE > total_size) || (total_size % sizeof(u32))) {
+ dev_err(dev, "bad ifs data file size.\n");
+ return -EINVAL;
+ }
+
+ if (mc_header->ldrver != 1 || mc_header->hdrver != 1) {
+ dev_err(dev, "invalid/unknown ifs update format.\n");
+ return -EINVAL;
+ }
+
+ mc = (u32 *)mc_header;
+ sum = 0;
+ for (i = 0; i < total_size / sizeof(u32); i++)
+ sum += mc[i];
+
+ if (sum) {
+ dev_err(dev, "bad ifs data checksum, aborting.\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static bool find_ifs_matching_signature(struct device *dev, struct ucode_cpu_info *uci,
+ const struct microcode_header_intel *shdr)
+{
+ unsigned int mc_size;
+
+ mc_size = get_totalsize(shdr);
+
+ if (!mc_size || ifs_sanity_check(dev, shdr) < 0) {
+ dev_err(dev, "ifs sanity check failure\n");
+ return false;
+ }
+
+ if (!intel_cpu_signatures_match(uci->cpu_sig.sig, uci->cpu_sig.pf, shdr->sig, shdr->pf)) {
+ dev_err(dev, "ifs signature, pf not matching\n");
+ return false;
+ }
+
+ return true;
+}
+
+static bool ifs_image_sanity_check(struct device *dev, const struct microcode_header_intel *data)
+{
+ struct ucode_cpu_info uci;
+
+ intel_cpu_collect_info(&uci);
+
+ return find_ifs_matching_signature(dev, &uci, data);
+}
+
+/*
+ * Load the ifs image. Before loading the ifs module, the ifs image must be
+ * located in /lib/firmware/intel/ifs and named as {family/model/stepping}.{testname}.
+ */
+void ifs_load_firmware(struct device *dev)
+{
+ struct ifs_data *ifsd = ifs_get_data(dev);
+ const struct firmware *fw;
+ char scan_path[32];
+ int ret;
+
+ snprintf(scan_path, sizeof(scan_path), "intel/ifs/%02x-%02x-%02x.scan",
+ boot_cpu_data.x86, boot_cpu_data.x86_model, boot_cpu_data.x86_stepping);
+
+ ret = request_firmware_direct(&fw, scan_path, dev);
+ if (ret) {
+ dev_err(dev, "ifs file %s load failed\n", scan_path);
+ goto done;
+ }
+
+ if (!ifs_image_sanity_check(dev, (struct microcode_header_intel *)fw->data)) {
+ dev_err(dev, "ifs header sanity check failed\n");
+ goto release;
+ }
+
+ ifs_header_ptr = (struct ifs_header *)fw->data;
+ ifs_hash_ptr = (u64)(ifs_header_ptr + 1);
+
+ ret = scan_chunks_sanity_check(dev);
+release:
+ release_firmware(fw);
+done:
+ ifsd->loaded = (ret == 0);
+}
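For concreteness: on a hypothetical family 6, model 0x8f, stepping 3 part, the snprintf() format above ("intel/ifs/%02x-%02x-%02x.scan") yields "intel/ifs/06-8f-03.scan", which request_firmware_direct() resolves relative to /lib/firmware, matching the microcode-style naming described in ifs.h.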
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/runtest.c
Added
@@ -0,0 +1,252 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2022 Intel Corporation. */
+
+#include <linux/cpu.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/nmi.h>
+#include <linux/slab.h>
+#include <linux/stop_machine.h>
+
+#include "ifs.h"
+
+/*
+ * Note all code and data in this file is protected by
+ * ifs_sem. On HT systems all threads on a core will
+ * execute together, but only the first thread on the
+ * core will update results of the test.
+ */
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/intel_ifs.h>
+
+/* Max retries on the same chunk */
+#define MAX_IFS_RETRIES 5
+
+/*
+ * Number of TSC cycles that a logical CPU will wait for the other
+ * logical CPU on the core in the WRMSR(ACTIVATE_SCAN).
+ */
+#define IFS_THREAD_WAIT 100000
+
+enum ifs_status_err_code {
+ IFS_NO_ERROR = 0,
+ IFS_OTHER_THREAD_COULD_NOT_JOIN = 1,
+ IFS_INTERRUPTED_BEFORE_RENDEZVOUS = 2,
+ IFS_POWER_MGMT_INADEQUATE_FOR_SCAN = 3,
+ IFS_INVALID_CHUNK_RANGE = 4,
+ IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS = 5,
+ IFS_CORE_NOT_CAPABLE_CURRENTLY = 6,
+ IFS_UNASSIGNED_ERROR_CODE = 7,
+ IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT = 8,
+ IFS_INTERRUPTED_DURING_EXECUTION = 9,
+};
+
+static const char * const scan_test_status[] = {
+ [IFS_NO_ERROR] = "SCAN no error",
+ [IFS_OTHER_THREAD_COULD_NOT_JOIN] = "Other thread could not join.",
+ [IFS_INTERRUPTED_BEFORE_RENDEZVOUS] = "Interrupt occurred prior to SCAN coordination.",
+ [IFS_POWER_MGMT_INADEQUATE_FOR_SCAN] =
+ "Core Abort SCAN Response due to power management condition.",
+ [IFS_INVALID_CHUNK_RANGE] = "Non valid chunks in the range",
+ [IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS] = "Mismatch in arguments between threads T0/T1.",
+ [IFS_CORE_NOT_CAPABLE_CURRENTLY] = "Core not capable of performing SCAN currently",
+ [IFS_UNASSIGNED_ERROR_CODE] = "Unassigned error code 0x7",
+ [IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT] =
+ "Exceeded number of Logical Processors (LP) allowed to run Scan-At-Field concurrently",
+ [IFS_INTERRUPTED_DURING_EXECUTION] = "Interrupt occurred prior to SCAN start",
+};
+
+static void message_not_tested(struct device *dev, int cpu, union ifs_status status)
+{
+ if (status.error_code < ARRAY_SIZE(scan_test_status)) {
+ dev_info(dev, "CPU(s) %*pbl: SCAN operation did not start. %s\n",
+ cpumask_pr_args(cpu_smt_mask(cpu)),
+ scan_test_status[status.error_code]);
+ } else if (status.error_code == IFS_SW_TIMEOUT) {
+ dev_info(dev, "CPU(s) %*pbl: software timeout during scan\n",
+ cpumask_pr_args(cpu_smt_mask(cpu)));
+ } else if (status.error_code == IFS_SW_PARTIAL_COMPLETION) {
+ dev_info(dev, "CPU(s) %*pbl: %s\n",
+ cpumask_pr_args(cpu_smt_mask(cpu)),
+ "Not all scan chunks were executed. Maximum forward progress retries exceeded");
+ } else {
+ dev_info(dev, "CPU(s) %*pbl: SCAN unknown status %llx\n",
+ cpumask_pr_args(cpu_smt_mask(cpu)), status.data);
+ }
+}
+
+static void message_fail(struct device *dev, int cpu, union ifs_status status)
+{
+ /*
+ * control_error is set when the microcode runs into a problem
+ * loading the image from the reserved BIOS memory, or it has
+ * been corrupted. Reloading the image may fix this issue.
+ */
+ if (status.control_error) {
+ dev_err(dev, "CPU(s) %*pbl: could not execute from loaded scan image\n",
+ cpumask_pr_args(cpu_smt_mask(cpu)));
+ }
+
+ /*
+ * signature_error is set when the output from the scan chains does not
+ * match the expected signature. This might be a transient problem (e.g.
+ * due to a bit flip from an alpha particle or neutron). If the problem
+ * repeats on a subsequent test, then it indicates an actual problem in
+ * the core being tested.
+ */
+ if (status.signature_error) {
+ dev_err(dev, "CPU(s) %*pbl: test signature incorrect.\n",
+ cpumask_pr_args(cpu_smt_mask(cpu)));
+ }
+}
+
+static bool can_restart(union ifs_status status)
+{
+ enum ifs_status_err_code err_code = status.error_code;
+
+ /* Signature for chunk is bad, or scan test failed */
+ if (status.signature_error || status.control_error)
+ return false;
+
+ switch (err_code) {
+ case IFS_NO_ERROR:
+ case IFS_OTHER_THREAD_COULD_NOT_JOIN:
+ case IFS_INTERRUPTED_BEFORE_RENDEZVOUS:
+ case IFS_POWER_MGMT_INADEQUATE_FOR_SCAN:
+ case IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT:
+ case IFS_INTERRUPTED_DURING_EXECUTION:
+ return true;
+ case IFS_INVALID_CHUNK_RANGE:
+ case IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS:
+ case IFS_CORE_NOT_CAPABLE_CURRENTLY:
+ case IFS_UNASSIGNED_ERROR_CODE:
+ break;
+ }
+ return false;
+}
+
+/*
+ * Execute the scan. Called "simultaneously" on all threads of a core
+ * at high priority using the stop_cpus mechanism.
+ */
+static int doscan(void *data)
+{
+ int cpu = smp_processor_id();
+ u64 *msrs = data;
+ int first;
+
+ /* Only the first logical CPU on a core reports result */
+ first = cpumask_first(cpu_smt_mask(cpu));
+
+ /*
+ * This WRMSR will wait for other HT threads to also write
+ * to this MSR (at most for activate.delay cycles). Then it
+ * starts scan of each requested chunk. The core scan happens
+ * during the "execution" of the WRMSR. This instruction can
+ * take up to 200 milliseconds (in the case where all chunks
+ * are processed in a single pass) before it retires.
+ */
+ wrmsrl(MSR_ACTIVATE_SCAN, msrs[0]);
+
+ if (cpu == first) {
+ /* Pass back the result of the scan */
+ rdmsrl(MSR_SCAN_STATUS, msrs[1]);
+ }
+
+ return 0;
+}
+
+/*
+ * Use stop_core_cpuslocked() to synchronize writing to MSR_ACTIVATE_SCAN
+ * on all threads of the core to be tested. Loop if necessary to complete
+ * run of all chunks. Include some defensive tests to make sure forward
+ * progress is made, and that the whole test completes in a reasonable time.
+ */
+static void ifs_test_core(int cpu, struct device *dev)
+{
+ union ifs_scan activate;
+ union ifs_status status;
+ unsigned long timeout;
+ struct ifs_data *ifsd;
+ u64 msrvals[2];
+ int retries;
+
+ ifsd = ifs_get_data(dev);
+
+ activate.rsvd = 0;
+ activate.delay = IFS_THREAD_WAIT;
+ activate.sigmce = 0;
+ activate.start = 0;
+ activate.stop = ifsd->valid_chunks - 1;
+
+ timeout = jiffies + HZ / 2;
+ retries = MAX_IFS_RETRIES;
+
+ while (activate.start <= activate.stop) {
+ if (time_after(jiffies, timeout)) {
+ status.error_code = IFS_SW_TIMEOUT;
+ break;
+ }
+
+ msrvals[0] = activate.data;
+ stop_core_cpuslocked(cpu, doscan, msrvals);
+
+ status.data = msrvals[1];
+
+ trace_ifs_status(cpu, activate, status);
+
+ /* Some cases can be retried, give up for others */
+ if (!can_restart(status))
+ break;
+
+ if (status.chunk_num == activate.start) {
+ /* Check for forward progress */
+ if (--retries == 0) {
+ if (status.error_code == IFS_NO_ERROR)
+ status.error_code = IFS_SW_PARTIAL_COMPLETION;
+ break;
+ }
+ } else {
+ retries = MAX_IFS_RETRIES;
+ activate.start = status.chunk_num;
+ }
+ }
+
+ /* Update status for this core */
+ ifsd->scan_details = status.data;
+
+ if (status.control_error || status.signature_error) {
+ ifsd->status = SCAN_TEST_FAIL;
+ message_fail(dev, cpu, status);
+ } else if (status.error_code) {
+ ifsd->status = SCAN_NOT_TESTED;
+ message_not_tested(dev, cpu, status);
+ } else {
+ ifsd->status = SCAN_TEST_PASS;
+ }
+}
+
+/*
+ * Initiate a per-core test. It wakes up the work queue threads on the target
+ * cpu and its sibling cpu. Once all sibling threads wake up, the scan test is
+ * executed and the caller waits for all sibling threads to finish.
+ */
+int do_core_test(int cpu, struct device *dev)
+{
+ int ret = 0;
+
+ /* Prevent CPUs from being taken offline during the scan test */
+ cpus_read_lock();
+
+ if (!cpu_online(cpu)) {
+ dev_info(dev, "cannot test on the offline cpu %d\n", cpu);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ ifs_test_core(cpu, dev);
+out:
+ cpus_read_unlock();
+ return ret;
+}
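The rendezvous above hinges on stop_core_cpuslocked() running doscan() on every SMT sibling of the target core at stop-machine priority. The prototype below is the assumed upstream one from kernel/stop_machine.c (shown for reference; if this backport carries a different signature, that file is authoritative):

/* Assumed prototype: runs fn(data) concurrently on all SMT siblings
 * of the core containing @cpu, so the WRMSR rendezvous in doscan()
 * can succeed while the rest of the system keeps running. */
int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data);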
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/platform/x86/intel/ifs/sysfs.c
Added
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2022 Intel Corporation. */
+
+#include <linux/cpu.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/semaphore.h>
+#include <linux/slab.h>
+
+#include "ifs.h"
+
+/*
+ * Protects against simultaneous tests on multiple cores, or
+ * reloading the scan file while a test is in progress
+ */
+DEFINE_SEMAPHORE(ifs_sem);
+
+/*
+ * The sysfs interface to check additional details of the last test:
+ * cat /sys/devices/system/platform/ifs/details
+ */
+static ssize_t details_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct ifs_data *ifsd = ifs_get_data(dev);
+
+ return sysfs_emit(buf, "%#llx\n", ifsd->scan_details);
+}
+
+static DEVICE_ATTR_RO(details);
+
+static const char * const status_msg[] = {
+ [SCAN_NOT_TESTED] = "untested",
+ [SCAN_TEST_PASS] = "pass",
+ [SCAN_TEST_FAIL] = "fail"
+};
+
+/*
+ * The sysfs interface to check the test status:
+ * To check the status of the last test
+ * cat /sys/devices/platform/ifs/status
+ */
+static ssize_t status_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct ifs_data *ifsd = ifs_get_data(dev);
+
+ return sysfs_emit(buf, "%s\n", status_msg[ifsd->status]);
+}
+
+static DEVICE_ATTR_RO(status);
+
+/*
+ * The sysfs interface for single core testing
+ * To start a test, for example on cpu5:
+ * echo 5 > /sys/devices/platform/ifs/run_test
+ * To check the result:
+ * cat /sys/devices/platform/ifs/result
+ * The sibling core gets tested at the same time.
+ */
+static ssize_t run_test_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct ifs_data *ifsd = ifs_get_data(dev);
+ unsigned int cpu;
+ int rc;
+
+ rc = kstrtouint(buf, 0, &cpu);
+ if (rc < 0 || cpu >= nr_cpu_ids)
+ return -EINVAL;
+
+ if (down_interruptible(&ifs_sem))
+ return -EINTR;
+
+ if (!ifsd->loaded)
+ rc = -EPERM;
+ else
+ rc = do_core_test(cpu, dev);
+
+ up(&ifs_sem);
+
+ return rc ? rc : count;
+}
+
+static DEVICE_ATTR_WO(run_test);
+
+/*
+ * Reload the IFS image when the user wants to install a new IFS image
+ */
+static ssize_t reload_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct ifs_data *ifsd = ifs_get_data(dev);
+ bool res;
+
+ if (kstrtobool(buf, &res))
+ return -EINVAL;
+ if (!res)
+ return count;
+
+ if (down_interruptible(&ifs_sem))
+ return -EINTR;
+
+ ifs_load_firmware(dev);
+
+ up(&ifs_sem);
+
+ return ifsd->loaded ? count : -ENODEV;
+}
+
+static DEVICE_ATTR_WO(reload);
+
+/*
+ * Display currently loaded IFS image version.
+ */
+static ssize_t image_version_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct ifs_data *ifsd = ifs_get_data(dev);
+
+ if (!ifsd->loaded)
+ return sysfs_emit(buf, "%s\n", "none");
+ else
+ return sysfs_emit(buf, "%#x\n", ifsd->loaded_version);
+}
+
+static DEVICE_ATTR_RO(image_version);
+
+/* global scan sysfs attributes */
+static struct attribute *plat_ifs_attrs[] = {
+ &dev_attr_details.attr,
+ &dev_attr_status.attr,
+ &dev_attr_run_test.attr,
+ &dev_attr_reload.attr,
+ &dev_attr_image_version.attr,
+ NULL
+};
+
+ATTRIBUTE_GROUPS(plat_ifs);
+
+const struct attribute_group **ifs_get_groups(void)
+{
+ return plat_ifs_groups;
+}
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/rpmsg/virtio_rpmsg_bus.c
Changed
@@ -1012,7 +1012,7 @@ size_t total_buf_space = vrp->num_bufs * vrp->buf_size; int ret; - vdev->config->reset(vdev); + virtio_reset_device(vdev); ret = device_for_each_child(&vdev->dev, NULL, rpmsg_remove_device); if (ret)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/be2iscsi/be_main.c
Changed
@@ -5801,21 +5801,15 @@ .resume = beiscsi_eeh_resume, }; -struct iscsi_transport_expand beiscsi_iscsi_expand = { - .unbind_conn = iscsi_conn_unbind, -}; - struct iscsi_transport beiscsi_iscsi_transport = { .owner = THIS_MODULE, .name = DRV_NAME, .caps = CAP_RECOVERY_L0 | CAP_HDRDGST | CAP_TEXT_NEGO | - CAP_MULTI_R2T | CAP_DATADGST | CAP_DATA_PATH_OFFLOAD | - CAP_OPS_EXPAND, + CAP_MULTI_R2T | CAP_DATADGST | CAP_DATA_PATH_OFFLOAD, .create_session = beiscsi_session_create, .destroy_session = beiscsi_session_destroy, .create_conn = beiscsi_conn_create, .bind_conn = beiscsi_conn_bind, - .ops_expand = &beiscsi_iscsi_expand, .destroy_conn = iscsi_conn_teardown, .attr_is_visible = beiscsi_attr_is_visible, .set_iface_param = beiscsi_iface_set_param,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/bnx2i/bnx2i_iscsi.c
Changed
@@ -2274,23 +2274,17 @@ .track_queue_depth = 1, }; - -static struct iscsi_transport_expand bnx2i_iscsi_expand = { - .unbind_conn = iscsi_conn_unbind, -}; - struct iscsi_transport bnx2i_iscsi_transport = { .owner = THIS_MODULE, .name = "bnx2i", .caps = CAP_RECOVERY_L0 | CAP_HDRDGST | CAP_MULTI_R2T | CAP_DATADGST | CAP_DATA_PATH_OFFLOAD | - CAP_TEXT_NEGO | CAP_OPS_EXPAND, + CAP_TEXT_NEGO, .create_session = bnx2i_session_create, .destroy_session = bnx2i_session_destroy, .create_conn = bnx2i_conn_create, .bind_conn = bnx2i_conn_bind, - .ops_expand = &bnx2i_iscsi_expand, .destroy_conn = bnx2i_conn_destroy, .attr_is_visible = bnx2i_attr_is_visible, .set_param = iscsi_set_param,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
Changed
@@ -100,18 +100,13 @@ .track_queue_depth = 1, }; -static struct iscsi_transport_expand cxgb3i_iscsi_expand = { - .unbind_conn = iscsi_conn_unbind, -}; - static struct iscsi_transport cxgb3i_iscsi_transport = { .owner = THIS_MODULE, .name = DRV_MODULE_NAME, /* owner and name should be set already */ .caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_HDRDGST | CAP_DATADGST | CAP_DIGEST_OFFLOAD | - CAP_PADDING_OFFLOAD | CAP_TEXT_NEGO | - CAP_OPS_EXPAND, + CAP_PADDING_OFFLOAD | CAP_TEXT_NEGO, .attr_is_visible = cxgbi_attr_is_visible, .get_host_param = cxgbi_get_host_param, .set_host_param = cxgbi_set_host_param, @@ -122,7 +117,6 @@ /* connection management */ .create_conn = cxgbi_create_conn, .bind_conn = cxgbi_bind_conn, - .ops_expand = &cxgb3i_iscsi_expand, .destroy_conn = iscsi_tcp_conn_teardown, .start_conn = iscsi_conn_start, .stop_conn = iscsi_conn_stop,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
Changed
@@ -118,17 +118,12 @@ .track_queue_depth = 1, }; -static struct iscsi_transport_expand cxgb4i_iscsi_expand = { - .unbind_conn = iscsi_conn_unbind, -}; - static struct iscsi_transport cxgb4i_iscsi_transport = { .owner = THIS_MODULE, .name = DRV_MODULE_NAME, .caps = CAP_RECOVERY_L0 | CAP_MULTI_R2T | CAP_HDRDGST | CAP_DATADGST | CAP_DIGEST_OFFLOAD | - CAP_PADDING_OFFLOAD | CAP_TEXT_NEGO | - CAP_OPS_EXPAND, + CAP_PADDING_OFFLOAD | CAP_TEXT_NEGO, .attr_is_visible = cxgbi_attr_is_visible, .get_host_param = cxgbi_get_host_param, .set_host_param = cxgbi_set_host_param, @@ -139,7 +134,6 @@ /* connection management */ .create_conn = cxgbi_create_conn, .bind_conn = cxgbi_bind_conn, - .ops_expand = &cxgb4i_iscsi_expand, .destroy_conn = iscsi_tcp_conn_teardown, .start_conn = iscsi_conn_start, .stop_conn = iscsi_conn_stop,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/qedi/qedi_iscsi.c
Changed
@@ -1429,20 +1429,15 @@ cmd->scsi_cmd = NULL; } -static struct iscsi_transport_expand qedi_iscsi_expand = { - .unbind_conn = iscsi_conn_unbind, -}; - struct iscsi_transport qedi_iscsi_transport = { .owner = THIS_MODULE, .name = QEDI_MODULE_NAME, .caps = CAP_RECOVERY_L0 | CAP_HDRDGST | CAP_MULTI_R2T | CAP_DATADGST | - CAP_DATA_PATH_OFFLOAD | CAP_TEXT_NEGO | CAP_OPS_EXPAND, + CAP_DATA_PATH_OFFLOAD | CAP_TEXT_NEGO, .create_session = qedi_session_create, .destroy_session = qedi_session_destroy, .create_conn = qedi_conn_create, .bind_conn = qedi_conn_bind, - .ops_expand = &qedi_iscsi_expand, .start_conn = qedi_conn_start, .stop_conn = iscsi_conn_stop, .destroy_conn = qedi_conn_destroy,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/qla4xxx/ql4_os.c
Changed
@@ -246,24 +246,19 @@ .vendor_id = SCSI_NL_VID_TYPE_PCI | PCI_VENDOR_ID_QLOGIC, }; -static struct iscsi_transport_expand qla4xxx_iscsi_expand = { - .unbind_conn = iscsi_conn_unbind, -}; - static struct iscsi_transport qla4xxx_iscsi_transport = { .owner = THIS_MODULE, .name = DRIVER_NAME, .caps = CAP_TEXT_NEGO | CAP_DATA_PATH_OFFLOAD | CAP_HDRDGST | CAP_DATADGST | CAP_LOGIN_OFFLOAD | - CAP_MULTI_R2T | CAP_OPS_EXPAND, + CAP_MULTI_R2T, .attr_is_visible = qla4_attr_is_visible, .create_session = qla4xxx_session_create, .destroy_session = qla4xxx_session_destroy, .start_conn = qla4xxx_conn_start, .create_conn = qla4xxx_conn_create, .bind_conn = qla4xxx_conn_bind, - .ops_expand = &qla4xxx_iscsi_expand, .stop_conn = iscsi_conn_stop, .destroy_conn = qla4xxx_conn_destroy, .set_param = iscsi_set_param,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/scsi_error.c
Changed
@@ -2359,7 +2359,7 @@ return -EIO; error = -EIO; - rq = kzalloc(sizeof(struct request_wrapper) + sizeof(struct scsi_cmnd) + + rq = kzalloc(sizeof(struct request) + sizeof(struct scsi_cmnd) + shost->hostt->cmd_size, GFP_KERNEL); if (!rq) goto out_put_autopm_host;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/scsi_transport_iscsi.c
Changed
@@ -2257,11 +2257,6 @@ ep = conn->ep; conn->ep = NULL; - if (session->transport->caps & CAP_OPS_EXPAND && - session->transport->ops_expand && - session->transport->ops_expand->unbind_conn) - session->transport->ops_expand->unbind_conn(conn, is_active); - session->transport->ep_disconnect(ep); ISCSI_DBG_TRANS_CONN(conn, "disconnect ep done.\n"); } @@ -3228,19 +3223,10 @@ struct Scsi_Host *shost; struct sockaddr *dst_addr; int err; - int (*tgt_dscvr)(struct Scsi_Host *shost, enum iscsi_tgt_dscvr type, - uint32_t enable, struct sockaddr *dst_addr); - if (transport->caps & CAP_OPS_EXPAND) { - if (!transport->ops_expand || !transport->ops_expand->tgt_dscvr) - return -EINVAL; - tgt_dscvr = transport->ops_expand->tgt_dscvr; - } else { - if (!transport->ops_expand) - return -EINVAL; - tgt_dscvr = (int (*)(struct Scsi_Host *, enum iscsi_tgt_dscvr, uint32_t, - struct sockaddr *))(transport->ops_expand); - } + if (!transport->tgt_dscvr) + return -EINVAL; + shost = scsi_host_lookup(ev->u.tgt_dscvr.host_no); if (!shost) { printk(KERN_ERR "target discovery could not find host no %u\n", @@ -3250,8 +3236,8 @@ dst_addr = (struct sockaddr *)((char*)ev + sizeof(*ev)); - err = tgt_dscvr(shost, ev->u.tgt_dscvr.type, - ev->u.tgt_dscvr.enable, dst_addr); + err = transport->tgt_dscvr(shost, ev->u.tgt_dscvr.type, + ev->u.tgt_dscvr.enable, dst_addr); scsi_host_put(shost); return err; } @@ -4904,10 +4890,7 @@ int err; BUG_ON(!tt); - if (tt->caps & CAP_OPS_EXPAND) { - BUG_ON(!tt->ops_expand); - WARN_ON(tt->ep_disconnect && !tt->ops_expand->unbind_conn); - } + priv = iscsi_if_transport_lookup(tt); if (priv) return NULL;
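These iSCSI hunks drop the CAP_OPS_EXPAND/ops_expand indirection (a compatibility shim) in favor of calling transport methods directly, as transport->tgt_dscvr in the hunk above shows. A minimal sketch of the resulting template for a hypothetical offload driver "foo" (all foo_* names are illustrative):

/* Sketch: ops live directly on the iscsi_transport template again,
 * with no ops_expand container or CAP_OPS_EXPAND capability bit. */
static struct iscsi_transport foo_iscsi_transport = {
        .owner       = THIS_MODULE,
        .name        = "foo",
        .caps        = CAP_RECOVERY_L0 | CAP_TEXT_NEGO,
        .create_conn = foo_conn_create,
        .bind_conn   = foo_conn_bind,
        .tgt_dscvr   = foo_tgt_dscvr,   /* invoked via transport->tgt_dscvr */
};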
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/scsi/virtio_scsi.c
Changed
@@ -780,7 +780,7 @@ static void virtscsi_remove_vqs(struct virtio_device *vdev) { /* Stop all the virtqueues. */ - vdev->config->reset(vdev); + virtio_reset_device(vdev); vdev->config->del_vqs(vdev); }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/spi/spi-hisi-sfc-v3xx.c
Changed
@@ -5,13 +5,13 @@ // Copyright (c) 2019 HiSilicon Technologies Co., Ltd. // Author: John Garry <john.garry@huawei.com> -#include <linux/acpi.h> #include <linux/bitops.h> #include <linux/completion.h> #include <linux/dmi.h> #include <linux/interrupt.h> #include <linux/iopoll.h> #include <linux/module.h> +#include <linux/mod_devicetable.h> #include <linux/platform_device.h> #include <linux/slab.h> #include <linux/spi/spi.h> @@ -19,6 +19,8 @@ #define HISI_SFC_V3XX_VERSION (0x1f8) +#define HISI_SFC_V3XX_GLB_CFG (0x100) +#define HISI_SFC_V3XX_GLB_CFG_CS0_ADDR_MODE BIT(2) #define HISI_SFC_V3XX_RAW_INT_STAT (0x120) #define HISI_SFC_V3XX_INT_STAT (0x124) #define HISI_SFC_V3XX_INT_MASK (0x128) @@ -75,6 +77,7 @@ void __iomem *regbase; int max_cmd_dword; struct completion *completion; + u8 address_mode; int irq; }; @@ -168,10 +171,18 @@ static bool hisi_sfc_v3xx_supports_op(struct spi_mem *mem, const struct spi_mem_op *op) { + struct spi_device *spi = mem->spi; + struct hisi_sfc_v3xx_host *host; + + host = spi_controller_get_devdata(spi->master); + if (op->data.buswidth > 4 || op->dummy.buswidth > 4 || op->addr.buswidth > 4 || op->cmd.buswidth > 4) return false; + if (op->addr.nbytes != host->address_mode && op->addr.nbytes) + return false; + return spi_mem_default_supports_op(mem, op); } @@ -331,6 +342,7 @@ ret = 0; hisi_sfc_v3xx_disable_int(host); + synchronize_irq(host->irq); host->completion = NULL; } else { ret = hisi_sfc_v3xx_wait_cmd_idle(host); @@ -416,7 +428,7 @@ struct device *dev = &pdev->dev; struct hisi_sfc_v3xx_host *host; struct spi_controller *ctlr; - u32 version; + u32 version, glb_config; int ret; ctlr = spi_alloc_master(&pdev->dev, sizeof(*host)); @@ -463,16 +475,24 @@ ctlr->num_chipselect = 1; ctlr->mem_ops = &hisi_sfc_v3xx_mem_ops; + /* + * The address mode of the controller is either 3 or 4, + * which is indicated by the address mode bit in + * the global config register. The register is read only + * for the OS driver. + */ + glb_config = readl(host->regbase + HISI_SFC_V3XX_GLB_CFG); + if (glb_config & HISI_SFC_V3XX_GLB_CFG_CS0_ADDR_MODE) + host->address_mode = 4; + else + host->address_mode = 3; + version = readl(host->regbase + HISI_SFC_V3XX_VERSION); - switch (version) { - case 0x351: + if (version >= 0x351) host->max_cmd_dword = 64; - break; - default: + else host->max_cmd_dword = 16; - break; - } ret = devm_spi_register_controller(dev, ctlr); if (ret) @@ -488,18 +508,16 @@ return ret; } -#if IS_ENABLED(CONFIG_ACPI) static const struct acpi_device_id hisi_sfc_v3xx_acpi_ids = { {"HISI0341", 0}, {} }; MODULE_DEVICE_TABLE(acpi, hisi_sfc_v3xx_acpi_ids); -#endif static struct platform_driver hisi_sfc_v3xx_spi_driver = { .driver = { .name = "hisi-sfc-v3xx", - .acpi_match_table = ACPI_PTR(hisi_sfc_v3xx_acpi_ids), + .acpi_match_table = hisi_sfc_v3xx_acpi_ids, }, .probe = hisi_sfc_v3xx_probe, };
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/usb/core/hub.c
Changed
@@ -5967,6 +5967,11 @@
 * the reset is over (using their post_reset method).
 *
 * Return: The same as for usb_reset_and_verify_device().
+ * However, if a reset is already in progress (for instance, if a
+ * driver doesn't have pre_reset() or post_reset() callbacks, and while
+ * being unbound or re-bound during the ongoing reset its disconnect()
+ * or probe() routine tries to perform a second, nested reset), the
+ * routine returns -EINPROGRESS.
 *
 * Note:
 * The caller must own the device lock. For example, it's safe to use
@@ -6000,6 +6005,10 @@
 return -EISDIR;
 }
+ if (udev->reset_in_progress)
+ return -EINPROGRESS;
+ udev->reset_in_progress = 1;
+
 port_dev = hub->ports[udev->portnum - 1];
 /*
@@ -6064,6 +6073,7 @@
 usb_autosuspend_device(udev);
 memalloc_noio_restore(noio_flag);
+ udev->reset_in_progress = 0;
 return ret;
 }
 EXPORT_SYMBOL_GPL(usb_reset_device);
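A sketch of what the new guard means for a driver whose probe() path resets the device while an outer reset is already running (the driver name is hypothetical; the -EINPROGRESS contract comes from the kerneldoc added above):

/* Hypothetical probe sketch: a nested reset now fails fast with
 * -EINPROGRESS instead of recursing into usb_reset_and_verify_device(). */
static int foo_probe(struct usb_interface *intf,
                     const struct usb_device_id *id)
{
        struct usb_device *udev = interface_to_usbdev(intf);
        int ret;

        ret = usb_reset_device(udev);
        if (ret == -EINPROGRESS)
                ret = 0;        /* outer reset in flight; proceed without one */
        return ret;
}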
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/usb/serial/option.c
Changed
@@ -97,6 +97,10 @@
 #define YISO_VENDOR_ID 0x0EAB
 #define YISO_PRODUCT_U893 0xC893
+/* MEIG PRODUCTS */
+#define MEIG_VENDOR_ID 0x2DEE
+#define MEIG_PRODUCT_SLM790 0x4D20
+
 /*
 * NOVATEL WIRELESS PRODUCTS
 *
@@ -593,6 +597,7 @@
 static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(MEIG_VENDOR_ID, MEIG_PRODUCT_SLM790) },
 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) },
 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_RICOLA) },
 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_RICOLA_LIGHT) },
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/ifcvf/ifcvf_main.c
Changed
@@ -167,7 +167,7 @@ return &adapter->vf; } -static u64 ifcvf_vdpa_get_features(struct vdpa_device *vdpa_dev) +static u64 ifcvf_vdpa_get_device_features(struct vdpa_device *vdpa_dev) { struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev); u64 features; @@ -177,7 +177,7 @@ return features; } -static int ifcvf_vdpa_set_features(struct vdpa_device *vdpa_dev, u64 features) +static int ifcvf_vdpa_set_driver_features(struct vdpa_device *vdpa_dev, u64 features) { struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev); @@ -186,6 +186,13 @@ return 0; } +static u64 ifcvf_vdpa_get_driver_features(struct vdpa_device *vdpa_dev) +{ + struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev); + + return vf->req_features; +} + static u8 ifcvf_vdpa_get_status(struct vdpa_device *vdpa_dev) { struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev); @@ -391,8 +398,9 @@ * implemented set_map()/dma_map()/dma_unmap() */ static const struct vdpa_config_ops ifc_vdpa_ops = { - .get_features = ifcvf_vdpa_get_features, - .set_features = ifcvf_vdpa_set_features, + .get_device_features = ifcvf_vdpa_get_device_features, + .set_driver_features = ifcvf_vdpa_set_driver_features, + .get_driver_features = ifcvf_vdpa_get_driver_features, .get_status = ifcvf_vdpa_get_status, .set_status = ifcvf_vdpa_set_status, .reset = ifcvf_vdpa_reset, @@ -457,7 +465,7 @@ } adapter = vdpa_alloc_device(struct ifcvf_adapter, vdpa, - dev, &ifc_vdpa_ops, NULL); + dev, &ifc_vdpa_ops, 1, 1, NULL, false); if (adapter == NULL) { IFCVF_ERR(pdev, "Failed to allocate vDPA structure"); return -ENOMEM;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/mlx5/net/mlx5_vnet.c
Changed
@@ -1467,7 +1467,7 @@ return result; } -static u64 mlx5_vdpa_get_features(struct vdpa_device *vdev) +static u64 mlx5_vdpa_get_device_features(struct vdpa_device *vdev) { struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev); struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev); @@ -1550,7 +1550,7 @@ return __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev), val); } -static int mlx5_vdpa_set_features(struct vdpa_device *vdev, u64 features) +static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features) { struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev); struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev); @@ -1843,7 +1843,8 @@ return mvdev->generation; } -static int mlx5_vdpa_set_map(struct vdpa_device *vdev, struct vhost_iotlb *iotlb) +static int mlx5_vdpa_set_map(struct vdpa_device *vdev, unsigned int asid, + struct vhost_iotlb *iotlb) { struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev); struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev); @@ -1891,6 +1892,13 @@ return -EOPNOTSUPP; } +static u64 mlx5_vdpa_get_driver_features(struct vdpa_device *vdev) +{ + struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev); + + return mvdev->actual_features; +} + static const struct vdpa_config_ops mlx5_vdpa_ops = { .set_vq_address = mlx5_vdpa_set_vq_address, .set_vq_num = mlx5_vdpa_set_vq_num, @@ -1903,8 +1911,9 @@ .get_vq_notification = mlx5_get_vq_notification, .get_vq_irq = mlx5_get_vq_irq, .get_vq_align = mlx5_vdpa_get_vq_align, - .get_features = mlx5_vdpa_get_features, - .set_features = mlx5_vdpa_set_features, + .get_device_features = mlx5_vdpa_get_device_features, + .set_driver_features = mlx5_vdpa_set_driver_features, + .get_driver_features = mlx5_vdpa_get_driver_features, .set_config_cb = mlx5_vdpa_set_config_cb, .get_vq_num_max = mlx5_vdpa_get_vq_num_max, .get_device_id = mlx5_vdpa_get_device_id, @@ -2006,7 +2015,7 @@ max_vqs = min_t(u32, max_vqs, MLX5_MAX_SUPPORTED_VQS); ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mlx5_vdpa_ops, - NULL); + 1, 1, NULL, false); if (IS_ERR(ndev)) return ndev;
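Both vDPA hunks (ifcvf and mlx5) implement the same ops split: get_features becomes get_device_features (what the hardware offers), set_features becomes set_driver_features (what the driver accepted), and a new get_driver_features reads the negotiated set back. A minimal sketch for a hypothetical device "foo" (struct and helper names are illustrative, not from this backport):

/* Sketch of the renamed feature ops for a hypothetical "foo" device. */
static u64 foo_get_device_features(struct vdpa_device *vdpa)
{
        return foo_hw_features(vdpa);              /* features the device offers */
}

static int foo_set_driver_features(struct vdpa_device *vdpa, u64 features)
{
        vdpa_to_foo(vdpa)->negotiated = features;  /* what the driver accepted */
        return 0;
}

static u64 foo_get_driver_features(struct vdpa_device *vdpa)
{
        return vdpa_to_foo(vdpa)->negotiated;      /* read the negotiated set back */
}

static const struct vdpa_config_ops foo_vdpa_ops = {
        .get_device_features = foo_get_device_features,
        .set_driver_features = foo_set_driver_features,
        .get_driver_features = foo_get_driver_features,
        /* ... remaining ops ... */
};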
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/vdpa.c
Changed
@@ -14,20 +14,37 @@ #include <uapi/linux/vdpa.h> #include <net/genetlink.h> #include <linux/mod_devicetable.h> +#include <linux/virtio_ids.h> static LIST_HEAD(mdev_head); /* A global mutex that protects vdpa management device and device level operations. */ -static DEFINE_MUTEX(vdpa_dev_mutex); +static DECLARE_RWSEM(vdpa_dev_lock); static DEFINE_IDA(vdpa_index_ida); +void vdpa_set_status(struct vdpa_device *vdev, u8 status) +{ + down_write(&vdev->cf_lock); + vdev->config->set_status(vdev, status); + up_write(&vdev->cf_lock); +} +EXPORT_SYMBOL(vdpa_set_status); + static struct genl_family vdpa_nl_family; static int vdpa_dev_probe(struct device *d) { struct vdpa_device *vdev = dev_to_vdpa(d); struct vdpa_driver *drv = drv_to_vdpa(vdev->dev.driver); + const struct vdpa_config_ops *ops = vdev->config; + u32 max_num, min_num = 1; int ret = 0; + max_num = ops->get_vq_num_max(vdev); + if (ops->get_vq_num_min) + min_num = ops->get_vq_num_min(vdev); + if (max_num < min_num) + return -EINVAL; + if (drv && drv->probe) ret = drv->probe(vdev); @@ -47,14 +64,14 @@ static int vdpa_dev_match(struct device *dev, struct device_driver *drv) { - struct vdpa_device *vdev = dev_to_vdpa(dev); + struct vdpa_device *vdev = dev_to_vdpa(dev); - /* Check override first, and if set, only use the named driver */ - if (vdev->driver_override) - return strcmp(vdev->driver_override, drv->name) == 0; + /* Check override first, and if set, only use the named driver */ + if (vdev->driver_override) + return strcmp(vdev->driver_override, drv->name) == 0; - /* Currently devices must be supported by all vDPA bus drivers */ - return 1; + /* Currently devices must be supported by all vDPA bus drivers */ + return 1; } static ssize_t driver_override_store(struct device *dev, @@ -95,30 +112,30 @@ static ssize_t driver_override_show(struct device *dev, struct device_attribute *attr, char *buf) { - struct vdpa_device *vdev = dev_to_vdpa(dev); - ssize_t len; + struct vdpa_device *vdev = dev_to_vdpa(dev); + ssize_t len; - device_lock(dev); - len = snprintf(buf, PAGE_SIZE, "%s\n", vdev->driver_override); - device_unlock(dev); + device_lock(dev); + len = snprintf(buf, PAGE_SIZE, "%s\n", vdev->driver_override); + device_unlock(dev); - return len; + return len; } static DEVICE_ATTR_RW(driver_override); static struct attribute *vdpa_dev_attrs = { - &dev_attr_driver_override.attr, - NULL, + &dev_attr_driver_override.attr, + NULL, }; static const struct attribute_group vdpa_dev_group = { - .attrs = vdpa_dev_attrs, + .attrs = vdpa_dev_attrs, }; __ATTRIBUTE_GROUPS(vdpa_dev); static struct bus_type vdpa_bus = { - .name = "vdpa", + .name = "vdpa", .dev_groups = vdpa_dev_groups, .match = vdpa_dev_match, .probe = vdpa_dev_probe, @@ -144,18 +161,23 @@ * initialized but before registered. * @parent: the parent device * @config: the bus operations that is supported by this device + * @ngroups: number of groups supported by this device + * @nas: number of address spaces supported by this device * @size: size of the parent structure that contains private data * @name: name of the vdpa device; optional. + * @use_va: indicate whether virtual address must be used by this device * * Driver should use vdpa_alloc_device() wrapper macro instead of * using this directly. * - * Returns an error when parent/config/dma_dev is not set or fail to get - * ida. + * Return: Returns an error when parent/config/dma_dev is not set or fail to get + * ida. 
*/ struct vdpa_device *__vdpa_alloc_device(struct device *parent, const struct vdpa_config_ops *config, - size_t size, const char *name) + unsigned int ngroups, unsigned int nas, + size_t size, const char *name, + bool use_va) { struct vdpa_device *vdev; int err = -EINVAL; @@ -166,12 +188,16 @@ if (!!config->dma_map != !!config->dma_unmap) goto err; + /* It should only work for the device that use on-chip IOMMU */ + if (use_va && !(config->dma_map || config->set_map)) + goto err; + err = -ENOMEM; vdev = kzalloc(size, GFP_KERNEL); if (!vdev) goto err; - err = ida_simple_get(&vdpa_index_ida, 0, 0, GFP_KERNEL); + err = ida_alloc(&vdpa_index_ida, GFP_KERNEL); if (err < 0) goto err_ida; @@ -181,6 +207,9 @@ vdev->index = err; vdev->config = config; vdev->features_valid = false; + vdev->use_va = use_va; + vdev->ngroups = ngroups; + vdev->nas = nas; if (name) err = dev_set_name(&vdev->dev, "%s", name); @@ -189,6 +218,7 @@ if (err) goto err_name; + init_rwsem(&vdev->cf_lock); device_initialize(&vdev->dev); return vdev; @@ -209,13 +239,13 @@ return (strcmp(dev_name(&vdev->dev), data) == 0); } -static int __vdpa_register_device(struct vdpa_device *vdev, int nvqs) +static int __vdpa_register_device(struct vdpa_device *vdev, u32 nvqs) { struct device *dev; vdev->nvqs = nvqs; - lockdep_assert_held(&vdpa_dev_mutex); + lockdep_assert_held(&vdpa_dev_lock); dev = bus_find_device(&vdpa_bus, NULL, dev_name(&vdev->dev), vdpa_name_match); if (dev) { put_device(dev); @@ -232,9 +262,9 @@ * @vdev: the vdpa device to be registered to vDPA bus * @nvqs: number of virtqueues supported by this device * - * Returns an error when fail to add device to vDPA bus + * Return: Returns an error when fail to add device to vDPA bus */ -int _vdpa_register_device(struct vdpa_device *vdev, int nvqs) +int _vdpa_register_device(struct vdpa_device *vdev, u32 nvqs) { if (!vdev->mdev) return -EINVAL; @@ -249,15 +279,15 @@ * @vdev: the vdpa device to be registered to vDPA bus * @nvqs: number of virtqueues supported by this device * - * Returns an error when fail to add to vDPA bus + * Return: Returns an error when fail to add to vDPA bus */ -int vdpa_register_device(struct vdpa_device *vdev, int nvqs) +int vdpa_register_device(struct vdpa_device *vdev, u32 nvqs) { int err; - mutex_lock(&vdpa_dev_mutex); + down_write(&vdpa_dev_lock); err = __vdpa_register_device(vdev, nvqs); - mutex_unlock(&vdpa_dev_mutex); + up_write(&vdpa_dev_lock); return err; } EXPORT_SYMBOL_GPL(vdpa_register_device); @@ -270,7 +300,7 @@ */ void _vdpa_unregister_device(struct vdpa_device *vdev) { - lockdep_assert_held(&vdpa_dev_mutex); + lockdep_assert_held(&vdpa_dev_lock); WARN_ON(!vdev->mdev); device_unregister(&vdev->dev); } @@ -282,9 +312,9 @@ */ void vdpa_unregister_device(struct vdpa_device *vdev) { - mutex_lock(&vdpa_dev_mutex); + down_write(&vdpa_dev_lock); device_unregister(&vdev->dev); - mutex_unlock(&vdpa_dev_mutex); + up_write(&vdpa_dev_lock); } EXPORT_SYMBOL_GPL(vdpa_unregister_device); @@ -293,7 +323,7 @@ * @drv: the vdpa device driver to be registered * @owner: module owner of the driver * - * Returns an err when fail to do the registration + * Return: Returns an err when fail to do the registration */ int __vdpa_register_driver(struct vdpa_driver *drv, struct module *owner) { @@ -320,6 +350,8 @@ * @mdev: Pointer to vdpa management device * vdpa_mgmtdev_register() register a vdpa management device which supports * vdpa device management. + * Return: Returns 0 on success or failure when required callback ops are not + * initialized. 
*/ int vdpa_mgmtdev_register(struct vdpa_mgmt_dev *mdev) { @@ -327,9 +359,9 @@ return -EINVAL; INIT_LIST_HEAD(&mdev->list); - mutex_lock(&vdpa_dev_mutex); + down_write(&vdpa_dev_lock); list_add_tail(&mdev->list, &mdev_head); - mutex_unlock(&vdpa_dev_mutex); + up_write(&vdpa_dev_lock); return 0; } EXPORT_SYMBOL_GPL(vdpa_mgmtdev_register); @@ -346,17 +378,64 @@ void vdpa_mgmtdev_unregister(struct vdpa_mgmt_dev *mdev) { - mutex_lock(&vdpa_dev_mutex); + down_write(&vdpa_dev_lock); list_del(&mdev->list); /* Filter out all the entries belong to this management device and delete it. */ bus_for_each_dev(&vdpa_bus, NULL, mdev, vdpa_match_remove); - mutex_unlock(&vdpa_dev_mutex); + up_write(&vdpa_dev_lock); } EXPORT_SYMBOL_GPL(vdpa_mgmtdev_unregister); +static void vdpa_get_config_unlocked(struct vdpa_device *vdev, + unsigned int offset, + void *buf, unsigned int len) +{ + const struct vdpa_config_ops *ops = vdev->config; + + /* + * Config accesses aren't supposed to trigger before features are set. + * If it does happen we assume a legacy guest. + */ + if (!vdev->features_valid) + vdpa_set_features_unlocked(vdev, 0); + ops->get_config(vdev, offset, buf, len); +} + +/** + * vdpa_get_config - Get one or more device configuration fields. + * @vdev: vdpa device to operate on + * @offset: starting byte offset of the field + * @buf: buffer pointer to read to + * @len: length of the configuration fields in bytes + */ +void vdpa_get_config(struct vdpa_device *vdev, unsigned int offset, + void *buf, unsigned int len) +{ + down_read(&vdev->cf_lock); + vdpa_get_config_unlocked(vdev, offset, buf, len); + up_read(&vdev->cf_lock); +} +EXPORT_SYMBOL_GPL(vdpa_get_config); + +/** + * vdpa_set_config - Set one or more device configuration fields. + * @vdev: vdpa device to operate on + * @offset: starting byte offset of the field + * @buf: buffer pointer to read from + * @length: length of the configuration fields in bytes + */ +void vdpa_set_config(struct vdpa_device *vdev, unsigned int offset, + const void *buf, unsigned int length) +{ + down_write(&vdev->cf_lock); + vdev->config->set_config(vdev, offset, buf, length); + up_write(&vdev->cf_lock); +} +EXPORT_SYMBOL_GPL(vdpa_set_config); + static bool mgmtdev_handle_match(const struct vdpa_mgmt_dev *mdev, const char *busname, const char *devname) { @@ -431,6 +510,16 @@ err = -EMSGSIZE; goto msg_err; } + if (nla_put_u32(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS, + mdev->max_supported_vqs)) { + err = -EMSGSIZE; + goto msg_err; + } + if (nla_put_u64_64bit(msg, VDPA_ATTR_DEV_SUPPORTED_FEATURES, + mdev->supported_features, VDPA_ATTR_PAD)) { + err = -EMSGSIZE; + goto msg_err; + } genlmsg_end(msg, hdr); return 0; @@ -450,17 +539,17 @@ if (!msg) return -ENOMEM; - mutex_lock(&vdpa_dev_mutex); + down_read(&vdpa_dev_lock); mdev = vdpa_mgmtdev_get_from_attr(info->attrs); if (IS_ERR(mdev)) { - mutex_unlock(&vdpa_dev_mutex); + up_read(&vdpa_dev_lock); NL_SET_ERR_MSG_MOD(info->extack, "Fail to find the specified mgmt device"); err = PTR_ERR(mdev); goto out; } err = vdpa_mgmtdev_fill(mdev, msg, info->snd_portid, info->snd_seq, 0); - mutex_unlock(&vdpa_dev_mutex); + up_read(&vdpa_dev_lock); if (err) goto out; err = genlmsg_reply(msg, info); @@ -479,7 +568,7 @@ int idx = 0; int err; - mutex_lock(&vdpa_dev_mutex); + down_read(&vdpa_dev_lock); list_for_each_entry(mdev, &mdev_head, list) { if (idx < start) { idx++; @@ -492,14 +581,21 @@ idx++; } out: - mutex_unlock(&vdpa_dev_mutex); + up_read(&vdpa_dev_lock); cb->args0 = idx; return msg->len; } +#define VDPA_DEV_NET_ATTRS_MASK 
(BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR) | \ + BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MTU) | \ + BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) + static int vdpa_nl_cmd_dev_add_set_doit(struct sk_buff *skb, struct genl_info *info) { + struct vdpa_dev_set_config config = {}; + struct nlattr **nl_attrs = info->attrs; struct vdpa_mgmt_dev *mdev; + const u8 *macaddr; const char *name; int err = 0; @@ -508,17 +604,53 @@ name = nla_data(info->attrsVDPA_ATTR_DEV_NAME); - mutex_lock(&vdpa_dev_mutex); + if (nl_attrsVDPA_ATTR_DEV_NET_CFG_MACADDR) { + macaddr = nla_data(nl_attrsVDPA_ATTR_DEV_NET_CFG_MACADDR); + memcpy(config.net.mac, macaddr, sizeof(config.net.mac)); + config.mask |= BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR); + } + if (nl_attrsVDPA_ATTR_DEV_NET_CFG_MTU) { + config.net.mtu = + nla_get_u16(nl_attrsVDPA_ATTR_DEV_NET_CFG_MTU); + config.mask |= BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MTU); + } + if (nl_attrsVDPA_ATTR_DEV_NET_CFG_MAX_VQP) { + config.net.max_vq_pairs = + nla_get_u16(nl_attrsVDPA_ATTR_DEV_NET_CFG_MAX_VQP); + if (!config.net.max_vq_pairs) { + NL_SET_ERR_MSG_MOD(info->extack, + "At least one pair of VQs is required"); + return -EINVAL; + } + config.mask |= BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP); + } + + /* Skip checking capability if user didn't prefer to configure any + * device networking attributes. It is likely that user might have used + * a device specific method to configure such attributes or using device + * default attributes. + */ + if ((config.mask & VDPA_DEV_NET_ATTRS_MASK) && + !netlink_capable(skb, CAP_NET_ADMIN)) + return -EPERM; + + down_write(&vdpa_dev_lock); mdev = vdpa_mgmtdev_get_from_attr(info->attrs); if (IS_ERR(mdev)) { NL_SET_ERR_MSG_MOD(info->extack, "Fail to find the specified management device"); err = PTR_ERR(mdev); goto err; } + if ((config.mask & mdev->config_attr_mask) != config.mask) { + NL_SET_ERR_MSG_MOD(info->extack, + "All provided attributes are not supported"); + err = -EOPNOTSUPP; + goto err; + } - err = mdev->ops->dev_add(mdev, name); + err = mdev->ops->dev_add(mdev, name, &config); err: - mutex_unlock(&vdpa_dev_mutex); + up_write(&vdpa_dev_lock); return err; } @@ -534,7 +666,7 @@ return -EINVAL; name = nla_data(info->attrsVDPA_ATTR_DEV_NAME); - mutex_lock(&vdpa_dev_mutex); + down_write(&vdpa_dev_lock); dev = bus_find_device(&vdpa_bus, NULL, name, vdpa_name_match); if (!dev) { NL_SET_ERR_MSG_MOD(info->extack, "device not found"); @@ -552,7 +684,7 @@ mdev_err: put_device(dev); dev_err: - mutex_unlock(&vdpa_dev_mutex); + up_write(&vdpa_dev_lock); return err; } @@ -561,6 +693,7 @@ int flags, struct netlink_ext_ack *extack) { u16 max_vq_size; + u16 min_vq_size = 1; u32 device_id; u32 vendor_id; void *hdr; @@ -577,6 +710,8 @@ device_id = vdev->config->get_device_id(vdev); vendor_id = vdev->config->get_vendor_id(vdev); max_vq_size = vdev->config->get_vq_num_max(vdev); + if (vdev->config->get_vq_num_min) + min_vq_size = vdev->config->get_vq_num_min(vdev); err = -EMSGSIZE; if (nla_put_string(msg, VDPA_ATTR_DEV_NAME, dev_name(&vdev->dev))) @@ -589,6 +724,8 @@ goto msg_err; if (nla_put_u16(msg, VDPA_ATTR_DEV_MAX_VQ_SIZE, max_vq_size)) goto msg_err; + if (nla_put_u16(msg, VDPA_ATTR_DEV_MIN_VQ_SIZE, min_vq_size)) + goto msg_err; genlmsg_end(msg, hdr); return 0; @@ -613,7 +750,7 @@ if (!msg) return -ENOMEM; - mutex_lock(&vdpa_dev_mutex); + down_read(&vdpa_dev_lock); dev = bus_find_device(&vdpa_bus, NULL, devname, vdpa_name_match); if (!dev) { NL_SET_ERR_MSG_MOD(info->extack, "device not found"); @@ -626,14 +763,19 @@ goto mdev_err; } err = vdpa_dev_fill(vdev, msg, 
info->snd_portid, info->snd_seq, 0, info->extack); - if (!err) - err = genlmsg_reply(msg, info); + if (err) + goto mdev_err; + + err = genlmsg_reply(msg, info); + put_device(dev); + up_read(&vdpa_dev_lock); + return err; + mdev_err: put_device(dev); err: - mutex_unlock(&vdpa_dev_mutex); - if (err) - nlmsg_free(msg); + up_read(&vdpa_dev_lock); + nlmsg_free(msg); return err; } @@ -674,17 +816,347 @@ info.start_idx = cb->args0; info.idx = 0; - mutex_lock(&vdpa_dev_mutex); + down_read(&vdpa_dev_lock); bus_for_each_dev(&vdpa_bus, NULL, &info, vdpa_dev_dump); - mutex_unlock(&vdpa_dev_mutex); + up_read(&vdpa_dev_lock); cb->args0 = info.idx; return msg->len; } +static int vdpa_dev_net_mq_config_fill(struct vdpa_device *vdev, + struct sk_buff *msg, u64 features, + const struct virtio_net_config *config) +{ + u16 val_u16; + + if ((features & BIT_ULL(VIRTIO_NET_F_MQ)) == 0) + return 0; + + val_u16 = le16_to_cpu(config->max_virtqueue_pairs); + return nla_put_u16(msg, VDPA_ATTR_DEV_NET_CFG_MAX_VQP, val_u16); +} + +static int vdpa_dev_net_config_fill(struct vdpa_device *vdev, struct sk_buff *msg) +{ + struct virtio_net_config config = {}; + u64 features; + u16 val_u16; + + vdpa_get_config_unlocked(vdev, 0, &config, sizeof(config)); + + if (nla_put(msg, VDPA_ATTR_DEV_NET_CFG_MACADDR, sizeof(config.mac), + config.mac)) + return -EMSGSIZE; + + val_u16 = __virtio16_to_cpu(true, config.status); + if (nla_put_u16(msg, VDPA_ATTR_DEV_NET_STATUS, val_u16)) + return -EMSGSIZE; + + val_u16 = __virtio16_to_cpu(true, config.mtu); + if (nla_put_u16(msg, VDPA_ATTR_DEV_NET_CFG_MTU, val_u16)) + return -EMSGSIZE; + + features = vdev->config->get_driver_features(vdev); + if (nla_put_u64_64bit(msg, VDPA_ATTR_DEV_NEGOTIATED_FEATURES, features, + VDPA_ATTR_PAD)) + return -EMSGSIZE; + + return vdpa_dev_net_mq_config_fill(vdev, msg, features, &config); +} + +static int +vdpa_dev_config_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid, u32 seq, + int flags, struct netlink_ext_ack *extack) +{ + u32 device_id; + void *hdr; + int err; + + down_read(&vdev->cf_lock); + hdr = genlmsg_put(msg, portid, seq, &vdpa_nl_family, flags, + VDPA_CMD_DEV_CONFIG_GET); + if (!hdr) { + err = -EMSGSIZE; + goto out; + } + + if (nla_put_string(msg, VDPA_ATTR_DEV_NAME, dev_name(&vdev->dev))) { + err = -EMSGSIZE; + goto msg_err; + } + + device_id = vdev->config->get_device_id(vdev); + if (nla_put_u32(msg, VDPA_ATTR_DEV_ID, device_id)) { + err = -EMSGSIZE; + goto msg_err; + } + + switch (device_id) { + case VIRTIO_ID_NET: + err = vdpa_dev_net_config_fill(vdev, msg); + break; + default: + err = -EOPNOTSUPP; + break; + } + if (err) + goto msg_err; + + up_read(&vdev->cf_lock); + genlmsg_end(msg, hdr); + return 0; + +msg_err: + genlmsg_cancel(msg, hdr); +out: + up_read(&vdev->cf_lock); + return err; +} + +static int vdpa_fill_stats_rec(struct vdpa_device *vdev, struct sk_buff *msg, + struct genl_info *info, u32 index) +{ + struct virtio_net_config config = {}; + u64 features; + u16 max_vqp; + u8 status; + int err; + + status = vdev->config->get_status(vdev); + if (!(status & VIRTIO_CONFIG_S_FEATURES_OK)) { + NL_SET_ERR_MSG_MOD(info->extack, "feature negotiation not complete"); + return -EAGAIN; + } + vdpa_get_config_unlocked(vdev, 0, &config, sizeof(config)); + + max_vqp = __virtio16_to_cpu(true, config.max_virtqueue_pairs); + if (nla_put_u16(msg, VDPA_ATTR_DEV_NET_CFG_MAX_VQP, max_vqp)) + return -EMSGSIZE; + + features = vdev->config->get_driver_features(vdev); + if (nla_put_u64_64bit(msg, VDPA_ATTR_DEV_NEGOTIATED_FEATURES, + features, 
VDPA_ATTR_PAD)) + return -EMSGSIZE; + + if (nla_put_u32(msg, VDPA_ATTR_DEV_QUEUE_INDEX, index)) + return -EMSGSIZE; + + err = vdev->config->get_vendor_vq_stats(vdev, index, msg, info->extack); + if (err) + return err; + + return 0; +} + +static int vendor_stats_fill(struct vdpa_device *vdev, struct sk_buff *msg, + struct genl_info *info, u32 index) +{ + int err; + + down_read(&vdev->cf_lock); + if (!vdev->config->get_vendor_vq_stats) { + err = -EOPNOTSUPP; + goto out; + } + + err = vdpa_fill_stats_rec(vdev, msg, info, index); +out: + up_read(&vdev->cf_lock); + return err; +} + +static int vdpa_dev_vendor_stats_fill(struct vdpa_device *vdev, + struct sk_buff *msg, + struct genl_info *info, u32 index) +{ + u32 device_id; + void *hdr; + int err; + u32 portid = info->snd_portid; + u32 seq = info->snd_seq; + u32 flags = 0; + + hdr = genlmsg_put(msg, portid, seq, &vdpa_nl_family, flags, + VDPA_CMD_DEV_VSTATS_GET); + if (!hdr) + return -EMSGSIZE; + + if (nla_put_string(msg, VDPA_ATTR_DEV_NAME, dev_name(&vdev->dev))) { + err = -EMSGSIZE; + goto undo_msg; + } + + device_id = vdev->config->get_device_id(vdev); + if (nla_put_u32(msg, VDPA_ATTR_DEV_ID, device_id)) { + err = -EMSGSIZE; + goto undo_msg; + } + + switch (device_id) { + case VIRTIO_ID_NET: + if (index > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX) { + NL_SET_ERR_MSG_MOD(info->extack, "queue index excceeds max value"); + err = -ERANGE; + break; + } + + err = vendor_stats_fill(vdev, msg, info, index); + break; + default: + err = -EOPNOTSUPP; + break; + } + genlmsg_end(msg, hdr); + + return err; + +undo_msg: + genlmsg_cancel(msg, hdr); + return err; +} + +static int vdpa_nl_cmd_dev_config_get_doit(struct sk_buff *skb, struct genl_info *info) +{ + struct vdpa_device *vdev; + struct sk_buff *msg; + const char *devname; + struct device *dev; + int err; + + if (!info->attrsVDPA_ATTR_DEV_NAME) + return -EINVAL; + devname = nla_data(info->attrsVDPA_ATTR_DEV_NAME); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (!msg) + return -ENOMEM; + + down_read(&vdpa_dev_lock); + dev = bus_find_device(&vdpa_bus, NULL, devname, vdpa_name_match); + if (!dev) { + NL_SET_ERR_MSG_MOD(info->extack, "device not found"); + err = -ENODEV; + goto dev_err; + } + vdev = container_of(dev, struct vdpa_device, dev); + if (!vdev->mdev) { + NL_SET_ERR_MSG_MOD(info->extack, "unmanaged vdpa device"); + err = -EINVAL; + goto mdev_err; + } + err = vdpa_dev_config_fill(vdev, msg, info->snd_portid, info->snd_seq, + 0, info->extack); + if (!err) + err = genlmsg_reply(msg, info); + +mdev_err: + put_device(dev); +dev_err: + up_read(&vdpa_dev_lock); + if (err) + nlmsg_free(msg); + return err; +} + +static int vdpa_dev_config_dump(struct device *dev, void *data) +{ + struct vdpa_device *vdev = container_of(dev, struct vdpa_device, dev); + struct vdpa_dev_dump_info *info = data; + int err; + + if (!vdev->mdev) + return 0; + if (info->idx < info->start_idx) { + info->idx++; + return 0; + } + err = vdpa_dev_config_fill(vdev, info->msg, NETLINK_CB(info->cb->skb).portid, + info->cb->nlh->nlmsg_seq, NLM_F_MULTI, + info->cb->extack); + if (err) + return err; + + info->idx++; + return 0; +} + +static int +vdpa_nl_cmd_dev_config_get_dumpit(struct sk_buff *msg, struct netlink_callback *cb) +{ + struct vdpa_dev_dump_info info; + + info.msg = msg; + info.cb = cb; + info.start_idx = cb->args0; + info.idx = 0; + + down_read(&vdpa_dev_lock); + bus_for_each_dev(&vdpa_bus, NULL, &info, vdpa_dev_config_dump); + up_read(&vdpa_dev_lock); + cb->args0 = info.idx; + return msg->len; +} + +static int 
vdpa_nl_cmd_dev_stats_get_doit(struct sk_buff *skb, + struct genl_info *info) +{ + struct vdpa_device *vdev; + struct sk_buff *msg; + const char *devname; + struct device *dev; + u32 index; + int err; + + if (!info->attrsVDPA_ATTR_DEV_NAME) + return -EINVAL; + + if (!info->attrsVDPA_ATTR_DEV_QUEUE_INDEX) + return -EINVAL; + + devname = nla_data(info->attrsVDPA_ATTR_DEV_NAME); + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (!msg) + return -ENOMEM; + + index = nla_get_u32(info->attrsVDPA_ATTR_DEV_QUEUE_INDEX); + down_read(&vdpa_dev_lock); + dev = bus_find_device(&vdpa_bus, NULL, devname, vdpa_name_match); + if (!dev) { + NL_SET_ERR_MSG_MOD(info->extack, "device not found"); + err = -ENODEV; + goto dev_err; + } + vdev = container_of(dev, struct vdpa_device, dev); + if (!vdev->mdev) { + NL_SET_ERR_MSG_MOD(info->extack, "unmanaged vdpa device"); + err = -EINVAL; + goto mdev_err; + } + err = vdpa_dev_vendor_stats_fill(vdev, msg, info, index); + if (err) + goto mdev_err; + + err = genlmsg_reply(msg, info); + + put_device(dev); + up_read(&vdpa_dev_lock); + + return err; + +mdev_err: + put_device(dev); +dev_err: + nlmsg_free(msg); + up_read(&vdpa_dev_lock); + return err; +} + static const struct nla_policy vdpa_nl_policyVDPA_ATTR_MAX + 1 = { VDPA_ATTR_MGMTDEV_BUS_NAME = { .type = NLA_NUL_STRING }, VDPA_ATTR_MGMTDEV_DEV_NAME = { .type = NLA_STRING }, VDPA_ATTR_DEV_NAME = { .type = NLA_STRING }, + VDPA_ATTR_DEV_NET_CFG_MACADDR = NLA_POLICY_ETH_ADDR, + /* virtio spec 1.1 section 5.1.4.1 for valid MTU range */ + VDPA_ATTR_DEV_NET_CFG_MTU = NLA_POLICY_MIN(NLA_U16, 68), }; static const struct genl_ops vdpa_nl_ops = { @@ -712,6 +1184,18 @@ .doit = vdpa_nl_cmd_dev_get_doit, .dumpit = vdpa_nl_cmd_dev_get_dumpit, }, + { + .cmd = VDPA_CMD_DEV_CONFIG_GET, + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, + .doit = vdpa_nl_cmd_dev_config_get_doit, + .dumpit = vdpa_nl_cmd_dev_config_get_dumpit, + }, + { + .cmd = VDPA_CMD_DEV_VSTATS_GET, + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, + .doit = vdpa_nl_cmd_dev_stats_get_doit, + .flags = GENL_ADMIN_PERM, + }, }; static struct genl_family vdpa_nl_family __ro_after_init = {
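Taken together, the core changes above rework the parent-driver contract: vdpa_alloc_device() now takes the number of virtqueue groups, the number of address spaces and a use_va flag, devices created through the management interface receive the netlink-supplied attributes in a vdpa_dev_set_config, and config accesses are serialized by the new cf_lock. A minimal sketch of a parent driver following the new contract (the my_vdpa/my_ops names, the config handling and the virtqueue count are illustrative, not part of this patch):

struct my_vdpa {
        struct vdpa_device vdpa;        /* must be the first member */
        /* driver-private state ... */
};

static const struct vdpa_config_ops my_ops = {
        /* .get_device_features, .set_driver_features, .set_map, ... */
};

static int my_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
                      const struct vdpa_dev_set_config *config)
{
        struct my_vdpa *my;

        /* One virtqueue group, one address space, PA-based mappings. */
        my = vdpa_alloc_device(struct my_vdpa, vdpa, NULL /* parent */,
                               &my_ops, 1 /* ngroups */, 1 /* nas */,
                               name, false /* use_va */);
        if (IS_ERR(my))
                return PTR_ERR(my);

        if (config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR)) {
                /* honour the MAC address passed down from netlink */
        }

        my->vdpa.mdev = mdev;
        return _vdpa_register_device(&my->vdpa, 2 /* nvqs */);
}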
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/vdpa_sim/vdpa_sim.c
Changed
@@ -52,6 +52,17 @@
 	return vdpa_to_sim(vdpa);
 }
 
+static void vdpasim_vq_notify(struct vringh *vring)
+{
+	struct vdpasim_virtqueue *vq =
+		container_of(vring, struct vdpasim_virtqueue, vring);
+
+	if (!vq->cb)
+		return;
+
+	vq->cb(vq->private);
+}
+
 static void vdpasim_queue_ready(struct vdpasim *vdpasim, unsigned int idx)
 {
 	struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx];
@@ -63,6 +74,8 @@
 			  (uintptr_t)vq->driver_addr,
 			  (struct vring_used *)
 			  (uintptr_t)vq->device_addr);
+
+	vq->vring.notify = vdpasim_vq_notify;
 }
 
 static void vdpasim_vq_reset(struct vdpasim *vdpasim,
@@ -76,6 +89,8 @@
 	vq->private = NULL;
 	vringh_init_iotlb(&vq->vring, vdpasim->dev_attr.supported_features,
 			  VDPASIM_QUEUE_MAX, false, NULL, NULL, NULL);
+
+	vq->vring.notify = NULL;
 }
 
 static void vdpasim_do_reset(struct vdpasim *vdpasim)
@@ -221,8 +236,8 @@
 	else
 		ops = &vdpasim_net_config_ops;
 
-	vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops,
-				    dev_attr->name);
+	vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops, 1,
+				    1, dev_attr->name, false);
 	if (!vdpasim)
 		goto err_alloc;
 
@@ -363,14 +378,19 @@
 	return VDPASIM_QUEUE_ALIGN;
 }
 
-static u64 vdpasim_get_features(struct vdpa_device *vdpa)
+static u32 vdpasim_get_vq_group(struct vdpa_device *vdpa, u16 idx)
+{
+	return 0;
+}
+
+static u64 vdpasim_get_device_features(struct vdpa_device *vdpa)
 {
 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
 
 	return vdpasim->dev_attr.supported_features;
 }
 
-static int vdpasim_set_features(struct vdpa_device *vdpa, u64 features)
+static int vdpasim_set_driver_features(struct vdpa_device *vdpa, u64 features)
 {
 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
 
@@ -383,6 +403,13 @@
 	return 0;
 }
 
+static u64 vdpasim_get_driver_features(struct vdpa_device *vdpa)
+{
+	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+
+	return vdpasim->features;
+}
+
 static void vdpasim_set_config_cb(struct vdpa_device *vdpa,
 				  struct vdpa_callback *cb)
 {
@@ -491,7 +518,7 @@
 	return range;
 }
 
-static int vdpasim_set_map(struct vdpa_device *vdpa,
+static int vdpasim_set_map(struct vdpa_device *vdpa, unsigned int asid,
 			   struct vhost_iotlb *iotlb)
 {
 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
@@ -518,21 +545,23 @@
 	return ret;
 }
 
-static int vdpasim_dma_map(struct vdpa_device *vdpa, u64 iova, u64 size,
-			   u64 pa, u32 perm)
+static int vdpasim_dma_map(struct vdpa_device *vdpa, unsigned int asid,
+			   u64 iova, u64 size,
+			   u64 pa, u32 perm, void *opaque)
 {
 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
 	int ret;
 
 	spin_lock(&vdpasim->iommu_lock);
-	ret = vhost_iotlb_add_range(vdpasim->iommu, iova, iova + size - 1, pa,
-				    perm);
+	ret = vhost_iotlb_add_range_ctx(vdpasim->iommu, iova, iova + size - 1,
+					pa, perm, opaque);
 	spin_unlock(&vdpasim->iommu_lock);
 
 	return ret;
}
 
-static int vdpasim_dma_unmap(struct vdpa_device *vdpa, u64 iova, u64 size)
+static int vdpasim_dma_unmap(struct vdpa_device *vdpa, unsigned int asid,
+			     u64 iova, u64 size)
 {
 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
 
@@ -565,8 +594,10 @@
 	.set_vq_state           = vdpasim_set_vq_state,
 	.get_vq_state           = vdpasim_get_vq_state,
 	.get_vq_align           = vdpasim_get_vq_align,
-	.get_features           = vdpasim_get_features,
-	.set_features           = vdpasim_set_features,
+	.get_vq_group           = vdpasim_get_vq_group,
+	.get_device_features    = vdpasim_get_device_features,
+	.set_driver_features    = vdpasim_set_driver_features,
+	.get_driver_features    = vdpasim_get_driver_features,
 	.set_config_cb          = vdpasim_set_config_cb,
 	.get_vq_num_max         = vdpasim_get_vq_num_max,
 	.get_device_id          = vdpasim_get_device_id,
@@ -594,8 +625,10 @@
 	.set_vq_state           = vdpasim_set_vq_state,
 	.get_vq_state           = vdpasim_get_vq_state,
 	.get_vq_align           = vdpasim_get_vq_align,
-	.get_features           = vdpasim_get_features,
-	.set_features           = vdpasim_set_features,
+	.get_vq_group           = vdpasim_get_vq_group,
+	.get_device_features    = vdpasim_get_device_features,
+	.set_driver_features    = vdpasim_set_driver_features,
+	.get_driver_features    = vdpasim_get_driver_features,
 	.set_config_cb          = vdpasim_set_config_cb,
 	.get_vq_num_max         = vdpasim_get_vq_num_max,
 	.get_device_id          = vdpasim_get_device_id,
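The notify hook added above is what turns a completed request in the simulator into a driver-visible interrupt: when vringh fires vq->vring.notify, vdpasim_vq_notify() forwards to the callback the bus installed through ops->set_vq_cb(). A hedged sketch of the completion side (the helper name is made up; the vringh IOTLB calls are the existing API):

static void my_sim_complete(struct vdpasim_virtqueue *vq, u16 head, u32 len)
{
        /* Publish the used entry for the descriptor chain ... */
        vringh_complete_iotlb(&vq->vring, head, len);

        /* ... then let vringh decide whether the driver wants a signal;
         * vringh_notify() ends up in vdpasim_vq_notify() above, which in
         * turn fires the callback installed via ops->set_vq_cb(). */
        if (vringh_need_notify_iotlb(&vq->vring) > 0)
                vringh_notify(&vq->vring);
}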
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/vdpa_sim/vdpa_sim.h
Changed
@@ -61,6 +61,7 @@
 	u32 status;
 	u32 generation;
 	u64 features;
+	u32 groups;
 	/* spinlock to synchronize iommu table */
 	spinlock_t iommu_lock;
 };
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
Changed
@@ -105,7 +105,8 @@
 	.release = vdpasim_blk_mgmtdev_release,
 };
 
-static int vdpasim_blk_dev_add(struct vdpa_mgmt_dev *mdev, const char *name)
+static int vdpasim_blk_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
+			       const struct vdpa_dev_set_config *config)
 {
 	struct vdpasim_dev_attr dev_attr = {};
 	struct vdpasim *simdev;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
Changed
@@ -127,7 +127,8 @@
 	.release = vdpasim_net_mgmtdev_release,
 };
 
-static int vdpasim_net_dev_add(struct vdpa_mgmt_dev *mdev, const char *name)
+static int vdpasim_net_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
+			       const struct vdpa_dev_set_config *config)
 {
 	struct vdpasim_dev_attr dev_attr = {};
 	struct vdpasim *simdev;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vdpa/virtio_pci/vp_vdpa.c
Changed
@@ -32,7 +32,7 @@ struct vp_vdpa { struct vdpa_device vdpa; - struct virtio_pci_modern_device mdev; + struct virtio_pci_modern_device *mdev; struct vp_vring *vring; struct vdpa_callback config_cb; char msix_nameVP_VDPA_NAME_SIZE; @@ -41,6 +41,12 @@ int vectors; }; +struct vp_vdpa_mgmtdev { + struct vdpa_mgmt_dev mgtdev; + struct virtio_pci_modern_device *mdev; + struct vp_vdpa *vp_vdpa; +}; + static struct vp_vdpa *vdpa_to_vp(struct vdpa_device *vdpa) { return container_of(vdpa, struct vp_vdpa, vdpa); @@ -50,17 +56,22 @@ { struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa); - return &vp_vdpa->mdev; + return vp_vdpa->mdev; +} + +static struct virtio_pci_modern_device *vp_vdpa_to_mdev(struct vp_vdpa *vp_vdpa) +{ + return vp_vdpa->mdev; } -static u64 vp_vdpa_get_features(struct vdpa_device *vdpa) +static u64 vp_vdpa_get_device_features(struct vdpa_device *vdpa) { struct virtio_pci_modern_device *mdev = vdpa_to_mdev(vdpa); return vp_modern_get_features(mdev); } -static int vp_vdpa_set_features(struct vdpa_device *vdpa, u64 features) +static int vp_vdpa_set_driver_features(struct vdpa_device *vdpa, u64 features) { struct virtio_pci_modern_device *mdev = vdpa_to_mdev(vdpa); @@ -69,6 +80,13 @@ return 0; } +static u64 vp_vdpa_get_driver_features(struct vdpa_device *vdpa) +{ + struct virtio_pci_modern_device *mdev = vdpa_to_mdev(vdpa); + + return vp_modern_get_driver_features(mdev); +} + static u8 vp_vdpa_get_status(struct vdpa_device *vdpa) { struct virtio_pci_modern_device *mdev = vdpa_to_mdev(vdpa); @@ -89,7 +107,7 @@ static void vp_vdpa_free_irq(struct vp_vdpa *vp_vdpa) { - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); struct pci_dev *pdev = mdev->pci_dev; int i; @@ -136,7 +154,7 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa) { - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); struct pci_dev *pdev = mdev->pci_dev; int i, ret, irq; int queues = vp_vdpa->queues; @@ -191,7 +209,7 @@ static void vp_vdpa_set_status(struct vdpa_device *vdpa, u8 status) { struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa); - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); u8 s = vp_vdpa_get_status(vdpa); if (status & VIRTIO_CONFIG_S_DRIVER_OK && @@ -205,7 +223,7 @@ static int vp_vdpa_reset(struct vdpa_device *vdpa) { struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa); - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); u8 s = vp_vdpa_get_status(vdpa); vp_modern_set_status(mdev, 0); @@ -365,7 +383,7 @@ void *buf, unsigned int len) { struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa); - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); u8 old, new; u8 *p; int i; @@ -385,7 +403,7 @@ unsigned int len) { struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa); - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); const u8 *p = buf; int i; @@ -405,7 +423,7 @@ vp_vdpa_get_vq_notification(struct vdpa_device *vdpa, u16 qid) { struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa); - struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev; + struct virtio_pci_modern_device *mdev = vp_vdpa_to_mdev(vp_vdpa); struct vdpa_notification_area notify; notify.addr = vp_vdpa->vringqid.notify_pa; @@ -415,8 +433,9 @@ } static const struct 
vdpa_config_ops vp_vdpa_ops = { - .get_features = vp_vdpa_get_features, - .set_features = vp_vdpa_set_features, + .get_device_features = vp_vdpa_get_device_features, + .set_driver_features = vp_vdpa_set_driver_features, + .get_driver_features = vp_vdpa_get_driver_features, .get_status = vp_vdpa_get_status, .set_status = vp_vdpa_set_status, .reset = vp_vdpa_reset, @@ -446,38 +465,31 @@ pci_free_irq_vectors(data); } -static int vp_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id) +static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name, + const struct vdpa_dev_set_config *add_config) { - struct virtio_pci_modern_device *mdev; + struct vp_vdpa_mgmtdev *vp_vdpa_mgtdev = + container_of(v_mdev, struct vp_vdpa_mgmtdev, mgtdev); + + struct virtio_pci_modern_device *mdev = vp_vdpa_mgtdev->mdev; + struct pci_dev *pdev = mdev->pci_dev; struct device *dev = &pdev->dev; - struct vp_vdpa *vp_vdpa; + struct vp_vdpa *vp_vdpa = NULL; int ret, i; - ret = pcim_enable_device(pdev); - if (ret) - return ret; - vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa, - dev, &vp_vdpa_ops, NULL); + dev, &vp_vdpa_ops, 1, 1, name, false); + if (IS_ERR(vp_vdpa)) { dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n"); return PTR_ERR(vp_vdpa); } - mdev = &vp_vdpa->mdev; - mdev->pci_dev = pdev; - - ret = vp_modern_probe(mdev); - if (ret) { - dev_err(&pdev->dev, "Failed to probe modern PCI device\n"); - goto err; - } - - pci_set_master(pdev); - pci_set_drvdata(pdev, vp_vdpa); + vp_vdpa_mgtdev->vp_vdpa = vp_vdpa; vp_vdpa->vdpa.dma_dev = &pdev->dev; vp_vdpa->queues = vp_modern_get_num_queues(mdev); + vp_vdpa->mdev = mdev; ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev); if (ret) { @@ -501,13 +513,15 @@ vp_modern_map_vq_notify(mdev, i, &vp_vdpa->vringi.notify_pa); if (!vp_vdpa->vringi.notify) { + ret = -EINVAL; dev_warn(&pdev->dev, "Fail to map vq notify %d\n", i); goto err; } } vp_vdpa->config_irq = VIRTIO_MSI_NO_VECTOR; - ret = vdpa_register_device(&vp_vdpa->vdpa, vp_vdpa->queues); + vp_vdpa->vdpa.mdev = &vp_vdpa_mgtdev->mgtdev; + ret = _vdpa_register_device(&vp_vdpa->vdpa, vp_vdpa->queues); if (ret) { dev_err(&pdev->dev, "Failed to register to vdpa bus\n"); goto err; @@ -520,12 +534,104 @@ return ret; } +static void vp_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev, + struct vdpa_device *dev) +{ + struct vp_vdpa_mgmtdev *vp_vdpa_mgtdev = + container_of(v_mdev, struct vp_vdpa_mgmtdev, mgtdev); + + struct vp_vdpa *vp_vdpa = vp_vdpa_mgtdev->vp_vdpa; + + _vdpa_unregister_device(&vp_vdpa->vdpa); + vp_vdpa_mgtdev->vp_vdpa = NULL; +} + +static const struct vdpa_mgmtdev_ops vp_vdpa_mdev_ops = { + .dev_add = vp_vdpa_dev_add, + .dev_del = vp_vdpa_dev_del, +}; + +static int vp_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id) +{ + struct vp_vdpa_mgmtdev *vp_vdpa_mgtdev = NULL; + struct vdpa_mgmt_dev *mgtdev; + struct device *dev = &pdev->dev; + struct virtio_pci_modern_device *mdev = NULL; + struct virtio_device_id *mdev_id = NULL; + int err; + + vp_vdpa_mgtdev = kzalloc(sizeof(*vp_vdpa_mgtdev), GFP_KERNEL); + if (!vp_vdpa_mgtdev) + return -ENOMEM; + + mgtdev = &vp_vdpa_mgtdev->mgtdev; + mgtdev->ops = &vp_vdpa_mdev_ops; + mgtdev->device = dev; + + mdev = kzalloc(sizeof(struct virtio_pci_modern_device), GFP_KERNEL); + if (!mdev) { + err = -ENOMEM; + goto mdev_err; + } + + mdev_id = kzalloc(sizeof(struct virtio_device_id), GFP_KERNEL); + if (!mdev_id) { + err = -ENOMEM; + goto mdev_id_err; + } + + vp_vdpa_mgtdev->mdev = mdev; + mdev->pci_dev = pdev; + + err = 
pcim_enable_device(pdev); + if (err) { + goto probe_err; + } + + err = vp_modern_probe(mdev); + if (err) { + dev_err(&pdev->dev, "Failed to probe modern PCI device\n"); + goto probe_err; + } + + mdev_id->device = mdev->id.device; + mdev_id->vendor = mdev->id.vendor; + mgtdev->id_table = mdev_id; + mgtdev->max_supported_vqs = vp_modern_get_num_queues(mdev); + mgtdev->supported_features = vp_modern_get_features(mdev); + pci_set_master(pdev); + pci_set_drvdata(pdev, vp_vdpa_mgtdev); + + err = vdpa_mgmtdev_register(mgtdev); + if (err) { + dev_err(&pdev->dev, "Failed to register vdpa mgmtdev device\n"); + goto register_err; + } + + return 0; + +register_err: + vp_modern_remove(vp_vdpa_mgtdev->mdev); +probe_err: + kfree(mdev_id); +mdev_id_err: + kfree(mdev); +mdev_err: + kfree(vp_vdpa_mgtdev); + return err; +} + static void vp_vdpa_remove(struct pci_dev *pdev) { - struct vp_vdpa *vp_vdpa = pci_get_drvdata(pdev); + struct vp_vdpa_mgmtdev *vp_vdpa_mgtdev = pci_get_drvdata(pdev); + struct virtio_pci_modern_device *mdev = NULL; - vp_modern_remove(&vp_vdpa->mdev); - vdpa_unregister_device(&vp_vdpa->vdpa); + mdev = vp_vdpa_mgtdev->mdev; + vp_modern_remove(mdev); + vdpa_mgmtdev_unregister(&vp_vdpa_mgtdev->mgtdev); + kfree(vp_vdpa_mgtdev->mgtdev.id_table); + kfree(mdev); + kfree(vp_vdpa_mgtdev); } static struct pci_driver vp_vdpa_driver = {
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vhost/iotlb.c
Changed
@@ -36,25 +36,42 @@
 EXPORT_SYMBOL_GPL(vhost_iotlb_map_free);
 
 /**
- * vhost_iotlb_add_range - add a new range to vhost IOTLB
+ * vhost_iotlb_add_range_ctx - add a new range to vhost IOTLB
  * @iotlb: the IOTLB
  * @start: start of the IOVA range
  * @last: last of IOVA range
  * @addr: the address that is mapped to @start
  * @perm: access permission of this range
+ * @opaque: the opaque pointer for the new mapping
  *
  * Returns an error last is smaller than start or memory allocation
  * fails
 */
-int vhost_iotlb_add_range(struct vhost_iotlb *iotlb,
-			  u64 start, u64 last,
-			  u64 addr, unsigned int perm)
+int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb,
+			      u64 start, u64 last,
+			      u64 addr, unsigned int perm,
+			      void *opaque)
 {
 	struct vhost_iotlb_map *map;
 
 	if (last < start)
 		return -EFAULT;
 
+	/* If the range being mapped is [0, ULONG_MAX], split it into two entries
+	 * otherwise its size would overflow u64.
+	 */
+	if (start == 0 && last == ULONG_MAX) {
+		u64 mid = last / 2;
+		int err = vhost_iotlb_add_range_ctx(iotlb, start, mid, addr,
+						    perm, opaque);
+
+		if (err)
+			return err;
+
+		addr += mid + 1;
+		start = mid + 1;
+	}
+
 	if (iotlb->limit &&
 	    iotlb->nmaps == iotlb->limit &&
 	    iotlb->flags & VHOST_IOTLB_FLAG_RETIRE) {
@@ -71,6 +88,7 @@
 	map->last = last;
 	map->addr = addr;
 	map->perm = perm;
+	map->opaque = opaque;
 
 	iotlb->nmaps++;
 	vhost_iotlb_itree_insert(map, &iotlb->root);
@@ -80,6 +98,15 @@
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(vhost_iotlb_add_range_ctx);
+
+int vhost_iotlb_add_range(struct vhost_iotlb *iotlb,
+			  u64 start, u64 last,
+			  u64 addr, unsigned int perm)
+{
+	return vhost_iotlb_add_range_ctx(iotlb, start, last,
+					 addr, perm, NULL);
+}
 EXPORT_SYMBOL_GPL(vhost_iotlb_add_range);
 
 /**
@@ -99,6 +126,23 @@
 EXPORT_SYMBOL_GPL(vhost_iotlb_del_range);
 
 /**
+ * vhost_iotlb_init - initialize a vhost IOTLB
+ * @iotlb: the IOTLB that needs to be initialized
+ * @limit: maximum number of IOTLB entries
+ * @flags: VHOST_IOTLB_FLAG_XXX
+ */
+void vhost_iotlb_init(struct vhost_iotlb *iotlb, unsigned int limit,
+		      unsigned int flags)
+{
+	iotlb->root = RB_ROOT_CACHED;
+	iotlb->limit = limit;
+	iotlb->nmaps = 0;
+	iotlb->flags = flags;
+	INIT_LIST_HEAD(&iotlb->list);
+}
+EXPORT_SYMBOL_GPL(vhost_iotlb_init);
+
+/**
  * vhost_iotlb_alloc - add a new vhost IOTLB
  * @limit: maximum number of IOTLB entries
  * @flags: VHOST_IOTLB_FLAG_XXX
@@ -112,11 +156,7 @@
 	if (!iotlb)
 		return NULL;
 
-	iotlb->root = RB_ROOT_CACHED;
-	iotlb->limit = limit;
-	iotlb->nmaps = 0;
-	iotlb->flags = flags;
-	INIT_LIST_HEAD(&iotlb->list);
+	vhost_iotlb_init(iotlb, limit, flags);
 
 	return iotlb;
 }
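The new [0, ULONG_MAX] split deserves a note: a mapping covering the entire 64-bit IOVA space is 2^64 bytes, which does not fit in the u64 map->size, so it is stored as two half-space entries. The lower half is inserted by a single recursive call, after which the current invocation continues with [mid + 1, ULONG_MAX], whose size is representable. The arithmetic, spelled out as an illustration only:

static void iotlb_split_example(void)
{
        u64 start = 0, last = ULONG_MAX;
        u64 size = last - start + 1;            /* wraps around to 0 */
        u64 mid = last / 2;                     /* 0x7fffffffffffffff */
        u64 lo_size = mid - start + 1;          /* 1ULL << 63: fits in u64 */
        u64 hi_size = last - (mid + 1) + 1;     /* 1ULL << 63: fits in u64 */
}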
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vhost/vdpa.c
Changed
@@ -16,44 +16,131 @@ #include <linux/cdev.h> #include <linux/device.h> #include <linux/mm.h> +#include <linux/slab.h> #include <linux/iommu.h> #include <linux/uuid.h> #include <linux/vdpa.h> #include <linux/nospec.h> #include <linux/vhost.h> -#include <linux/virtio_net.h> #include "vhost.h" enum { VHOST_VDPA_BACKEND_FEATURES = (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2) | - (1ULL << VHOST_BACKEND_F_IOTLB_BATCH), + (1ULL << VHOST_BACKEND_F_IOTLB_BATCH) | + (1ULL << VHOST_BACKEND_F_IOTLB_ASID), }; #define VHOST_VDPA_DEV_MAX (1U << MINORBITS) +#define VHOST_VDPA_IOTLB_BUCKETS 16 + +struct vhost_vdpa_as { + struct hlist_node hash_link; + struct vhost_iotlb iotlb; + u32 id; +}; + struct vhost_vdpa { struct vhost_dev vdev; struct iommu_domain *domain; struct vhost_virtqueue *vqs; struct completion completion; struct vdpa_device *vdpa; + struct hlist_head asVHOST_VDPA_IOTLB_BUCKETS; struct device dev; struct cdev cdev; atomic_t opened; - int nvqs; + u32 nvqs; int virtio_id; int minor; struct eventfd_ctx *config_ctx; int in_batch; struct vdpa_iova_range range; + u32 batch_asid; }; static DEFINE_IDA(vhost_vdpa_ida); static dev_t vhost_vdpa_major; +static inline u32 iotlb_to_asid(struct vhost_iotlb *iotlb) +{ + struct vhost_vdpa_as *as = container_of(iotlb, struct + vhost_vdpa_as, iotlb); + return as->id; +} + +static struct vhost_vdpa_as *asid_to_as(struct vhost_vdpa *v, u32 asid) +{ + struct hlist_head *head = &v->asasid % VHOST_VDPA_IOTLB_BUCKETS; + struct vhost_vdpa_as *as; + + hlist_for_each_entry(as, head, hash_link) + if (as->id == asid) + return as; + + return NULL; +} + +static struct vhost_iotlb *asid_to_iotlb(struct vhost_vdpa *v, u32 asid) +{ + struct vhost_vdpa_as *as = asid_to_as(v, asid); + + if (!as) + return NULL; + + return &as->iotlb; +} + +static struct vhost_vdpa_as *vhost_vdpa_alloc_as(struct vhost_vdpa *v, u32 asid) +{ + struct hlist_head *head = &v->asasid % VHOST_VDPA_IOTLB_BUCKETS; + struct vhost_vdpa_as *as; + + if (asid_to_as(v, asid)) + return NULL; + + if (asid >= v->vdpa->nas) + return NULL; + + as = kmalloc(sizeof(*as), GFP_KERNEL); + if (!as) + return NULL; + + vhost_iotlb_init(&as->iotlb, 0, 0); + as->id = asid; + hlist_add_head(&as->hash_link, head); + + return as; +} + +static struct vhost_vdpa_as *vhost_vdpa_find_alloc_as(struct vhost_vdpa *v, + u32 asid) +{ + struct vhost_vdpa_as *as = asid_to_as(v, asid); + + if (as) + return as; + + return vhost_vdpa_alloc_as(v, asid); +} + +static int vhost_vdpa_remove_as(struct vhost_vdpa *v, u32 asid) +{ + struct vhost_vdpa_as *as = asid_to_as(v, asid); + + if (!as) + return -EINVAL; + + hlist_del(&as->hash_link); + vhost_iotlb_reset(&as->iotlb); + kfree(as); + + return 0; +} + static void handle_vq_kick(struct vhost_work *work) { struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, @@ -119,12 +206,13 @@ irq_bypass_unregister_producer(&vq->call_ctx.producer); } -static void vhost_vdpa_reset(struct vhost_vdpa *v) +static int vhost_vdpa_reset(struct vhost_vdpa *v) { struct vdpa_device *vdpa = v->vdpa; - vdpa_reset(vdpa); v->in_batch = 0; + + return vdpa_reset(vdpa); } static long vhost_vdpa_get_device_id(struct vhost_vdpa *v, u8 __user *argp) @@ -160,7 +248,8 @@ struct vdpa_device *vdpa = v->vdpa; const struct vdpa_config_ops *ops = vdpa->config; u8 status, status_old; - int ret, nvqs = v->nvqs; + u32 nvqs = v->nvqs; + int ret; u16 i; if (copy_from_user(&status, statusp, sizeof(status))) @@ -172,24 +261,24 @@ * Userspace shouldn't remove status bits unless reset the * status to 0. 
*/ - if (status != 0 && (ops->get_status(vdpa) & ~status) != 0) + if (status != 0 && (status_old & ~status) != 0) return -EINVAL; + if ((status_old & VIRTIO_CONFIG_S_DRIVER_OK) && !(status & VIRTIO_CONFIG_S_DRIVER_OK)) + for (i = 0; i < nvqs; i++) + vhost_vdpa_unsetup_vq_irq(v, i); + if (status == 0) { - ret = ops->reset(vdpa); + ret = vdpa_reset(vdpa); if (ret) return ret; } else - ops->set_status(vdpa, status); + vdpa_set_status(vdpa, status); if ((status & VIRTIO_CONFIG_S_DRIVER_OK) && !(status_old & VIRTIO_CONFIG_S_DRIVER_OK)) for (i = 0; i < nvqs; i++) vhost_vdpa_setup_vq_irq(v, i); - if ((status_old & VIRTIO_CONFIG_S_DRIVER_OK) && !(status & VIRTIO_CONFIG_S_DRIVER_OK)) - for (i = 0; i < nvqs; i++) - vhost_vdpa_unsetup_vq_irq(v, i); - return 0; } @@ -239,7 +328,6 @@ struct vhost_vdpa_config __user *c) { struct vdpa_device *vdpa = v->vdpa; - const struct vdpa_config_ops *ops = vdpa->config; struct vhost_vdpa_config config; unsigned long size = offsetof(struct vhost_vdpa_config, buf); u8 *buf; @@ -248,28 +336,32 @@ return -EFAULT; if (vhost_vdpa_config_validate(v, &config)) return -EINVAL; - buf = kvzalloc(config.len, GFP_KERNEL); - if (!buf) - return -ENOMEM; - if (copy_from_user(buf, c->buf, config.len)) { - kvfree(buf); - return -EFAULT; - } + buf = vmemdup_user(c->buf, config.len); + if (IS_ERR(buf)) + return PTR_ERR(buf); - ops->set_config(vdpa, config.off, buf, config.len); + vdpa_set_config(vdpa, config.off, buf, config.len); kvfree(buf); return 0; } +static bool vhost_vdpa_can_suspend(const struct vhost_vdpa *v) +{ + struct vdpa_device *vdpa = v->vdpa; + const struct vdpa_config_ops *ops = vdpa->config; + + return ops->suspend; +} + static long vhost_vdpa_get_features(struct vhost_vdpa *v, u64 __user *featurep) { struct vdpa_device *vdpa = v->vdpa; const struct vdpa_config_ops *ops = vdpa->config; u64 features; - features = ops->get_features(vdpa); + features = ops->get_device_features(vdpa); if (copy_to_user(featurep, &features, sizeof(features))) return -EFAULT; @@ -386,6 +478,22 @@ return 0; } +/* After a successful return of ioctl the device must not process more + * virtqueue descriptors. The device can answer to read or writes of config + * fields as if it were not suspended. In particular, writing to "queue_enable" + * with a value of 1 will not make the device start processing buffers. 
+ */ +static long vhost_vdpa_suspend(struct vhost_vdpa *v) +{ + struct vdpa_device *vdpa = v->vdpa; + const struct vdpa_config_ops *ops = vdpa->config; + + if (!ops->suspend) + return -EOPNOTSUPP; + + return ops->suspend(vdpa); +} + static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd, void __user *argp) { @@ -414,6 +522,24 @@ return -EFAULT; ops->set_vq_ready(vdpa, idx, s.num); return 0; + case VHOST_VDPA_GET_VRING_GROUP: + if (!ops->get_vq_group) + return -EOPNOTSUPP; + s.index = idx; + s.num = ops->get_vq_group(vdpa, idx); + if (s.num >= vdpa->ngroups) + return -EIO; + else if (copy_to_user(argp, &s, sizeof(s))) + return -EFAULT; + return 0; + case VHOST_VDPA_SET_GROUP_ASID: + if (copy_from_user(&s, argp, sizeof(s))) + return -EFAULT; + if (s.num >= vdpa->nas) + return -EINVAL; + if (!ops->set_group_asid) + return -EOPNOTSUPP; + return ops->set_group_asid(vdpa, idx, s.num); case VHOST_GET_VRING_BASE: r = ops->get_vq_state(v->vdpa, idx, &vq_state); if (r) @@ -475,7 +601,11 @@ if (cmd == VHOST_SET_BACKEND_FEATURES) { if (copy_from_user(&features, featurep, sizeof(features))) return -EFAULT; - if (features & ~VHOST_VDPA_BACKEND_FEATURES) + if (features & ~(VHOST_VDPA_BACKEND_FEATURES | + BIT_ULL(VHOST_BACKEND_F_SUSPEND))) + return -EOPNOTSUPP; + if ((features & BIT_ULL(VHOST_BACKEND_F_SUSPEND)) && + !vhost_vdpa_can_suspend(v)) return -EOPNOTSUPP; vhost_set_backend_features(&v->vdev, features); return 0; @@ -508,6 +638,15 @@ case VHOST_VDPA_GET_VRING_NUM: r = vhost_vdpa_get_vring_num(v, argp); break; + case VHOST_VDPA_GET_GROUP_NUM: + if (copy_to_user(argp, &v->vdpa->ngroups, + sizeof(v->vdpa->ngroups))) + r = -EFAULT; + break; + case VHOST_VDPA_GET_AS_NUM: + if (copy_to_user(argp, &v->vdpa->nas, sizeof(v->vdpa->nas))) + r = -EFAULT; + break; case VHOST_SET_LOG_BASE: case VHOST_SET_LOG_FD: r = -ENOIOCTLCMD; @@ -517,6 +656,8 @@ break; case VHOST_GET_BACKEND_FEATURES: features = VHOST_VDPA_BACKEND_FEATURES; + if (vhost_vdpa_can_suspend(v)) + features |= BIT_ULL(VHOST_BACKEND_F_SUSPEND); if (copy_to_user(featurep, &features, sizeof(features))) r = -EFAULT; break; @@ -529,6 +670,9 @@ case VHOST_VDPA_GET_VQS_COUNT: r = vhost_vdpa_get_vqs_count(v, argp); break; + case VHOST_VDPA_SUSPEND: + r = vhost_vdpa_suspend(v); + break; default: r = vhost_dev_ioctl(&v->vdev, cmd, argp); if (r == -ENOIOCTLCMD) @@ -540,35 +684,54 @@ return r; } -static void vhost_vdpa_iotlb_unmap(struct vhost_vdpa *v, u64 start, u64 last) +static void vhost_vdpa_pa_unmap(struct vhost_vdpa *v, + struct vhost_iotlb *iotlb, + u64 start, u64 last) { struct vhost_dev *dev = &v->vdev; - struct vhost_iotlb *iotlb = dev->iotlb; struct vhost_iotlb_map *map; struct page *page; unsigned long pfn, pinned; while ((map = vhost_iotlb_itree_first(iotlb, start, last)) != NULL) { - pinned = map->size >> PAGE_SHIFT; - for (pfn = map->addr >> PAGE_SHIFT; + pinned = PFN_DOWN(map->size); + for (pfn = PFN_DOWN(map->addr); pinned > 0; pfn++, pinned--) { page = pfn_to_page(pfn); if (map->perm & VHOST_ACCESS_WO) set_page_dirty_lock(page); unpin_user_page(page); } - atomic64_sub(map->size >> PAGE_SHIFT, &dev->mm->pinned_vm); + atomic64_sub(PFN_DOWN(map->size), &dev->mm->pinned_vm); vhost_iotlb_map_free(iotlb, map); } } -static void vhost_vdpa_iotlb_free(struct vhost_vdpa *v) +static void vhost_vdpa_va_unmap(struct vhost_vdpa *v, + struct vhost_iotlb *iotlb, + u64 start, u64 last) { - struct vhost_dev *dev = &v->vdev; + struct vhost_iotlb_map *map; + struct vdpa_map_file *map_file; + + while ((map = vhost_iotlb_itree_first(iotlb, 
start, last)) != NULL) { + map_file = (struct vdpa_map_file *)map->opaque; + fput(map_file->file); + kfree(map_file); + vhost_iotlb_map_free(iotlb, map); + } +} + +static void vhost_vdpa_iotlb_unmap(struct vhost_vdpa *v, + struct vhost_iotlb *iotlb, + u64 start, u64 last) +{ + struct vdpa_device *vdpa = v->vdpa; + + if (vdpa->use_va) + return vhost_vdpa_va_unmap(v, iotlb, start, last); - vhost_vdpa_iotlb_unmap(v, 0ULL, 0ULL - 1); - kfree(dev->iotlb); - dev->iotlb = NULL; + return vhost_vdpa_pa_unmap(v, iotlb, start, last); } static int perm_to_iommu_flags(u32 perm) @@ -593,87 +756,140 @@ return flags | IOMMU_CACHE; } -static int vhost_vdpa_map(struct vhost_vdpa *v, - u64 iova, u64 size, u64 pa, u32 perm) +static int vhost_vdpa_map(struct vhost_vdpa *v, struct vhost_iotlb *iotlb, + u64 iova, u64 size, u64 pa, u32 perm, void *opaque) { struct vhost_dev *dev = &v->vdev; struct vdpa_device *vdpa = v->vdpa; const struct vdpa_config_ops *ops = vdpa->config; + u32 asid = iotlb_to_asid(iotlb); int r = 0; - r = vhost_iotlb_add_range(dev->iotlb, iova, iova + size - 1, - pa, perm); + r = vhost_iotlb_add_range_ctx(iotlb, iova, iova + size - 1, + pa, perm, opaque); if (r) return r; if (ops->dma_map) { - r = ops->dma_map(vdpa, iova, size, pa, perm); + r = ops->dma_map(vdpa, asid, iova, size, pa, perm, opaque); } else if (ops->set_map) { if (!v->in_batch) - r = ops->set_map(vdpa, dev->iotlb); + r = ops->set_map(vdpa, asid, iotlb); } else { r = iommu_map(v->domain, iova, pa, size, perm_to_iommu_flags(perm)); } + if (r) { + vhost_iotlb_del_range(iotlb, iova, iova + size - 1); + return r; + } - if (r) - vhost_iotlb_del_range(dev->iotlb, iova, iova + size - 1); - else - atomic64_add(size >> PAGE_SHIFT, &dev->mm->pinned_vm); + if (!vdpa->use_va) + atomic64_add(PFN_DOWN(size), &dev->mm->pinned_vm); - return r; + return 0; } -static void vhost_vdpa_unmap(struct vhost_vdpa *v, u64 iova, u64 size) +static void vhost_vdpa_unmap(struct vhost_vdpa *v, + struct vhost_iotlb *iotlb, + u64 iova, u64 size) { - struct vhost_dev *dev = &v->vdev; struct vdpa_device *vdpa = v->vdpa; const struct vdpa_config_ops *ops = vdpa->config; + u32 asid = iotlb_to_asid(iotlb); - vhost_vdpa_iotlb_unmap(v, iova, iova + size - 1); + vhost_vdpa_iotlb_unmap(v, iotlb, iova, iova + size - 1); if (ops->dma_map) { - ops->dma_unmap(vdpa, iova, size); + ops->dma_unmap(vdpa, asid, iova, size); } else if (ops->set_map) { if (!v->in_batch) - ops->set_map(vdpa, dev->iotlb); + ops->set_map(vdpa, asid, iotlb); } else { iommu_unmap(v->domain, iova, size); } + + /* If we are in the middle of batch processing, delay the free + * of AS until BATCH_END. 
+ */ + if (!v->in_batch && !iotlb->nmaps) + vhost_vdpa_remove_as(v, asid); } -static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v, - struct vhost_iotlb_msg *msg) +static int vhost_vdpa_va_map(struct vhost_vdpa *v, + struct vhost_iotlb *iotlb, + u64 iova, u64 size, u64 uaddr, u32 perm) +{ + struct vhost_dev *dev = &v->vdev; + u64 offset, map_size, map_iova = iova; + struct vdpa_map_file *map_file; + struct vm_area_struct *vma; + int ret = 0; + + mmap_read_lock(dev->mm); + + while (size) { + vma = find_vma(dev->mm, uaddr); + if (!vma) { + ret = -EINVAL; + break; + } + map_size = min(size, vma->vm_end - uaddr); + if (!(vma->vm_file && (vma->vm_flags & VM_SHARED) && + !(vma->vm_flags & (VM_IO | VM_PFNMAP)))) + goto next; + + map_file = kzalloc(sizeof(*map_file), GFP_KERNEL); + if (!map_file) { + ret = -ENOMEM; + break; + } + offset = (vma->vm_pgoff << PAGE_SHIFT) + uaddr - vma->vm_start; + map_file->offset = offset; + map_file->file = get_file(vma->vm_file); + ret = vhost_vdpa_map(v, iotlb, map_iova, map_size, uaddr, + perm, map_file); + if (ret) { + fput(map_file->file); + kfree(map_file); + break; + } +next: + size -= map_size; + uaddr += map_size; + map_iova += map_size; + } + if (ret) + vhost_vdpa_unmap(v, iotlb, iova, map_iova - iova); + + mmap_read_unlock(dev->mm); + + return ret; +} + +static int vhost_vdpa_pa_map(struct vhost_vdpa *v, + struct vhost_iotlb *iotlb, + u64 iova, u64 size, u64 uaddr, u32 perm) { struct vhost_dev *dev = &v->vdev; - struct vhost_iotlb *iotlb = dev->iotlb; struct page **page_list; unsigned long list_size = PAGE_SIZE / sizeof(struct page *); unsigned int gup_flags = FOLL_LONGTERM; unsigned long npages, cur_base, map_pfn, last_pfn = 0; unsigned long lock_limit, sz2pin, nchunks, i; - u64 iova = msg->iova; + u64 start = iova; long pinned; int ret = 0; - if (msg->iova < v->range.first || !msg->size || - msg->iova > U64_MAX - msg->size + 1 || - msg->iova + msg->size - 1 > v->range.last) - return -EINVAL; - - if (vhost_iotlb_itree_first(iotlb, msg->iova, - msg->iova + msg->size - 1)) - return -EEXIST; - /* Limit the use of memory for bookkeeping */ page_list = (struct page **) __get_free_page(GFP_KERNEL); if (!page_list) return -ENOMEM; - if (msg->perm & VHOST_ACCESS_WO) + if (perm & VHOST_ACCESS_WO) gup_flags |= FOLL_WRITE; - npages = PAGE_ALIGN(msg->size + (iova & ~PAGE_MASK)) >> PAGE_SHIFT; + npages = PFN_UP(size + (iova & ~PAGE_MASK)); if (!npages) { ret = -EINVAL; goto free; @@ -681,13 +897,13 @@ mmap_read_lock(dev->mm); - lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT; + lock_limit = PFN_DOWN(rlimit(RLIMIT_MEMLOCK)); if (npages + atomic64_read(&dev->mm->pinned_vm) > lock_limit) { ret = -ENOMEM; goto unlock; } - cur_base = msg->uaddr & PAGE_MASK; + cur_base = uaddr & PAGE_MASK; iova &= PAGE_MASK; nchunks = 0; @@ -715,10 +931,10 @@ if (last_pfn && (this_pfn != last_pfn + 1)) { /* Pin a contiguous chunk of memory */ - csize = (last_pfn - map_pfn + 1) << PAGE_SHIFT; - ret = vhost_vdpa_map(v, iova, csize, - map_pfn << PAGE_SHIFT, - msg->perm); + csize = PFN_PHYS(last_pfn - map_pfn + 1); + ret = vhost_vdpa_map(v, iotlb, iova, csize, + PFN_PHYS(map_pfn), + perm, NULL); if (ret) { /* * Unpin the pages that are left unmapped @@ -741,13 +957,13 @@ last_pfn = this_pfn; } - cur_base += pinned << PAGE_SHIFT; + cur_base += PFN_PHYS(pinned); npages -= pinned; } /* Pin the rest chunk */ - ret = vhost_vdpa_map(v, iova, (last_pfn - map_pfn + 1) << PAGE_SHIFT, - map_pfn << PAGE_SHIFT, msg->perm); + ret = vhost_vdpa_map(v, iotlb, iova, PFN_PHYS(last_pfn - map_pfn + 
1), + PFN_PHYS(map_pfn), perm, NULL); out: if (ret) { if (nchunks) { @@ -766,21 +982,47 @@ for (pfn = map_pfn; pfn <= last_pfn; pfn++) unpin_user_page(pfn_to_page(pfn)); } - vhost_vdpa_unmap(v, msg->iova, msg->size); + vhost_vdpa_unmap(v, iotlb, start, size); } unlock: mmap_read_unlock(dev->mm); free: free_page((unsigned long)page_list); return ret; + +} + +static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v, + struct vhost_iotlb *iotlb, + struct vhost_iotlb_msg *msg) +{ + struct vdpa_device *vdpa = v->vdpa; + + if (msg->iova < v->range.first || !msg->size || + msg->iova > U64_MAX - msg->size + 1 || + msg->iova + msg->size - 1 > v->range.last) + return -EINVAL; + + if (vhost_iotlb_itree_first(iotlb, msg->iova, + msg->iova + msg->size - 1)) + return -EEXIST; + + if (vdpa->use_va) + return vhost_vdpa_va_map(v, iotlb, msg->iova, msg->size, + msg->uaddr, msg->perm); + + return vhost_vdpa_pa_map(v, iotlb, msg->iova, msg->size, msg->uaddr, + msg->perm); } -static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, +static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid, struct vhost_iotlb_msg *msg) { struct vhost_vdpa *v = container_of(dev, struct vhost_vdpa, vdev); struct vdpa_device *vdpa = v->vdpa; const struct vdpa_config_ops *ops = vdpa->config; + struct vhost_iotlb *iotlb = NULL; + struct vhost_vdpa_as *as = NULL; int r = 0; mutex_lock(&dev->mutex); @@ -789,20 +1031,47 @@ if (r) goto unlock; + if (msg->type == VHOST_IOTLB_UPDATE || + msg->type == VHOST_IOTLB_BATCH_BEGIN) { + as = vhost_vdpa_find_alloc_as(v, asid); + if (!as) { + dev_err(&v->dev, "can't find and alloc asid %d\n", + asid); + r = -EINVAL; + goto unlock; + } + iotlb = &as->iotlb; + } else + iotlb = asid_to_iotlb(v, asid); + + if ((v->in_batch && v->batch_asid != asid) || !iotlb) { + if (v->in_batch && v->batch_asid != asid) { + dev_info(&v->dev, "batch id %d asid %d\n", + v->batch_asid, asid); + } + if (!iotlb) + dev_err(&v->dev, "no iotlb for asid %d\n", asid); + r = -EINVAL; + goto unlock; + } + switch (msg->type) { case VHOST_IOTLB_UPDATE: - r = vhost_vdpa_process_iotlb_update(v, msg); + r = vhost_vdpa_process_iotlb_update(v, iotlb, msg); break; case VHOST_IOTLB_INVALIDATE: - vhost_vdpa_unmap(v, msg->iova, msg->size); + vhost_vdpa_unmap(v, iotlb, msg->iova, msg->size); break; case VHOST_IOTLB_BATCH_BEGIN: + v->batch_asid = asid; v->in_batch = true; break; case VHOST_IOTLB_BATCH_END: if (v->in_batch && ops->set_map) - ops->set_map(vdpa, dev->iotlb); + ops->set_map(vdpa, asid, iotlb); v->in_batch = false; + if (!iotlb->nmaps) + vhost_vdpa_remove_as(v, asid); break; default: r = -EINVAL; @@ -892,12 +1161,28 @@ } } +static void vhost_vdpa_cleanup(struct vhost_vdpa *v) +{ + struct vhost_vdpa_as *as; + u32 asid; + + vhost_dev_cleanup(&v->vdev); + kfree(v->vdev.vqs); + + for (asid = 0; asid < v->vdpa->nas; asid++) { + as = asid_to_as(v, asid); + if (as) + vhost_vdpa_remove_as(v, asid); + } +} + static int vhost_vdpa_open(struct inode *inode, struct file *filep) { struct vhost_vdpa *v; struct vhost_dev *dev; struct vhost_virtqueue **vqs; - int nvqs, i, r, opened; + int r, opened; + u32 i, nvqs; v = container_of(inode->i_cdev, struct vhost_vdpa, cdev); @@ -906,7 +1191,9 @@ return -EBUSY; nvqs = v->nvqs; - vhost_vdpa_reset(v); + r = vhost_vdpa_reset(v); + if (r) + goto err; vqs = kmalloc_array(nvqs, sizeof(*vqs), GFP_KERNEL); if (!vqs) { @@ -922,15 +1209,9 @@ vhost_dev_init(dev, vqs, nvqs, 0, 0, 0, false, vhost_vdpa_process_iotlb_msg); - dev->iotlb = vhost_iotlb_alloc(0, 0); - if (!dev->iotlb) { - r = 
-ENOMEM; - goto err_init_iotlb; - } - r = vhost_vdpa_alloc_domain(v); if (r) - goto err_init_iotlb; + goto err_alloc_domain; vhost_vdpa_set_iova_range(v); @@ -938,9 +1219,8 @@ return 0; -err_init_iotlb: - vhost_dev_cleanup(&v->vdev); - kfree(vqs); +err_alloc_domain: + vhost_vdpa_cleanup(v); err: atomic_dec(&v->opened); return r; @@ -948,7 +1228,7 @@ static void vhost_vdpa_clean_irq(struct vhost_vdpa *v) { - int i; + u32 i; for (i = 0; i < v->nvqs; i++) vhost_vdpa_unsetup_vq_irq(v, i); @@ -961,14 +1241,12 @@ mutex_lock(&d->mutex); filep->private_data = NULL; + vhost_vdpa_clean_irq(v); vhost_vdpa_reset(v); vhost_dev_stop(&v->vdev); - vhost_vdpa_iotlb_free(v); vhost_vdpa_free_domain(v); vhost_vdpa_config_put(v); - vhost_vdpa_clean_irq(v); - vhost_dev_cleanup(&v->vdev); - kfree(v->vdev.vqs); + vhost_vdpa_cleanup(v); mutex_unlock(&d->mutex); atomic_dec(&v->opened); @@ -991,7 +1269,7 @@ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); if (remap_pfn_range(vma, vmf->address & PAGE_MASK, - notify.addr >> PAGE_SHIFT, PAGE_SIZE, + PFN_DOWN(notify.addr), PAGE_SIZE, vma->vm_page_prot)) return VM_FAULT_SIGBUS; @@ -1064,11 +1342,14 @@ const struct vdpa_config_ops *ops = vdpa->config; struct vhost_vdpa *v; int minor; - int r; + int i, r; - /* Currently, we only accept the network devices. */ - if (ops->get_device_id(vdpa) != VIRTIO_ID_NET) - return -ENOTSUPP; + /* We can't support platform IOMMU device with more than 1 + * group or as + */ + if (!ops->set_map && !ops->dma_map && + (vdpa->ngroups > 1 || vdpa->nas > 1)) + return -EOPNOTSUPP; v = kzalloc(sizeof(*v), GFP_KERNEL | __GFP_RETRY_MAYFAIL); if (!v) @@ -1112,10 +1393,14 @@ init_completion(&v->completion); vdpa_set_drvdata(vdpa, v); + for (i = 0; i < VHOST_VDPA_IOTLB_BUCKETS; i++) + INIT_HLIST_HEAD(&v->asi); + return 0; err: put_device(&v->dev); + ida_simple_remove(&vhost_vdpa_ida, v->minor); return r; }
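The address-space plumbing above is driven from userspace through the new ioctls: query the group and AS counts, find the group a virtqueue belongs to, then bind that group to an ASID before sending ASID-tagged IOTLB messages. A hedged VMM-side sketch (assumes uapi headers carrying the new VHOST_VDPA_* ioctls; error handling trimmed):

#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Move the virtqueue group containing vq 0 into address space 1,
 * e.g. to give a control virtqueue its own IOVA mappings. */
static int bind_vq0_group_to_asid1(int vhost_vdpa_fd)
{
        struct vhost_vring_state s = { .index = 0 };
        unsigned int ngroups, nas;

        if (ioctl(vhost_vdpa_fd, VHOST_VDPA_GET_GROUP_NUM, &ngroups) < 0 ||
            ioctl(vhost_vdpa_fd, VHOST_VDPA_GET_AS_NUM, &nas) < 0)
                return -1;
        if (ngroups < 2 || nas < 2)
                return -1;      /* single group/AS: nothing to isolate */

        if (ioctl(vhost_vdpa_fd, VHOST_VDPA_GET_VRING_GROUP, &s) < 0)
                return -1;      /* on return, s.num holds vq 0's group */

        s.index = s.num;        /* group to re-bind */
        s.num = 1;              /* target address space id */
        return ioctl(vhost_vdpa_fd, VHOST_VDPA_SET_GROUP_ASID, &s);
}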
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vhost/vhost.c
Changed
@@ -468,7 +468,7 @@
 		    struct vhost_virtqueue **vqs, int nvqs,
 		    int iov_limit, int weight, int byte_weight,
 		    bool use_worker,
-		    int (*msg_handler)(struct vhost_dev *dev,
+		    int (*msg_handler)(struct vhost_dev *dev, u32 asid,
 				       struct vhost_iotlb_msg *msg))
 {
 	struct vhost_virtqueue *vq;
@@ -1090,11 +1090,14 @@
 	return true;
 }
 
-static int vhost_process_iotlb_msg(struct vhost_dev *dev,
+static int vhost_process_iotlb_msg(struct vhost_dev *dev, u32 asid,
 				   struct vhost_iotlb_msg *msg)
 {
 	int ret = 0;
 
+	if (asid != 0)
+		return -EINVAL;
+
 	mutex_lock(&dev->mutex);
 	vhost_dev_lock_vqs(dev);
 	switch (msg->type) {
@@ -1141,6 +1144,7 @@
 	struct vhost_iotlb_msg msg;
 	size_t offset;
 	int type, ret;
+	u32 asid = 0;
 
 	ret = copy_from_iter(&type, sizeof(type), from);
 	if (ret != sizeof(type)) {
@@ -1156,7 +1160,16 @@
 		offset = offsetof(struct vhost_msg, iotlb) - sizeof(int);
 		break;
 	case VHOST_IOTLB_MSG_V2:
-		offset = sizeof(__u32);
+		if (vhost_backend_has_feature(dev->vqs[0],
+					      VHOST_BACKEND_F_IOTLB_ASID)) {
+			ret = copy_from_iter(&asid, sizeof(asid), from);
+			if (ret != sizeof(asid)) {
+				ret = -EINVAL;
+				goto done;
+			}
+			offset = 0;
+		} else
+			offset = sizeof(__u32);
 		break;
 	default:
 		ret = -EINVAL;
@@ -1170,10 +1183,17 @@
 		goto done;
 	}
 
+	if ((msg.type == VHOST_IOTLB_UPDATE ||
+	     msg.type == VHOST_IOTLB_INVALIDATE) &&
+	     msg.size == 0) {
+		ret = -EINVAL;
+		goto done;
+	}
+
 	if (dev->msg_handler)
-		ret = dev->msg_handler(dev, &msg);
+		ret = dev->msg_handler(dev, asid, &msg);
 	else
-		ret = vhost_process_iotlb_msg(dev, &msg);
+		ret = vhost_process_iotlb_msg(dev, asid, &msg);
 	if (ret) {
 		ret = -EFAULT;
 		goto done;
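On the wire, the only change for VHOST_IOTLB_MSG_V2 is that the 32-bit word after type, previously reserved padding, carries the target ASID once VHOST_BACKEND_F_IOTLB_ASID has been negotiated; without the feature it must remain zero and the layout is unchanged. A sketch of the matching userspace writer (the asid field name assumes the corresponding uapi header update):

#include <linux/vhost.h>
#include <linux/vhost_types.h>
#include <stdint.h>
#include <unistd.h>

static ssize_t iotlb_update(int vhost_fd, uint32_t asid,
                            uint64_t iova, uint64_t size, void *buf)
{
        struct vhost_msg_v2 msg = {
                .type = VHOST_IOTLB_MSG_V2,
                .asid = asid,   /* was a reserved word before this series */
                .iotlb = {
                        .iova  = iova,
                        .size  = size,
                        .uaddr = (uint64_t)(uintptr_t)buf,
                        .perm  = VHOST_ACCESS_RW,
                        .type  = VHOST_IOTLB_UPDATE,
                },
        };

        return write(vhost_fd, &msg, sizeof(msg));
}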
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/vhost/vhost.h
Changed
@@ -162,7 +162,7 @@
 	int byte_weight;
 	u64 kcov_handle;
 	bool use_worker;
-	int (*msg_handler)(struct vhost_dev *dev,
+	int (*msg_handler)(struct vhost_dev *dev, u32 asid,
 			   struct vhost_iotlb_msg *msg);
 };
 
@@ -170,7 +170,7 @@
 void vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs,
 		    int nvqs, int iov_limit, int weight, int byte_weight,
 		    bool use_worker,
-		    int (*msg_handler)(struct vhost_dev *dev,
+		    int (*msg_handler)(struct vhost_dev *dev, u32 asid,
 				       struct vhost_iotlb_msg *msg));
 long vhost_dev_set_owner(struct vhost_dev *dev);
 bool vhost_dev_has_owner(struct vhost_dev *dev);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio.c
Changed
@@ -203,6 +203,28 @@
 	return 0;
 }
 
+/**
+ * virtio_reset_device - quiesce device for removal
+ * @dev: the device to reset
+ *
+ * Prevents device from sending interrupts and accessing memory.
+ *
+ * Generally used for cleanup during driver / device removal.
+ *
+ * Once this has been invoked, caller must ensure that
+ * virtqueue_notify / virtqueue_kick are not in progress.
+ *
+ * Note: this guarantees that vq callbacks are not in progress, however caller
+ * is responsible for preventing access from other contexts, such as a system
+ * call/workqueue/bh. Invoking virtio_break_device then flushing any such
+ * contexts is one way to handle that.
+ * */
+void virtio_reset_device(struct virtio_device *dev)
+{
+	dev->config->reset(dev);
+}
+EXPORT_SYMBOL_GPL(virtio_reset_device);
+
 static int virtio_dev_probe(struct device *_d)
 {
 	int err, i;
@@ -362,7 +384,7 @@
 	/* We always start by resetting the device, in case a previous
 	 * driver messed it up. This also tests that code path a little. */
-	dev->config->reset(dev);
+	virtio_reset_device(dev);
 
 	/* Acknowledge that we've seen the device. */
 	virtio_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE);
@@ -422,7 +444,7 @@
 	/* We always start by resetting the device, in case a previous
 	 * driver messed it up. */
-	dev->config->reset(dev);
+	virtio_reset_device(dev);
 
 	/* Acknowledge that we've seen the device. */
 	virtio_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE);
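The kernel-doc above prescribes an ordering for removal paths, which the conversions in the following files apply. A sketch of a remove() that follows it in full (the my_priv structure and its work item are illustrative only):

struct my_priv {
        struct work_struct work;
        /* ... */
};

static void my_virtio_remove(struct virtio_device *vdev)
{
        struct my_priv *p = vdev->priv;

        virtio_break_device(vdev);      /* stop new vq submissions */
        flush_work(&p->work);           /* flush our own deferred contexts */
        virtio_reset_device(vdev);      /* no more IRQs or DMA after this */
        vdev->config->del_vqs(vdev);
}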
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio_balloon.c
Changed
@@ -1039,7 +1039,7 @@
 	return_free_pages_to_mm(vb, ULONG_MAX);
 
 	/* Now we reset the device so we can clean up the queues. */
-	vb->vdev->config->reset(vb->vdev);
+	virtio_reset_device(vb->vdev);
 
 	vb->vdev->config->del_vqs(vb->vdev);
 }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio_input.c
Changed
@@ -323,7 +323,7 @@
 	spin_unlock_irqrestore(&vi->lock, flags);
 
 	input_unregister_device(vi->idev);
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	while ((buf = virtqueue_detach_unused_buf(vi->sts)) != NULL)
 		kfree(buf);
 	vdev->config->del_vqs(vdev);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio_mem.c
Changed
@@ -1889,7 +1889,7 @@
 	vfree(vm->sb_bitmap);
 
 	/* reset the device and cleanup the queues */
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	vdev->config->del_vqs(vdev);
 
 	kfree(vm);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio_pci_modern.c
Changed
@@ -176,6 +176,29 @@
 	vp_synchronize_vectors(vdev);
 }
 
+static int vp_active_vq(struct virtqueue *vq, u16 msix_vec)
+{
+	struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
+	struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
+	unsigned long index;
+
+	index = vq->index;
+
+	/* activate the queue */
+	vp_modern_set_queue_size(mdev, index, virtqueue_get_vring_size(vq));
+	vp_modern_queue_address(mdev, index, virtqueue_get_desc_addr(vq),
+				virtqueue_get_avail_addr(vq),
+				virtqueue_get_used_addr(vq));
+
+	if (msix_vec != VIRTIO_MSI_NO_VECTOR) {
+		msix_vec = vp_modern_queue_vector(mdev, index, msix_vec);
+		if (msix_vec == VIRTIO_MSI_NO_VECTOR)
+			return -EBUSY;
+	}
+
+	return 0;
+}
+
 static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
 {
 	return vp_modern_config_vector(&vp_dev->mdev, vector);
@@ -218,32 +241,19 @@
 	if (!vq)
 		return ERR_PTR(-ENOMEM);
 
-	/* activate the queue */
-	vp_modern_set_queue_size(mdev, index, virtqueue_get_vring_size(vq));
-	vp_modern_queue_address(mdev, index, virtqueue_get_desc_addr(vq),
-				virtqueue_get_avail_addr(vq),
-				virtqueue_get_used_addr(vq));
+	err = vp_active_vq(vq, msix_vec);
+	if (err)
+		goto err;
 
 	vq->priv = (void __force *)vp_modern_map_vq_notify(mdev, index, NULL);
 	if (!vq->priv) {
 		err = -ENOMEM;
-		goto err_map_notify;
-	}
-
-	if (msix_vec != VIRTIO_MSI_NO_VECTOR) {
-		msix_vec = vp_modern_queue_vector(mdev, index, msix_vec);
-		if (msix_vec == VIRTIO_MSI_NO_VECTOR) {
-			err = -EBUSY;
-			goto err_assign_vector;
-		}
+		goto err;
 	}
 
 	return vq;
 
-err_assign_vector:
-	if (!mdev->notify_base)
-		pci_iounmap(mdev->pci_dev, (void __iomem __force *)vq->priv);
-err_map_notify:
+err:
 	vring_del_virtqueue(vq);
 	return ERR_PTR(err);
 }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio_pci_modern_dev.c
Changed
@@ -3,6 +3,7 @@ #include <linux/virtio_pci_modern.h> #include <linux/module.h> #include <linux/pci.h> +#include <linux/delay.h> /* * vp_modern_map_capability - map a part of virtio pci capability @@ -17,11 +18,10 @@ * * Returns the io address of for the part of the capability */ -void __iomem *vp_modern_map_capability(struct virtio_pci_modern_device *mdev, int off, - size_t minlen, - u32 align, - u32 start, u32 size, - size_t *len, resource_size_t *pa) +static void __iomem* +vp_modern_map_capability(struct virtio_pci_modern_device *mdev, int off, + size_t minlen, u32 align, u32 start, u32 size, + size_t *len, resource_size_t *pa) { struct pci_dev *dev = mdev->pci_dev; u8 bar; @@ -95,7 +95,6 @@ return p; } -EXPORT_SYMBOL_GPL(vp_modern_map_capability); /** * virtio_pci_find_capability - walk capabilities to find device info. @@ -467,6 +466,44 @@ EXPORT_SYMBOL_GPL(vp_modern_set_status); /* + * vp_modern_get_queue_reset - get the queue reset status + * @mdev: the modern virtio-pci device + * @index: queue index + */ +int vp_modern_get_queue_reset(struct virtio_pci_modern_device *mdev, u16 index) +{ + struct virtio_pci_modern_common_cfg __iomem *cfg; + + cfg = (struct virtio_pci_modern_common_cfg __iomem *)mdev->common; + + vp_iowrite16(index, &cfg->cfg.queue_select); + return vp_ioread16(&cfg->queue_reset); +} +EXPORT_SYMBOL_GPL(vp_modern_get_queue_reset); + +/* + * vp_modern_set_queue_reset - reset the queue + * @mdev: the modern virtio-pci device + * @index: queue index + */ +void vp_modern_set_queue_reset(struct virtio_pci_modern_device *mdev, u16 index) +{ + struct virtio_pci_modern_common_cfg __iomem *cfg; + + cfg = (struct virtio_pci_modern_common_cfg __iomem *)mdev->common; + + vp_iowrite16(index, &cfg->cfg.queue_select); + vp_iowrite16(1, &cfg->queue_reset); + + while (vp_ioread16(&cfg->queue_reset)) + msleep(1); + + while (vp_ioread16(&cfg->cfg.queue_enable)) + msleep(1); +} +EXPORT_SYMBOL_GPL(vp_modern_set_queue_reset); + +/* * vp_modern_queue_vector - set the MSIX vector for a specific virtqueue * @mdev: the modern virtio-pci device * @index: queue index @@ -612,14 +649,13 @@ * * Returns the notification offset for a virtqueue */ -u16 vp_modern_get_queue_notify_off(struct virtio_pci_modern_device *mdev, - u16 index) +static u16 vp_modern_get_queue_notify_off(struct virtio_pci_modern_device *mdev, + u16 index) { vp_iowrite16(index, &mdev->common->queue_select); return vp_ioread16(&mdev->common->queue_notify_off); } -EXPORT_SYMBOL_GPL(vp_modern_get_queue_notify_off); /* * vp_modern_map_vq_notify - map notification area for a
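Note: vp_modern_set_queue_reset() implements the per-queue reset handshake from the VIRTIO_F_RING_RESET spec addition: select the queue, write 1 to queue_reset, then poll until the device clears both queue_reset and queue_enable, sleeping 1 ms per iteration (hence the new linux/delay.h include). A transport could pair the two new exports like this (sketch; the re-programming step is assumed, not shown in this patch):

    /* Sketch: reset one virtqueue, verify, then re-activate it */
    vp_modern_set_queue_reset(mdev, index);          /* blocks until quiesced */
    WARN_ON(vp_modern_get_queue_reset(mdev, index)); /* should read back 0 */
    /* ... reprogram ring size/addresses, then re-enable the queue ... */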
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/drivers/virtio/virtio_vdpa.c
Changed
@@ -65,9 +65,8 @@ const void *buf, unsigned len) { struct vdpa_device *vdpa = vd_get_vdpa(vdev); - const struct vdpa_config_ops *ops = vdpa->config; - ops->set_config(vdpa, offset, buf, len); + vdpa_set_config(vdpa, offset, buf, len); } static u32 virtio_vdpa_generation(struct virtio_device *vdev) @@ -92,9 +91,8 @@ static void virtio_vdpa_set_status(struct virtio_device *vdev, u8 status) { struct vdpa_device *vdpa = vd_get_vdpa(vdev); - const struct vdpa_config_ops *ops = vdpa->config; - return ops->set_status(vdpa, status); + return vdpa_set_status(vdpa, status); } static void virtio_vdpa_reset(struct virtio_device *vdev) @@ -142,8 +140,11 @@ struct vdpa_callback cb; struct virtqueue *vq; u64 desc_addr, driver_addr, device_addr; + /* Assume split virtqueue, switch to packed if necessary */ + struct vdpa_vq_state state = {0}; unsigned long flags; - u32 align, num; + u32 align, max_num, min_num = 1; + bool may_reduce_num = true; int err; if (!name) @@ -161,16 +162,21 @@ if (!info) return ERR_PTR(-ENOMEM); - num = ops->get_vq_num_max(vdpa); - if (num == 0) { + max_num = ops->get_vq_num_max(vdpa); + if (max_num == 0) { err = -ENOENT; goto error_new_virtqueue; } + if (ops->get_vq_num_min) + min_num = ops->get_vq_num_min(vdpa); + + may_reduce_num = (max_num == min_num) ? false : true; + /* Create the vring */ align = ops->get_vq_align(vdpa); - vq = vring_create_virtqueue(index, num, align, vdev, - true, true, ctx, + vq = vring_create_virtqueue(index, max_num, align, vdev, + true, may_reduce_num, ctx, virtio_vdpa_notify, callback, name); if (!vq) { err = -ENOMEM; @@ -178,7 +184,7 @@ } /* Setup virtqueue callback */ - cb.callback = virtio_vdpa_virtqueue_cb; + cb.callback = callback ? virtio_vdpa_virtqueue_cb : NULL; cb.private = info; ops->set_vq_cb(vdpa, index, &cb); ops->set_vq_num(vdpa, index, virtqueue_get_vring_size(vq)); @@ -194,6 +200,19 @@ goto err_vq; } + /* reset virtqueue state index */ + if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED)) { + struct vdpa_vq_state_packed *s = &state.packed; + + s->last_avail_counter = 1; + s->last_avail_idx = 0; + s->last_used_counter = 1; + s->last_used_idx = 0; + } + err = ops->set_vq_state(vdpa, index, &state); + if (err) + goto err_vq; + ops->set_vq_ready(vdpa, index, 1); vq->priv = info; @@ -228,9 +247,8 @@ list_del(&info->node); spin_unlock_irqrestore(&vd_dev->lock, flags); - /* Select and deactivate the queue */ + /* Select and deactivate the queue (best effort) */ ops->set_vq_ready(vdpa, index, 0); - WARN_ON(ops->get_vq_ready(vdpa, index)); vring_del_virtqueue(vq); @@ -289,7 +307,7 @@ struct vdpa_device *vdpa = vd_get_vdpa(vdev); const struct vdpa_config_ops *ops = vdpa->config; - return ops->get_features(vdpa); + return ops->get_device_features(vdpa); } static int virtio_vdpa_finalize_features(struct virtio_device *vdev)
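Note: besides routing config and status writes through the vdpa_set_config()/vdpa_set_status() wrappers, this hunk makes three functional changes: the vring may shrink toward get_vq_num_min() when the parent device cannot back max_num entries, the vq callback is left NULL when the driver passed none, and a new queue's internal state is explicitly reset before set_vq_ready(). For a packed ring the wrap counters start at 1 while the indices start at 0, which is all the added block encodes:

    /* Initial state programmed into the vDPA parent (from the hunk above) */
    struct vdpa_vq_state state = {0};          /* split ring: avail index 0 */

    if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
        state.packed.last_avail_counter = 1;   /* wrap counters begin at 1 */
        state.packed.last_used_counter  = 1;
        state.packed.last_avail_idx     = 0;   /* indices begin at 0 */
        state.packed.last_used_idx      = 0;
    }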
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/buffer.c
Changed
@@ -562,7 +562,7 @@ struct buffer_head *bh = __find_get_block(bdev, bblock + 1, blocksize); if (bh) { if (buffer_dirty(bh)) - ll_rw_block(REQ_OP_WRITE, 0, 1, &bh); + write_dirty_buffer(bh, 0); put_bh(bh); } } @@ -1363,7 +1363,7 @@ { struct buffer_head *bh = __getblk(bdev, block, size); if (likely(bh)) { - ll_rw_block(REQ_OP_READ, REQ_RAHEAD, 1, &bh); + bh_readahead(bh, REQ_RAHEAD); brelse(bh); } } @@ -2038,7 +2038,7 @@ if (!buffer_uptodate(bh) && !buffer_delay(bh) && !buffer_unwritten(bh) && (block_start < from || block_end > to)) { - ll_rw_block(REQ_OP_READ, 0, 1, &bh); + bh_read_nowait(bh, 0); *wait_bh++=bh; } } @@ -2927,11 +2927,9 @@ set_buffer_uptodate(bh); if (!buffer_uptodate(bh) && !buffer_delay(bh) && !buffer_unwritten(bh)) { - err = -EIO; - ll_rw_block(REQ_OP_READ, 0, 1, &bh); - wait_on_buffer(bh); + err = bh_read(bh, 0); /* Uhhuh. Read error. Complain and punt. */ - if (!buffer_uptodate(bh)) + if (err < 0) goto unlock; } @@ -3392,6 +3390,71 @@ EXPORT_SYMBOL(bh_uptodate_or_lock); /** + * __bh_read - Submit read for a locked buffer + * @bh: struct buffer_head + * @op_flags: appending REQ_OP_* flags besides REQ_OP_READ + * @wait: wait until reading finish + * + * Returns zero on success or don't wait, and -EIO on error. + */ +int __bh_read(struct buffer_head *bh, unsigned int op_flags, bool wait) +{ + int ret = 0; + + BUG_ON(!buffer_locked(bh)); + + get_bh(bh); + bh->b_end_io = end_buffer_read_sync; + submit_bh(REQ_OP_READ, op_flags, bh); + if (wait) { + wait_on_buffer(bh); + if (!buffer_uptodate(bh)) + ret = -EIO; + } + return ret; +} +EXPORT_SYMBOL(__bh_read); + +/** + * __bh_read_batch - Submit read for a batch of unlocked buffers + * @nr: entry number of the buffer batch + * @bhs: a batch of struct buffer_head + * @op_flags: appending REQ_OP_* flags besides REQ_OP_READ + * @force_lock: force to get a lock on the buffer if set, otherwise drops any + * buffer that cannot lock. + * + * Returns zero on success or don't wait, and -EIO on error. + */ +void __bh_read_batch(int nr, struct buffer_head *bhs[], + unsigned int op_flags, bool force_lock) +{ + int i; + + for (i = 0; i < nr; i++) { + struct buffer_head *bh = bhs[i]; + + if (buffer_uptodate(bh)) + continue; + + if (force_lock) + lock_buffer(bh); + else + if (!trylock_buffer(bh)) + continue; + + if (buffer_uptodate(bh)) { + unlock_buffer(bh); + continue; + } + + bh->b_end_io = end_buffer_read_sync; + get_bh(bh); + submit_bh(REQ_OP_READ, op_flags, bh); + } +} +EXPORT_SYMBOL(__bh_read_batch); + +/** * bh_submit_read - Submit a locked buffer for reading * @bh: struct buffer_head *
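Note: the two new primitives above, together with the buffer_head.h wrappers later in this revision, replace the open-coded ll_rw_block() + wait_on_buffer() sequences that the following filesystem hunks delete. The conversion pattern, side by side (buffer acquisition elided):

    /* before: submit, wait, re-check the uptodate bit by hand */
    ll_rw_block(REQ_OP_READ, 0, 1, &bh);
    wait_on_buffer(bh);
    if (!buffer_uptodate(bh))
        return -EIO;

    /* after: bh_read() locks and submits only when needed; it returns
     * 1 if already uptodate, 0 on a successful read, -EIO on error */
    err = bh_read(bh, 0);
    if (err < 0)
        return err;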
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ext2/balloc.c
Changed
@@ -126,6 +126,7 @@ struct ext2_group_desc * desc; struct buffer_head * bh = NULL; ext2_fsblk_t bitmap_blk; + int ret; desc = ext2_get_group_desc(sb, block_group, NULL); if (!desc) @@ -139,10 +140,10 @@ block_group, le32_to_cpu(desc->bg_block_bitmap)); return NULL; } - if (likely(bh_uptodate_or_lock(bh))) + ret = bh_read(bh, 0); + if (ret > 0) return bh; - - if (bh_submit_read(bh) < 0) { + if (ret < 0) { brelse(bh); ext2_error(sb, __func__, "Cannot read block bitmap - "
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ext4/resize.c
Changed
@@ -1440,8 +1440,6 @@ * active. */ ext4_r_blocks_count_set(es, ext4_r_blocks_count(es) + reserved_blocks); - ext4_superblock_csum_set(sb); - unlock_buffer(sbi->s_sbh); /* Update the free space counts */ percpu_counter_add(&sbi->s_freeclusters_counter, @@ -1469,6 +1467,8 @@ ext4_calculate_overhead(sb); es->s_overhead_clusters = cpu_to_le32(sbi->s_overhead); + ext4_superblock_csum_set(sb); + unlock_buffer(sbi->s_sbh); if (test_opt(sb, DEBUG)) printk(KERN_DEBUG "EXT4-fs: added group %u:" "%llu blocks(%llu free %llu reserved)\n", flex_gd->count,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/fuse/virtio_fs.c
Changed
@@ -894,7 +894,7 @@ return 0; out_vqs: - vdev->config->reset(vdev); + virtio_reset_device(vdev); virtio_fs_cleanup_vqs(vdev, fs); kfree(fs->vqs); @@ -926,7 +926,7 @@ list_del_init(&fs->list); virtio_fs_stop_all_queues(fs); virtio_fs_drain_all_queues_locked(fs); - vdev->config->reset(vdev); + virtio_reset_device(vdev); virtio_fs_cleanup_vqs(vdev, fs); vdev->priv = NULL;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/gfs2/meta_io.c
Changed
@@ -521,8 +521,7 @@ if (buffer_uptodate(first_bh)) goto out; - if (!buffer_locked(first_bh)) - ll_rw_block(REQ_OP_READ, REQ_META | REQ_PRIO, 1, &first_bh); + bh_read_nowait(first_bh, REQ_META | REQ_PRIO); dblock++; extlen--; @@ -530,10 +529,7 @@ while (extlen) { bh = gfs2_getbuf(gl, dblock, CREATE); - if (!buffer_uptodate(bh) && !buffer_locked(bh)) - ll_rw_block(REQ_OP_READ, - REQ_RAHEAD | REQ_META | REQ_PRIO, - 1, &bh); + bh_readahead(bh, REQ_RAHEAD | REQ_META | REQ_PRIO); brelse(bh); dblock++; extlen--;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/gfs2/quota.c
Changed
@@ -741,12 +741,8 @@ } if (PageUptodate(page)) set_buffer_uptodate(bh); - if (!buffer_uptodate(bh)) { - ll_rw_block(REQ_OP_READ, REQ_META | REQ_PRIO, 1, &bh); - wait_on_buffer(bh); - if (!buffer_uptodate(bh)) - goto unlock_out; - } + if (bh_read(bh, REQ_META | REQ_PRIO) < 0) + goto unlock_out; if (gfs2_is_jdata(ip)) gfs2_trans_add_data(ip->i_gl, bh); else
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/io_uring.c
Changed
@@ -935,7 +935,7 @@ .needs_file = 1, .hash_reg_file = 1, .unbound_nonreg_file = 1, - .work_flags = IO_WQ_WORK_BLKCG, + .work_flags = IO_WQ_WORK_BLKCG | IO_WQ_WORK_FILES, }, [IORING_OP_PROVIDE_BUFFERS] = {}, [IORING_OP_REMOVE_BUFFERS] = {}, @@ -5233,6 +5233,11 @@ struct io_ring_ctx *ctx = req->ctx; bool cancel = false; + if (req->file->f_op->may_pollfree) { + spin_lock_irq(&ctx->completion_lock); + return -EOPNOTSUPP; + } + INIT_HLIST_NODE(&req->hash_node); io_init_poll_iocb(poll, mask, wake_func); poll->file = req->file; @@ -9076,7 +9081,7 @@ if (unlikely(ctx->sqo_dead)) { ret = -EOWNERDEAD; - goto out; + break; } if (!io_sqring_full(ctx)) @@ -9086,7 +9091,6 @@ } while (!signal_pending(current)); finish_wait(&ctx->sqo_sq_wait, &wait); -out: return ret; }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/iomap/buffered-io.c
Changed
@@ -1456,13 +1456,6 @@ goto redirty; /* - * Given that we do not allow direct reclaim to call us, we should - * never be called in a recursive filesystem reclaim context. - */ - if (WARN_ON_ONCE(current->flags & PF_MEMALLOC_NOFS)) - goto redirty; - - /* * Is this page beyond the end of the file? * * The page index is less than the end_index, adjust the end_offset
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/isofs/compress.c
Changed
@@ -82,7 +82,7 @@ return 0; } haveblocks = isofs_get_blocks(inode, blocknum, bhs, needblocks); - ll_rw_block(REQ_OP_READ, 0, haveblocks, bhs); + bh_read_batch(haveblocks, bhs); curbh = 0; curpage = 0;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/jbd2/journal.c
Changed
@@ -1793,19 +1793,16 @@ { struct buffer_head *bh; journal_superblock_t *sb; - int err = -EIO; + int err; bh = journal->j_sb_buffer; J_ASSERT(bh != NULL); - if (!buffer_uptodate(bh)) { - ll_rw_block(REQ_OP_READ, 0, 1, &bh); - wait_on_buffer(bh); - if (!buffer_uptodate(bh)) { - printk(KERN_ERR - "JBD2: IO error reading journal superblock\n"); - goto out; - } + err = bh_read(bh, 0); + if (err < 0) { + printk(KERN_ERR + "JBD2: IO error reading journal superblock\n"); + goto out; } if (buffer_verified(bh))
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/jbd2/recovery.c
Changed
@@ -100,7 +100,7 @@ if (!buffer_uptodate(bh) && !buffer_locked(bh)) { bufs[nbufs++] = bh; if (nbufs == MAXBUF) { - ll_rw_block(REQ_OP_READ, 0, nbufs, bufs); + bh_readahead_batch(nbufs, bufs, 0); journal_brelse_array(bufs, nbufs); nbufs = 0; } @@ -109,7 +109,7 @@ } if (nbufs) - ll_rw_block(REQ_OP_READ, 0, nbufs, bufs); + bh_readahead_batch(nbufs, bufs, 0); err = 0; failed: @@ -152,9 +152,14 @@ return -ENOMEM; if (!buffer_uptodate(bh)) { - /* If this is a brand new buffer, start readahead. Otherwise, we assume we are already reading it. */ - if (!buffer_req(bh)) + /* + * If this is a brand new buffer, start readahead. + * Otherwise, we assume we are already reading it. + */ + bool need_readahead = !buffer_req(bh); + + bh_read_nowait(bh, 0); + if (need_readahead) do_readahead(journal, offset); wait_on_buffer(bh); } @@ -687,7 +692,6 @@ mark_buffer_dirty(nbh); BUFFER_TRACE(nbh, "marking uptodate"); ++info->nr_replays; - /* ll_rw_block(WRITE, 1, &nbh); */ unlock_buffer(nbh); brelse(obh); brelse(nbh);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ksmbd/auth.c
Changed
@@ -321,7 +321,8 @@ dn_off = le32_to_cpu(authblob->DomainName.BufferOffset); dn_len = le16_to_cpu(authblob->DomainName.Length); - if (blob_len < (u64)dn_off + dn_len || blob_len < (u64)nt_off + nt_len) + if (blob_len < (u64)dn_off + dn_len || blob_len < (u64)nt_off + nt_len || + nt_len < CIFS_ENCPWD_SIZE) return -EINVAL; /* TODO : use domain name that imported from configuration file */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ksmbd/smb2misc.c
Changed
@@ -132,8 +132,11 @@ *len = le16_to_cpu(((struct smb2_read_req *)hdr)->ReadChannelInfoLength); break; case SMB2_WRITE: - if (((struct smb2_write_req *)hdr)->DataOffset) { - *off = le16_to_cpu(((struct smb2_write_req *)hdr)->DataOffset); + if (((struct smb2_write_req *)hdr)->DataOffset || + ((struct smb2_write_req *)hdr)->Length) { + *off = max_t(unsigned int, + le16_to_cpu(((struct smb2_write_req *)hdr)->DataOffset), + offsetof(struct smb2_write_req, Buffer)); *len = le32_to_cpu(((struct smb2_write_req *)hdr)->Length); break; }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ksmbd/smb2pdu.c
Changed
@@ -539,9 +539,10 @@ struct smb2_query_info_req *req; req = smb2_get_msg(work->request_buf); - if (req->InfoType == SMB2_O_INFO_FILE && - (req->FileInfoClass == FILE_FULL_EA_INFORMATION || - req->FileInfoClass == FILE_ALL_INFORMATION)) + if ((req->InfoType == SMB2_O_INFO_FILE && + (req->FileInfoClass == FILE_FULL_EA_INFORMATION || + req->FileInfoClass == FILE_ALL_INFORMATION)) || + req->InfoType == SMB2_O_INFO_SECURITY) sz = large_sz; } @@ -2972,7 +2973,7 @@ if (!pntsd) goto err_out; - rc = build_sec_desc(pntsd, NULL, + rc = build_sec_desc(pntsd, NULL, 0, OWNER_SECINFO | GROUP_SECINFO | DACL_SECINFO, @@ -3807,6 +3808,15 @@ return 0; } +static int smb2_resp_buf_len(struct ksmbd_work *work, unsigned short hdr2_len) +{ + int free_len; + + free_len = (int)(work->response_sz - + (get_rfc1002_len(work->response_buf) + 4)) - hdr2_len; + return free_len; +} + static int smb2_calc_max_out_buf_len(struct ksmbd_work *work, unsigned short hdr2_len, unsigned int out_buf_len) @@ -3816,9 +3826,7 @@ if (out_buf_len > work->conn->vals->max_trans_size) return -EINVAL; - free_len = (int)(work->response_sz - - (get_rfc1002_len(work->response_buf) + 4)) - - hdr2_len; + free_len = smb2_resp_buf_len(work, hdr2_len); if (free_len < 0) return -EINVAL; @@ -5074,10 +5082,10 @@ struct smb_ntsd *pntsd = (struct smb_ntsd *)rsp->Buffer, *ppntsd = NULL; struct smb_fattr fattr = {{0}}; struct inode *inode; - __u32 secdesclen; + __u32 secdesclen = 0; unsigned int id = KSMBD_NO_FID, pid = KSMBD_NO_FID; int addition_info = le32_to_cpu(req->AdditionalInformation); - int rc; + int rc = 0, ppntsd_size = 0; if (addition_info & ~(OWNER_SECINFO | GROUP_SECINFO | DACL_SECINFO | PROTECTED_DACL_SECINFO | @@ -5122,9 +5130,14 @@ if (test_share_config_flag(work->tcon->share_conf, KSMBD_SHARE_FLAG_ACL_XATTR)) - ksmbd_vfs_get_sd_xattr(work->conn, fp->filp->f_path.dentry, &ppntsd); - - rc = build_sec_desc(pntsd, ppntsd, addition_info, &secdesclen, &fattr); + ppntsd_size = ksmbd_vfs_get_sd_xattr(work->conn, + fp->filp->f_path.dentry, + &ppntsd); + + /* Check if sd buffer size exceeds response buffer size */ + if (smb2_resp_buf_len(work, 8) > ppntsd_size) + rc = build_sec_desc(pntsd, ppntsd, ppntsd_size, + addition_info, &secdesclen, &fattr); posix_acl_release(fattr.cf_acls); posix_acl_release(fattr.cf_dacls); kfree(ppntsd); @@ -6315,23 +6328,18 @@ length = le32_to_cpu(req->Length); id = le64_to_cpu(req->VolatileFileId); - if (le16_to_cpu(req->DataOffset) == - offsetof(struct smb2_write_req, Buffer)) { - data_buf = (char *)&req->Buffer[0]; - } else { - if ((u64)le16_to_cpu(req->DataOffset) + length > - get_rfc1002_len(work->request_buf)) { - pr_err("invalid write data offset %u, smb_len %u\n", - le16_to_cpu(req->DataOffset), - get_rfc1002_len(work->request_buf)); - err = -EINVAL; - goto out; - } - - data_buf = (char *)(((char *)&req->hdr.ProtocolId) + - le16_to_cpu(req->DataOffset)); + if ((u64)le16_to_cpu(req->DataOffset) + length > + get_rfc1002_len(work->request_buf)) { + pr_err("invalid write data offset %u, smb_len %u\n", + le16_to_cpu(req->DataOffset), + get_rfc1002_len(work->request_buf)); + err = -EINVAL; + goto out; } + data_buf = (char *)(((char *)&req->hdr.ProtocolId) + + le16_to_cpu(req->DataOffset)); + rpc_resp = ksmbd_rpc_write(work->sess, id, data_buf, length); if (rpc_resp) { if (rpc_resp->flags == KSMBD_RPC_ENOTIMPLEMENTED) { @@ -6477,23 +6485,15 @@ if (req->Channel != SMB2_CHANNEL_RDMA_V1 && req->Channel != SMB2_CHANNEL_RDMA_V1_INVALIDATE) { - if (le16_to_cpu(req->DataOffset) == + if (le16_to_cpu(req->DataOffset) < offsetof(struct smb2_write_req, Buffer)) { - data_buf = (char *)&req->Buffer[0]; - } else { - if ((u64)le16_to_cpu(req->DataOffset) + length > - get_rfc1002_len(work->request_buf)) { - pr_err("invalid write data offset %u, smb_len %u\n", - le16_to_cpu(req->DataOffset), - get_rfc1002_len(work->request_buf)); - err = -EINVAL; - goto out; - } - - data_buf = (char *)(((char *)&req->hdr.ProtocolId) + - le16_to_cpu(req->DataOffset)); + err = -EINVAL; + goto out; } + data_buf = (char *)(((char *)&req->hdr.ProtocolId) + + le16_to_cpu(req->DataOffset)); + ksmbd_debug(SMB, "flags %u\n", le32_to_cpu(req->Flags)); if (le32_to_cpu(req->Flags) & SMB2_WRITEFLAG_WRITE_THROUGH) writethrough = true;
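Note: the net effect of the smb2_write() hunks is uniform validation of the client-controlled data area: the old special case for DataOffset == offsetof(..., Buffer) is gone, an offset pointing before the fixed header is rejected outright on the non-RDMA path, and offset + length must stay inside the received PDU. Reduced to its essentials (plumbing elided, names from the diff):

    /* Reduced: validate a client-supplied offset/length pair */
    u16 off = le16_to_cpu(req->DataOffset);
    u32 len = le32_to_cpu(req->Length);

    if (off < offsetof(struct smb2_write_req, Buffer))
        return -EINVAL;    /* data area overlaps the header */
    if ((u64)off + len > get_rfc1002_len(work->request_buf))
        return -EINVAL;    /* data area runs past the packet */

    data_buf = (char *)&req->hdr.ProtocolId + off;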
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ksmbd/smbacl.c
Changed
@@ -688,6 +688,7 @@ } static void set_ntacl_dacl(struct smb_acl *pndacl, struct smb_acl *nt_dacl, + unsigned int aces_size, const struct smb_sid *pownersid, const struct smb_sid *pgrpsid, struct smb_fattr *fattr) @@ -701,9 +702,19 @@ if (nt_num_aces) { ntace = (struct smb_ace *)((char *)nt_dacl + sizeof(struct smb_acl)); for (i = 0; i < nt_num_aces; i++) { - memcpy((char *)pndace + size, ntace, le16_to_cpu(ntace->size)); - size += le16_to_cpu(ntace->size); - ntace = (struct smb_ace *)((char *)ntace + le16_to_cpu(ntace->size)); + unsigned short nt_ace_size; + + if (offsetof(struct smb_ace, access_req) > aces_size) + break; + + nt_ace_size = le16_to_cpu(ntace->size); + if (nt_ace_size > aces_size) + break; + + memcpy((char *)pndace + size, ntace, nt_ace_size); + size += nt_ace_size; + aces_size -= nt_ace_size; + ntace = (struct smb_ace *)((char *)ntace + nt_ace_size); num_aces++; } } @@ -872,7 +883,7 @@ /* Convert permission bits from mode to equivalent CIFS ACL */ int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *ppntsd, - int addition_info, __u32 *secdesclen, + int ppntsd_size, int addition_info, __u32 *secdesclen, struct smb_fattr *fattr) { int rc = 0; @@ -932,15 +943,25 @@ if (!ppntsd) { set_mode_dacl(dacl_ptr, fattr); - } else if (!ppntsd->dacloffset) { - goto out; } else { struct smb_acl *ppdacl_ptr; - - ppdacl_ptr = (struct smb_acl *)((char *)ppntsd + - le32_to_cpu(ppntsd->dacloffset)); - set_ntacl_dacl(dacl_ptr, ppdacl_ptr, nowner_sid_ptr, - ngroup_sid_ptr, fattr); + unsigned int dacl_offset = le32_to_cpu(ppntsd->dacloffset); + int ppdacl_size, ntacl_size = ppntsd_size - dacl_offset; + + if (!dacl_offset || + (dacl_offset + sizeof(struct smb_acl) > ppntsd_size)) + goto out; + + ppdacl_ptr = (struct smb_acl *)((char *)ppntsd + dacl_offset); + ppdacl_size = le16_to_cpu(ppdacl_ptr->size); + if (ppdacl_size > ntacl_size || + ppdacl_size < sizeof(struct smb_acl)) + goto out; + + set_ntacl_dacl(dacl_ptr, ppdacl_ptr, + ntacl_size - sizeof(struct smb_acl), + nowner_sid_ptr, ngroup_sid_ptr, + fattr); } pntsd->dacloffset = cpu_to_le32(offset); offset += le16_to_cpu(dacl_ptr->size); @@ -973,23 +994,31 @@ struct smb_ntsd *parent_pntsd = NULL; struct smb_sid owner_sid, group_sid; struct dentry *parent = path->dentry->d_parent; - int inherited_flags = 0, flags = 0, i, ace_cnt = 0, nt_size = 0; - int rc = 0, num_aces, dacloffset, pntsd_type, acl_len; + int inherited_flags = 0, flags = 0, i, ace_cnt = 0, nt_size = 0, pdacl_size; + int rc = 0, num_aces, dacloffset, pntsd_type, pntsd_size, acl_len, aces_size; char *aces_base; bool is_dir = S_ISDIR(d_inode(path->dentry)->i_mode); - acl_len = ksmbd_vfs_get_sd_xattr(conn, parent, &parent_pntsd); - if (acl_len <= 0) + pntsd_size = ksmbd_vfs_get_sd_xattr(conn, + parent, &parent_pntsd); + if (pntsd_size <= 0) return -ENOENT; dacloffset = le32_to_cpu(parent_pntsd->dacloffset); - if (!dacloffset) { + if (!dacloffset || (dacloffset + sizeof(struct smb_acl) > pntsd_size)) { rc = -EINVAL; goto free_parent_pntsd; } parent_pdacl = (struct smb_acl *)((char *)parent_pntsd + dacloffset); + acl_len = pntsd_size - dacloffset; num_aces = le32_to_cpu(parent_pdacl->num_aces); pntsd_type = le16_to_cpu(parent_pntsd->type); + pdacl_size = le16_to_cpu(parent_pdacl->size); + + if (pdacl_size > acl_len || pdacl_size < sizeof(struct smb_acl)) { + rc = -EINVAL; + goto free_parent_pntsd; + } aces_base = kmalloc(sizeof(struct smb_ace) * num_aces * 2, GFP_KERNEL); if (!aces_base) { @@ -1000,11 +1029,23 @@ aces = (struct smb_ace *)aces_base; parent_aces = (struct smb_ace 
*)((char *)parent_pdacl + sizeof(struct smb_acl)); + aces_size = acl_len - sizeof(struct smb_acl); if (pntsd_type & DACL_AUTO_INHERITED) inherited_flags = INHERITED_ACE; for (i = 0; i < num_aces; i++) { + int pace_size; + + if (offsetof(struct smb_ace, access_req) > aces_size) + break; + + pace_size = le16_to_cpu(parent_aces->size); + if (pace_size > aces_size) + break; + + aces_size -= pace_size; + flags = parent_aces->flags; if (!smb_inherit_flags(flags, is_dir)) goto pass; @@ -1049,8 +1090,7 @@ aces = (struct smb_ace *)((char *)aces + le16_to_cpu(aces->size)); ace_cnt++; pass: - parent_aces = - (struct smb_ace *)((char *)parent_aces + le16_to_cpu(parent_aces->size)); + parent_aces = (struct smb_ace *)((char *)parent_aces + pace_size); } if (nt_size > 0) { @@ -1143,7 +1183,7 @@ struct smb_ntsd *pntsd = NULL; struct smb_acl *pdacl; struct posix_acl *posix_acls; - int rc = 0, acl_size; + int rc = 0, pntsd_size, acl_size, aces_size, pdacl_size, dacl_offset; struct smb_sid sid; int granted = le32_to_cpu(*pdaccess & ~FILE_MAXIMAL_ACCESS_LE); struct smb_ace *ace; @@ -1152,36 +1192,33 @@ struct smb_ace *others_ace = NULL; struct posix_acl_entry *pa_entry; unsigned int sid_type = SIDOWNER; - char *end_of_acl; + unsigned short ace_size; ksmbd_debug(SMB, "check permission using windows acl\n"); - acl_size = ksmbd_vfs_get_sd_xattr(conn, path->dentry, &pntsd); - if (acl_size <= 0 || !pntsd || !pntsd->dacloffset) { - kfree(pntsd); - return 0; - } + pntsd_size = ksmbd_vfs_get_sd_xattr(conn, + path->dentry, &pntsd); + if (pntsd_size <= 0 || !pntsd) + goto err_out; + + dacl_offset = le32_to_cpu(pntsd->dacloffset); + if (!dacl_offset || + (dacl_offset + sizeof(struct smb_acl) > pntsd_size)) + goto err_out; pdacl = (struct smb_acl *)((char *)pntsd + le32_to_cpu(pntsd->dacloffset)); - end_of_acl = ((char *)pntsd) + acl_size; - if (end_of_acl <= (char *)pdacl) { - kfree(pntsd); - return 0; - } + acl_size = pntsd_size - dacl_offset; + pdacl_size = le16_to_cpu(pdacl->size); - if (end_of_acl < (char *)pdacl + le16_to_cpu(pdacl->size) || - le16_to_cpu(pdacl->size) < sizeof(struct smb_acl)) { - kfree(pntsd); - return 0; - } + if (pdacl_size > acl_size || pdacl_size < sizeof(struct smb_acl)) + goto err_out; if (!pdacl->num_aces) { - if (!(le16_to_cpu(pdacl->size) - sizeof(struct smb_acl)) && + if (!(pdacl_size - sizeof(struct smb_acl)) && *pdaccess & ~(FILE_READ_CONTROL_LE | FILE_WRITE_DAC_LE)) { rc = -EACCES; goto err_out; } - kfree(pntsd); - return 0; + goto err_out; } if (*pdaccess & FILE_MAXIMAL_ACCESS_LE) { @@ -1189,11 +1226,16 @@ DELETE; ace = (struct smb_ace *)((char *)pdacl + sizeof(struct smb_acl)); + aces_size = acl_size - sizeof(struct smb_acl); for (i = 0; i < le32_to_cpu(pdacl->num_aces); i++) { + if (offsetof(struct smb_ace, access_req) > aces_size) + break; + ace_size = le16_to_cpu(ace->size); + if (ace_size > aces_size) + break; + aces_size -= ace_size; granted |= le32_to_cpu(ace->access_req); ace = (struct smb_ace *)((char *)ace + le16_to_cpu(ace->size)); - if (end_of_acl < (char *)ace) - goto err_out; } if (!pdacl->num_aces) @@ -1205,7 +1247,15 @@ id_to_sid(uid, sid_type, &sid); ace = (struct smb_ace *)((char *)pdacl + sizeof(struct smb_acl)); + aces_size = acl_size - sizeof(struct smb_acl); for (i = 0; i < le32_to_cpu(pdacl->num_aces); i++) { + if (offsetof(struct smb_ace, access_req) > aces_size) + break; + ace_size = le16_to_cpu(ace->size); + if (ace_size > aces_size) + break; + aces_size -= ace_size; + if (!compare_sids(&sid, &ace->sid) || !compare_sids(&sid_unix_NFS_mode, &ace->sid)) { 
found = 1; @@ -1215,8 +1265,6 @@ others_ace = ace; ace = (struct smb_ace *)((char *)ace + le16_to_cpu(ace->size)); - if (end_of_acl < (char *)ace) - goto err_out; } if (*pdaccess & FILE_MAXIMAL_ACCESS_LE && found) {
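Note: every ACE walk in smbacl.c now follows the same hardening pattern: track how many bytes remain in the DACL, require that at least the fixed part of an ACE header fits, validate the ACE's self-described size against the remainder, and only then consume it. Condensed from the hunks above:

    /* Condensed bounds-checked ACE iteration */
    aces_size = acl_size - sizeof(struct smb_acl);
    for (i = 0; i < num_aces; i++) {
        if (offsetof(struct smb_ace, access_req) > aces_size)
            break;                    /* not even a header left */
        ace_size = le16_to_cpu(ace->size);
        if (ace_size > aces_size)
            break;                    /* ACE claims more than remains */
        aces_size -= ace_size;
        /* ... consume *ace ... */
        ace = (struct smb_ace *)((char *)ace + ace_size);
    }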
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ksmbd/smbacl.h
Changed
@@ -192,7 +192,7 @@ int parse_sec_desc(struct smb_ntsd *pntsd, int acl_len, struct smb_fattr *fattr); int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *ppntsd, - int addition_info, __u32 *secdesclen, + int ppntsd_size, int addition_info, __u32 *secdesclen, struct smb_fattr *fattr); int init_acl_state(struct posix_acl_state *state, int cnt); void free_acl_state(struct posix_acl_state *state);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ksmbd/vfs.c
Changed
@@ -1495,6 +1495,11 @@ } *pntsd = acl.sd_buf; + if (acl.sd_size < sizeof(struct smb_ntsd)) { + pr_err("sd size is invalid\n"); + goto out_free; + } + (*pntsd)->osidoffset = cpu_to_le32(le32_to_cpu((*pntsd)->osidoffset) - NDR_NTSD_OFFSETOF); (*pntsd)->gsidoffset =
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ntfs3/attrib.c
Changed
@@ -1949,7 +1949,7 @@ return -ENOENT; if (!attr_b->non_res) { - u32 data_size = le32_to_cpu(attr->res.data_size); + u32 data_size = le32_to_cpu(attr_b->res.data_size); u32 from, to; if (vbo > data_size)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ntfs3/inode.c
Changed
@@ -629,12 +629,9 @@ bh->b_size = block_size; off = vbo & (PAGE_SIZE - 1); set_bh_page(bh, page, off); - ll_rw_block(REQ_OP_READ, 0, 1, &bh); - wait_on_buffer(bh); - if (!buffer_uptodate(bh)) { - err = -EIO; + err = bh_read(bh, 0); + if (err < 0) goto out; - } zero_user_segment(page, off + voff, off + block_size); } }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ocfs2/aops.c
Changed
@@ -640,7 +640,7 @@ !buffer_new(bh) && ocfs2_should_read_blk(inode, page, block_start) && (block_start < from || block_end > to)) { - ll_rw_block(REQ_OP_READ, 0, 1, &bh); + bh_read_nowait(bh, 0); *wait_bh++=bh; }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ocfs2/super.c
Changed
@@ -1772,9 +1772,7 @@ if (!buffer_dirty(*bh)) clear_buffer_uptodate(*bh); unlock_buffer(*bh); - ll_rw_block(REQ_OP_READ, 0, 1, bh); - wait_on_buffer(*bh); - if (!buffer_uptodate(*bh)) { + if (bh_read(*bh, 0) < 0) { mlog_errno(-EIO); brelse(*bh); *bh = NULL;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/reiserfs/journal.c
Changed
@@ -870,7 +870,7 @@ */ if (buffer_dirty(bh) && unlikely(bh->b_page->mapping == NULL)) { spin_unlock(lock); - ll_rw_block(REQ_OP_WRITE, 0, 1, &bh); + write_dirty_buffer(bh, 0); spin_lock(lock); } put_bh(bh); @@ -1054,7 +1054,7 @@ if (tbh) { if (buffer_dirty(tbh)) { depth = reiserfs_write_unlock_nested(s); - ll_rw_block(REQ_OP_WRITE, 0, 1, &tbh); + write_dirty_buffer(tbh, 0); reiserfs_write_lock_nested(s, depth); } put_bh(tbh) ; @@ -2239,7 +2239,7 @@ } } /* read in the log blocks, memcpy to the corresponding real block */ - ll_rw_block(REQ_OP_READ, 0, get_desc_trans_len(desc), log_blocks); + bh_read_batch(get_desc_trans_len(desc), log_blocks); for (i = 0; i < get_desc_trans_len(desc); i++) { wait_on_buffer(log_blocks[i]); @@ -2341,10 +2341,11 @@ } else bhlist[j++] = bh; } - ll_rw_block(REQ_OP_READ, 0, j, bhlist); + bh = bhlist[0]; + bh_read_nowait(bh, 0); + bh_readahead_batch(j - 1, &bhlist[1], 0); for (i = 1; i < j; i++) brelse(bhlist[i]); - bh = bhlist[0]; wait_on_buffer(bh); if (buffer_uptodate(bh)) return bh;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/reiserfs/stree.c
Changed
@@ -579,7 +579,7 @@ if (!buffer_uptodate(bh[j])) { if (depth == -1) depth = reiserfs_write_unlock_nested(s); - ll_rw_block(REQ_OP_READ, REQ_RAHEAD, 1, bh + j); + bh_readahead(bh[j], REQ_RAHEAD); } brelse(bh[j]); } @@ -685,7 +685,7 @@ if (!buffer_uptodate(bh) && depth == -1) depth = reiserfs_write_unlock_nested(sb); - ll_rw_block(REQ_OP_READ, 0, 1, &bh); + bh_read_nowait(bh, 0); wait_on_buffer(bh); if (depth != -1)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/reiserfs/super.c
Changed
@@ -1708,9 +1708,7 @@ /* after journal replay, reread all bitmap and super blocks */ static int reread_meta_blocks(struct super_block *s) { - ll_rw_block(REQ_OP_READ, 0, 1, &SB_BUFFER_WITH_SB(s)); - wait_on_buffer(SB_BUFFER_WITH_SB(s)); - if (!buffer_uptodate(SB_BUFFER_WITH_SB(s))) { + if (bh_read(SB_BUFFER_WITH_SB(s), 0) < 0) { reiserfs_warning(s, "reiserfs-2504", "error reading the super"); return 1; }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/resctrlfs.c
Changed
@@ -211,8 +211,7 @@ if (atomic_dec_and_test(&rdtgrp->waitcount) && (rdtgrp->flags & RDT_DELETED)) { kernfs_unbreak_active_protection(kn); - kernfs_put(rdtgrp->kn); - kfree(rdtgrp); + rdtgroup_remove(rdtgrp); } else { kernfs_unbreak_active_protection(kn); } @@ -272,12 +271,6 @@ if (dest_kn) *dest_kn = kn; - /* - * This extra ref will be put in kernfs_remove() and guarantees - * that @rdtgrp->kn is always accessible. - */ - kernfs_get(kn); - ret = resctrl_group_kn_set_ugid(kn); if (ret) goto out_destroy; @@ -399,8 +392,6 @@ if (ret) goto out_info; - kernfs_get(kn_mongrp); - ret = mkdir_mondata_all_prepare(&resctrl_group_default); if (ret < 0) goto out_mongrp; @@ -410,7 +401,6 @@ if (ret) goto out_mongrp; - kernfs_get(kn_mondata); resctrl_group_default.mon.mon_data_kn = kn_mondata; } @@ -495,7 +485,7 @@ /* rmid may not be used */ rmid_free(sentry->mon.rmid); list_del(&sentry->mon.crdtgrp_list); - kfree(sentry); + rdtgroup_remove(sentry); } } @@ -529,7 +519,7 @@ kernfs_remove(rdtgrp->kn); list_del(&rdtgrp->resctrl_group_list); - kfree(rdtgrp); + rdtgroup_remove(rdtgrp); } /* Notify online CPUs to update per cpu storage and PQR_ASSOC MSR */ update_closid_rmid(cpu_online_mask, &resctrl_group_default); @@ -622,12 +612,11 @@ case Opt_caPrio: ctx->enable_caPrio = true; return 0; + default: + break; + } - return 0; -} - -return -EINVAL; - + return -EINVAL; } static void resctrl_fs_context_free(struct fs_context *fc) @@ -776,7 +765,7 @@ * kernfs_remove() will drop the reference count on "kn" which * will free it. But we still need it to stick around for the * resctrl_group_kn_unlock(kn} call below. Take one extra reference - * here, which will be dropped inside resctrl_group_kn_unlock(). + * here, which will be dropped inside rdtgroup_remove(). */ kernfs_get(kn); @@ -816,6 +805,7 @@ out_prepare_clean: mkdir_mondata_all_prepare_clean(rdtgrp); out_destroy: + kernfs_put(rdtgrp->kn); kernfs_remove(rdtgrp->kn); out_free_rmid: rmid_free(rdtgrp->mon.rmid); @@ -832,7 +822,7 @@ static void mkdir_resctrl_prepare_clean(struct resctrl_group *rgrp) { kernfs_remove(rgrp->kn); - kfree(rgrp); + rdtgroup_remove(rgrp); } /* @@ -997,11 +987,6 @@ { resctrl_group_rm_mon(rdtgrp, tmpmask); - /* - * one extra hold on this, will drop when we kfree(rdtgrp) - * in resctrl_group_kn_unlock() - */ - kernfs_get(kn); kernfs_remove(rdtgrp->kn); return 0; @@ -1050,11 +1035,6 @@ { resctrl_group_rm_ctrl(rdtgrp, tmpmask); - /* - * one extra hold on this, will drop when we kfree(rdtgrp) - * in resctrl_group_kn_unlock() - */ - kernfs_get(kn); kernfs_remove(rdtgrp->kn); return 0;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/signalfd.c
Changed
@@ -248,6 +248,7 @@ .poll = signalfd_poll, .read = signalfd_read, .llseek = noop_llseek, + .may_pollfree = true, }; static int do_signalfd4(int ufd, sigset_t *mask, int flags)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/udf/dir.c
Changed
@@ -131,7 +131,7 @@ brelse(tmp); } if (num) { - ll_rw_block(REQ_OP_READ, REQ_RAHEAD, num, bha); + bh_readahead_batch(num, bha, REQ_RAHEAD); for (i = 0; i < num; i++) brelse(bha[i]); }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/udf/directory.c
Changed
@@ -89,7 +89,7 @@ brelse(tmp); } if (num) { - ll_rw_block(REQ_OP_READ, REQ_RAHEAD, num, bha); + bh_readahead_batch(num, bha, REQ_RAHEAD); for (i = 0; i < num; i++) brelse(bha[i]); }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/udf/inode.c
Changed
@@ -1210,13 +1210,7 @@ if (!bh) return NULL; - if (buffer_uptodate(bh)) - return bh; - - ll_rw_block(REQ_OP_READ, 0, 1, &bh); - - wait_on_buffer(bh); - if (buffer_uptodate(bh)) + if (bh_read(bh, 0) >= 0) return bh; brelse(bh);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/ufs/balloc.c
Changed
@@ -295,14 +295,10 @@ if (!buffer_mapped(bh)) map_bh(bh, inode->i_sb, oldb + pos); - if (!buffer_uptodate(bh)) { - ll_rw_block(REQ_OP_READ, 0, 1, &bh); - wait_on_buffer(bh); - if (!buffer_uptodate(bh)) { - ufs_error(inode->i_sb, __func__, - "read of block failed\n"); - break; - } + if (bh_read(bh, 0) < 0) { + ufs_error(inode->i_sb, __func__, + "read of block failed\n"); + break; } UFSD(" change from %llu to %llu, pos %u\n",
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/libxfs/xfs_btree.c
Changed
@@ -2811,7 +2811,7 @@ struct xfs_btree_split_args *args = container_of(work, struct xfs_btree_split_args, work); unsigned long pflags; - unsigned long new_pflags = PF_MEMALLOC_NOFS; + unsigned long new_pflags = 0; /* * we are in a transaction context here, but may also be doing work @@ -2823,12 +2823,20 @@ new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD; current_set_flags_nested(&pflags, new_pflags); + xfs_trans_set_context(args->cur->bc_tp); args->result = __xfs_btree_split(args->cur, args->level, args->ptrp, args->key, args->curp, args->stat); - complete(args->done); + xfs_trans_clear_context(args->cur->bc_tp); current_restore_flags_nested(&pflags, new_pflags); + + /* + * Do not access args after complete() has run here. We don't own args + * and the owner may run and free args before we return here. + */ + complete(args->done); + } /*
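Note: beyond swapping PF_MEMALLOC_NOFS for the new xfs_trans_set_context() (see the xfs_trans.h hunk below), this fixes a use-after-free: complete() hands ownership of args back to the waiter, whose stack frame may vanish the instant it wakes, so it must be the last access. The general shape of a safe completion handoff (the worker and its argument struct are illustrative, not from this patch):

    /* Safe handoff: publish results, then signal, then touch nothing */
    static void my_worker(struct work_struct *work)
    {
        struct my_args *args = container_of(work, struct my_args, work);

        args->result = do_the_work(args);    /* all writes to *args first */
        complete(args->done);                /* waiter may now free *args */
        /* no access to *args past this point */
    }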
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/libxfs/xfs_btree.h
Changed
@@ -523,7 +523,6 @@ struct xfs_buf *bp; block = xfs_btree_get_block(cur, level, &bp); - ASSERT(block && xfs_btree_check_block(cur, block, level, bp) == 0); if (cur->bc_flags & XFS_BTREE_LONG_PTRS) return block->bb_u.l.bb_rightsib == cpu_to_be64(NULLFSBLOCK);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_aops.c
Changed
@@ -98,7 +98,7 @@ * thus we need to mark ourselves as being in a transaction manually. * Similarly for freeze protection. */ - current_set_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); + xfs_trans_set_context(tp); __sb_writers_acquired(VFS_I(ip)->i_sb, SB_FREEZE_FS); /* we abort the update if there was an IO error */ @@ -538,6 +538,12 @@ { struct xfs_writepage_ctx wpc = { }; + if (WARN_ON_ONCE(current->journal_info)) { + redirty_page_for_writepage(wbc, page); + unlock_page(page); + return 0; + } + return iomap_writepage(page, wbc, &wpc.ctx, &xfs_writeback_ops); } @@ -548,6 +554,13 @@ { struct xfs_writepage_ctx wpc = { }; + /* + * Writing back data in a transaction context can result in recursive + * transactions. This is bad, so issue a warning and get out of here. + */ + if (WARN_ON_ONCE(current->journal_info)) + return 0; + xfs_iflags_clear(XFS_I(mapping->host), XFS_ITRUNCATED); return iomap_writepages(mapping, wbc, &wpc.ctx, &xfs_writeback_ops); }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_attr_inactive.c
Changed
@@ -158,6 +158,7 @@ } child_fsb = be32_to_cpu(ichdr.btree[0].before); xfs_trans_brelse(*trans, bp); /* no locks for later trans */ + bp = NULL; /* * If this is the node level just above the leaves, simply loop @@ -211,12 +212,8 @@ &child_bp); if (error) return error; - error = bp->b_error; - if (error) { - xfs_trans_brelse(*trans, child_bp); - return error; - } xfs_trans_binval(*trans, child_bp); + child_bp = NULL; /* * If we're not done, re-read the parent to get the next @@ -233,6 +230,7 @@ bp->b_addr); child_fsb = be32_to_cpu(phdr.btree[i + 1].before); xfs_trans_brelse(*trans, bp); + bp = NULL; } /* * Atomically commit the whole invalidate stuff.
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_buf_item.c
Changed
@@ -936,6 +936,8 @@ trace_xfs_buf_item_relse(bp, _RET_IP_); ASSERT(!test_bit(XFS_LI_IN_AIL, &bip->bli_item.li_flags)); + if (atomic_read(&bip->bli_refcount)) + return; bp->b_log_item = NULL; xfs_buf_rele(bp); xfs_buf_item_free(bip);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_fsops.c
Changed
@@ -374,46 +374,36 @@ * If the request is larger than the current reservation, reserve the * blocks before we update the reserve counters. Sample m_fdblocks and * perform a partial reservation if the request exceeds free space. + * + * The code below estimates how many blocks it can request from + * fdblocks to stash in the reserve pool. This is a classic TOCTOU + * race since fdblocks updates are not always coordinated via + * m_sb_lock. Set the reserve size even if there's not enough free + * space to fill it because mod_fdblocks will refill an undersized + * reserve when it can. */ - error = -ENOSPC; - do { - free = percpu_counter_sum(&mp->m_fdblocks) - - mp->m_alloc_set_aside; - if (free <= 0) - break; - - delta = request - mp->m_resblks; - lcounter = free - delta; - if (lcounter < 0) - /* We can't satisfy the request, just get what we can */ - fdblks_delta = free; - else - fdblks_delta = delta; - + free = percpu_counter_sum(&mp->m_fdblocks) - + xfs_fdblocks_unavailable(mp); + delta = request - mp->m_resblks; + mp->m_resblks = request; + if (delta > 0 && free > 0) { /* * We'll either succeed in getting space from the free block - * count or we'll get an ENOSPC. If we get a ENOSPC, it means - * things changed while we were calculating fdblks_delta and so - * we should try again to see if there is anything left to - * reserve. + * count or we'll get an ENOSPC. Don't set the reserved flag + * here - we don't want to reserve the extra reserve blocks + * from the reserve. * - * Don't set the reserved flag here - we don't want to reserve - * the extra reserve blocks from the reserve..... + * The desired reserve size can change after we drop the lock. + * Use mod_fdblocks to put the space into the reserve or into + * fdblocks as appropriate. */ + fdblks_delta = min(free, delta); spin_unlock(&mp->m_sb_lock); error = xfs_mod_fdblocks(mp, -fdblks_delta, 0); + if (!error) + xfs_mod_fdblocks(mp, fdblks_delta, 0); spin_lock(&mp->m_sb_lock); - } while (error == -ENOSPC); - - /* - * Update the reserve counters if blocks have been successfully - * allocated. - */ - if (!error && fdblks_delta) { - mp->m_resblks += fdblks_delta; - mp->m_resblks_avail += fdblks_delta; } - out: if (outval) { outval->resblks = mp->m_resblks;
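Note: the rewritten reservation logic drops the -ENOSPC retry loop, which could spin indefinitely against racing fdblocks updates. It samples free space once, records the requested pool size unconditionally, and moves at most min(free, delta) blocks; because m_resblks was already raised, handing the blocks back through xfs_mod_fdblocks() routes them into the now-undersized reserve rather than the free pool. The core arithmetic, with locking elided:

    free  = percpu_counter_sum(&mp->m_fdblocks) - xfs_fdblocks_unavailable(mp);
    delta = request - mp->m_resblks;
    mp->m_resblks = request;               /* always record the target */

    if (delta > 0 && free > 0) {
        fdblks_delta = min(free, delta);   /* take only what exists */
        error = xfs_mod_fdblocks(mp, -fdblks_delta, 0);
        if (!error)                        /* refund lands in the reserve */
            xfs_mod_fdblocks(mp, fdblks_delta, 0);
    }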
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_log.c
Changed
@@ -848,6 +848,24 @@ } /* + * Cycle all the iclogbuf locks to make sure all log IO completion + * is done before we tear down these buffers. + */ +static void +xlog_wait_iclog_completion(struct xlog *log) +{ + int i; + struct xlog_in_core *iclog = log->l_iclog; + + for (i = 0; i < log->l_iclog_bufs; i++) { + down(&iclog->ic_sema); + up(&iclog->ic_sema); + iclog = iclog->ic_next; + } +} + + +/* * Wait for the iclog and all prior iclogs to be written disk as required by the * log force state machine. Waiting on ic_force_wait ensures iclog completions * have been ordered and callbacks run before we are woken here, hence @@ -1034,6 +1052,13 @@ struct xfs_mount *mp) { xfs_log_quiesce(mp); + /* + * If shutdown has come from iclog IO context, the log + * cleaning will have been skipped and so we need to wait + * for the iclog to complete shutdown processing before we + * tear anything down. + */ + xlog_wait_iclog_completion(mp->m_log); xfs_trans_ail_destroy(mp); @@ -1942,17 +1967,6 @@ int i; /* - * Cycle all the iclogbuf locks to make sure all log IO completion - * is done before we tear down these buffers. - */ - iclog = log->l_iclog; - for (i = 0; i < log->l_iclog_bufs; i++) { - down(&iclog->ic_sema); - up(&iclog->ic_sema); - iclog = iclog->ic_next; - } - - /* * Destroy the CIL after waiting for iclog IO completion because an * iclog EIO error will try to shut down the log, which accesses the * CIL to wake up the waiters.
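Note: xlog_wait_iclog_completion() moves the existing semaphore-cycling trick out of xlog_dealloc_log() and runs it at unmount, before the AIL is torn down, so a shutdown raised from iclog IO context can drain before anything it touches is freed. The down()/up() pair is a pure ordering barrier: log IO completion holds ic_sema, so acquiring and immediately releasing it proves no completion is still in flight:

    /* Barrier idiom: take and drop each lock purely for the ordering */
    for (i = 0; i < log->l_iclog_bufs; i++) {
        down(&iclog->ic_sema);   /* blocks until IO completion releases it */
        up(&iclog->ic_sema);     /* the semaphore itself was never needed */
        iclog = iclog->ic_next;
    }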
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_mount.h
Changed
@@ -467,6 +467,14 @@ */ #define XFS_FDBLOCKS_BATCH 1024 +/* Accessor added for 5.10.y backport */ +static inline uint64_t +xfs_fdblocks_unavailable( + struct xfs_mount *mp) +{ + return mp->m_alloc_set_aside; +} + extern int xfs_mod_fdblocks(struct xfs_mount *mp, int64_t delta, bool reserved); extern int xfs_mod_frextents(struct xfs_mount *mp, int64_t delta);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_trans.c
Changed
@@ -72,6 +72,7 @@ xfs_extent_busy_clear(tp->t_mountp, &tp->t_busy, false); trace_xfs_trans_free(tp, _RET_IP_); + xfs_trans_clear_context(tp); if (!(tp->t_flags & XFS_TRANS_NO_WRITECOUNT)) sb_end_intwrite(tp->t_mountp->m_super); xfs_trans_free_dqinfo(tp); @@ -123,7 +124,8 @@ ntp->t_rtx_res = tp->t_rtx_res - tp->t_rtx_res_used; tp->t_rtx_res = tp->t_rtx_res_used; - ntp->t_pflags = tp->t_pflags; + + xfs_trans_switch_context(tp, ntp); /* move deferred ops over to the new tp */ xfs_defer_move(ntp, tp); @@ -157,9 +159,6 @@ int error = 0; bool rsvd = (tp->t_flags & XFS_TRANS_RESERVE) != 0; - /* Mark this thread as being in a transaction */ - current_set_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); - /* * Attempt to reserve the needed disk blocks by decrementing * the number needed from the number available. This will @@ -167,10 +166,8 @@ */ if (blocks > 0) { error = xfs_mod_fdblocks(mp, -((int64_t)blocks), rsvd); - if (error != 0) { - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); + if (error != 0) return -ENOSPC; - } tp->t_blk_res += blocks; } @@ -244,9 +241,6 @@ xfs_mod_fdblocks(mp, (int64_t)blocks, rsvd); tp->t_blk_res = 0; } - - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); - return error; } @@ -272,6 +266,7 @@ tp = kmem_cache_zalloc(xfs_trans_zone, GFP_KERNEL | __GFP_NOFAIL); if (!(flags & XFS_TRANS_NO_WRITECOUNT)) sb_start_intwrite(mp->m_super); + xfs_trans_set_context(tp); /* * Zero-reservation ("empty") transactions can't modify anything, so @@ -893,7 +888,6 @@ xlog_cil_commit(mp->m_log, tp, &commit_seq, regrant); - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); xfs_trans_free(tp); /* @@ -925,7 +919,6 @@ xfs_log_ticket_ungrant(mp->m_log, tp->t_ticket); tp->t_ticket = NULL; } - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); xfs_trans_free_items(tp, !!error); xfs_trans_free(tp); @@ -985,9 +978,6 @@ tp->t_ticket = NULL; } - /* mark this thread as no longer being in a transaction */ - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); - xfs_trans_free_items(tp, dirty); xfs_trans_free(tp); }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/fs/xfs/xfs_trans.h
Changed
@@ -266,4 +266,34 @@ struct xfs_dquot *gdqp, struct xfs_dquot *pdqp, bool force, struct xfs_trans **tpp); +static inline void +xfs_trans_set_context( + struct xfs_trans *tp) +{ + ASSERT(current->journal_info == NULL); + tp->t_pflags = memalloc_nofs_save(); + current->journal_info = tp; +} + +static inline void +xfs_trans_clear_context( + struct xfs_trans *tp) +{ + if (current->journal_info == tp) { + memalloc_nofs_restore(tp->t_pflags); + current->journal_info = NULL; + } +} + +static inline void +xfs_trans_switch_context( + struct xfs_trans *old_tp, + struct xfs_trans *new_tp) +{ + ASSERT(current->journal_info == old_tp); + new_tp->t_pflags = old_tp->t_pflags; + old_tp->t_pflags = 0; + current->journal_info = new_tp; +} + #endif /* __XFS_TRANS_H__ */
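Note: these three inlines replace the scattered current_set_flags_nested(PF_MEMALLOC_NOFS) calls removed from xfs_trans.c above: the NOFS allocation scope is stashed in tp->t_pflags via memalloc_nofs_save(), and current->journal_info doubles as the in-transaction marker that the new xfs_aops.c writeback checks test. A simplified view of how the context travels (commit/dup internals elided, call sites per the xfs_trans.c hunk):

    /* Lifetime of the transaction context after this change (sketch) */
    tp = xfs_trans_alloc(mp, ...);  /* set_context: NOFS on,
                                     * current->journal_info = tp */
    ntp = xfs_trans_dup(tp);        /* switch_context: marker and saved
                                     * pflags move to the new transaction */
    xfs_trans_commit(ntp);          /* free -> clear_context: NOFS restored,
                                     * current->journal_info = NULL */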
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/acpi/actbl2.h
Changed
@@ -647,7 +647,7 @@ u8 reserved3; /* reserved - must be zero */ }; -/* 11: Generic interrupt - GICC (ACPI 5.0 + ACPI 6.0 + ACPI 6.3 changes) */ +/* 11: Generic interrupt - GICC (ACPI 5.0 + ACPI 6.0 + ACPI 6.3 + ACPI 6.5 changes) */ struct acpi_madt_generic_interrupt { struct acpi_subtable_header header; @@ -667,6 +667,7 @@ u8 efficiency_class; u8 reserved2[1]; u16 spe_interrupt; /* ACPI 6.3 */ + u16 trbe_interrupt; /* ACPI 6.5 */ }; /* Masks for Flags field above */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/dt-bindings/clock/hi3516dv300-clock.h
Added
@@ -0,0 +1,101 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2016-2017 HiSilicon Technologies Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; either version 2 of the License, or (at your + * option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>. + * + */ + +#ifndef __DTS_HI3516DV300_CLOCK_H +#define __DTS_HI3516DV300_CLOCK_H + +/* clk in Hi3516CV500 CRG */ +/* fixed rate clocks */ +#define HI3516DV300_FIXED_3M 1 +#define HI3516DV300_FIXED_6M 2 +#define HI3516DV300_FIXED_12M 3 +#define HI3516DV300_FIXED_24M 4 +#define HI3516DV300_FIXED_50M 5 +#define HI3516DV300_FIXED_83P3M 6 +#define HI3516DV300_FIXED_100M 7 +#define HI3516DV300_FIXED_125M 8 +#define HI3516DV300_FIXED_148P5M 9 +#define HI3516DV300_FIXED_150M 10 +#define HI3516DV300_FIXED_200M 11 +#define HI3516DV300_FIXED_250M 12 +#define HI3516DV300_FIXED_300M 13 +#define HI3516DV300_FIXED_324M 14 +#define HI3516DV300_FIXED_342M 15 +#define HI3516DV300_FIXED_375M 16 +#define HI3516DV300_FIXED_400M 17 +#define HI3516DV300_FIXED_448M 18 +#define HI3516DV300_FIXED_500M 19 +#define HI3516DV300_FIXED_540M 20 +#define HI3516DV300_FIXED_600M 21 +#define HI3516DV300_FIXED_750M 22 +#define HI3516DV300_FIXED_1000M 23 +#define HI3516DV300_FIXED_1500M 24 +#define HI3516DV300_FIXED_54M 25 +#define HI3516DV300_FIXED_25M 26 +#define HI3516DV300_FIXED_163M 27 +#define HI3516DV300_FIXED_257M 28 +#define HI3516DV300_FIXED_396M 29 + +/* mux clocks */ +#define HI3516DV300_SYSAXI_CLK 30 +#define HI3516DV300_SYSAPB_CLK 31 +#define HI3516DV300_FMC_MUX 32 +#define HI3516DV300_UART_MUX 33 +#define HI3516DV300_MMC0_MUX 34 +#define HI3516DV300_MMC1_MUX 35 +#define HI3516DV300_MMC2_MUX 36 +#define HI3516DV300_UART1_MUX 33 +#define HI3516DV300_UART2_MUX 37 +#define HI3516DV300_UART4_MUX 38 +#define HI3516DV300_ETH_MUX 39 + +/* gate clocks */ +#define HI3516DV300_UART0_CLK 40 +#define HI3516DV300_UART1_CLK 41 +#define HI3516DV300_UART2_CLK 42 +#define HI3516DV300_FMC_CLK 43 +#define HI3516DV300_ETH0_CLK 44 +#define HI3516DV300_USB2_BUS_CLK 45 +#define HI3516DV300_USB2_CLK 46 +#define HI3516DV300_DMAC_CLK 47 +#define HI3516DV300_SPI0_CLK 48 +#define HI3516DV300_SPI1_CLK 49 +#define HI3516DV300_MMC0_CLK 50 +#define HI3516DV300_MMC1_CLK 51 +#define HI3516DV300_MMC2_CLK 52 +#define HI3516DV300_UART4_CLK 53 +#define HI3516DV300_SPI2_CLK 54 +#define HI3516DV300_I2C0_CLK 55 +#define HI3516DV300_I2C1_CLK 56 +#define HI3516DV300_I2C2_CLK 57 +#define HI3516DV300_I2C3_CLK 58 +#define HI3516DV300_I2C4_CLK 59 +#define HI3516DV300_I2C5_CLK 60 +#define HI3516DV300_I2C6_CLK 61 +#define HI3516DV300_I2C7_CLK 62 +#define HI3516DV300_UART3_MUX 63 +#define HI3516DV300_UART3_CLK 64 +#define HI3516DV300_DMAC_AXICLK 70 +#define HI3516DV300_PWM_CLK 71 +#define HI3516DV300_PWM_MUX 72 + +#define HI3516DV300_NR_CLKS 256 +#define HI3516DV300_NR_RSTS 256 + +#endif /* __DTS_HI3516DV300_CLOCK_H */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/blk-mq.h
Changed
@@ -303,15 +303,6 @@ KABI_RESERVE(1) }; -struct request_wrapper { - struct request rq; - - /* Time that I/O was counted in part_get_stat_info(). */ - u64 stat_time_ns; -}; - -#define request_to_wrapper(_rq) container_of(_rq, struct request_wrapper, rq) - typedef bool (busy_iter_fn)(struct blk_mq_hw_ctx *, struct request *, void *, bool); typedef bool (busy_tag_iter_fn)(struct request *, void *, bool); @@ -606,7 +597,7 @@ */ static inline struct request *blk_mq_rq_from_pdu(void *pdu) { - return pdu - sizeof(struct request_wrapper); + return pdu - sizeof(struct request); } /** @@ -620,7 +611,7 @@ */ static inline void *blk_mq_rq_to_pdu(struct request *rq) { - return request_to_wrapper(rq) + 1; + return rq + 1; } static inline struct blk_mq_hw_ctx *queue_hctx(struct request_queue *q, int id)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/blk_types.h
Changed
@@ -239,9 +239,6 @@ */ struct blkcg_gq *bi_blkg; struct bio_issue bi_issue; -#ifdef CONFIG_BLK_CGROUP_IOCOST - u64 bi_iocost_cost; -#endif #endif #ifdef CONFIG_BLK_INLINE_ENCRYPTION @@ -268,7 +265,11 @@ struct bio_set *bi_pool; +#ifdef CONFIG_BLK_CGROUP_IOCOST + KABI_USE(1, u64 bi_iocost_cost) +#else KABI_RESERVE(1) +#endif KABI_RESERVE(2) KABI_RESERVE(3) KABI_RESERVE(4)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/blkdev.h
Changed
@@ -115,6 +115,8 @@ #define RQF_MQ_POLL_SLEPT ((__force req_flags_t)(1 << 20)) /* ->timeout has been called, don't expire again */ #define RQF_TIMED_OUT ((__force req_flags_t)(1 << 21)) +/* The rq is allocated from block layer */ +#define RQF_FROM_BLOCK ((__force req_flags_t)(1 << 22)) /* flags that prevent us from merging requests: */ #define RQF_NOMERGE_FLAGS \ @@ -200,10 +202,6 @@ struct gendisk *rq_disk; struct hd_struct *part; -#ifdef CONFIG_BLK_RQ_ALLOC_TIME - /* Time that the first bio started allocating this request. */ - u64 alloc_time_ns; -#endif /* Time that this request was allocated for this IO. */ u64 start_time_ns; /* Time that I/O was submitted to the device. */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/buffer_head.h
Changed
@@ -117,6 +117,7 @@ * of the form "mark_buffer_foo()". These are higher-level functions which * do something in addition to setting a b_state bit. */ +BUFFER_FNS(Uptodate, uptodate) BUFFER_FNS(Dirty, dirty) TAS_BUFFER_FNS(Dirty, dirty) BUFFER_FNS(Lock, locked) @@ -134,30 +135,6 @@ BUFFER_FNS(Prio, prio) BUFFER_FNS(Defer_Completion, defer_completion) -static __always_inline void set_buffer_uptodate(struct buffer_head *bh) -{ - /* - * make it consistent with folio_mark_uptodate - * pairs with smp_load_acquire in buffer_uptodate - */ - smp_mb__before_atomic(); - set_bit(BH_Uptodate, &bh->b_state); -} - -static __always_inline void clear_buffer_uptodate(struct buffer_head *bh) -{ - clear_bit(BH_Uptodate, &bh->b_state); -} - -static __always_inline int buffer_uptodate(const struct buffer_head *bh) -{ - /* - * make it consistent with folio_test_uptodate - * pairs with smp_mb__before_atomic in set_buffer_uptodate - */ - return (smp_load_acquire(&bh->b_state) & (1UL << BH_Uptodate)) != 0; -} - #define bh_offset(bh) ((unsigned long)(bh)->b_data & ~PAGE_MASK) /* If we *know* page->private refers to buffer_heads */ @@ -230,6 +207,9 @@ sector_t bblock, unsigned blocksize); int bh_uptodate_or_lock(struct buffer_head *bh); int bh_submit_read(struct buffer_head *bh); +int __bh_read(struct buffer_head *bh, unsigned int op_flags, bool wait); +void __bh_read_batch(int nr, struct buffer_head *bhs[], + unsigned int op_flags, bool force_lock); extern int buffer_heads_over_limit; @@ -403,6 +383,41 @@ return __getblk_gfp(bdev, block, size, __GFP_MOVABLE); } +static inline void bh_readahead(struct buffer_head *bh, unsigned int op_flags) +{ + if (!buffer_uptodate(bh) && trylock_buffer(bh)) { + if (!buffer_uptodate(bh)) + __bh_read(bh, op_flags, false); + else + unlock_buffer(bh); + } +} + +static inline void bh_read_nowait(struct buffer_head *bh, unsigned int op_flags) +{ + if (!bh_uptodate_or_lock(bh)) + __bh_read(bh, op_flags, false); +} + +/* Returns 1 if buffer uptodated, 0 on success, and -EIO on error. */ +static inline int bh_read(struct buffer_head *bh, unsigned int op_flags) +{ + if (bh_uptodate_or_lock(bh)) + return 1; + return __bh_read(bh, op_flags, true); +} + +static inline void bh_read_batch(int nr, struct buffer_head *bhs[]) +{ + __bh_read_batch(nr, bhs, 0, true); +} + +static inline void bh_readahead_batch(int nr, struct buffer_head *bhs[], + unsigned int op_flags) +{ + __bh_read_batch(nr, bhs, op_flags, false); +} + /** * __bread() - reads a specified block and returns the bh * @bdev: the block_device to read from
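Note: the new wrappers encode three distinct intents on top of __bh_read()/__bh_read_batch(): a synchronous read, a fire-and-forget submission, and opportunistic readahead that refuses to block on a busy buffer. Choosing between them (illustrative):

    err = bh_read(bh, 0);          /* need the data now: 1 = was already
                                    * uptodate, 0 = read OK, -EIO = error */
    bh_read_nowait(bh, 0);         /* start the IO, wait_on_buffer() later */
    bh_readahead(bh, REQ_RAHEAD);  /* pure hint: trylock, silently skip if
                                    * busy or already uptodate */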
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/coresight.h
Changed
@@ -33,6 +33,8 @@ #define CORESIGHT_UNLOCK 0xc5acce55 +#define ARMV9_TRBE_PDEV_NAME "arm,trbe-v1" + extern struct bus_type coresight_bustype; enum coresight_dev_type {
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/fault-inject.h
Changed
@@ -20,7 +20,6 @@ atomic_t space; unsigned long verbose; bool task_filter; - bool no_warn; unsigned long stacktrace_depth; unsigned long require_start; unsigned long require_end; @@ -32,6 +31,10 @@ struct dentry *dname; }; +enum fault_flags { + FAULT_NOWARN = 1 << 0, +}; + #define FAULT_ATTR_INITIALIZER { \ .interval = 1, \ .times = ATOMIC_INIT(1), \ @@ -40,11 +43,11 @@ .ratelimit_state = RATELIMIT_STATE_INIT_DISABLED, \ .verbose = 2, \ .dname = NULL, \ - .no_warn = false, \ } #define DECLARE_FAULT_ATTR(name) struct fault_attr name = FAULT_ATTR_INITIALIZER int setup_fault_attr(struct fault_attr *attr, char *str); +bool should_fail_ex(struct fault_attr *attr, ssize_t size, int flags); bool should_fail(struct fault_attr *attr, ssize_t size); #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/fs.h
Changed
@@ -1899,7 +1899,7 @@ loff_t len, unsigned int remap_flags); int (*fadvise)(struct file *, loff_t, loff_t, int); - KABI_RESERVE(1) + KABI_USE(1, bool may_pollfree) KABI_RESERVE(2) KABI_RESERVE(3) KABI_RESERVE(4)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/migrate.h
Changed
@@ -56,6 +56,8 @@ struct page *newpage, struct page *page); extern int migrate_page_move_mapping(struct address_space *mapping, struct page *newpage, struct page *page, int extra_count); +void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep, + spinlock_t *ptl); #else static inline void putback_movable_pages(struct list_head *l) {}
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/mmzone.h
Changed
@@ -1009,6 +1009,8 @@ size_t *, loff_t *); int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *, int, void *, size_t *, loff_t *); +int percpu_max_batchsize_sysctl_handler(struct ctl_table *, int, + void *, size_t *, loff_t *); int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *, int, void *, size_t *, loff_t *); int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *, int, @@ -1016,6 +1018,7 @@ int numa_zonelist_order_handler(struct ctl_table *, int, void *, size_t *, loff_t *); extern int percpu_pagelist_fraction; +extern int percpu_max_batchsize; extern char numa_zonelist_order; #define NUMA_ZONELIST_ORDER_LEN 16
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/stop_machine.h
Changed
@@ -121,6 +121,22 @@ */ int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus); +/** + * stop_core_cpuslocked: - stop all threads on just one core + * @cpu: any cpu in the targeted core + * @fn: the function to run + * @data: the data ptr for @fn() + * + * Same as above, but instead of every CPU, only the logical CPUs of a + * single core are affected. + * + * Context: Must be called from within a cpus_read_lock() protected region. + * + * Return: 0 if all executions of @fn returned 0, any non zero return + * value if any returned non zero. + */ +int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data); + int stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus); #else /* CONFIG_SMP || CONFIG_HOTPLUG_CPU */
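A hedged usage sketch for the new helper (the test function is illustrative; per the kernel/stop_machine.c hunk later in this revision, the implementation is only built with CONFIG_SCHED_SMT):

#include <linux/cpu.h>
#include <linux/stop_machine.h>

/* Sketch only: fn runs with every SMT sibling of the core in the stopper. */
static int do_core_test(void *data)
{
	return 0;
}

static int run_test_on_core(unsigned int cpu)
{
	int ret;

	cpus_read_lock();	/* stop_core_cpuslocked() requires this */
	ret = stop_core_cpuslocked(cpu, do_core_test, NULL);
	cpus_read_unlock();

	return ret;
}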
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/swap.h
Changed
@@ -246,6 +246,11 @@ struct swap_cluster_info tail; }; +struct swap_extend_info { + struct percpu_ref users; /* indicate and keep swap device valid. */ + struct completion comp; /* seldom referenced */ +}; + /* * The in-memory structure used to track swap areas. */ @@ -293,7 +298,7 @@ */ struct work_struct discard_work; /* discard worker */ struct swap_cluster_list discard_clusters; /* discard clusters list */ - KABI_RESERVE(1) + KABI_USE(1, struct swap_extend_info *sei) KABI_RESERVE(2) struct plist_node avail_lists[]; /* * entries in swap_avail_heads, one @@ -535,7 +540,7 @@ static inline void put_swap_device(struct swap_info_struct *si) { - rcu_read_unlock(); + percpu_ref_put(&si->sei->users); } #else /* CONFIG_SWAP */ @@ -550,6 +555,15 @@ return NULL; } +static inline struct swap_info_struct *get_swap_device(swp_entry_t entry) +{ + return NULL; +} + +static inline void put_swap_device(struct swap_info_struct *si) +{ +} + #define swap_address_space(entry) (NULL) #define get_nr_swap_pages() 0L #define total_swap_pages 0L
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/usb.h
Changed
@@ -580,6 +580,7 @@ * @devaddr: device address, XHCI: assigned by HW, others: same as devnum * @can_submit: URBs may be submitted * @persist_enabled: USB_PERSIST enabled for this device + * @reset_in_progress: the device is being reset * @have_langid: whether string_langid is valid * @authorized: policy has said we can use it; * (user space) policy determines if we authorize this device to be @@ -665,6 +666,7 @@ unsigned can_submit:1; unsigned persist_enabled:1; + unsigned reset_in_progress:1; unsigned have_langid:1; unsigned authorized:1; unsigned authenticated:1;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/vdpa.h
Changed
@@ -6,9 +6,11 @@ #include <linux/device.h> #include <linux/interrupt.h> #include <linux/vhost_iotlb.h> +#include <linux/virtio_net.h> +#include <linux/if_ether.h> /** - * vDPA callback definition. + * struct vdpa_calllback - vDPA callback definition. * @callback: interrupt callback function * @private: the data passed to the callback function */ @@ -18,7 +20,7 @@ }; /** - * vDPA notification area + * struct vdpa_notification_area - vDPA notification area * @addr: base address of the notification area * @size: size of the notification area */ @@ -43,29 +45,33 @@ * @last_used_idx: used index */ struct vdpa_vq_state_packed { - u16 last_avail_counter:1; - u16 last_avail_idx:15; - u16 last_used_counter:1; - u16 last_used_idx:15; + u16 last_avail_counter:1; + u16 last_avail_idx:15; + u16 last_used_counter:1; + u16 last_used_idx:15; }; struct vdpa_vq_state { - union { - struct vdpa_vq_state_split split; - struct vdpa_vq_state_packed packed; - }; + union { + struct vdpa_vq_state_split split; + struct vdpa_vq_state_packed packed; + }; }; struct vdpa_mgmt_dev; /** - * vDPA device - representation of a vDPA device + * struct vdpa_device - representation of a vDPA device * @dev: underlying device * @dma_dev: the actual device that is performing DMA * @driver_override: driver name to force a match * @config: the configuration ops for this device. + * @cf_lock: Protects get and set access to configuration layout. * @index: device index * @features_valid: were features initialized? for legacy guests + * @ngroups: the number of virtqueue groups + * @nas: the number of address spaces + * @use_va: indicate whether virtual address must be used by this device * @nvqs: maximum number of supported virtqueues * @mdev: management device pointer; caller must setup when registering device as part * of dev_add() mgmtdev ops callback before invoking _vdpa_register_device(). @@ -75,14 +81,18 @@ struct device *dma_dev; const char *driver_override; const struct vdpa_config_ops *config; + struct rw_semaphore cf_lock; /* Protects get/set config */ unsigned int index; bool features_valid; - int nvqs; + bool use_va; + u32 nvqs; struct vdpa_mgmt_dev *mdev; + unsigned int ngroups; + unsigned int nas; }; /** - * vDPA IOVA range - the IOVA range support by the device + * struct vdpa_iova_range - the IOVA range support by the device * @first: start of the IOVA range * @last: end of the IOVA range */ @@ -91,8 +101,27 @@ u64 last; }; +struct vdpa_dev_set_config { + struct { + u8 mac[ETH_ALEN]; + u16 mtu; + u16 max_vq_pairs; + } net; + u64 mask; +}; + /** - * vDPA_config_ops - operations for configuring a vDPA device. + * Corresponding file area for device memory mapping + * @file: vma->vm_file for the mapping + * @offset: mapping offset in the vm_file + */ +struct vdpa_map_file { + struct file *file; + u64 offset; +}; + +/** + * struct vdpa_config_ops - operations for configuring a vDPA device. * Note: vDPA device drivers are required to implement all of the * operations unless it is mentioned to be optional in the following * list. 
@@ -133,7 +162,7 @@ * @vdev: vdpa device * @idx: virtqueue index * @state: pointer to returned state (last_avail_idx) - * @get_vq_notification: Get the notification area for a virtqueue + * @get_vq_notification: Get the notification area for a virtqueue (optional) * @vdev: vdpa device * @idx: virtqueue index * Returns the notifcation area @@ -147,20 +176,31 @@ * for the device * @vdev: vdpa device * Returns virtqueue algin requirement - * @get_features: Get virtio features supported by the device + * @get_vq_group: Get the group id for a specific + * virtqueue (optional) + * @vdev: vdpa device + * @idx: virtqueue index + * Returns u32: group id for this virtqueue + * @get_device_features: Get virtio features supported by the device * @vdev: vdpa device * Returns the virtio features support by the * device - * @set_features: Set virtio features supported by the driver + * @set_driver_features: Set virtio features supported by the driver * @vdev: vdpa device * @features: feature support by the driver * Returns integer: success (0) or error (< 0) + * @get_driver_features: Get the virtio driver features in action + * @vdev: vdpa device + * Returns the virtio features accepted * @set_config_cb: Set the config interrupt callback * @vdev: vdpa device * @cb: virtio-vdev interrupt callback structure * @get_vq_num_max: Get the max size of virtqueue * @vdev: vdpa device * Returns u16: max size of virtqueue + * @get_vq_num_min: Get the min size of virtqueue (optional) + * @vdev: vdpa device + * Returns u16: min size of virtqueue * @get_device_id: Get virtio device id * @vdev: vdpa device * Returns u32: virtio device id @@ -176,6 +216,9 @@ * @reset: Reset device * @vdev: vdpa device * Returns integer: success (0) or error (< 0) + * @suspend: Suspend or resume the device (optional) + * @vdev: vdpa device + * Returns integer: success (0) or error (< 0) * @get_config_size: Get the size of the configuration space includes * fields that are conditional on feature bits. * @vdev: vdpa device @@ -201,10 +244,17 @@ * @vdev: vdpa device * Returns the iova range supported by * the device. + * @set_group_asid: Set address space identifier for a + * virtqueue group (optional) + * @vdev: vdpa device + * @group: virtqueue group + * @asid: address space id for this group + * Returns integer: success (0) or error (< 0) * @set_map: Set device memory mapping (optional) * Needed for device that using device * specific DMA translation (on-chip IOMMU) * @vdev: vdpa device + * @asid: address space identifier * @iotlb: vhost memory mapping to be * used by the vDPA * Returns integer: success (0) or error (< 0) @@ -213,6 +263,7 @@ * specific DMA translation (on-chip IOMMU) * and preferring incremental map. * @vdev: vdpa device + * @asid: address space identifier * @iova: iova to be mapped * @size: size of the area * @pa: physical address for the map @@ -224,6 +275,7 @@ * specific DMA translation (on-chip IOMMU) * and preferring incremental unmap. 
* @vdev: vdpa device + * @asid: address space identifier * @iova: iova to be unmapped * @size: size of the area * Returns integer: success (0) or error (< 0) @@ -245,6 +297,9 @@ const struct vdpa_vq_state *state); int (*get_vq_state)(struct vdpa_device *vdev, u16 idx, struct vdpa_vq_state *state); + int (*get_vendor_vq_stats)(struct vdpa_device *vdev, u16 idx, + struct sk_buff *msg, + struct netlink_ext_ack *extack); struct vdpa_notification_area (*get_vq_notification)(struct vdpa_device *vdev, u16 idx); /* vq irq is not expected to be changed once DRIVER_OK is set */ @@ -252,16 +307,20 @@ /* Device ops */ u32 (*get_vq_align)(struct vdpa_device *vdev); - u64 (*get_features)(struct vdpa_device *vdev); - int (*set_features)(struct vdpa_device *vdev, u64 features); + u32 (*get_vq_group)(struct vdpa_device *vdev, u16 idx); + u64 (*get_device_features)(struct vdpa_device *vdev); + int (*set_driver_features)(struct vdpa_device *vdev, u64 features); + u64 (*get_driver_features)(struct vdpa_device *vdev); void (*set_config_cb)(struct vdpa_device *vdev, struct vdpa_callback *cb); u16 (*get_vq_num_max)(struct vdpa_device *vdev); + u16 (*get_vq_num_min)(struct vdpa_device *vdev); u32 (*get_device_id)(struct vdpa_device *vdev); u32 (*get_vendor_id)(struct vdpa_device *vdev); u8 (*get_status)(struct vdpa_device *vdev); void (*set_status)(struct vdpa_device *vdev, u8 status); int (*reset)(struct vdpa_device *vdev); + int (*suspend)(struct vdpa_device *vdev); size_t (*get_config_size)(struct vdpa_device *vdev); void (*get_config)(struct vdpa_device *vdev, unsigned int offset, void *buf, unsigned int len); @@ -271,10 +330,14 @@ struct vdpa_iova_range (*get_iova_range)(struct vdpa_device *vdev); /* DMA ops */ - int (*set_map)(struct vdpa_device *vdev, struct vhost_iotlb *iotlb); - int (*dma_map)(struct vdpa_device *vdev, u64 iova, u64 size, - u64 pa, u32 perm); - int (*dma_unmap)(struct vdpa_device *vdev, u64 iova, u64 size); + int (*set_map)(struct vdpa_device *vdev, unsigned int asid, + struct vhost_iotlb *iotlb); + int (*dma_map)(struct vdpa_device *vdev, unsigned int asid, + u64 iova, u64 size, u64 pa, u32 perm, void *opaque); + int (*dma_unmap)(struct vdpa_device *vdev, unsigned int asid, + u64 iova, u64 size); + int (*set_group_asid)(struct vdpa_device *vdev, unsigned int group, + unsigned int asid); /* Free device resources */ void (*free)(struct vdpa_device *vdev); @@ -282,24 +345,41 @@ struct vdpa_device *__vdpa_alloc_device(struct device *parent, const struct vdpa_config_ops *config, - size_t size, const char *name); + unsigned int ngroups, unsigned int nas, + size_t size, const char *name, + bool use_va); -#define vdpa_alloc_device(dev_struct, member, parent, config, name) \ - container_of(__vdpa_alloc_device( \ - parent, config, \ - sizeof(dev_struct) + \ +/** + * vdpa_alloc_device - allocate and initilaize a vDPA device + * + * @dev_struct: the type of the parent structure + * @member: the name of struct vdpa_device within the @dev_struct + * @parent: the parent device + * @config: the bus operations that is supported by this device + * @ngroups: the number of virtqueue groups supported by this device + * @nas: the number of address spaces + * @name: name of the vdpa device + * @use_va: indicate whether virtual address must be used by this device + * + * Return allocated data structure or ERR_PTR upon error + */ +#define vdpa_alloc_device(dev_struct, member, parent, config, ngroups, nas, \ + name, use_va) \ + container_of((__vdpa_alloc_device( \ + parent, config, ngroups, nas, \ + 
(sizeof(dev_struct) + \ BUILD_BUG_ON_ZERO(offsetof( \ - dev_struct, member)), name), \ + dev_struct, member))), name, use_va)), \ dev_struct, member) -int vdpa_register_device(struct vdpa_device *vdev, int nvqs); +int vdpa_register_device(struct vdpa_device *vdev, u32 nvqs); void vdpa_unregister_device(struct vdpa_device *vdev); -int _vdpa_register_device(struct vdpa_device *vdev, int nvqs); +int _vdpa_register_device(struct vdpa_device *vdev, u32 nvqs); void _vdpa_unregister_device(struct vdpa_device *vdev); /** - * vdpa_driver - operations for a vDPA driver + * struct vdpa_driver - operations for a vDPA driver * @driver: underlying device driver * @probe: the function to call when a device is found. Returns 0 or -errno. * @remove: the function to call when a device is removed. @@ -346,59 +426,82 @@ static inline int vdpa_reset(struct vdpa_device *vdev) { - const struct vdpa_config_ops *ops = vdev->config; + const struct vdpa_config_ops *ops = vdev->config; + int ret; + down_write(&vdev->cf_lock); vdev->features_valid = false; - return ops->reset(vdev); + ret = ops->reset(vdev); + up_write(&vdev->cf_lock); + return ret; } -static inline int vdpa_set_features(struct vdpa_device *vdev, u64 features) +static inline int vdpa_set_features_unlocked(struct vdpa_device *vdev, u64 features) { - const struct vdpa_config_ops *ops = vdev->config; + const struct vdpa_config_ops *ops = vdev->config; + int ret; vdev->features_valid = true; - return ops->set_features(vdev, features); -} + ret = ops->set_driver_features(vdev, features); + return ret; +} -static inline void vdpa_get_config(struct vdpa_device *vdev, unsigned offset, - void *buf, unsigned int len) +static inline int vdpa_set_features(struct vdpa_device *vdev, u64 features) { - const struct vdpa_config_ops *ops = vdev->config; - - /* - * Config accesses aren't supposed to trigger before features are set. - * If it does happen we assume a legacy guest. - */ - if (!vdev->features_valid) - vdpa_set_features(vdev, 0); - ops->get_config(vdev, offset, buf, len); + int ret; + + down_write(&vdev->cf_lock); + ret = vdpa_set_features_unlocked(vdev, features); + up_write(&vdev->cf_lock); + + return ret; } +void vdpa_get_config(struct vdpa_device *vdev, unsigned int offset, + void *buf, unsigned int len); +void vdpa_set_config(struct vdpa_device *dev, unsigned int offset, + const void *buf, unsigned int length); +void vdpa_set_status(struct vdpa_device *vdev, u8 status); + /** - * vdpa_mgmtdev_ops - vdpa device ops - * @dev_add: Add a vdpa device using alloc and register - * @mdev: parent device to use for device addition - * @name: name of the new vdpa device - * Driver need to add a new device using _vdpa_register_device() - * after fully initializing the vdpa device. Driver must return 0 - * on success or appropriate error code. - * @dev_del: Remove a vdpa device using unregister - * @mdev: parent device to use for device removal - * @dev: vdpa device to remove - * Driver need to remove the specified device by calling - * _vdpa_unregister_device(). + * struct vdpa_mgmtdev_ops - vdpa device ops + * @dev_add: Add a vdpa device using alloc and register + * @mdev: parent device to use for device addition + * @name: name of the new vdpa device + * @config: config attributes to apply to the device under creation + * Driver need to add a new device using _vdpa_register_device() + * after fully initializing the vdpa device. Driver must return 0 + * on success or appropriate error code. 
+ * @dev_del: Remove a vdpa device using unregister + * @mdev: parent device to use for device removal + * @dev: vdpa device to remove + * Driver need to remove the specified device by calling + * _vdpa_unregister_device(). */ struct vdpa_mgmtdev_ops { - int (*dev_add)(struct vdpa_mgmt_dev *mdev, const char *name); + int (*dev_add)(struct vdpa_mgmt_dev *mdev, const char *name, + const struct vdpa_dev_set_config *config); void (*dev_del)(struct vdpa_mgmt_dev *mdev, struct vdpa_device *dev); }; +/** + * struct vdpa_mgmt_dev - vdpa management device + * @device: Management parent device + * @ops: operations supported by management device + * @id_table: Pointer to device id table of supported ids + * @config_attr_mask: bit mask of attributes of type enum vdpa_attr that + * management device support during dev_add callback + * @list: list entry + */ struct vdpa_mgmt_dev { struct device *device; const struct vdpa_mgmtdev_ops *ops; - const struct virtio_device_id *id_table; /* supported ids */ + struct virtio_device_id *id_table; + u64 config_attr_mask; struct list_head list; + u64 supported_features; + u32 max_supported_vqs; }; int vdpa_mgmtdev_register(struct vdpa_mgmt_dev *mdev);
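A hedged sketch of a driver allocating a device against the widened vdpa_alloc_device() signature (struct my_vdpa and my_ops are placeholders, not from this changeset):

#include <linux/vdpa.h>

struct my_vdpa {
	struct vdpa_device vdpa;	/* first member, so IS_ERR() on the container works */
	/* driver-private state ... */
};

static const struct vdpa_config_ops my_ops;	/* callbacks elided */

static struct my_vdpa *my_vdpa_create(struct device *parent)
{
	struct my_vdpa *dev;

	/* one virtqueue group, one address space, no kernel-VA mappings */
	dev = vdpa_alloc_device(struct my_vdpa, vdpa, parent, &my_ops,
				1, 1, "my-vdpa", false);
	return dev;	/* ERR_PTR-encoded on failure */
}

A management driver would then call _vdpa_register_device(&dev->vdpa, nvqs) from its dev_add() callback, as the kerneldoc above describes.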
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/vhost_iotlb.h
Changed
@@ -17,6 +17,7 @@ u32 perm; u32 flags_padding; u64 __subtree_last; + void *opaque; }; #define VHOST_IOTLB_FLAG_RETIRE 0x1 @@ -29,10 +30,14 @@ unsigned int flags; }; +int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb, u64 start, u64 last, + u64 addr, unsigned int perm, void *opaque); int vhost_iotlb_add_range(struct vhost_iotlb *iotlb, u64 start, u64 last, u64 addr, unsigned int perm); void vhost_iotlb_del_range(struct vhost_iotlb *iotlb, u64 start, u64 last); +void vhost_iotlb_init(struct vhost_iotlb *iotlb, unsigned int limit, + unsigned int flags); struct vhost_iotlb *vhost_iotlb_alloc(unsigned int limit, unsigned int flags); void vhost_iotlb_free(struct vhost_iotlb *iotlb); void vhost_iotlb_reset(struct vhost_iotlb *iotlb);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/virtio.h
Changed
@@ -139,6 +139,7 @@ int virtio_device_freeze(struct virtio_device *dev); int virtio_device_restore(struct virtio_device *dev); #endif +void virtio_reset_device(struct virtio_device *dev); size_t virtio_max_dma_size(struct virtio_device *vdev);
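virtio_reset_device() wraps what drivers previously open-coded as dev->config->reset(dev). A hedged sketch of the usual remove-path ordering (the driver function name is illustrative):

#include <linux/virtio.h>
#include <linux/virtio_config.h>

static void my_virtio_remove(struct virtio_device *vdev)
{
	/* Quiesce the device before tearing down shared state. */
	virtio_reset_device(vdev);

	/* Safe now: the device no longer touches the rings. */
	vdev->config->del_vqs(vdev);
}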
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/linux/virtio_pci_modern.h
Changed
@@ -5,6 +5,13 @@ #include <linux/pci.h> #include <linux/virtio_pci.h> +struct virtio_pci_modern_common_cfg { + struct virtio_pci_common_cfg cfg; + + __le16 queue_notify_data; /* read-write */ + __le16 queue_reset; /* read-write */ +}; + struct virtio_pci_modern_device { struct pci_dev *pci_dev; @@ -102,15 +109,10 @@ u16 vp_modern_get_queue_size(struct virtio_pci_modern_device *mdev, u16 idx); u16 vp_modern_get_num_queues(struct virtio_pci_modern_device *mdev); -u16 vp_modern_get_queue_notify_off(struct virtio_pci_modern_device *mdev, - u16 idx); -void __iomem *vp_modern_map_capability(struct virtio_pci_modern_device *mdev, int off, - size_t minlen, - u32 align, - u32 start, u32 size, - size_t *len, resource_size_t *pa); void __iomem *vp_modern_map_vq_notify(struct virtio_pci_modern_device *mdev, u16 index, resource_size_t *pa); int vp_modern_probe(struct virtio_pci_modern_device *mdev); void vp_modern_remove(struct virtio_pci_modern_device *mdev); +int vp_modern_get_queue_reset(struct virtio_pci_modern_device *mdev, u16 index); +void vp_modern_set_queue_reset(struct virtio_pci_modern_device *mdev, u16 index); #endif
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/net/sock.h
Changed
@@ -318,7 +318,7 @@ * @sk_tskey: counter to disambiguate concurrent tstamp requests * @sk_zckey: counter to order MSG_ZEROCOPY notifications * @sk_socket: Identd and reporting IO signals - * @sk_user_data: RPC layer private data + * @sk_user_data: RPC layer private data. Write-protected by @sk_callback_lock. * @sk_frag: cached page frag * @sk_peek_off: current peek_offset value * @sk_send_head: front of stuff to transmit
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/scsi/iscsi_if.h
Changed
@@ -761,7 +761,6 @@ and verification */ #define CAP_LOGIN_OFFLOAD 0x4000 /* offload session login */ -#define CAP_OPS_EXPAND 0x8000 /* oiscsi_transport->ops_expand flag */ /* * These flags describes reason of stop_conn() call */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/scsi/scsi_transport_iscsi.h
Changed
@@ -29,15 +29,6 @@ struct iscsi_bus_flash_session; struct iscsi_bus_flash_conn; -/* - * The expansion of iscsi_transport to fix kabi while adding members. - */ -struct iscsi_transport_expand { - int (*tgt_dscvr)(struct Scsi_Host *shost, enum iscsi_tgt_dscvr type, - uint32_t enable, struct sockaddr *dst_addr); - void (*unbind_conn)(struct iscsi_cls_conn *conn, bool is_active); -}; - /** * struct iscsi_transport - iSCSI Transport template * @@ -132,15 +123,8 @@ int non_blocking); int (*ep_poll) (struct iscsi_endpoint *ep, int timeout_ms); void (*ep_disconnect) (struct iscsi_endpoint *ep); -#ifdef __GENKSYMS__ int (*tgt_dscvr) (struct Scsi_Host *shost, enum iscsi_tgt_dscvr type, uint32_t enable, struct sockaddr *dst_addr); -#else - /* - * onece ops_expand is used, caps must be set to CAP_OPS_EXPAND - */ - struct iscsi_transport_expand *ops_expand; -#endif int (*set_path) (struct Scsi_Host *shost, struct iscsi_path *params); int (*set_iface_param) (struct Scsi_Host *shost, void *data, uint32_t len);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/trace/events/intel_ifs.h
Added
@@ -0,0 +1,41 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM intel_ifs + +#if !defined(_TRACE_IFS_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_IFS_H + +#include <linux/ktime.h> +#include <linux/tracepoint.h> + +TRACE_EVENT(ifs_status, + + TP_PROTO(int cpu, union ifs_scan activate, union ifs_status status), + + TP_ARGS(cpu, activate, status), + + TP_STRUCT__entry( + __field( u64, status ) + __field( int, cpu ) + __field( u8, start ) + __field( u8, stop ) + ), + + TP_fast_assign( + __entry->cpu = cpu; + __entry->start = activate.start; + __entry->stop = activate.stop; + __entry->status = status.data; + ), + + TP_printk("cpu: %d, start: %.2x, stop: %.2x, status: %llx", + __entry->cpu, + __entry->start, + __entry->stop, + __entry->status) +); + +#endif /* _TRACE_IFS_H */ + +/* This part must be outside protection */ +#include <trace/define_trace.h>
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/uapi/linux/if_link.h
Changed
@@ -689,8 +689,8 @@ enum ipvlan_mode { IPVLAN_MODE_L2 = 0, IPVLAN_MODE_L3, - IPVLAN_MODE_L2E, IPVLAN_MODE_L3S, + IPVLAN_MODE_L2E, IPVLAN_MODE_MAX };
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/uapi/linux/vdpa.h
Changed
@@ -17,11 +17,16 @@ VDPA_CMD_DEV_NEW, VDPA_CMD_DEV_DEL, VDPA_CMD_DEV_GET, /* can dump */ + VDPA_CMD_DEV_CONFIG_GET, /* can dump */ + VDPA_CMD_DEV_VSTATS_GET, }; enum vdpa_attr { VDPA_ATTR_UNSPEC, + /* Pad attribute for 64b alignment */ + VDPA_ATTR_PAD = VDPA_ATTR_UNSPEC, + /* bus name (optional) + dev name together make the parent device handle */ VDPA_ATTR_MGMTDEV_BUS_NAME, /* string */ VDPA_ATTR_MGMTDEV_DEV_NAME, /* string */ @@ -32,6 +37,20 @@ VDPA_ATTR_DEV_VENDOR_ID, /* u32 */ VDPA_ATTR_DEV_MAX_VQS, /* u32 */ VDPA_ATTR_DEV_MAX_VQ_SIZE, /* u16 */ + VDPA_ATTR_DEV_MIN_VQ_SIZE, /* u16 */ + + VDPA_ATTR_DEV_NET_CFG_MACADDR, /* binary */ + VDPA_ATTR_DEV_NET_STATUS, /* u8 */ + VDPA_ATTR_DEV_NET_CFG_MAX_VQP, /* u16 */ + VDPA_ATTR_DEV_NET_CFG_MTU, /* u16 */ + + VDPA_ATTR_DEV_NEGOTIATED_FEATURES, /* u64 */ + VDPA_ATTR_DEV_MGMTDEV_MAX_VQS, /* u32 */ + VDPA_ATTR_DEV_SUPPORTED_FEATURES, /* u64 */ + + VDPA_ATTR_DEV_QUEUE_INDEX, /* u32 */ + VDPA_ATTR_DEV_VENDOR_ATTR_NAME, /* string */ + VDPA_ATTR_DEV_VENDOR_ATTR_VALUE, /* u64 */ /* new attributes must be added above here */ VDPA_ATTR_MAX,
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/uapi/linux/vhost.h
Changed
@@ -89,11 +89,6 @@ /* Set or get vhost backend capability */ -/* Use message type V2 */ -#define VHOST_BACKEND_F_IOTLB_MSG_V2 0x1 -/* IOTLB can accept batching hints */ -#define VHOST_BACKEND_F_IOTLB_BATCH 0x2 - #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64) #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64) @@ -150,11 +145,39 @@ /* Get the valid iova range */ #define VHOST_VDPA_GET_IOVA_RANGE _IOR(VHOST_VIRTIO, 0x78, \ struct vhost_vdpa_iova_range) - /* Get the config size */ #define VHOST_VDPA_GET_CONFIG_SIZE _IOR(VHOST_VIRTIO, 0x79, __u32) /* Get the count of all virtqueues */ #define VHOST_VDPA_GET_VQS_COUNT _IOR(VHOST_VIRTIO, 0x80, __u32) +/* Get the number of virtqueue groups. */ +#define VHOST_VDPA_GET_GROUP_NUM _IOR(VHOST_VIRTIO, 0x81, __u32) + +/* Get the number of address spaces. */ +#define VHOST_VDPA_GET_AS_NUM _IOR(VHOST_VIRTIO, 0x7A, unsigned int) + +/* Get the group for a virtqueue: read index, write group in num, + * The virtqueue index is stored in the index field of + * vhost_vring_state. The group for this specific virtqueue is + * returned via num field of vhost_vring_state. + */ +#define VHOST_VDPA_GET_VRING_GROUP _IOWR(VHOST_VIRTIO, 0x7B, \ + struct vhost_vring_state) +/* Set the ASID for a virtqueue group. The group index is stored in + * the index field of vhost_vring_state, the ASID associated with this + * group is stored at num field of vhost_vring_state. + */ +#define VHOST_VDPA_SET_GROUP_ASID _IOW(VHOST_VIRTIO, 0x7C, \ + struct vhost_vring_state) + +/* Suspend a device so it does not process virtqueue requests anymore + * + * After the return of ioctl the device must preserve all the necessary state + * (the virtqueue vring base plus the possible device specific states) that is + * required for restoring in the future. The device must not change its + * configuration after that point. + */ +#define VHOST_VDPA_SUSPEND _IO(VHOST_VIRTIO, 0x7D) + #endif
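From userspace, the new ioctls compose as follows; a hedged sketch (device path, group and ASID numbers are illustrative, and VHOST_VDPA_SUSPEND additionally assumes the backend negotiated VHOST_BACKEND_F_SUSPEND):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
	struct vhost_vring_state state;
	int fd = open("/dev/vhost-vdpa-0", O_RDWR);

	if (fd < 0)
		return 1;

	/* Which group does virtqueue 0 belong to? */
	state.index = 0;
	if (ioctl(fd, VHOST_VDPA_GET_VRING_GROUP, &state) == 0)
		printf("vq 0 is in group %u\n", state.num);

	/* Attach that group to address space 1. */
	state.index = state.num;
	state.num = 1;
	ioctl(fd, VHOST_VDPA_SET_GROUP_ASID, &state);

	/* Quiesce the device; vring state must be preserved. */
	ioctl(fd, VHOST_VDPA_SUSPEND);
	return 0;
}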
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/uapi/linux/vhost_types.h
Changed
@@ -87,7 +87,7 @@ struct vhost_msg_v2 { __u32 type; - __u32 reserved; + __u32 asid; union { struct vhost_iotlb_msg iotlb; __u8 padding64; @@ -153,4 +153,15 @@ /* vhost-net should add virtio_net_hdr for RX, and strip for TX packets. */ #define VHOST_NET_F_VIRTIO_NET_HDR 27 +/* Use message type V2 */ +#define VHOST_BACKEND_F_IOTLB_MSG_V2 0x1 +/* IOTLB can accept batching hints */ +#define VHOST_BACKEND_F_IOTLB_BATCH 0x2 +/* IOTLB can accept address space identifier through V2 type of IOTLB + * message + */ +#define VHOST_BACKEND_F_IOTLB_ASID 0x3 +/* Device can be suspended */ +#define VHOST_BACKEND_F_SUSPEND 0x4 + #endif
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/include/uapi/linux/virtio_pci.h
Changed
@@ -202,6 +202,8 @@ #define VIRTIO_PCI_COMMON_Q_AVAILHI 44 #define VIRTIO_PCI_COMMON_Q_USEDLO 48 #define VIRTIO_PCI_COMMON_Q_USEDHI 52 +#define VIRTIO_PCI_COMMON_Q_NDATA 56 +#define VIRTIO_PCI_COMMON_Q_RESET 58 #endif /* VIRTIO_PCI_NO_MODERN */
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/entry/common.c
Changed
@@ -394,7 +394,7 @@ instrumentation_begin(); if (IS_ENABLED(CONFIG_PREEMPTION)) { -#ifdef CONFIG_PREEMT_DYNAMIC +#ifdef CONFIG_PREEMPT_DYNAMIC static_call(irqentry_exit_cond_resched)(); #else irqentry_exit_cond_resched();
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/fork.c
Changed
@@ -688,6 +688,7 @@ mmu_notifier_subscriptions_destroy(mm); check_mm(mm); put_user_ns(mm->user_ns); + mm_pasid_drop(mm); free_mm(mm); } EXPORT_SYMBOL_GPL(__mmdrop); @@ -856,7 +857,7 @@ static bool dup_resvd_task_struct(struct task_struct *dst, struct task_struct *orig, int node) { - dst->_resvd = kmalloc_node(sizeof(struct task_struct_resvd), + dst->_resvd = kzalloc_node(sizeof(struct task_struct_resvd), GFP_KERNEL, node); if (!dst->_resvd) return false; @@ -1137,7 +1138,6 @@ } if (mm->binfmt) module_put(mm->binfmt->module); - mm_pasid_drop(mm); mmdrop(mm); }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/jump_label.c
Changed
@@ -823,6 +823,7 @@ static void jump_label_update(struct static_key *key) { struct jump_entry *stop = __stop___jump_table; + bool init = system_state < SYSTEM_RUNNING; struct jump_entry *entry; #ifdef CONFIG_MODULES struct module *mod; @@ -834,15 +835,16 @@ preempt_disable(); mod = __module_address((unsigned long)key); - if (mod) + if (mod) { stop = mod->jump_entries + mod->num_jump_entries; + init = mod->state == MODULE_STATE_COMING; + } preempt_enable(); #endif entry = static_key_entries(key); /* if there are no users, entry can be NULL */ if (entry) - __jump_label_update(key, entry, stop, - system_state < SYSTEM_RUNNING); + __jump_label_update(key, entry, stop, init); } #ifdef CONFIG_STATIC_KEYS_SELFTEST
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/livepatch/core.c
Changed
@@ -1041,11 +1041,13 @@ func->old_name); return -ENOENT; } +#ifdef CONFIG_LIVEPATCH_STOP_MACHINE_CONSISTENCY if (func->old_size < KLP_MAX_REPLACE_SIZE) { pr_err("%s size less than limit (%lu < %zu)\n", func->old_name, func->old_size, KLP_MAX_REPLACE_SIZE); return -EINVAL; } +#endif #ifdef PPC64_ELF_ABI_v1 /* @@ -1195,6 +1197,7 @@ static inline int klp_static_call_register(struct module *mod) { return 0; } #endif +#ifdef CONFIG_LIVEPATCH_STOP_MACHINE_CONSISTENCY static int check_address_conflict(struct klp_patch *patch) { struct klp_object *obj; @@ -1231,6 +1234,7 @@ } return 0; } +#endif static int klp_init_patch(struct klp_patch *patch) { @@ -1278,11 +1282,11 @@ } module_enable_ro(patch->mod, true); +#ifdef CONFIG_LIVEPATCH_STOP_MACHINE_CONSISTENCY ret = check_address_conflict(patch); if (ret) return ret; -#ifdef CONFIG_LIVEPATCH_STOP_MACHINE_CONSISTENCY klp_for_each_object(patch, obj) klp_load_hook(obj); #endif
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/sched/autogroup.c
Changed
@@ -5,7 +5,7 @@ #include <linux/nospec.h> #include "sched.h" -unsigned int __read_mostly sysctl_sched_autogroup_enabled = 1; +unsigned int __read_mostly sysctl_sched_autogroup_enabled; static struct autogroup autogroup_default; static atomic_t autogroup_seq_nr;
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/sched/core.c
Changed
@@ -9121,6 +9121,12 @@ return -EINVAL; /* + * Ensure burst equals to zero when quota is -1. + */ + if (quota == RUNTIME_INF && burst) + return -EINVAL; + + /* * Prevent race between setting of cfs_rq->runtime_enabled and * unthrottle_offline_cfs_rqs(). */ @@ -9179,8 +9185,10 @@ period = ktime_to_ns(tg->cfs_bandwidth.period); burst = tg->cfs_bandwidth.burst; - if (cfs_quota_us < 0) + if (cfs_quota_us < 0) { quota = RUNTIME_INF; + burst = 0; + } else if ((u64)cfs_quota_us <= U64_MAX / NSEC_PER_USEC) quota = (u64)cfs_quota_us * NSEC_PER_USEC; else
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/sched/fair.c
Changed
@@ -124,6 +124,13 @@ #endif #ifdef CONFIG_QOS_SCHED + +/* + * To distinguish cfs bw, use QOS_THROTTLED mark cfs_rq->throttled + * when qos throttled(and cfs bw throttle mark cfs_rq->throttled as 1). + */ +#define QOS_THROTTLED 2 + static DEFINE_PER_CPU_SHARED_ALIGNED(struct list_head, qos_throttled_cfs_rq); static DEFINE_PER_CPU_SHARED_ALIGNED(struct hrtimer, qos_overload_timer); static DEFINE_PER_CPU(int, qos_cpu_overload); @@ -4932,6 +4939,14 @@ se = cfs_rq->tg->se[cpu_of(rq)]; +#ifdef CONFIG_QOS_SCHED + /* + * if this cfs_rq throttled by qos, not need unthrottle it. + */ + if (cfs_rq->throttled == QOS_THROTTLED) + return; +#endif + cfs_rq->throttled = 0; update_rq_clock(rq); @@ -7278,26 +7293,6 @@ static void start_qos_hrtimer(int cpu); -static int qos_tg_unthrottle_up(struct task_group *tg, void *data) -{ - struct rq *rq = data; - struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)]; - - cfs_rq->throttle_count--; - - return 0; -} - -static int qos_tg_throttle_down(struct task_group *tg, void *data) -{ - struct rq *rq = data; - struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)]; - - cfs_rq->throttle_count++; - - return 0; -} - static void throttle_qos_cfs_rq(struct cfs_rq *cfs_rq) { struct rq *rq = rq_of(cfs_rq); @@ -7309,7 +7304,7 @@ /* freeze hierarchy runnable averages while throttled */ rcu_read_lock(); - walk_tg_tree_from(cfs_rq->tg, qos_tg_throttle_down, tg_nop, (void *)rq); + walk_tg_tree_from(cfs_rq->tg, tg_throttle_down, tg_nop, (void *)rq); rcu_read_unlock(); task_delta = cfs_rq->h_nr_running; @@ -7320,8 +7315,13 @@ if (!se->on_rq) break; - if (dequeue) + if (dequeue) { dequeue_entity(qcfs_rq, se, DEQUEUE_SLEEP); + } else { + update_load_avg(qcfs_rq, se, 0); + se_update_runnable(se); + } + qcfs_rq->h_nr_running -= task_delta; qcfs_rq->idle_h_nr_running -= idle_task_delta; @@ -7339,7 +7339,7 @@ if (list_empty(&per_cpu(qos_throttled_cfs_rq, cpu_of(rq)))) start_qos_hrtimer(cpu_of(rq)); - cfs_rq->throttled = 1; + cfs_rq->throttled = QOS_THROTTLED; list_add(&cfs_rq->qos_throttled_list, &per_cpu(qos_throttled_cfs_rq, cpu_of(rq))); @@ -7349,12 +7349,14 @@ { struct rq *rq = rq_of(cfs_rq); struct sched_entity *se; - int enqueue = 1; unsigned int prev_nr = cfs_rq->h_nr_running; long task_delta, idle_task_delta; se = cfs_rq->tg->se[cpu_of(rq)]; + if (cfs_rq->throttled != QOS_THROTTLED) + return; + cfs_rq->throttled = 0; update_rq_clock(rq); @@ -7362,7 +7364,7 @@ /* update hierarchical throttle state */ rcu_read_lock(); - walk_tg_tree_from(cfs_rq->tg, tg_nop, qos_tg_unthrottle_up, (void *)rq); + walk_tg_tree_from(cfs_rq->tg, tg_nop, tg_unthrottle_up, (void *)rq); rcu_read_unlock(); if (!cfs_rq->load.weight) @@ -7372,26 +7374,58 @@ idle_task_delta = cfs_rq->idle_h_nr_running; for_each_sched_entity(se) { if (se->on_rq) - enqueue = 0; + break; + + cfs_rq = cfs_rq_of(se); + enqueue_entity(cfs_rq, se, ENQUEUE_WAKEUP); + + cfs_rq->h_nr_running += task_delta; + cfs_rq->idle_h_nr_running += idle_task_delta; + if (cfs_rq_throttled(cfs_rq)) + goto unthrottle_throttle; + } + + for_each_sched_entity(se) { cfs_rq = cfs_rq_of(se); - if (enqueue) - enqueue_entity(cfs_rq, se, ENQUEUE_WAKEUP); + + update_load_avg(cfs_rq, se, UPDATE_TG); + se_update_runnable(se); + cfs_rq->h_nr_running += task_delta; cfs_rq->idle_h_nr_running += idle_task_delta; + /* end evaluation on encountering a throttled cfs_rq */ if (cfs_rq_throttled(cfs_rq)) - break; + goto unthrottle_throttle; + + /* + * One parent has been throttled and cfs_rq removed from the + * list. Add it back to not break the leaf list. 
+ */ + if (throttled_hierarchy(cfs_rq)) + list_add_leaf_cfs_rq(cfs_rq); } - assert_list_leaf_cfs_rq(rq); + add_nr_running(rq, task_delta); + if (prev_nr < 2 && prev_nr + task_delta >= 2) + overload_set(rq); - if (!se) { - add_nr_running(rq, task_delta); - if (prev_nr < 2 && prev_nr + task_delta >= 2) - overload_set(rq); +unthrottle_throttle: + /* + * The cfs_rq_throttled() breaks in the above iteration can result in + * incomplete leaf list maintenance, resulting in triggering the + * assertion below. + */ + for_each_sched_entity(se) { + cfs_rq = cfs_rq_of(se); + + if (list_add_leaf_cfs_rq(cfs_rq)) + break; } + assert_list_leaf_cfs_rq(rq); + /* Determine whether we need to wake up potentially idle CPU: */ if (rq->curr == rq->idle && rq->cfs.nr_running) resched_curr(rq); @@ -12157,7 +12191,7 @@ for_each_possible_cpu(i) { #ifdef CONFIG_QOS_SCHED - if (tg->cfs_rq) + if (tg->cfs_rq && tg->cfs_rq[i]) unthrottle_qos_sched_group(tg->cfs_rq[i]); #endif if (tg->cfs_rq)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/stop_machine.c
Changed
@@ -624,6 +624,27 @@ } EXPORT_SYMBOL_GPL(stop_machine); +#ifdef CONFIG_SCHED_SMT +int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data) +{ + const struct cpumask *smt_mask = cpu_smt_mask(cpu); + + struct multi_stop_data msdata = { + .fn = fn, + .data = data, + .num_threads = cpumask_weight(smt_mask), + .active_cpus = smt_mask, + }; + + lockdep_assert_cpus_held(); + + /* Set the initial state and stop all online cpus. */ + set_state(&msdata, MULTI_STOP_PREPARE); + return stop_cpus(smt_mask, multi_cpu_stop, &msdata); +} +EXPORT_SYMBOL_GPL(stop_core_cpuslocked); +#endif + /** * stop_machine_from_inactive_cpu - stop_machine() from inactive CPU * @fn: the function to run
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/sysctl.c
Changed
@@ -396,13 +396,14 @@ ppos); } -static size_t proc_skip_spaces(char **buf) +static void proc_skip_spaces(char **buf, size_t *size) { - size_t ret; - char *tmp = skip_spaces(*buf); - ret = tmp - *buf; - *buf = tmp; - return ret; + while (*size) { + if (!isspace(**buf)) + break; + (*size)--; + (*buf)++; + } } static void proc_skip_char(char **buf, size_t *size, const char v) @@ -471,13 +472,12 @@ unsigned long *val, bool *neg, const char *perm_tr, unsigned perm_tr_len, char *tr) { - int len; char *p, tmp[TMPBUFLEN]; + ssize_t len = *size; - if (!*size) + if (len <= 0) return -EINVAL; - len = *size; if (len > TMPBUFLEN - 1) len = TMPBUFLEN - 1; @@ -635,7 +635,7 @@ bool neg; if (write) { - left -= proc_skip_spaces(&p); + proc_skip_spaces(&p, &left); if (!left) break; @@ -662,7 +662,7 @@ if (!write && !first && left && !err) proc_put_char(&buffer, &left, '\n'); if (write && !err && left) - left -= proc_skip_spaces(&p); + proc_skip_spaces(&p, &left); if (write && first) return err ? : -EINVAL; *lenp -= left; @@ -704,7 +704,7 @@ if (left > PAGE_SIZE - 1) left = PAGE_SIZE - 1; - left -= proc_skip_spaces(&p); + proc_skip_spaces(&p, &left); if (!left) { err = -EINVAL; goto out_free; @@ -724,7 +724,7 @@ } if (!err && left) - left -= proc_skip_spaces(&p); + proc_skip_spaces(&p, &left); out_free: if (err) @@ -1182,7 +1182,7 @@ if (write) { bool neg; - left -= proc_skip_spaces(&p); + proc_skip_spaces(&p, &left); if (!left) break; @@ -1211,7 +1211,7 @@ if (!write && !first && left && !err) proc_put_char(&buffer, &left, '\n'); if (write && !err) - left -= proc_skip_spaces(&p); + proc_skip_spaces(&p, &left); if (write && first) return err ? : -EINVAL; *lenp -= left; @@ -2994,6 +2994,14 @@ .extra1 = SYSCTL_ZERO, }, { + .procname = "percpu_max_batchsize", + .data = &percpu_max_batchsize, + .maxlen = sizeof(percpu_max_batchsize), + .mode = 0644, + .proc_handler = percpu_max_batchsize_sysctl_handler, + .extra1 = SYSCTL_ZERO, + }, + { .procname = "page_lock_unfairness", .data = &sysctl_page_lock_unfairness, .maxlen = sizeof(sysctl_page_lock_unfairness),
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/time/timekeeping.c
Changed
@@ -52,6 +52,9 @@ u64 padding8; #endif seqcount_raw_spinlock_t seq; +#ifdef CONFIG_ARCH_LLC_128_LINE_SIZE + u64 padding22; +#endif struct timekeeper timekeeper; #ifdef CONFIG_ARCH_LLC_128_LINE_SIZE } tk_core ____cacheline_aligned_128 = {
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/trace/trace_osnoise.c
Changed
@@ -2103,6 +2103,13 @@ return -EINVAL; } +static void osnoise_unhook_events(void) +{ + unhook_thread_events(); + unhook_softirq_events(); + unhook_irq_events(); +} + /* * osnoise_workload_start - start the workload and hook to events */ @@ -2135,7 +2142,14 @@ retval = start_per_cpu_kthreads(); if (retval) { - unhook_irq_events(); + trace_osnoise_callback_enabled = false; + /* + * Make sure that ftrace_nmi_enter/exit() see + * trace_osnoise_callback_enabled as false before continuing. + */ + barrier(); + + osnoise_unhook_events(); return retval; } @@ -2157,6 +2171,17 @@ if (osnoise_has_registered_instances()) return; + /* + * If callbacks were already disabled in a previous stop + * call, there is no need to disable then again. + * + * For instance, this happens when tracing is stopped via: + * echo 0 > tracing_on + * echo nop > current_tracer. + */ + if (!trace_osnoise_callback_enabled) + return; + trace_osnoise_callback_enabled = false; /* * Make sure that ftrace_nmi_enter/exit() see @@ -2166,9 +2191,7 @@ stop_per_cpu_kthreads(); - unhook_irq_events(); - unhook_softirq_events(); - unhook_thread_events(); + osnoise_unhook_events(); } static void osnoise_tracer_start(struct trace_array *tr)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/kernel/workqueue.c
Changed
@@ -4812,8 +4812,16 @@ for_each_pwq(pwq, wq) { raw_spin_lock_irqsave(&pwq->pool->lock, flags); - if (pwq->nr_active || !list_empty(&pwq->delayed_works)) + if (pwq->nr_active || !list_empty(&pwq->delayed_works)) { + /* + * Defer printing to avoid deadlocks in console + * drivers that queue work while holding locks + * also taken in their write paths. + */ + printk_safe_enter(); show_pwq(pwq); + printk_safe_exit(); + } raw_spin_unlock_irqrestore(&pwq->pool->lock, flags); /* * We could be printing a lot from atomic context, e.g. @@ -4831,7 +4839,12 @@ raw_spin_lock_irqsave(&pool->lock, flags); if (pool->nr_workers == pool->nr_idle) goto next_pool; - + /* + * Defer printing to avoid deadlocks in console drivers that + * queue work while holding locks also taken in their write + * paths. + */ + printk_safe_enter(); pr_info("pool %d:", pool->id); pr_cont_pool_info(pool); pr_cont(" hung=%us workers=%d", @@ -4846,6 +4859,7 @@ first = false; } pr_cont("\n"); + printk_safe_exit(); next_pool: raw_spin_unlock_irqrestore(&pool->lock, flags); /*
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/lib/fault-inject.c
Changed
@@ -41,9 +41,6 @@ static void fail_dump(struct fault_attr *attr) { - if (attr->no_warn) - return; - if (attr->verbose > 0 && __ratelimit(&attr->ratelimit_state)) { printk(KERN_NOTICE "FAULT_INJECTION: forcing a failure.\n" "name %pd, interval %lu, probability %lu, " @@ -103,7 +100,7 @@ * http://www.nongnu.org/failmalloc/ */ -bool should_fail(struct fault_attr *attr, ssize_t size) +bool should_fail_ex(struct fault_attr *attr, ssize_t size, int flags) { if (in_task()) { unsigned int fail_nth = READ_ONCE(current->fail_nth); @@ -146,13 +143,19 @@ return false; fail: - fail_dump(attr); + if (!(flags & FAULT_NOWARN)) + fail_dump(attr); if (atomic_read(&attr->times) != -1) atomic_dec_not_zero(&attr->times); return true; } + +bool should_fail(struct fault_attr *attr, ssize_t size) +{ + return should_fail_ex(attr, size, 0); +} EXPORT_SYMBOL_GPL(should_fail); #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/dynamic_hugetlb.c
Changed
@@ -182,25 +182,6 @@ } } -static void clear_percpu_pools(struct dhugetlb_pool *hpool) -{ - struct percpu_pages_pool *percpu_pool; - int i; - - lockdep_assert_held(&hpool->lock); - - spin_unlock(&hpool->lock); - for (i = 0; i < NR_PERCPU_POOL; i++) - spin_lock(&hpool->percpu_pool[i].lock); - spin_lock(&hpool->lock); - for (i = 0; i < NR_PERCPU_POOL; i++) { - percpu_pool = &hpool->percpu_pool[i]; - reclaim_pages_from_percpu_pool(hpool, percpu_pool, percpu_pool->free_pages); - } - for (i = 0; i < NR_PERCPU_POOL; i++) - spin_unlock(&hpool->percpu_pool[i].lock); -} - /* We only try 5 times to reclaim pages */ #define HPOOL_RECLAIM_RETRIES 5 @@ -210,6 +191,7 @@ struct split_hugepage *split_page, *split_next; unsigned long nr_pages, block_size; struct page *page, *next, *p; + struct percpu_pages_pool *percpu_pool; bool need_migrate = false, need_initial = false; int i, try; LIST_HEAD(wait_page_list); @@ -241,7 +223,22 @@ try = 0; merge: - clear_percpu_pools(hpool); + /* + * If we are merging 4K page to 2M page, we need to get + * lock of percpu pool sequentially and clear percpu pool. + */ + if (hpages_pool_idx == HUGE_PAGES_POOL_2M) { + spin_unlock(&hpool->lock); + for (i = 0; i < NR_PERCPU_POOL; i++) + spin_lock(&hpool->percpu_pool[i].lock); + spin_lock(&hpool->lock); + for (i = 0; i < NR_PERCPU_POOL; i++) { + percpu_pool = &hpool->percpu_pool[i]; + reclaim_pages_from_percpu_pool(hpool, percpu_pool, + percpu_pool->free_pages); + } + } + page = pfn_to_page(split_page->start_pfn); for (i = 0; i < nr_pages; i+= block_size) { p = pfn_to_page(split_page->start_pfn + i); @@ -252,6 +249,14 @@ goto migrate; } } + if (hpages_pool_idx == HUGE_PAGES_POOL_2M) { + /* + * All target 4K page are in src_hpages_pool, we + * can unlock percpu pool. + */ + for (i = 0; i < NR_PERCPU_POOL; i++) + spin_unlock(&hpool->percpu_pool[i].lock); + } list_del(&split_page->head_pages); hpages_pool->split_normal_pages--; @@ -284,8 +289,14 @@ trace_dynamic_hugetlb_split_merge(hpool, page, DHUGETLB_MERGE, page_size(page)); return 0; next: + if (hpages_pool_idx == HUGE_PAGES_POOL_2M) { + /* Unlock percpu pool before try next */ + for (i = 0; i < NR_PERCPU_POOL; i++) + spin_unlock(&hpool->percpu_pool[i].lock); + } continue; migrate: + /* page migration only used for HUGE_PAGES_POOL_2M */ if (try++ >= HPOOL_RECLAIM_RETRIES) goto next; @@ -300,7 +311,10 @@ } /* Unlock and try migration. */ + for (i = 0; i < NR_PERCPU_POOL; i++) + spin_unlock(&hpool->percpu_pool[i].lock); spin_unlock(&hpool->lock); + for (i = 0; i < nr_pages; i+= block_size) { p = pfn_to_page(split_page->start_pfn + i); if (PagePool(p)) @@ -312,6 +326,10 @@ } spin_lock(&hpool->lock); + /* + * Move all isolate pages to src_hpages_pool and then try + * merge again. 
+ */ list_for_each_entry_safe(page, next, &wait_page_list, lru) { list_move_tail(&page->lru, &src_hpages_pool->hugepage_freelists); src_hpages_pool->free_normal_pages++; @@ -559,6 +577,10 @@ spin_lock_irqsave(&percpu_pool->lock, flags); ClearPagePool(page); + if (!free_pages_prepare(page, 0, true)) { + SetPagePool(page); + goto out; + } list_add(&page->lru, &percpu_pool->head_page); percpu_pool->free_pages++; percpu_pool->used_pages--; @@ -567,7 +589,7 @@ reclaim_pages_from_percpu_pool(hpool, percpu_pool, PERCPU_POOL_PAGE_BATCH); spin_unlock(&hpool->lock); } - +out: spin_unlock_irqrestore(&percpu_pool->lock, flags); put_hpool(hpool); } @@ -577,8 +599,7 @@ if (!dhugetlb_enabled || !PagePool(page)) return false; - if (free_pages_prepare(page, 0, true)) - __free_page_to_dhugetlb_pool(page); + __free_page_to_dhugetlb_pool(page); return true; } @@ -592,8 +613,7 @@ list_for_each_entry_safe(page, next, list, lru) { if (PagePool(page)) { list_del(&page->lru); - if (free_pages_prepare(page, 0, true)) - __free_page_to_dhugetlb_pool(page); + __free_page_to_dhugetlb_pool(page); } } } @@ -799,7 +819,8 @@ p->mapping = NULL; } set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); - + /* compound_nr and mapping are union in page, reset it. */ + set_compound_order(page, PUD_SHIFT - PAGE_SHIFT); nid = page_to_nid(page); SetHPageFreed(page); list_move(&page->lru, &h->hugepage_freelists[nid]);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/failslab.c
Changed
@@ -16,6 +16,8 @@ bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags) { + int flags = 0; + /* No fault-injection for bootstrap cache */ if (unlikely(s == kmem_cache)) return false; @@ -30,10 +32,16 @@ if (failslab.cache_filter && !(s->flags & SLAB_FAILSLAB)) return false; + /* + * In some cases, it expects to specify __GFP_NOWARN + * to avoid printing any information(not just a warning), + * thus avoiding deadlocks. See commit 6b9dbedbe349 for + * details. + */ if (gfpflags & __GFP_NOWARN) - failslab.attr.no_warn = true; + flags |= FAULT_NOWARN; - return should_fail(&failslab.attr, s->object_size); + return should_fail_ex(&failslab.attr, s->object_size, flags); } static int __init setup_failslab(char *str)
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/filemap.c
Changed
@@ -21,6 +21,7 @@ #include <linux/gfp.h> #include <linux/mm.h> #include <linux/swap.h> +#include <linux/swapops.h> #include <linux/mman.h> #include <linux/pagemap.h> #include <linux/file.h> @@ -42,6 +43,7 @@ #include <linux/psi.h> #include <linux/ramfs.h> #include <linux/page_idle.h> +#include <linux/migrate.h> #include "internal.h" #define CREATE_TRACE_POINTS @@ -1323,6 +1325,95 @@ return wait->flags & WQ_FLAG_WOKEN ? 0 : -EINTR; } +#ifdef CONFIG_MIGRATION +/** + * migration_entry_wait_on_locked - Wait for a migration entry to be removed + * @entry: migration swap entry. + * @ptep: mapped pte pointer. Will return with the ptep unmapped. Only required + * for pte entries, pass NULL for pmd entries. + * @ptl: already locked ptl. This function will drop the lock. + * + * Wait for a migration entry referencing the given page to be removed. This is + * equivalent to put_and_wait_on_page_locked(page, TASK_UNINTERRUPTIBLE) except + * this can be called without taking a reference on the page. Instead this + * should be called while holding the ptl for the migration entry referencing + * the page. + * + * Returns after unmapping and unlocking the pte/ptl with pte_unmap_unlock(). + * + * This follows the same logic as wait_on_page_bit_common() so see the comments + * there. + */ +void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep, + spinlock_t *ptl) +{ + struct wait_page_queue wait_page; + wait_queue_entry_t *wait = &wait_page.wait; + bool thrashing = false; + bool delayacct = false; + unsigned long pflags; + wait_queue_head_t *q; + struct page *page = compound_head(migration_entry_to_page(entry)); + + q = page_waitqueue(page); + if (!PageUptodate(page) && PageWorkingset(page)) { + if (!PageSwapBacked(page)) { + delayacct_thrashing_start(); + delayacct = true; + } + psi_memstall_enter(&pflags); + thrashing = true; + } + + init_wait(wait); + wait->func = wake_page_function; + wait_page.page = page; + wait_page.bit_nr = PG_locked; + wait->flags = 0; + + spin_lock_irq(&q->lock); + SetPageWaiters(page); + if (!trylock_page_bit_common(page, PG_locked, wait)) + __add_wait_queue_entry_tail(q, wait); + spin_unlock_irq(&q->lock); + + /* + * If a migration entry exists for the page the migration path must hold + * a valid reference to the page, and it must take the ptl to remove the + * migration entry. So the page is valid until the ptl is dropped. + */ + if (ptep) + pte_unmap_unlock(ptep, ptl); + else + spin_unlock(ptl); + + for (;;) { + unsigned int flags; + + set_current_state(TASK_UNINTERRUPTIBLE); + + /* Loop until we've been woken or interrupted */ + flags = smp_load_acquire(&wait->flags); + if (!(flags & WQ_FLAG_WOKEN)) { + if (signal_pending_state(TASK_UNINTERRUPTIBLE, current)) + break; + + io_schedule(); + continue; + } + break; + } + + finish_wait(q, wait); + + if (thrashing) { + if (delayacct) + delayacct_thrashing_end(); + psi_memstall_leave(&pflags); + } +} +#endif + void wait_on_page_bit(struct page *page, int bit_nr) { wait_queue_head_t *q = page_waitqueue(page);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/madvise.c
Changed
@@ -221,6 +221,7 @@ if (page) put_page(page); } + cond_resched(); return 0; }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/memory.c
Changed
@@ -3366,6 +3366,7 @@ { struct vm_area_struct *vma = vmf->vma; struct page *page = NULL, *swapcache; + struct swap_info_struct *si = NULL; swp_entry_t entry; pte_t pte; int locked; @@ -3425,14 +3426,16 @@ goto out; } + /* Prevent swapoff from happening to us. */ + si = get_swap_device(entry); + if (unlikely(!si)) + goto out; delayacct_set_flag(DELAYACCT_PF_SWAPIN); page = lookup_swap_cache(entry, vma, vmf->address); swapcache = page; if (!page) { - struct swap_info_struct *si = swp_swap_info(entry); - if (data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1) { /* skip swapcache */ @@ -3604,6 +3607,8 @@ unlock: pte_unmap_unlock(vmf->pte, vmf->ptl); out: + if (si) + put_swap_device(si); return ret; out_nomap: pte_unmap_unlock(vmf->pte, vmf->ptl); @@ -3615,6 +3620,8 @@ unlock_page(swapcache); put_page(swapcache); } + if (si) + put_swap_device(si); return ret; }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/migrate.c
Changed
@@ -315,7 +315,6 @@ { pte_t pte; swp_entry_t entry; - struct page *page; spin_lock(ptl); pte = *ptep; @@ -326,18 +325,7 @@ if (!is_migration_entry(entry)) goto out; - page = migration_entry_to_page(entry); - page = compound_head(page); - - /* - * Once page cache replacement of page migration started, page_count - * is zero; but we must not call put_and_wait_on_page_locked() without - * a ref. Use get_page_unless_zero(), and just fault again if it fails. - */ - if (!get_page_unless_zero(page)) - goto out; - pte_unmap_unlock(ptep, ptl); - put_and_wait_on_page_locked(page); + migration_entry_wait_on_locked(entry, ptep, ptl); return; out: pte_unmap_unlock(ptep, ptl); @@ -362,16 +350,11 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd) { spinlock_t *ptl; - struct page *page; ptl = pmd_lock(mm, pmd); if (!is_pmd_migration_entry(*pmd)) goto unlock; - page = migration_entry_to_page(pmd_to_swp_entry(*pmd)); - if (!get_page_unless_zero(page)) - goto unlock; - spin_unlock(ptl); - put_and_wait_on_page_locked(page); + migration_entry_wait_on_locked(pmd_to_swp_entry(*pmd), NULL, ptl); return; unlock: spin_unlock(ptl); @@ -2558,22 +2541,8 @@ return false; /* Page from ZONE_DEVICE have one extra reference */ - if (is_zone_device_page(page)) { - /* - * Private page can never be pin as they have no valid pte and - * GUP will fail for those. Yet if there is a pending migration - * a thread might try to wait on the pte migration entry and - * will bump the page reference count. Sadly there is no way to - * differentiate a regular pin from migration wait. Hence to - * avoid 2 racing thread trying to migrate back to CPU to enter - * infinite loop (one stopping migration because the other is - * waiting on pte migration entry). We always return true here. - * - * FIXME proper solution is to rework migration_entry_wait() so - * it does not need to take a reference on page. - */ - return is_device_private_page(page); - } + if (is_zone_device_page(page)) + extra++; /* For file back page */ if (page_mapping(page))
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/mmap.c
Changed
@@ -1479,9 +1479,11 @@ pkey = 0; } +#ifdef CONFIG_ASCEND_FEATURES /* Physical address is within 4G */ if (flags & MAP_PA32BIT) vm_flags |= VM_PA32BIT; +#endif /* Do simple checking here so the lower-level routines won't have * to. we assume access permissions have been handled by the open
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/page_alloc.c
Changed
@@ -112,6 +112,8 @@ /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */ static DEFINE_MUTEX(pcp_batch_high_lock); #define MIN_PERCPU_PAGELIST_FRACTION (8) +#define MAX_PERCPU_MAX_BATCHSIZE ((512 * 1024) / PAGE_SIZE) +#define MIN_PERCPU_MAX_BATCHSIZE (MAX_PERCPU_MAX_BATCHSIZE / 8) #ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID DEFINE_PER_CPU(int, numa_node); @@ -167,6 +169,8 @@ unsigned long totalcma_pages __read_mostly; int percpu_pagelist_fraction; +int percpu_max_batchsize = MAX_PERCPU_MAX_BATCHSIZE / 2; + gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK; #ifdef CONFIG_INIT_ON_ALLOC_DEFAULT_ON DEFINE_STATIC_KEY_TRUE(init_on_alloc); @@ -3545,6 +3549,8 @@ static bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order) { + int flags = 0; + if (order < fail_page_alloc.min_order) return false; if (gfp_mask & __GFP_NOFAIL) @@ -3555,10 +3561,11 @@ (gfp_mask & __GFP_DIRECT_RECLAIM)) return false; + /* See comment in __should_failslab() */ if (gfp_mask & __GFP_NOWARN) - fail_page_alloc.attr.no_warn = true; + flags |= FAULT_NOWARN; - return should_fail(&fail_page_alloc.attr, 1 << order); + return should_fail_ex(&fail_page_alloc.attr, 1 << order, flags); } #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS @@ -6757,10 +6764,9 @@ * size of the zone. */ batch = zone_managed_pages(zone) / 1024; - /* But no more than a meg. */ - if (batch * PAGE_SIZE > 1024 * 1024) - batch = (1024 * 1024) / PAGE_SIZE; batch /= 4; /* We effectively *= 4 below */ + if (batch > percpu_max_batchsize) + batch = percpu_max_batchsize; if (batch < 1) batch = 1; @@ -8609,6 +8615,39 @@ goto out; for_each_populated_zone(zone) + zone_set_pageset_high_and_batch(zone); +out: + mutex_unlock(&pcp_batch_high_lock); + return ret; +} + +int percpu_max_batchsize_sysctl_handler(struct ctl_table *table, int write, + void *buffer, size_t *length, loff_t *ppos) +{ + struct zone *zone; + int old_percpu_max_batchsize; + int ret; + + mutex_lock(&pcp_batch_high_lock); + old_percpu_max_batchsize = percpu_max_batchsize; + + ret = proc_dointvec_minmax(table, write, buffer, length, ppos); + if (!write || ret < 0) + goto out; + + /* Sanity checking to avoid pcp imbalance */ + if (percpu_max_batchsize > MAX_PERCPU_MAX_BATCHSIZE || + percpu_max_batchsize < MIN_PERCPU_MAX_BATCHSIZE) { + percpu_max_batchsize = old_percpu_max_batchsize; + ret = -EINVAL; + goto out; + } + + /* No change? */ + if (percpu_max_batchsize == old_percpu_max_batchsize) + goto out; + + for_each_populated_zone(zone) zone_set_pageset_high_and_batch(zone); out: mutex_unlock(&pcp_batch_high_lock);
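With 4 KiB pages, MAX_PERCPU_MAX_BATCHSIZE works out to 128 pages, the minimum to 16, and the boot default to 64, which reproduces the old hard 1 MiB cap. A worked example of the new clamp in zone_batchsize(), as a sketch:

/*
 * Assuming PAGE_SIZE = 4 KiB and the default percpu_max_batchsize = 64:
 *
 *   16 GiB zone: managed = 4194304 pages
 *   batch = 4194304 / 1024 = 4096
 *   batch /= 4            ->  1024
 *   clamp to 64           ->    64   (same result as the old 1 MiB cap)
 *
 * Writing 128 to /proc/sys/vm/percpu_max_batchsize doubles the
 * effective per-cpu batch; values outside [16, 128] are rejected
 * with -EINVAL by percpu_max_batchsize_sysctl_handler().
 */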
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/shmem.c
Changed
@@ -1711,7 +1711,8 @@
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
-	struct page *page;
+	struct swap_info_struct *si;
+	struct page *page = NULL;
 	swp_entry_t swap;
 	int error;
 
@@ -1719,6 +1720,12 @@
 	swap = radix_to_swp_entry(*pagep);
 	*pagep = NULL;
 
+	/* Prevent swapoff from happening to us. */
+	si = get_swap_device(swap);
+	if (!si) {
+		error = EINVAL;
+		goto failed;
+	}
 	/* Look it up and read it in.. */
 	page = lookup_swap_cache(swap, NULL, 0);
 	if (!page) {
@@ -1780,6 +1787,8 @@
 	swap_free(swap);
 
 	*pagep = page;
+	if (si)
+		put_swap_device(si);
 	return 0;
 failed:
 	if (!shmem_confirm_swap(mapping, index, swap))
@@ -1790,6 +1799,9 @@
 		put_page(page);
 	}
 
+	if (si)
+		put_swap_device(si);
+
 	return error;
 }
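Note: lookup_swap_cache() and the subsequent swapin run locklessly, and before this change a concurrent swapoff could free swap_map and the swap cache underneath them. A sketch of the guard pattern the hunk adopts (kernel context, identifiers from mm/swapfile.c; not standalone code):

/* Pin the swap device across a lockless swap-cache lookup. */
static int swapin_pinned(swp_entry_t swap)
{
	struct swap_info_struct *si;

	si = get_swap_device(swap);	/* NULL once swapoff has begun */
	if (!si)
		return -EINVAL;		/* stale entry; caller re-validates */

	/* ... lookup_swap_cache()/readahead are safe against swapoff here ... */

	put_swap_device(si);		/* drop the pin */
	return 0;
}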
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/swapfile.c
Changed
@@ -39,6 +39,7 @@
 #include <linux/export.h>
 #include <linux/swap_slots.h>
 #include <linux/sort.h>
+#include <linux/completion.h>
 
 #include <asm/tlbflush.h>
 #include <linux/swapops.h>
@@ -512,6 +513,14 @@
 	spin_unlock(&si->lock);
 }
 
+static void swap_users_ref_free(struct percpu_ref *ref)
+{
+	struct swap_extend_info *sei;
+
+	sei = container_of(ref, struct swap_extend_info, users);
+	complete(&sei->comp);
+}
+
 static void alloc_cluster(struct swap_info_struct *si, unsigned long idx)
 {
 	struct swap_cluster_info *ci = si->cluster_info;
@@ -944,6 +953,11 @@
 scan:
 	spin_unlock(&si->lock);
 	while (++offset <= READ_ONCE(si->highest_bit)) {
+		if (unlikely(--latency_ration < 0)) {
+			cond_resched();
+			latency_ration = LATENCY_LIMIT;
+			scanned_many = true;
+		}
 		if (data_race(!si->swap_map[offset])) {
 			spin_lock(&si->lock);
 			goto checks;
@@ -953,14 +967,14 @@
 			spin_lock(&si->lock);
 			goto checks;
 		}
+	}
+	offset = si->lowest_bit;
+	while (offset < scan_base) {
 		if (unlikely(--latency_ration < 0)) {
 			cond_resched();
 			latency_ration = LATENCY_LIMIT;
 			scanned_many = true;
 		}
-	}
-	offset = si->lowest_bit;
-	while (offset < scan_base) {
 		if (data_race(!si->swap_map[offset])) {
 			spin_lock(&si->lock);
 			goto checks;
@@ -970,11 +984,6 @@
 			spin_lock(&si->lock);
 			goto checks;
 		}
-		if (unlikely(--latency_ration < 0)) {
-			cond_resched();
-			latency_ration = LATENCY_LIMIT;
-			scanned_many = true;
-		}
 		offset++;
 	}
 	spin_lock(&si->lock);
@@ -1274,18 +1283,12 @@
  * via preventing the swap device from being swapoff, until
  * put_swap_device() is called.  Otherwise return NULL.
  *
- * The entirety of the RCU read critical section must come before the
- * return from or after the call to synchronize_rcu() in
- * enable_swap_info() or swapoff().  So if "si->flags & SWP_VALID" is
- * true, the si->swap_map, si->cluster_info, etc. must be valid in the
- * critical section.
- *
  * Notice that swapoff or swapoff+swapon can still happen before the
- * rcu_read_lock() in get_swap_device() or after the rcu_read_unlock()
- * in put_swap_device() if there isn't any other way to prevent
- * swapoff, such as page lock, page table lock, etc.  The caller must
- * be prepared for that.  For example, the following situation is
- * possible.
+ * percpu_ref_tryget_live() in get_swap_device() or after the
+ * percpu_ref_put() in put_swap_device() if there isn't any other way
+ * to prevent swapoff, such as page lock, page table lock, etc.  The
+ * caller must be prepared for that.  For example, the following
+ * situation is possible.
 *
 *   CPU1				CPU2
 *   do_swap_page()
@@ -1313,21 +1316,27 @@
 	si = swp_swap_info(entry);
 	if (!si)
 		goto bad_nofile;
-
-	rcu_read_lock();
-	if (data_race(!(si->flags & SWP_VALID)))
-		goto unlock_out;
+	if (!percpu_ref_tryget_live(&si->sei->users))
+		goto out;
+	/*
+	 * Guarantee the si->users are checked before accessing other
+	 * fields of swap_info_struct.
+	 *
+	 * Paired with the spin_unlock() after setup_swap_info() in
+	 * enable_swap_info().
+	 */
+	smp_rmb();
 	offset = swp_offset(entry);
 	if (offset >= si->max)
-		goto unlock_out;
+		goto put_out;
 
 	return si;
 bad_nofile:
 	pr_err("%s: %s%08lx\n", __func__, Bad_file, entry.val);
 out:
 	return NULL;
-unlock_out:
-	rcu_read_unlock();
+put_out:
+	percpu_ref_put(&si->sei->users);
 	return NULL;
 }
@@ -2500,7 +2509,7 @@
 
 static void _enable_swap_info(struct swap_info_struct *p)
 {
-	p->flags |= SWP_WRITEOK | SWP_VALID;
+	p->flags |= SWP_WRITEOK;
 	atomic_long_add(p->pages, &nr_swap_pages);
 	total_swap_pages += p->pages;
 
@@ -2531,10 +2540,9 @@
 	spin_unlock(&p->lock);
 	spin_unlock(&swap_lock);
 	/*
-	 * Guarantee swap_map, cluster_info, etc. fields are valid
-	 * between get/put_swap_device() if SWP_VALID bit is set
+	 * Finished initializing swap device, now it's safe to reference it.
	 */
-	synchronize_rcu();
+	percpu_ref_resurrect(&p->sei->users);
 	spin_lock(&swap_lock);
 	spin_lock(&p->lock);
 	_enable_swap_info(p);
@@ -2650,16 +2658,16 @@
 
 	reenable_swap_slots_cache_unlock();
 
-	spin_lock(&swap_lock);
-	spin_lock(&p->lock);
-	p->flags &= ~SWP_VALID;	/* mark swap device as invalid */
-	spin_unlock(&p->lock);
-	spin_unlock(&swap_lock);
 	/*
-	 * wait for swap operations protected by get/put_swap_device()
-	 * to complete
+	 * Wait for swap operations protected by get/put_swap_device()
+	 * to complete.
+	 *
+	 * We need synchronize_rcu() here to protect the accessing to
+	 * the swap cache data structure.
	 */
+	percpu_ref_kill(&p->sei->users);
 	synchronize_rcu();
+	wait_for_completion(&p->sei->comp);
 
 	flush_work(&p->discard_work);
@@ -2891,6 +2899,19 @@
 	if (!p)
 		return ERR_PTR(-ENOMEM);
 
+	p->sei = kvzalloc(sizeof(struct swap_extend_info), GFP_KERNEL);
+	if (!p->sei) {
+		kvfree(p);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	if (percpu_ref_init(&p->sei->users, swap_users_ref_free,
+			    PERCPU_REF_INIT_DEAD, GFP_KERNEL)) {
+		kvfree(p->sei);
+		kvfree(p);
+		return ERR_PTR(-ENOMEM);
+	}
+
 	spin_lock(&swap_lock);
 	for (type = 0; type < nr_swapfiles; type++) {
 		if (!(swap_info[type]->flags & SWP_USED))
@@ -2898,6 +2919,8 @@
 	}
 	if (type >= MAX_SWAPFILES) {
 		spin_unlock(&swap_lock);
+		percpu_ref_exit(&p->sei->users);
+		kvfree(p->sei);
 		kvfree(p);
 		return ERR_PTR(-EPERM);
 	}
@@ -2925,9 +2948,14 @@
 		plist_node_init(&p->avail_lists[i], 0);
 	p->flags = SWP_USED;
 	spin_unlock(&swap_lock);
-	kvfree(defer);
+	if (defer) {
+		percpu_ref_exit(&defer->sei->users);
+		kvfree(defer->sei);
+		kvfree(defer);
+	}
 	spin_lock_init(&p->lock);
 	spin_lock_init(&p->cont_lock);
+	init_completion(&p->sei->comp);
 
 	return p;
 }
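Note: a condensed view of the per-device reference introduced here (kernel context; sei is p->sei, as in the hunks above):

/*
 * swapon:   percpu_ref_init(&sei->users, swap_users_ref_free,
 *                           PERCPU_REF_INIT_DEAD, GFP_KERNEL);
 *           ... device setup ...
 *           percpu_ref_resurrect(&sei->users);    // lookups may pin now
 *
 * lookup:   percpu_ref_tryget_live(&sei->users);  // get_swap_device()
 *           ... use swap_map, cluster_info ...
 *           percpu_ref_put(&sei->users);          // put_swap_device()
 *
 * swapoff:  percpu_ref_kill(&sei->users);         // new trygets fail
 *           wait_for_completion(&sei->comp);      // swap_users_ref_free()
 *                                                 // fires on the last put
 */

Relative to the 5.10 SWP_VALID/RCU scheme, this trades a synchronize_rcu() per swapon/swapoff for a cheap percpu counter on every lookup, and gives swapoff an explicit rendezvous with in-flight readers.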
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/mm/zswap.c
Changed
@@ -79,6 +79,8 @@
 
 #define ZSWAP_PARAM_UNSET ""
 
+static int zswap_setup(void);
+
 /* Enable/disable zswap */
 static bool zswap_enabled = IS_ENABLED(CONFIG_ZSWAP_DEFAULT_ON);
 static int zswap_enabled_param_set(const char *,
@@ -203,11 +205,14 @@
 /* pool counter to provide unique names to zpool */
 static atomic_t zswap_pools_count = ATOMIC_INIT(0);
 
-/* used by param callback function */
-static bool zswap_init_started;
+#define ZSWAP_UNINIT		0
+#define ZSWAP_INIT_SUCCEED	1
+#define ZSWAP_INIT_FAILED	2
 
-/* fatal error during init */
-static bool zswap_init_failed;
+/* init state */
+static int zswap_init_state;
+/* used to ensure the integrity of initialization */
+static DEFINE_MUTEX(zswap_init_lock);
 
 /* init completed, but couldn't create the initial pool */
 static bool zswap_has_pool;
@@ -261,13 +266,13 @@
 **********************************/
 static struct kmem_cache *zswap_entry_cache;
 
-static int __init zswap_entry_cache_create(void)
+static int zswap_entry_cache_create(void)
 {
 	zswap_entry_cache = KMEM_CACHE(zswap_entry, 0);
 	return zswap_entry_cache == NULL;
 }
 
-static void __init zswap_entry_cache_destroy(void)
+static void zswap_entry_cache_destroy(void)
 {
 	kmem_cache_destroy(zswap_entry_cache);
 }
@@ -648,7 +653,7 @@
 	return NULL;
 }
 
-static __init struct zswap_pool *__zswap_pool_create_fallback(void)
+static struct zswap_pool *__zswap_pool_create_fallback(void)
 {
 	bool has_comp, has_zpool;
 
@@ -757,7 +762,7 @@
 	char *s = strstrip((char *)val);
 	int ret;
 
-	if (zswap_init_failed) {
+	if (zswap_init_state == ZSWAP_INIT_FAILED) {
 		pr_err("can't set param, initialization failed\n");
 		return -ENODEV;
 	}
@@ -766,11 +771,17 @@
 	if (!strcmp(s, *(char **)kp->arg) && zswap_has_pool)
 		return 0;
 
-	/* if this is load-time (pre-init) param setting,
+	/*
+	 * if zswap has not been initialized,
 	 * don't create a pool; that's done during init.
	 */
-	if (!zswap_init_started)
-		return param_set_charp(s, kp);
+	mutex_lock(&zswap_init_lock);
+	if (zswap_init_state == ZSWAP_UNINIT) {
+		ret = param_set_charp(s, kp);
+		mutex_unlock(&zswap_init_lock);
+		return ret;
+	}
+	mutex_unlock(&zswap_init_lock);
 
 	if (!type) {
 		if (!zpool_has_pool(s)) {
@@ -860,11 +871,19 @@
 static int zswap_enabled_param_set(const char *val,
 				   const struct kernel_param *kp)
 {
-	if (zswap_init_failed) {
+	if (system_state == SYSTEM_RUNNING) {
+		mutex_lock(&zswap_init_lock);
+		if (zswap_setup()) {
+			mutex_unlock(&zswap_init_lock);
+			return -ENODEV;
+		}
+		mutex_unlock(&zswap_init_lock);
+	}
+	if (zswap_init_state == ZSWAP_INIT_FAILED) {
 		pr_err("can't enable, initialization failed\n");
 		return -ENODEV;
 	}
-	if (!zswap_has_pool && zswap_init_started) {
+	if (!zswap_has_pool && zswap_init_state == ZSWAP_INIT_SUCCEED) {
 		pr_err("can't enable, no pool configured\n");
 		return -ENODEV;
 	}
@@ -1390,7 +1409,7 @@
 
 static struct dentry *zswap_debugfs_root;
 
-static int __init zswap_debugfs_init(void)
+static int zswap_debugfs_init(void)
 {
 	if (!debugfs_initialized())
 		return -ENODEV;
@@ -1426,7 +1445,7 @@
 	debugfs_remove_recursive(zswap_debugfs_root);
 }
 #else
-static int __init zswap_debugfs_init(void)
+static int zswap_debugfs_init(void)
 {
 	return 0;
 }
@@ -1434,15 +1453,13 @@
 static void __exit zswap_debugfs_exit(void) { }
 #endif
 
-/*********************************
-* module init and exit
-**********************************/
-static int __init init_zswap(void)
+static int zswap_setup(void)
 {
 	struct zswap_pool *pool;
 	int ret;
 
-	zswap_init_started = true;
+	if (zswap_init_state != ZSWAP_UNINIT)
+		return 0;
 
 	if (zswap_entry_cache_create()) {
 		pr_err("entry cache creation failed\n");
@@ -1481,6 +1498,7 @@
 	frontswap_register_ops(&zswap_frontswap_ops);
 	if (zswap_debugfs_init())
 		pr_warn("debugfs initialization failed\n");
+	zswap_init_state = ZSWAP_INIT_SUCCEED;
 	return 0;
 
 fallback_fail:
@@ -1492,10 +1510,22 @@
 	zswap_entry_cache_destroy();
 cache_fail:
 	/* if built-in, we aren't unloaded on failure; don't allow use */
-	zswap_init_failed = true;
+	zswap_init_state = ZSWAP_INIT_FAILED;
 	zswap_enabled = false;
 	return -ENOMEM;
 }
+
+/*********************************
+* module init and exit
+**********************************/
+static int __init init_zswap(void)
+{
+	/* skip init if zswap is disabled when system startup */
+	if (!zswap_enabled)
+		return 0;
+	return zswap_setup();
+}
+
 /* must be late so crypto has time to come up */
 late_initcall(init_zswap);
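Note: after this change a kernel booted with zswap disabled skips pool and cache setup entirely, and flipping the module parameter later triggers zswap_setup() on demand. The parameter lives at the standard module-param sysfs path, so a userspace toggle looks like:

#include <stdio.h>

int main(void)
{
	/* Writing Y here now late-initializes zswap via zswap_setup()
	 * instead of failing because init was skipped at boot. */
	FILE *f = fopen("/sys/module/zswap/parameters/enabled", "w");

	if (!f) {
		perror("fopen");	/* needs root; zswap must be built in */
		return 1;
	}
	if (fputs("Y", f) == EOF)
		perror("fputs");
	return fclose(f) ? 1 : 0;
}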
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/net/9p/trans_virtio.c
Changed
@@ -716,7 +716,7 @@
 
 	mutex_unlock(&virtio_9p_lock);
 
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 	vdev->config->del_vqs(vdev);
 
 	sysfs_remove_file(&(vdev->dev.kobj), &dev_attr_mount_tag.attr);
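Note: upstream, virtio_reset_device() is a thin wrapper introduced so drivers stop poking the config ops directly; assuming the backport mirrors drivers/virtio/virtio.c, the helper amounts to:

void virtio_reset_device(struct virtio_device *dev)
{
	dev->config->reset(dev);
}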
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/net/bluetooth/l2cap_core.c
Changed
@@ -1988,11 +1988,11 @@
 		src_match = !bacmp(&c->src, src);
 		dst_match = !bacmp(&c->dst, dst);
 		if (src_match && dst_match) {
-			c = l2cap_chan_hold_unless_zero(c);
-			if (c) {
-				read_unlock(&chan_list_lock);
-				return c;
-			}
+			if (!l2cap_chan_hold_unless_zero(c))
+				continue;
+
+			read_unlock(&chan_list_lock);
+			return c;
 		}
 
 		/* Closest match */
@@ -4440,7 +4440,8 @@
 
 	chan->ident = cmd->ident;
 	l2cap_send_cmd(conn, cmd->ident, L2CAP_CONF_RSP, len, rsp);
-	chan->num_conf_rsp++;
+	if (chan->num_conf_rsp < L2CAP_CONF_MAX_CONF_RSP)
+		chan->num_conf_rsp++;
 
 	/* Reset config buffer. */
 	chan->conf_len = 0;
@@ -7621,6 +7622,7 @@
 			return;
 		}
 
+		l2cap_chan_hold(chan);
 		l2cap_chan_lock(chan);
 	} else {
 		BT_DBG("unknown cid 0x%4.4x", cid);
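Note: the first hunk lets the lookup loop skip channels whose refcount has already hit zero instead of returning NULL outright. For context, the helper it relies on is, upstream (commit d0be8347c623), a guard around kref_get_unless_zero(); a sketch, body assumed:

static struct l2cap_chan *l2cap_chan_hold_unless_zero(struct l2cap_chan *c)
{
	if (!kref_get_unless_zero(&c->kref))
		return NULL;	/* channel is mid-teardown: caller skips it */
	return c;
}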
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/net/l2tp/l2tp_core.c
Changed
@@ -1150,8 +1150,10 @@
 	}
 
 	/* Remove hooks into tunnel socket */
+	write_lock_bh(&sk->sk_callback_lock);
 	sk->sk_destruct = tunnel->old_sk_destruct;
 	sk->sk_user_data = NULL;
+	write_unlock_bh(&sk->sk_callback_lock);
 
 	/* Call the original destructor */
 	if (sk->sk_destruct)
@@ -1471,16 +1473,19 @@
 		sock = sockfd_lookup(tunnel->fd, &ret);
 		if (!sock)
 			goto err;
-
-		ret = l2tp_validate_socket(sock->sk, net, tunnel->encap);
-		if (ret < 0)
-			goto err_sock;
 	}
 
+	sk = sock->sk;
+	write_lock_bh(&sk->sk_callback_lock);
+	ret = l2tp_validate_socket(sk, net, tunnel->encap);
+	if (ret < 0)
+		goto err_inval_sock;
+	rcu_assign_sk_user_data(sk, tunnel);
+	write_unlock_bh(&sk->sk_callback_lock);
+
 	tunnel->l2tp_net = net;
 	pn = l2tp_pernet(net);
 
-	sk = sock->sk;
 	sock_hold(sk);
 	tunnel->sock = sk;
 
@@ -1505,8 +1510,6 @@
 		};
 
 		setup_udp_tunnel_sock(net, sock, &udp_cfg);
-	} else {
-		sk->sk_user_data = tunnel;
 	}
 
 	tunnel->old_sk_destruct = sk->sk_destruct;
@@ -1523,6 +1526,11 @@
 	return 0;
 
 err_sock:
+	write_lock_bh(&sk->sk_callback_lock);
+	rcu_assign_sk_user_data(sk, NULL);
+err_inval_sock:
+	write_unlock_bh(&sk->sk_callback_lock);
+
 	if (tunnel->fd < 0)
 		sock_release(sock);
 	else
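Note: with writers of sk->sk_user_data now serialized under sk_callback_lock and published through rcu_assign_sk_user_data(), readers on the packet path can dereference the pointer safely. A reader-side sketch (the helper name is hypothetical; the accessor is the stock one from include/net/sock.h):

static struct l2tp_tunnel *sk_to_tunnel(struct sock *sk)
{
	/* Pairs with rcu_assign_sk_user_data() in l2tp_tunnel_register()
	 * and the NULLing under sk_callback_lock in the destructor path. */
	return rcu_dereference_sk_user_data(sk);
}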
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/net/packet/af_packet.c
Changed
@@ -1896,8 +1896,10 @@
 	/* Move network header to the right position for VLAN tagged packets */
 	if (likely(skb->dev->type == ARPHRD_ETHER) &&
 	    eth_type_vlan(skb->protocol) &&
-	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0)
-		skb_set_network_header(skb, depth);
+	    __vlan_get_protocol(skb, skb->protocol, &depth) != 0) {
+		if (pskb_may_pull(skb, depth))
+			skb_set_network_header(skb, depth);
+	}
 
 	skb_probe_transport_header(skb);
 }
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/net/vmw_vsock/virtio_transport.c
Changed
@@ -641,7 +641,7 @@
 			     virtio_vsock_reset_sock);
 
 	/* Stop all work handlers to make sure no one is accessing the device,
-	 * so we can safely call vdev->config->reset().
+	 * so we can safely call virtio_reset_device().
	 */
 	mutex_lock(&vsock->rx_lock);
 	vsock->rx_run = false;
@@ -658,7 +658,7 @@
 	/* Flush all device writes and interrupts, device will not use any
 	 * more buffers.
	 */
-	vdev->config->reset(vdev);
+	virtio_reset_device(vdev);
 
 	mutex_lock(&vsock->rx_lock);
 	while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_RX])))
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/openEuler/MAINTAINERS
Changed
@@ -67,6 +67,24 @@
 first. When adding to this list, please keep the entries in
 alphabetical order.
 
+KERNEL VIRTUAL MACHINE (KVM)
+M:	zhukeqian1@huawei.com
+M:	yuzenghui@huawei.com
+S:	Maintained
+F:	Documentation/virt/kvm/
+F:	include/asm-generic/kvm*
+F:	include/kvm/
+F:	include/linux/kvm*
+F:	include/trace/events/kvm.h
+F:	include/uapi/asm-generic/kvm*
+F:	include/uapi/linux/kvm*
+F:	tools/kvm/
+F:	tools/testing/selftests/kvm/
+F:	virt/kvm/
+F:	arch/*/include/asm/kvm*
+F:	arch/*/include/uapi/asm/kvm*
+F:	arch/*/kvm/
+
 SCHEDULER
 M:	zhengzucheng@huawei.com
 S:	Maintained
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/security/integrity/ima/ima_template.c
Changed
@@ -292,8 +292,11 @@
 
 	template_desc->name = "";
 	template_desc->fmt = kstrdup(template_name, GFP_KERNEL);
-	if (!template_desc->fmt)
+	if (!template_desc->fmt) {
+		kfree(template_desc);
+		template_desc = NULL;
 		goto out;
+	}
 
 	spin_lock(&template_list);
 	list_add_tail_rcu(&template_desc->list, &defined_templates);
View file
_service:recompress:tar_scm:risc-v-kernel-5.10.0.tar.bz2/tools/include/uapi/linux/vhost.h
Changed
@@ -89,11 +89,6 @@
 
 /* Set or get vhost backend capability */
 
-/* Use message type V2 */
-#define VHOST_BACKEND_F_IOTLB_MSG_V2 0x1
-/* IOTLB can accept batching hints */
-#define VHOST_BACKEND_F_IOTLB_BATCH  0x2
-
 #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
 #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
 
@@ -150,4 +145,39 @@
 /* Get the valid iova range */
 #define VHOST_VDPA_GET_IOVA_RANGE	_IOR(VHOST_VIRTIO, 0x78, \
 					     struct vhost_vdpa_iova_range)
+
+/* Get the config size */
+#define VHOST_VDPA_GET_CONFIG_SIZE	_IOR(VHOST_VIRTIO, 0x79, __u32)
+
+/* Get the count of all virtqueues */
+#define VHOST_VDPA_GET_VQS_COUNT	_IOR(VHOST_VIRTIO, 0x80, __u32)
+
+/* Get the number of virtqueue groups. */
+#define VHOST_VDPA_GET_GROUP_NUM	_IOR(VHOST_VIRTIO, 0x81, __u32)
+
+/* Get the number of address spaces. */
+#define VHOST_VDPA_GET_AS_NUM		_IOR(VHOST_VIRTIO, 0x7A, unsigned int)
+
+/* Get the group for a virtqueue: read index, write group in num,
+ * The virtqueue index is stored in the index field of
+ * vhost_vring_state. The group for this specific virtqueue is
+ * returned via num field of vhost_vring_state.
+ */
+#define VHOST_VDPA_GET_VRING_GROUP	_IOWR(VHOST_VIRTIO, 0x7B,	\
+					      struct vhost_vring_state)
+
+/* Set the ASID for a virtqueue group. The group index is stored in
+ * the index field of vhost_vring_state, the ASID associated with this
+ * group is stored at num field of vhost_vring_state.
+ */
+#define VHOST_VDPA_SET_GROUP_ASID	_IOW(VHOST_VIRTIO, 0x7C,	\
+					     struct vhost_vring_state)
+
+/* Suspend a device so it does not process virtqueue requests anymore
+ *
+ * After the return of ioctl the device must preserve all the necessary state
+ * (the virtqueue vring base plus the possible device specific states) that is
+ * required for restoring in the future. The device must not change its
+ * configuration after that point.
+ */
+#define VHOST_VDPA_SUSPEND		_IO(VHOST_VIRTIO, 0x7D)
+
 #endif
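Note: a minimal userspace consumer of two of the new ioctls, assuming headers from this kernel are installed and a vdpa device is bound (the /dev node name below is an example, not something this diff mandates):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
	__u32 vqs = 0, groups = 0;
	int fd = open("/dev/vhost-vdpa-0", O_RDWR);	/* example node */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, VHOST_VDPA_GET_VQS_COUNT, &vqs) == 0)
		printf("virtqueues: %u\n", vqs);
	if (ioctl(fd, VHOST_VDPA_GET_GROUP_NUM, &groups) == 0)
		printf("vq groups: %u\n", groups);
	close(fd);
	return 0;
}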
View file
_service:tar_scm:openEuler_riscv64_defconfig
Changed
@@ -117,7 +117,14 @@
 CONFIG_FAT_DEFAULT_UTF8=y
 CONFIG_EXFAT_FS=m
 CONFIG_EXFAT_DEFAULT_IOCHARSET="utf8"
-
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_CODEPAGE_936=m
+CONFIG_NLS_CODEPAGE_950=m
+CONFIG_NLS_CODEPAGE_1250=m
+CONFIG_NLS_CODEPAGE_1251=m
+CONFIG_NLS_ASCII=m
+CONFIG_NLS_ISO8859_1=m
+CONFIG_NLS_UTF8=m
 ### End
 
 CONFIG_CRYPTO=y
@@ -219,3 +226,72 @@
 CONFIG_X509_CERTIFICATE_PARSER=y
 CONFIG_PKCS8_PRIVATE_KEY_PARSER=y
 CONFIG_PKCS7_MESSAGE_PARSER=y
+
+# for FUSE
+CONFIG_FUSE_FS=m
+CONFIG_CUSE=m
+CONFIG_VIRTIO_FS=m
+# for NFS server
+CONFIG_NFSD=m
+CONFIG_NFSD_V2_ACL=y
+CONFIG_NFSD_V3_ACL=y
+CONFIG_NFSD_V4=y
+CONFIG_NFSD_PNFS=y
+CONFIG_NFSD_BLOCKLAYOUT=y
+CONFIG_NFSD_V4_SECURITY_LABEL=y
+# for package clamav
+CONFIG_FANOTIFY=y
+CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
+# for package firewalld
+CONFIG_NETFILTER_NETLINK_OSF=m
+CONFIG_NETFILTER_CONNCOUNT=m
+CONFIG_NETFILTER_SYNPROXY=m
+CONFIG_NF_TABLES=m
+CONFIG_NF_TABLES_INET=y
+CONFIG_NF_TABLES_NETDEV=y
+CONFIG_NFT_NUMGEN=m
+CONFIG_NFT_CT=m
+CONFIG_NFT_COUNTER=m
+CONFIG_NFT_CONNLIMIT=m
+CONFIG_NFT_LOG=m
+CONFIG_NFT_LIMIT=m
+CONFIG_NFT_MASQ=m
+CONFIG_NFT_REDIR=m
+CONFIG_NFT_NAT=m
+CONFIG_NFT_TUNNEL=m
+CONFIG_NFT_OBJREF=m
+CONFIG_NFT_QUOTA=m
+CONFIG_NFT_REJECT=m
+CONFIG_NFT_REJECT_INET=m
+CONFIG_NFT_COMPAT=m
+CONFIG_NFT_HASH=m
+CONFIG_NFT_XFRM=m
+CONFIG_NFT_SOCKET=m
+CONFIG_NFT_OSF=m
+CONFIG_NFT_TPROXY=m
+CONFIG_NFT_SYNPROXY=m
+CONFIG_NF_DUP_NETDEV=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
+CONFIG_NF_FLOW_TABLE_INET=m
+CONFIG_NF_FLOW_TABLE=m
+CONFIG_NF_SOCKET_IPV4=m
+CONFIG_NF_TPROXY_IPV4=m
+CONFIG_NF_TABLES_IPV4=y
+CONFIG_NFT_REJECT_IPV4=m
+CONFIG_NF_SOCKET_IPV6=m
+CONFIG_NF_TPROXY_IPV6=m
+CONFIG_NF_TABLES_IPV6=y
+CONFIG_NFT_REJECT_IPV6=m
+CONFIG_NF_CONNTRACK_BROADCAST=m
+CONFIG_NF_CONNTRACK_NETBIOS_NS=m
+CONFIG_NF_CONNTRACK_SNMP=m
+CONFIG_NFT_FIB=m
+CONFIG_NFT_FIB_INET=m
+CONFIG_IP_SET=m
+CONFIG_IP_SET_MAX=256
+CONFIG_NFT_FIB_IPV4=m
+CONFIG_NF_NAT_SNMP_BASIC=m
+CONFIG_IP_NF_RAW=m
+CONFIG_IP_NF_SECURITY=m
+CONFIG_NFT_FIB_IPV6=m
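Note: the clamav-oriented pair (CONFIG_FANOTIFY plus CONFIG_FANOTIFY_ACCESS_PERMISSIONS) is the easiest of these to verify from userspace, since permission-event listeners are refused at fanotify_init() time when the option is off. A small probe (needs CAP_SYS_ADMIN):

#include <stdio.h>
#include <fcntl.h>
#include <sys/fanotify.h>

int main(void)
{
	/* FAN_CLASS_CONTENT is only accepted when the kernel was built
	 * with CONFIG_FANOTIFY_ACCESS_PERMISSIONS. */
	int fd = fanotify_init(FAN_CLASS_CONTENT, O_RDONLY);

	if (fd < 0) {
		perror("fanotify_init");
		return 1;
	}
	puts("fanotify permission events available");
	return 0;
}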