Projects / openEuler:24.03:SP1:Everything / kernel
Changes of Revision 8
View file
_service:tar_scm:kernel.spec
Changed
@@ -40,9 +40,9 @@
 %global upstream_version 6.6
 %global upstream_sublevel 0
-%global devel_release 44
+%global devel_release 46
 %global maintenance_release .0.0
-%global pkg_release .50
+%global pkg_release .51
 %global openeuler_lts 1
 %global openeuler_major 2403
@@ -1089,6 +1089,222 @@
 %endif
 %changelog
+* Thu Oct 10 2024 ZhangPeng <zhangpeng362@huawei.com> - 6.6.0-46.0.0.51
+- !12061 net/mlx5: Fix bridge mode operations when there are no VFs
+- net/mlx5: Fix bridge mode operations when there are no VFs
+- !12074 drm/amd/display: Check index for aux_rd_interval before using
+- drm/amd/display: Check index for aux_rd_interval before using
+- !12063 cpufreq: intel_pstate: Revise global turbo disable check
+- cpufreq: intel_pstate: Revise global turbo disable check
+- !12051 um: line: always fill *error_out in setup_one_line()
+- um: line: always fill *error_out in setup_one_line()
+- !12023 ASoC: meson: axg-card: fix 'use-after-free'
+- ASoC: meson: axg-card: fix 'use-after-free'
+- !12022 drm/amd/display: Check gpio_id before used as array index
+- drm/amd/display: Check gpio_id before used as array index
+- !12049 drm/amd/display: Skip inactive planes within ModeSupportAndSystemConfiguration
+- drm/amd/display: Skip inactive planes within ModeSupportAndSystemConfiguration
+- !11949 mptcp: pm: only decrement add_addr_accepted for MPJ req
+- mptcp: pm: only decrement add_addr_accepted for MPJ req
+- !11763 OLK-6.6 Intel: Backport to support Intel IFS(In Field Scan) SBAF on GNR
+- platform/x86/intel/ifs: Fix SBAF title underline length
+- trace: platform/x86/intel/ifs: Add SBAF trace support
+- platform/x86/intel/ifs: Add SBAF test support
+- platform/x86/intel/ifs: Add SBAF test image loading support
+- platform/x86/intel/ifs: Refactor MSR usage in IFS test code
+- selftests: ifs: verify IFS ARRAY BIST functionality
+- selftests: ifs: verify IFS scan test functionality
+- selftests: ifs: verify test image loading functionality
+- selftests: ifs: verify test interfaces are created by the driver
+- platform/x86/intel/ifs: Disable irq during one load stage
+- platform/x86/intel/ifs: trace: display batch num in hex
+- platform/x86/intel/ifs: Classify error scenarios correctly
+- platform/x86/intel/ifs: Remove unnecessary initialization of 'ret'
+- platform/x86/intel/ifs: Add an entry rendezvous for SAF
+- platform/x86/intel/ifs: Replace the exit rendezvous with an entry rendezvous for ARRAY_BIST
+- platform/x86/intel/ifs: Add current batch number to trace output
+- platform/x86/intel/ifs: Trace on all HT threads when executing a test
+- !12039 drm/amd/display: Stop amdgpu_dm initialize when link nums greater than max_links
+- drm/amd/display: Stop amdgpu_dm initialize when link nums greater than max_links
+- !12033 sysctl: always initialize i_uid/i_gid
+- sysctl: always initialize i_uid/i_gid
+- !12014 dma-buf: heaps: Fix off-by-one in CMA heap fault handler
+- dma-buf: heaps: Fix off-by-one in CMA heap fault handler
+- !12009 rtmutex: Drop rt_mutex::wait_lock before scheduling
+- rtmutex: Drop rt_mutex::wait_lock before scheduling
+- !11617 v2 Fix VSYNC referencing an unmapped VPE on GIC v4.0/v4.1
+- irqchip/gic-v3-its: Fix VSYNC referencing an unmapped VPE on GIC v4.0
+- irqchip/gic-v3-its: Fix VSYNC referencing an unmapped VPE on GIC v4.1
+- !12016 perf/x86/intel: Limit the period on Haswell
+- perf/x86/intel: Limit the period on Haswell
+- !12017 cppc_cpufreq: Fix possible null pointer dereference
+- cppc_cpufreq: Fix possible null pointer dereference
+- !11995 drm/amd/display: Assign linear_pitch_alignment even for VM
+- drm/amd/display: Assign linear_pitch_alignment even for VM
+- !11796 bpf: Take return from set_memory_rox() into account with bpf_jit_binary_lock_ro()
+- bpf: Take return from set_memory_rox() into account with bpf_jit_binary_lock_ro()
+- !12004 spi: rockchip: Resolve unbalanced runtime PM / system PM handling
+- spi: rockchip: Resolve unbalanced runtime PM / system PM handling
+- !12000 nvmet-tcp: fix kernel crash if commands allocation fails
+- nvmet-tcp: fix kernel crash if commands allocation fails
+- !11969 drm/amd/display: Run DC_LOG_DC after checking link->link_enc
+- drm/amd/display: Run DC_LOG_DC after checking link->link_enc
+- !11963 serial: sc16is7xx: fix invalid FIFO access with special register set
+- serial: sc16is7xx: fix invalid FIFO access with special register set
+- !11985 Fix CVE-2024-46845
+- tracing/osnoise: Fix build when timerlat is not enabled
+- tracing/timerlat: Only clear timer if a kthread exists
+- !11909 ksmbd: unset the binding mark of a reused connection
+- ksmbd: unset the binding mark of a reused connection
+- !11983 wifi: ath12k: fix firmware crash due to invalid peer nss
+- wifi: ath12k: fix firmware crash due to invalid peer nss
+- !11984 drm/amd/display: Add array index check for hdcp ddc access
+- drm/amd/display: Add array index check for hdcp ddc access
+- !11989 drm/amd/display: Fix index may exceed array range within fpu_update_bw_bounding_box
+- drm/amd/display: Fix index may exceed array range within fpu_update_bw_bounding_box
+- !11992 scsi: lpfc: Handle mailbox timeouts in lpfc_get_sfp_info
+- scsi: lpfc: Handle mailbox timeouts in lpfc_get_sfp_info
+- !11979 CVE-2024-46814
+- drm/amd/display: Check msg_id before processing transcation
+- !11960 scsi: ufs: core: Remove SCSI host only if added
+- scsi: ufs: core: Remove SCSI host only if added
+- !11944 drm/amdgpu: Fix the warning division or modulo by zero
+- drm/amdgpu: Fix the warning division or modulo by zero
+- !11971 sched: Support to enable/disable dynamic_affinity
+- sched: Support to enable/disable dynamic_affinity
+- !11951 net: phy: Fix missing of_node_put() for leds
+- net: phy: Fix missing of_node_put() for leds
+- !11956 Fix CVE-2024-44958 6.6
+- sched/smt: Fix unbalance sched_smt_present dec/inc
+- sched/smt: Introduce sched_smt_present_inc/dec() helper
+- !11945 btrfs: don't BUG_ON on ENOMEM from btrfs_lookup_extent_info() in walk_down_proc()
+- btrfs: don't BUG_ON on ENOMEM from btrfs_lookup_extent_info() in walk_down_proc()
+- !11919 btrfs: fix race between direct IO write and fsync when using same fd
+- btrfs: fix race between direct IO write and fsync when using same fd
+- !11920 hwmon: (lm95234) Fix underflows seen when writing limit attributes
+- hwmon: (lm95234) Fix underflows seen when writing limit attributes
+- !11820 powerpc/qspinlock: Fix deadlock in MCS queue
+- powerpc/qspinlock: Fix deadlock in MCS queue
+- !11915 ethtool: check device is present when getting link settings
+- ethtool: check device is present when getting link settings
+- !11936 btrfs: remove NULL transaction support for btrfs_lookup_extent_info()
+- btrfs: remove NULL transaction support for btrfs_lookup_extent_info()
+- !11929 Merge some hns RoCE patches from the mainline to OLK-6.6
+- RDMA/hns: Fix the overflow risk of hem_list_calc_ba_range()
+- RDMA/hns: Fix Use-After-Free of rsv_qp on HIP08
+- Revert "RDMA/hns: Fix Use-After-Free of rsv_qp"
+- Revert "RDMA/hns: Fix the overflow risk of hem_list_calc_ba_range()"
+- !11912 smb: client: fix double put of @cfile in smb2_set_path_size()
+- smb: client: fix double put of @cfile in smb2_set_path_size()
+- !11908 HID: amd_sfh: free driver_data after destroying hid device
+- HID: amd_sfh: free driver_data after destroying hid device
+- !11926 nilfs2: protect references to superblock parameters exposed in sysfs
+- nilfs2: protect references to superblock parameters exposed in sysfs
+- !11927 hwmon: (hp-wmi-sensors) Check if WMI event data exists
+- hwmon: (hp-wmi-sensors) Check if WMI event data exists
+- !11844 fix CVE-2024-46771
+- can: bcm: Clear bo->bcm_proc_read after remove_proc_entry().
+- can: bcm: Remove proc entry when dev is unregistered.
+- !11885 pci/hotplug/pnv_php: Fix hotplug driver crash on Powernv
+- pci/hotplug/pnv_php: Fix hotplug driver crash on Powernv
+- !11795 i2c: tegra: Do not mark ACPI devices as irq safe
+- i2c: tegra: Do not mark ACPI devices as irq safe
+- !11804 drm/amdgpu: Fix out-of-bounds write warning
+- drm/amdgpu: Fix out-of-bounds write warning
+- !11901 scsi: aacraid: Fix double-free on probe failure
+- scsi: aacraid: Fix double-free on probe failure
+- !11859 char: xillybus: Check USB endpoints when probing device
+- char: xillybus: Check USB endpoints when probing device
+- !11865 pktgen: use cpus_read_lock() in pg_net_init()
+- pktgen: use cpus_read_lock() in pg_net_init()
+- !11876 btrfs: handle errors from btrfs_dec_ref() properly
+- btrfs: handle errors from btrfs_dec_ref() properly
+- !11893 tracing/osnoise: Use a cpumask to know what threads are kthreads
+- tracing/osnoise: Use a cpumask to know what threads are kthreads
+- !11840 userfaultfd: fix checks for huge PMDs
+- userfaultfd: fix checks for huge PMDs
+- !11891 wifi: rtw88: usb: schedule rx work after everything is set up
+- wifi: rtw88: usb: schedule rx work after everything is set up
+- !11860 VMCI: Fix use-after-free when removing resource in vmci_resource_remove()
+- VMCI: Fix use-after-free when removing resource in vmci_resource_remove()
+- !11870 NFSD: Reset cb_seq_status after NFS4ERR_DELAY
+- NFSD: Reset cb_seq_status after NFS4ERR_DELAY
+- !11874 fix CVE-2024-46701
+- libfs: fix infinite directory reads for offset dir
+- fs: fix kabi kroken in struct offset_ctx
+- libfs: Convert simple directory offsets to use a Maple Tree
+- test_maple_tree: testing the cyclic allocation
+- maple_tree: Add mtree_alloc_cyclic()
+- libfs: Add simple_offset_empty()
+- libfs: Define a minimum directory offset
+- libfs: Re-arrange locking in offset_iterate_dir()
+- !11689 smb: client: fix double put of @cfile in smb2_rename_path()
+- smb: client: fix double put of @cfile in smb2_rename_path()
+- !11883 usb: dwc3: st: fix probed platform device ref count on probe error path
+- usb: dwc3: st: fix probed platform device ref count on probe error path
+- !11884 PCI: Add missing bridge lock to pci_bus_lock()
+- PCI: Add missing bridge lock to pci_bus_lock()
+- !11862 hwmon: (w83627ehf) Fix underflows seen when writing limit attributes
+- hwmon: (w83627ehf) Fix underflows seen when writing limit attributes
+- !11758 smb/client: avoid dereferencing rdata=NULL in smb2_new_read_req()
+- smb/client: avoid dereferencing rdata=NULL in smb2_new_read_req()
+- !11857 arm64/mpam: Fix redefined reference of 'mpam_detect_is_enabled'
+- arm64/mpam: Fix redefined reference of 'mpam_detect_is_enabled'
+- !11823 mmc: mmc_test: Fix NULL dereference on allocation failure
+- mmc: mmc_test: Fix NULL dereference on allocation failure
+- !11819 uio_hv_generic: Fix kernel NULL pointer dereference in hv_uio_rescind
+- uio_hv_generic: Fix kernel NULL pointer dereference in hv_uio_rescind
+- !11495 ext4: Fix race in buffer_head read fault injection
+- ext4: Fix race in buffer_head read fault injection
+- !11845 bpf: verifier: prevent userspace memory access
+- bpf: verifier: prevent userspace memory access
+- !11748 ASoC: dapm: Fix UAF for snd_soc_pcm_runtime object
+- ASoC: dapm: Fix UAF for snd_soc_pcm_runtime object
+- !11624 Fix iBMA bug and change version
+- BMA: Fix edma driver initialization problem and change the version number.
+- !11850 misc: fastrpc: Fix double free of 'buf' in error path
+- misc: fastrpc: Fix double free of 'buf' in error path
+- !11655 drm/amd/display: Check denominator pbn_div before used
+- drm/amd/display: Check denominator pbn_div before used
+- !11686 udf: Avoid excessive partition lengths
+- udf: Avoid excessive partition lengths
+- !11685 binder: fix UAF caused by offsets overwrite
+- binder: fix UAF caused by offsets overwrite
+- !11798 CVE-2024-46784
+- net: mana: Fix error handling in mana_create_txq/rxq's NAPI cleanup
+- net: mana: Fix doorbell out of order violation and avoid unnecessary doorbell rings
View file
_service
Changed
@@ -12,7 +12,7 @@
 <param name="scm">git</param>
 <param name="url">git@gitee.com:openeuler/kernel.git</param>
 <param name="exclude">.git</param>
-<param name="revision">6.6.0-44.0.0</param>
+<param name="revision">6.6.0-46.0.0</param>
 <param name="history-depth">1</param>
 <param name="filename">kernel</param>
 <param name="version">_none_</param>
View file
_service:recompress:tar_scm:kernel.tar.gz/MAINTAINERS
Changed
@@ -10632,6 +10632,7 @@
 S: Maintained
 F: drivers/platform/x86/intel/ifs
 F: include/trace/events/intel_ifs.h
+F: tools/testing/selftests/drivers/platform/x86/intel/ifs/

 INTEL INTEGRATED SENSOR HUB DRIVER
 M: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/arm/net/bpf_jit_32.c
Changed
@@ -1982,28 +1982,21 @@
     /* If building the body of the JITed code fails somehow,
      * we fall back to the interpretation.
      */
-    if (build_body(&ctx) < 0) {
-        image_ptr = NULL;
-        bpf_jit_binary_free(header);
-        prog = orig_prog;
-        goto out_imms;
-    }
+    if (build_body(&ctx) < 0)
+        goto out_free;
     build_epilogue(&ctx);

     /* 3.) Extra pass to validate JITed Code */
-    if (validate_code(&ctx)) {
-        image_ptr = NULL;
-        bpf_jit_binary_free(header);
-        prog = orig_prog;
-        goto out_imms;
-    }
+    if (validate_code(&ctx))
+        goto out_free;
     flush_icache_range((u32)header, (u32)(ctx.target + ctx.idx));

     if (bpf_jit_enable > 1)
         /* there are 2 passes here */
         bpf_jit_dump(prog->len, image_size, 2, ctx.target);

-    bpf_jit_binary_lock_ro(header);
+    if (bpf_jit_binary_lock_ro(header))
+        goto out_free;
     prog->bpf_func = (void *)ctx.target;
     prog->jited = 1;
     prog->jited_len = image_size;
@@ -2020,5 +2013,11 @@
     bpf_jit_prog_release_other(prog, prog == orig_prog ?
                    tmp : orig_prog);
     return prog;
+
+out_free:
+    image_ptr = NULL;
+    bpf_jit_binary_free(header);
+    prog = orig_prog;
+    goto out_imms;
 }
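The refactor above replaces three duplicated cleanup sequences with a single `out_free` label. A standalone sketch of that centralized-error-path pattern (all names here are illustrative, not from the kernel source):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative model of the consolidated error path: every failure after
 * the buffer is allocated jumps to one label that frees it and restores
 * the fallback result, instead of repeating the cleanup inline. */
struct buf { int locked; };

static int lock_ro(struct buf *b, int should_fail)
{
    if (should_fail)
        return -1;      /* models bpf_jit_binary_lock_ro() failing */
    b->locked = 1;
    return 0;
}

/* Returns 1 on success (JIT used), 0 on fallback (interpreter kept). */
int jit_model(int fail_lock)
{
    struct buf *header = malloc(sizeof(*header));

    if (!header)
        return 0;
    header->locked = 0;

    if (lock_ro(header, fail_lock))
        goto out_free;   /* single cleanup path, as in the patch */

    free(header);        /* model only; the real JIT keeps the image alive */
    return 1;

out_free:
    free(header);
    return 0;
}
```

The point of the patch is that once `bpf_jit_binary_lock_ro()` can fail (because `set_memory_rox()` can), its error must take the same cleanup path as the other failures.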
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/arm64/include/asm/cpufeature.h
Changed
@@ -858,14 +858,7 @@
     cpus_have_final_cap(ARM64_MPAM);
 }

-#ifdef CONFIG_ARM64_MPAM
 bool mpam_detect_is_enabled(void);
-#else
-static inline bool mpam_detect_is_enabled(void)
-{
-    return false;
-}
-#endif

 int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
 bool try_emulate_mrs(struct pt_regs *regs, u32 isn);
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/arm64/net/bpf_jit_comp.c
Changed
@@ -1648,7 +1648,14 @@
             prog->jited_len = 0;
             goto out_off;
         }
-        bpf_jit_binary_lock_ro(header);
+        if (bpf_jit_binary_lock_ro(header)) {
+            bpf_jit_binary_free(header);
+            prog = orig_prog;
+            prog->bpf_func = NULL;
+            prog->jited = 0;
+            prog->jited_len = 0;
+            goto out_off;
+        }
     } else {
         jit_data->ctx = ctx;
         jit_data->image = image_ptr;
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/loongarch/net/bpf_jit.c
Changed
@@ -1206,16 +1206,19 @@
     flush_icache_range((unsigned long)header, (unsigned long)(ctx.image + ctx.idx));

     if (!prog->is_func || extra_pass) {
+        int err;
+
         if (extra_pass && ctx.idx != jit_data->ctx.idx) {
             pr_err_once("multi-func JIT bug %d != %d\n", ctx.idx, jit_data->ctx.idx);
-            bpf_jit_binary_free(header);
-            prog->bpf_func = NULL;
-            prog->jited = 0;
-            prog->jited_len = 0;
-            goto out_offset;
+            goto out_free;
+        }
+        err = bpf_jit_binary_lock_ro(header);
+        if (err) {
+            pr_err_once("bpf_jit_binary_lock_ro() returned %d\n", err);
+            goto out_free;
         }
-        bpf_jit_binary_lock_ro(header);
     } else {
         jit_data->ctx = ctx;
         jit_data->image = image_ptr;
@@ -1246,6 +1249,13 @@
     out_offset = -1;

     return prog;
+
+out_free:
+    bpf_jit_binary_free(header);
+    prog->bpf_func = NULL;
+    prog->jited = 0;
+    prog->jited_len = 0;
+    goto out_offset;
 }
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/mips/net/bpf_jit_comp.c
Changed
@@ -1012,7 +1012,8 @@
     bpf_prog_fill_jited_linfo(prog, &ctx.descriptors[1]);

     /* Set as read-only exec and flush instruction cache */
-    bpf_jit_binary_lock_ro(header);
+    if (bpf_jit_binary_lock_ro(header))
+        goto out_err;
     flush_icache_range((unsigned long)header,
                (unsigned long)&ctx.target[ctx.jit_index]);
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/parisc/net/bpf_jit_core.c
Changed
@@ -167,7 +167,13 @@
     bpf_flush_icache(jit_data->header, ctx->insns + ctx->ninsns);

     if (!prog->is_func || extra_pass) {
-        bpf_jit_binary_lock_ro(jit_data->header);
+        if (bpf_jit_binary_lock_ro(jit_data->header)) {
+            bpf_jit_binary_free(jit_data->header);
+            prog->bpf_func = NULL;
+            prog->jited = 0;
+            prog->jited_len = 0;
+            goto out_offset;
+        }
         prologue_len = ctx->epilogue_offset - ctx->body_len;
         for (i = 0; i < prog->len; i++)
             ctx->offset[i] += prologue_len;
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/powerpc/kernel/rtas.c
Changed
@@ -18,6 +18,7 @@
 #include <linux/kernel.h>
 #include <linux/lockdep.h>
 #include <linux/memblock.h>
+#include <linux/nospec.h>
 #include <linux/of.h>
 #include <linux/of_fdt.h>
 #include <linux/reboot.h>
@@ -1839,6 +1840,9 @@
         || nargs + nret > ARRAY_SIZE(args.args))
         return -EINVAL;

+    nargs = array_index_nospec(nargs, ARRAY_SIZE(args.args));
+    nret = array_index_nospec(nret, ARRAY_SIZE(args.args) - nargs);
+
     /* Copy in args. */
     if (copy_from_user(args.args, uargs->args,
                nargs * sizeof(rtas_arg_t)) != 0)
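The two `array_index_nospec()` calls clamp `nargs` and `nret` so that even a speculatively executed path cannot use an out-of-range index. The kernel's generic fallback builds a branchless mask; a userspace sketch of that idea (simplified, and relying on arithmetic right shift of negative values, as the kernel's generic version does):

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Mirrors the kernel's generic array_index_mask_nospec(): returns ~0UL when
 * index < size and 0UL otherwise, computed without a branch so there is no
 * conditional for the CPU to mispredict. */
static unsigned long array_index_mask_nospec(unsigned long index,
                                             unsigned long size)
{
    /* If index >= size, (size - 1 - index) underflows and sets the top bit,
     * so the complement shifted arithmetically right yields all zeroes. */
    return ~(long)(index | (size - 1UL - index)) >> (BITS_PER_LONG - 1);
}

/* Clamp index to [0, size) branchlessly; out-of-range indexes become 0. */
unsigned long array_index_nospec_model(unsigned long index, unsigned long size)
{
    return index & array_index_mask_nospec(index, size);
}
```

In the patch above, the clamp happens after the architectural `nargs + nret > ARRAY_SIZE(args.args)` check, so it only matters under misspeculation.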
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/powerpc/lib/qspinlock.c
Changed
@@ -715,7 +715,15 @@
     }

 release:
-    qnodesp->count--; /* release the node */
+    /*
+     * Clear the lock before releasing the node, as another CPU might see stale
+     * values if an interrupt occurs after we increment qnodesp->count
+     * but before node->lock is initialized. The barrier ensures that
+     * there are no further stores to the node after it has been released.
+     */
+    node->lock = NULL;
+    barrier();
+    qnodesp->count--;
 }

 void queued_spin_lock_slowpath(struct qspinlock *lock)
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/powerpc/net/bpf_jit_comp.c
Changed
@@ -205,7 +205,14 @@
     bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + bpf_hdr->size);

     if (!fp->is_func || extra_pass) {
-        bpf_jit_binary_lock_ro(bpf_hdr);
+        if (bpf_jit_binary_lock_ro(bpf_hdr)) {
+            bpf_jit_binary_free(bpf_hdr);
+            fp = org_fp;
+            fp->bpf_func = NULL;
+            fp->jited = 0;
+            fp->jited_len = 0;
+            goto out_addrs;
+        }
         bpf_prog_fill_jited_linfo(fp, addrs);
 out_addrs:
     kfree(addrs);
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/s390/net/bpf_jit_comp.c
Changed
@@ -1973,7 +1973,11 @@
         print_fn_code(jit.prg_buf, jit.size_prg);
     }
     if (!fp->is_func || extra_pass) {
-        bpf_jit_binary_lock_ro(header);
+        if (bpf_jit_binary_lock_ro(header)) {
+            bpf_jit_binary_free(header);
+            fp = orig_fp;
+            goto free_addrs;
+        }
     } else {
         jit_data->header = header;
         jit_data->ctx = jit;
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/sparc/net/bpf_jit_comp_64.c
Changed
@@ -1602,7 +1602,11 @@
     bpf_flush_icache(header, (u8 *)header + header->size);

     if (!prog->is_func || extra_pass) {
-        bpf_jit_binary_lock_ro(header);
+        if (bpf_jit_binary_lock_ro(header)) {
+            bpf_jit_binary_free(header);
+            prog = orig_prog;
+            goto out_off;
+        }
     } else {
         jit_data->ctx = ctx;
         jit_data->image = image_ptr;
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/sw_64/net/bpf_jit_comp.c
Changed
@@ -1427,7 +1427,14 @@
     bpf_flush_icache(header, ctx.image + ctx.idx);

     if (!prog->is_func || extra_pass) {
-        bpf_jit_binary_lock_ro(header);
+        if (bpf_jit_binary_lock_ro(header)) {
+            bpf_jit_binary_free(header);
+            prog = orig_prog;
+            prog->bpf_func = NULL;
+            prog->jited = 0;
+            prog->jited_len = 0;
+            goto out_off;
+        }
     } else {
         jit_data->ctx = ctx;
         jit_data->image = image_ptr;
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/um/drivers/line.c
Changed
@@ -383,6 +383,7 @@
             parse_chan_pair(NULL, line, n, opts, error_out);
             err = 0;
         }
+        *error_out = "configured as 'none'";
     } else {
         char *new = kstrdup(init, GFP_KERNEL);
         if (!new) {
@@ -406,6 +407,7 @@
             }
         }
         if (err) {
+            *error_out = "failed to parse channel pair";
             line->init_str = NULL;
             line->valid = 0;
             kfree(new);
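The fix makes `setup_one_line()` assign `*error_out` on the paths that previously left it untouched, so a caller can never print an uninitialized pointer. A simplified model of the contract the patch enforces, using illustrative names and checks (not the real UML parsing logic):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Model of the rule: any path that can be reported as a failure must also
 * fill *error_out with a static description. On success *error_out is
 * intentionally left untouched. */
static int setup_line_model(const char *init, const char **error_out)
{
    if (init == NULL) {
        *error_out = "configured as 'none'";
        return -1;
    }
    if (strchr(init, ',') == NULL) {   /* stand-in for channel-pair parsing */
        *error_out = "failed to parse channel pair";
        return -1;
    }
    return 0;
}
```

The same error strings appear in the diff above; everything else in the model is hypothetical scaffolding.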
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/x86/events/intel/core.c
Changed
@@ -4520,6 +4520,25 @@
     return HYBRID_INTEL_CORE;
 }

+static inline bool erratum_hsw11(struct perf_event *event)
+{
+    return (event->hw.config & INTEL_ARCH_EVENT_MASK) ==
+        X86_CONFIG(.event=0xc0, .umask=0x01);
+}
+
+/*
+ * The HSW11 requires a period larger than 100 which is the same as the BDM11.
+ * A minimum period of 128 is enforced as well for the INST_RETIRED.ALL.
+ *
+ * The message 'interrupt took too long' can be observed on any counter which
+ * was armed with a period < 32 and two events expired in the same NMI.
+ * A minimum period of 32 is enforced for the rest of the events.
+ */
+static void hsw_limit_period(struct perf_event *event, s64 *left)
+{
+    *left = max(*left, erratum_hsw11(event) ? 128 : 32);
+}
+
 /*
  * Broadwell:
  *
@@ -4537,8 +4556,7 @@
  */
 static void bdw_limit_period(struct perf_event *event, s64 *left)
 {
-    if ((event->hw.config & INTEL_ARCH_EVENT_MASK) ==
-        X86_CONFIG(.event=0xc0, .umask=0x01)) {
+    if (erratum_hsw11(event)) {
         if (*left < 128)
             *left = 128;
         *left &= ~0x3fULL;
@@ -6598,6 +6616,7 @@
         x86_pmu.hw_config = hsw_hw_config;
         x86_pmu.get_event_constraints = hsw_get_event_constraints;
+        x86_pmu.limit_period = hsw_limit_period;
         x86_pmu.lbr_double_abort = true;
         extra_attr = boot_cpu_has(X86_FEATURE_RTM) ?
             hsw_format_attr : nhm_format_attr;
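The new `hsw_limit_period()` collapses the erratum check into a single clamp. A standalone model of that clamping rule — the `0x01c0` encoding (`.event=0xc0, .umask=0x01`, i.e. INST_RETIRED.ALL) comes from the diff, while the mask constant and function shape here are illustrative:

```c
#include <assert.h>

/* Model of the Haswell period clamp: the INST_RETIRED.ALL encoding needs a
 * period of at least 128 (HSW11, same as BDM11 on Broadwell); every other
 * event gets a minimum of 32 to avoid back-to-back NMI expirations. */
#define EVENT_UMASK_MASK   0xFFFFULL       /* event select + umask bits */
#define INST_RETIRED_ALL   0x01c0ULL       /* .event = 0xc0, .umask = 0x01 */

static void hsw_limit_period_model(unsigned long long config, long long *left)
{
    long long min = ((config & EVENT_UMASK_MASK) == INST_RETIRED_ALL) ? 128 : 32;

    if (*left < min)
        *left = min;
}
```

Note the Broadwell variant additionally aligns the period with `*left &= ~0x3fULL`; the Haswell fix only enforces the minimums.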
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/x86/include/asm/msr-index.h
Changed
@@ -248,6 +248,8 @@
 #define MSR_INTEGRITY_CAPS_ARRAY_BIST          BIT(MSR_INTEGRITY_CAPS_ARRAY_BIST_BIT)
 #define MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT   4
 #define MSR_INTEGRITY_CAPS_PERIODIC_BIST       BIT(MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT)
+#define MSR_INTEGRITY_CAPS_SBAF_BIT            8
+#define MSR_INTEGRITY_CAPS_SBAF                BIT(MSR_INTEGRITY_CAPS_SBAF_BIT)
 #define MSR_INTEGRITY_CAPS_SAF_GEN_MASK        GENMASK_ULL(10, 9)

 #define MSR_LBR_NHM_FROM       0x00000680
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/x86/net/bpf_jit_comp.c
Changed
@@ -2968,3 +2968,9 @@
         BUG_ON(ret < 0);
     }
 }
+
+/* x86-64 JIT emits its own code to filter user addresses so return 0 here */
+u64 bpf_arch_uaddress_limit(void)
+{
+    return 0;
+}
View file
_service:recompress:tar_scm:kernel.tar.gz/arch/x86/net/bpf_jit_comp32.c
Changed
@@ -2600,8 +2600,7 @@
     if (bpf_jit_enable > 1)
         bpf_jit_dump(prog->len, proglen, pass + 1, image);

-    if (image) {
-        bpf_jit_binary_lock_ro(header);
+    if (image && !bpf_jit_binary_lock_ro(header)) {
         prog->bpf_func = (void *)image;
         prog->jited = 1;
         prog->jited_len = proglen;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/android/binder.c
Changed
@@ -3342,6 +3342,7 @@
          */
         copy_size = object_offset - user_offset;
         if (copy_size && (user_offset > object_offset ||
+                  object_offset > tr->data_size ||
                   binder_alloc_copy_user_to_buffer(
                     &target_proc->alloc,
                     t->buffer, user_offset,
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/char/xillybus/xillyusb.c
Changed
@@ -1906,6 +1906,13 @@

 static int xillyusb_setup_base_eps(struct xillyusb_dev *xdev)
 {
+    struct usb_device *udev = xdev->udev;
+
+    /* Verify that device has the two fundamental bulk in/out endpoints */
+    if (usb_pipe_type_check(udev, usb_sndbulkpipe(udev, MSG_EP_NUM)) ||
+        usb_pipe_type_check(udev, usb_rcvbulkpipe(udev, IN_EP_NUM)))
+        return -ENODEV;
+
     xdev->msg_ep = endpoint_alloc(xdev, MSG_EP_NUM | USB_DIR_OUT,
                       bulk_out_work, 1, 2);
     if (!xdev->msg_ep)
@@ -1935,14 +1942,15 @@
         __le16 *chandesc,
         int num_channels)
 {
-    struct xillyusb_channel *chan;
+    struct usb_device *udev = xdev->udev;
+    struct xillyusb_channel *chan, *new_channels;
     int i;

     chan = kcalloc(num_channels, sizeof(*chan), GFP_KERNEL);
     if (!chan)
         return -ENOMEM;

-    xdev->channels = chan;
+    new_channels = chan;

     for (i = 0; i < num_channels; i++, chan++) {
         unsigned int in_desc = le16_to_cpu(*chandesc++);
@@ -1971,6 +1979,15 @@
          */
         if ((out_desc & 0x80) && i < 14) { /* Entry is valid */
+            if (usb_pipe_type_check(udev,
+                        usb_sndbulkpipe(udev, i + 2))) {
+                dev_err(xdev->dev,
+                    "Missing BULK OUT endpoint %d\n",
+                    i + 2);
+                kfree(new_channels);
+                return -ENODEV;
+            }
+
             chan->writable = 1;
             chan->out_synchronous = !!(out_desc & 0x40);
             chan->out_seekable = !!(out_desc & 0x20);
@@ -1980,6 +1997,7 @@
         }
     }

+    xdev->channels = new_channels;
     return 0;
 }
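Besides validating the endpoints, the patch builds the channel array in a local `new_channels` pointer and only assigns `xdev->channels` once every entry has been checked, so a failure never leaves a half-initialized array published on the device. A minimal sketch of that commit-on-success pattern (all names illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct dev_model { int *channels; int num; };

/* Build the array privately; publish it on the device only after every
 * entry has been validated. On failure the device state is untouched. */
static int setup_channels_model(struct dev_model *d, const int *desc, int n)
{
    int *newch = calloc((size_t)n, sizeof(*newch));
    int i;

    if (!newch)
        return -1;
    for (i = 0; i < n; i++) {
        if (desc[i] < 0) {      /* stand-in for a missing endpoint */
            free(newch);
            return -1;          /* d->channels never saw newch */
        }
        newch[i] = desc[i];
    }
    d->channels = newch;        /* commit only on full success */
    d->num = n;
    return 0;
}
```

The original code assigned `xdev->channels` before the loop, so an error return could leave later teardown paths operating on partially built state.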
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/cpufreq/cppc_cpufreq.c
Changed
@@ -878,10 +878,15 @@
 {
     struct fb_ctr_pair fb_ctrs = { .cpu = cpu, };
     struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
-    struct cppc_cpudata *cpu_data = policy->driver_data;
+    struct cppc_cpudata *cpu_data;
     u64 delivered_perf;
     int ret;

+    if (!policy)
+        return -ENODEV;
+
+    cpu_data = policy->driver_data;
+
     cpufreq_cpu_put(policy);

     if (cpu_has_amu_feat(cpu))
@@ -961,10 +966,15 @@
 static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpu)
 {
     struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
-    struct cppc_cpudata *cpu_data = policy->driver_data;
+    struct cppc_cpudata *cpu_data;
     u64 desired_perf;
     int ret;

+    if (!policy)
+        return -ENODEV;
+
+    cpu_data = policy->driver_data;
+
     cpufreq_cpu_put(policy);

     ret = cppc_get_desired_perf(cpu, &desired_perf);
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/cpufreq/intel_pstate.c
Changed
@@ -595,13 +595,9 @@
 static inline void update_turbo_state(void)
 {
     u64 misc_en;
-    struct cpudata *cpu;

-    cpu = all_cpu_data[0];
     rdmsrl(MSR_IA32_MISC_ENABLE, misc_en);
-    global.turbo_disabled =
-        (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE ||
-         cpu->pstate.max_pstate == cpu->pstate.turbo_pstate);
+    global.turbo_disabled = misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE;
 }

 static int min_perf_pct_min(void)
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/dma-buf/heaps/cma_heap.c
Changed
@@ -165,7 +165,7 @@
     struct vm_area_struct *vma = vmf->vma;
     struct cma_heap_buffer *buffer = vma->vm_private_data;

-    if (vmf->pgoff > buffer->pagecount)
+    if (vmf->pgoff >= buffer->pagecount)
         return VM_FAULT_SIGBUS;

     vmf->page = buffer->pages[vmf->pgoff];
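The one-character change from `>` to `>=` matters because `pgoff` indexes a `pagecount`-sized array, so `pgoff == pagecount` is already one past the end. A tiny model of the corrected check:

```c
#include <assert.h>
#include <stddef.h>

/* Valid page offsets for an array of `pagecount` pages are 0..pagecount-1,
 * so the fault handler must reject pgoff == pagecount as well. */
static int fault_in_range(size_t pgoff, size_t pagecount)
{
    if (pgoff >= pagecount)   /* was `>` before the fix: off-by-one */
        return 0;             /* SIGBUS in the real handler */
    return 1;                 /* safe to return buffer->pages[pgoff] */
}
```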
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/firmware/efi/libstub/randomalloc.c
Changed
@@ -107,6 +107,33 @@
     return total_slots;
 }

+static void mem_check_memmaps(void)
+{
+    int i;
+    efi_status_t status;
+    unsigned long nr_pages;
+    unsigned long long start, end;
+
+    for (i = 0; i < MAX_MEMMAP_REGIONS; i++) {
+        if (!mem_avoid[i].size)
+            continue;
+        start = round_down(mem_avoid[i].start, EFI_ALLOC_ALIGN);
+        end = round_up(mem_avoid[i].start + mem_avoid[i].size, EFI_ALLOC_ALIGN);
+        nr_pages = (end - start) / EFI_PAGE_SIZE;
+
+        mem_avoid[i].start = start;
+        mem_avoid[i].size = end - start;
+        status = efi_bs_call(allocate_pages, EFI_ALLOCATE_ADDRESS,
+                     EFI_LOADER_DATA, nr_pages, &mem_avoid[i].start);
+        if (status == EFI_SUCCESS) {
+            efi_free(mem_avoid[i].size, mem_avoid[i].start);
+        } else {
+            mem_avoid[i].size = 0;
+            efi_err("Failed to reserve memmap, index: %d, status: %lu\n", i, status);
+        }
+    }
+}
+
 void mem_avoid_memmap(char *str)
 {
     static int i;
@@ -137,6 +164,7 @@
         str = k;
         i++;
     }
+    mem_check_memmaps();
 }

 void mem_avoid_mem_nokaslr(char *str)
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
Changed
@@ -352,7 +352,7 @@
     ring->max_dw = max_dw;
     ring->hw_prio = hw_prio;

-    if (!ring->no_scheduler) {
+    if (!ring->no_scheduler && ring->funcs->type < AMDGPU_HW_IP_NUM) {
         hw_ip = ring->funcs->type;
         num_sched = &adev->gpu_sched[hw_ip][hw_prio].num_scheds;
         adev->gpu_sched[hw_ip][hw_prio].sched[(*num_sched)++] =
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
Changed
@@ -500,6 +500,12 @@

     if (mode == AMDGPU_AUTO_COMPUTE_PARTITION_MODE) {
         mode = __aqua_vanjaram_get_auto_mode(xcp_mgr);
+        if (mode == AMDGPU_UNKNOWN_COMPUTE_PARTITION_MODE) {
+            dev_err(adev->dev,
+                "Invalid config, no compatible compute partition mode found, available memory partitions: %d",
+                adev->gmc.num_mem_partitions);
+            return -EINVAL;
+        }
     } else if (!__aqua_vanjaram_is_valid_mode(xcp_mgr, mode)) {
         dev_err(adev->dev,
             "Invalid compute partition mode requested, requested: %s, available memory partitions: %d",
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
Changed
@@ -4450,17 +4450,17 @@
         }
     }

+    if (link_cnt > MAX_PIPES * 2) {
+        DRM_ERROR(
+            "KMS: Cannot support more than %d display indexes\n",
+            MAX_PIPES * 2);
+        goto fail;
+    }
+
     /* loops over all connectors on the board */
     for (i = 0; i < link_cnt; i++) {
         struct dc_link *link = NULL;

-        if (i > AMDGPU_DM_MAX_DISPLAY_INDEX) {
-            DRM_ERROR(
-                "KMS: Cannot support more than %d display indexes\n",
-                AMDGPU_DM_MAX_DISPLAY_INDEX);
-            continue;
-        }
-
         aconnector = kzalloc(sizeof(*aconnector), GFP_KERNEL);
         if (!aconnector)
             goto fail;
@@ -6934,7 +6934,7 @@
             }
         }

-        if (j == dc_state->stream_count)
+        if (j == dc_state->stream_count || pbn_div == 0)
             continue;

         slot_num = DIV_ROUND_UP(pbn, pbn_div);
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/dc/core/dc.c
Changed
@@ -1298,6 +1298,7 @@
         return NULL;

     if (init_params->dce_environment == DCE_ENV_VIRTUAL_HW) {
+        dc->caps.linear_pitch_alignment = 64;
         if (!dc_construct_ctx(dc, init_params))
             goto destruct_dc;
     } else {
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c
Changed
@@ -1751,7 +1751,7 @@
             bool split_required = pipe->stream->timing.pix_clk_100hz >= dcn_get_max_non_odm_pix_rate_100hz(&dc->dml.soc)
                     || (pipe->plane_state && pipe->plane_state->src_rect.width > 5120);

-            if (remaining_det_segs > MIN_RESERVED_DET_SEGS)
+            if (remaining_det_segs > MIN_RESERVED_DET_SEGS && crb_pipes != 0)
                 pipes[pipe_cnt].pipe.src.det_size_override += (remaining_det_segs - MIN_RESERVED_DET_SEGS) / crb_pipes +
                         (crb_idx < (remaining_det_segs - MIN_RESERVED_DET_SEGS) % crb_pipes ? 1 : 0);
             if (pipes[pipe_cnt].pipe.src.det_size_override > 2 * DCN3_15_MAX_DET_SEGS) {
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/dc/dml/dcn302/dcn302_fpu.c
Changed
@@ -304,6 +304,16 @@
         dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
     }

+    /* bw_params->clk_table.entries[MAX_NUM_DPM_LVL].
+     * MAX_NUM_DPM_LVL is 8.
+     * dcn3_02_soc.clock_limits[DC__VOLTAGE_STATES].
+     * DC__VOLTAGE_STATES is 40.
+     */
+    if (num_states > MAX_NUM_DPM_LVL) {
+        ASSERT(0);
+        return;
+    }
+
     dcn3_02_soc.num_states = num_states;
     for (i = 0; i < dcn3_02_soc.num_states; i++) {
         dcn3_02_soc.clock_limits[i].state = i;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/dc/dml/dcn303/dcn303_fpu.c
Changed
@@ -299,6 +299,16 @@
         dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
     }

+    /* bw_params->clk_table.entries[MAX_NUM_DPM_LVL].
+     * MAX_NUM_DPM_LVL is 8.
+     * dcn3_02_soc.clock_limits[DC__VOLTAGE_STATES].
+     * DC__VOLTAGE_STATES is 40.
+     */
+    if (num_states > MAX_NUM_DPM_LVL) {
+        ASSERT(0);
+        return;
+    }
+
     dcn3_03_soc.num_states = num_states;
     for (i = 0; i < dcn3_03_soc.num_states; i++) {
         dcn3_03_soc.clock_limits[i].state = i;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
Changed
@@ -2885,6 +2885,16 @@
         dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
     }

+    /* bw_params->clk_table.entries[MAX_NUM_DPM_LVL].
+     * MAX_NUM_DPM_LVL is 8.
+     * dcn3_02_soc.clock_limits[DC__VOLTAGE_STATES].
+     * DC__VOLTAGE_STATES is 40.
+     */
+    if (num_states > MAX_NUM_DPM_LVL) {
+        ASSERT(0);
+        return;
+    }
+
     dcn3_2_soc.num_states = num_states;
     for (i = 0; i < dcn3_2_soc.num_states; i++) {
         dcn3_2_soc.clock_limits[i].state = i;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c
Changed
@@ -789,6 +789,16 @@
         dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
     }

+    /* bw_params->clk_table.entries[MAX_NUM_DPM_LVL].
+     * MAX_NUM_DPM_LVL is 8.
+     * dcn3_02_soc.clock_limits[DC__VOLTAGE_STATES].
+     * DC__VOLTAGE_STATES is 40.
+     */
+    if (num_states > MAX_NUM_DPM_LVL) {
+        ASSERT(0);
+        return;
+    }
+
     dcn3_21_soc.num_states = num_states;
     for (i = 0; i < dcn3_21_soc.num_states; i++) {
         dcn3_21_soc.clock_limits[i].state = i;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
Changed
@@ -1099,8 +1099,13 @@
 	// Total Available Pipes Support Check
 	for (k = 0; k < mode_lib->vba.NumberOfActivePlanes; ++k) {
-		total_pipes += mode_lib->vba.DPPPerPlane[k];
 		pipe_idx = get_pipe_idx(mode_lib, k);
+		if (pipe_idx == -1) {
+			ASSERT(0);
+			continue; // skip inactive planes
+		}
+		total_pipes += mode_lib->vba.DPPPerPlane[k];
+
 		if (mode_lib->vba.cache_pipes[pipe_idx].clks_cfg.dppclk_mhz > 0.0)
 			mode_lib->vba.DPPCLK[k] = mode_lib->vba.cache_pipes[pipe_idx].clks_cfg.dppclk_mhz;
 		else
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
Changed
@@ -239,6 +239,9 @@
 	enum gpio_id id,
 	uint32_t en)
 {
+	if (id == GPIO_ID_UNKNOWN)
+		return false;
+
 	return service->busyness[id][en];
 }
 
@@ -247,6 +250,9 @@
 	enum gpio_id id,
 	uint32_t en)
 {
+	if (id == GPIO_ID_UNKNOWN)
+		return;
+
 	service->busyness[id][en] = true;
 }
 
@@ -255,6 +261,9 @@
 	enum gpio_id id,
 	uint32_t en)
 {
+	if (id == GPIO_ID_UNKNOWN)
+		return;
+
 	service->busyness[id][en] = false;
 }
 
@@ -263,7 +272,7 @@
 	enum gpio_id id,
 	uint32_t en)
 {
-	if (!service->busyness[id]) {
+	if (id != GPIO_ID_UNKNOWN && !service->busyness[id]) {
 		ASSERT_CRITICAL(false);
 		return GPIO_RESULT_OPEN_FAILED;
 	}
@@ -277,7 +286,7 @@
 	enum gpio_id id,
 	uint32_t en)
 {
-	if (!service->busyness[id]) {
+	if (id != GPIO_ID_UNKNOWN && !service->busyness[id]) {
 		ASSERT_CRITICAL(false);
 		return GPIO_RESULT_OPEN_FAILED;
 	}
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
Changed
@@ -130,13 +130,21 @@
 	const uint8_t hdcp_i2c_addr_link_primary = 0x3a; /* 0x74 >> 1*/
 	const uint8_t hdcp_i2c_addr_link_secondary = 0x3b; /* 0x76 >> 1*/
 	struct i2c_command i2c_command;
-	uint8_t offset = hdcp_i2c_offsets[message_info->msg_id];
+	uint8_t offset;
 	struct i2c_payload i2c_payloads[] = {
-		{ true, 0, 1, &offset },
+		{ true, 0, 1, 0 }, /* actual hdcp payload, will be filled later, zeroed for now*/
 		{ 0 }
 	};
 
+	if (message_info->msg_id == HDCP_MESSAGE_ID_INVALID) {
+		DC_LOG_ERROR("%s: Invalid message_info msg_id - %d\n", __func__, message_info->msg_id);
+		return false;
+	}
+
+	offset = hdcp_i2c_offsets[message_info->msg_id];
+	i2c_payloads[0].data = &offset;
+
 	switch (message_info->link) {
 	case HDCP_LINK_SECONDARY:
 		i2c_payloads[0].address = hdcp_i2c_addr_link_secondary;
@@ -310,6 +318,11 @@
 	struct dc_link *link,
 	struct hdcp_protection_message *message_info)
 {
+	if (message_info->msg_id == HDCP_MESSAGE_ID_INVALID) {
+		DC_LOG_ERROR("%s: Invalid message_info msg_id - %d\n", __func__, message_info->msg_id);
+		return false;
+	}
+
 	return dpcd_access_helper(
 		link,
 		message_info->length,
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/dc/link/link_factory.c
Changed
@@ -629,14 +629,14 @@
 	link->link_enc =
 		link->dc->res_pool->funcs->link_enc_create(dc_ctx, &enc_init_data);
 
-	DC_LOG_DC("BIOS object table - DP_IS_USB_C: %d", link->link_enc->features.flags.bits.DP_IS_USB_C);
-	DC_LOG_DC("BIOS object table - IS_DP2_CAPABLE: %d", link->link_enc->features.flags.bits.IS_DP2_CAPABLE);
-
 	if (!link->link_enc) {
 		DC_ERROR("Failed to create link encoder!\n");
 		goto link_enc_create_fail;
 	}
 
+	DC_LOG_DC("BIOS object table - DP_IS_USB_C: %d", link->link_enc->features.flags.bits.DP_IS_USB_C);
+	DC_LOG_DC("BIOS object table - IS_DP2_CAPABLE: %d", link->link_enc->features.flags.bits.IS_DP2_CAPABLE);
+
 	/* Update link encoder tracking variables. These are used for the dynamic
 	 * assignment of link encoders to streams.
 	 */
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
Changed
@@ -914,10 +914,10 @@
 	/* Driver does not need to train the first hop. Skip DPCD read and clear
 	 * AUX_RD_INTERVAL for DPTX-to-DPIA hop.
 	 */
-	if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA)
+	if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA && repeater_cnt > 0 && repeater_cnt < MAX_REPEATER_CNT)
 		link->dpcd_caps.lttpr_caps.aux_rd_interval[--repeater_cnt] = 0;
 
-	for (repeater_id = repeater_cnt; repeater_id > 0; repeater_id--) {
+	for (repeater_id = repeater_cnt; repeater_id > 0 && repeater_id < MAX_REPEATER_CNT; repeater_id--) {
 		aux_interval_address = DP_TRAINING_AUX_RD_INTERVAL_PHY_REPEATER1 +
 				((DP_REPEATER_CONFIGURATION_AND_STATUS_SIZE) * (repeater_id - 1));
 		core_link_read_dpcd(
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_ddc.c
Changed
@@ -156,11 +156,16 @@
 	uint32_t cur_size = 0;
 	uint32_t data_offset = 0;
 
-	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID) {
+	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID ||
+	    msg_id >= MOD_HDCP_MESSAGE_ID_MAX)
 		return MOD_HDCP_STATUS_DDC_FAILURE;
-	}
 
 	if (is_dp_hdcp(hdcp)) {
+		int num_dpcd_addrs = sizeof(hdcp_dpcd_addrs) /
+			sizeof(hdcp_dpcd_addrs[0]);
+		if (msg_id >= num_dpcd_addrs)
+			return MOD_HDCP_STATUS_DDC_FAILURE;
+
 		while (buf_len > 0) {
 			cur_size = MIN(buf_len, HDCP_MAX_AUX_TRANSACTION_SIZE);
 			success = hdcp->config.ddc.funcs.read_dpcd(hdcp->config.ddc.handle,
@@ -175,6 +180,11 @@
 			data_offset += cur_size;
 		}
 	} else {
+		int num_i2c_offsets = sizeof(hdcp_i2c_offsets) /
+			sizeof(hdcp_i2c_offsets[0]);
+		if (msg_id >= num_i2c_offsets)
+			return MOD_HDCP_STATUS_DDC_FAILURE;
+
 		success = hdcp->config.ddc.funcs.read_i2c(
 				hdcp->config.ddc.handle,
 				HDCP_I2C_ADDR,
@@ -219,11 +229,16 @@
 	uint32_t cur_size = 0;
 	uint32_t data_offset = 0;
 
-	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID) {
+	if (msg_id == MOD_HDCP_MESSAGE_ID_INVALID ||
+	    msg_id >= MOD_HDCP_MESSAGE_ID_MAX)
 		return MOD_HDCP_STATUS_DDC_FAILURE;
-	}
 
 	if (is_dp_hdcp(hdcp)) {
+		int num_dpcd_addrs = sizeof(hdcp_dpcd_addrs) /
+			sizeof(hdcp_dpcd_addrs[0]);
+		if (msg_id >= num_dpcd_addrs)
+			return MOD_HDCP_STATUS_DDC_FAILURE;
+
 		while (buf_len > 0) {
 			cur_size = MIN(buf_len, HDCP_MAX_AUX_TRANSACTION_SIZE);
 			success = hdcp->config.ddc.funcs.write_dpcd(
@@ -239,6 +254,11 @@
 			data_offset += cur_size;
 		}
 	} else {
+		int num_i2c_offsets = sizeof(hdcp_i2c_offsets) /
+			sizeof(hdcp_i2c_offsets[0]);
+		if (msg_id >= num_i2c_offsets)
+			return MOD_HDCP_STATUS_DDC_FAILURE;
+
 		hdcp->buf[0] = hdcp_i2c_offsets[msg_id];
 		memmove(&hdcp->buf[1], buf, buf_len);
 		success = hdcp->config.ddc.funcs.write_i2c(
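The `sizeof(array) / sizeof(array[0])` guard added in this hunk is the standard C idiom for deriving a lookup table's element count at compile time and rejecting an index before it is used. A minimal user-space sketch of the same pattern (the table contents and return convention here are illustrative, not the driver's real ones):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical offset table; in the patch above this role is played
 * by hdcp_i2c_offsets / hdcp_dpcd_addrs. */
static const unsigned char offsets[] = { 0x00, 0x08, 0x10 };

/* Validate msg_id against the table length derived with sizeof,
 * instead of trusting the caller, before indexing. */
static int lookup_offset(size_t msg_id, unsigned char *out)
{
	size_t num = sizeof(offsets) / sizeof(offsets[0]);

	if (msg_id >= num)
		return -1; /* out-of-range id rejected, as the patch does */

	*out = offsets[msg_id];
	return 0;
}
```

The same check is duplicated in both the read and write paths of the patch because each path indexes a different table.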
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
Changed
@@ -171,11 +171,13 @@
 void amdtp_hid_remove(struct amdtp_cl_data *cli_data)
 {
 	int i;
+	struct amdtp_hid_data *hid_data;
 
 	for (i = 0; i < cli_data->num_hid_devices; ++i) {
 		if (cli_data->hid_sensor_hubs[i]) {
-			kfree(cli_data->hid_sensor_hubs[i]->driver_data);
+			hid_data = cli_data->hid_sensor_hubs[i]->driver_data;
 			hid_destroy_device(cli_data->hid_sensor_hubs[i]);
+			kfree(hid_data);
 			cli_data->hid_sensor_hubs[i] = NULL;
 		}
 	}
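The reordering above is the classic use-after-free fix: read the private data pointer out of the object *before* the object is destroyed, and free it only afterwards. A user-space analogue of the same ordering (all names here are hypothetical, not the driver's API):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for a device object that owns a private-data pointer. */
struct device {
	void *driver_data;
};

static int destroyed;

/* Stand-in for hid_destroy_device(): after this call the device
 * must not be dereferenced. */
static void destroy_device(struct device *dev)
{
	destroyed++;
	free(dev);
}

static void remove_one(struct device *dev)
{
	void *priv = dev->driver_data; /* save the pointer first */

	destroy_device(dev);           /* container is gone now */
	free(priv);                    /* safe: we kept our own copy */
}
```

Freeing `driver_data` before `hid_destroy_device()`, as the old code did, is unsafe because the destroy path may still reach the private data through the device.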
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/hwmon/hp-wmi-sensors.c
Changed
@@ -1637,6 +1637,8 @@
 		goto out_unlock;
 
 	wobj = out.pointer;
+	if (!wobj)
+		goto out_unlock;
 
 	err = populate_event_from_wobj(dev, &event, wobj);
 	if (err) {
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/hwmon/lm95234.c
Changed
@@ -301,7 +301,8 @@
 	if (ret < 0)
 		return ret;
 
-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, index ? 255 : 127);
+	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, (index ? 255 : 127) * 1000),
+				1000);
 
 	mutex_lock(&data->update_lock);
 	data->tcrit2[index] = val;
@@ -350,7 +351,7 @@
 	if (ret < 0)
 		return ret;
 
-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 255);
+	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 255000), 1000);
 
 	mutex_lock(&data->update_lock);
 	data->tcrit1[index] = val;
@@ -391,7 +392,7 @@
 	if (ret < 0)
 		return ret;
 
-	val = DIV_ROUND_CLOSEST(val, 1000);
+	val = DIV_ROUND_CLOSEST(clamp_val(val, -255000, 255000), 1000);
 	val = clamp_val((int)data->tcrit1[index] - val, 0, 31);
 
 	mutex_lock(&data->update_lock);
@@ -431,7 +432,7 @@
 		return ret;
 
 	/* Accuracy is 1/2 degrees C */
-	val = clamp_val(DIV_ROUND_CLOSEST(val, 500), -128, 127);
+	val = DIV_ROUND_CLOSEST(clamp_val(val, -64000, 63500), 500);
 
 	mutex_lock(&data->update_lock);
 	data->toffset[index] = val;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/hwmon/w83627ehf.c
Changed
@@ -895,7 +895,7 @@
 	if (err < 0)
 		return err;
 
-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 127);
+	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 127000), 1000);
 
 	mutex_lock(&data->update_lock);
 	data->target_temp[nr] = val;
@@ -920,7 +920,7 @@
 		return err;
 
 	/* Limit the temp to 0C - 15C */
-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 15);
+	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 15000), 1000);
 
 	mutex_lock(&data->update_lock);
 	reg = w83627ehf_read_value(data, W83627EHF_REG_TOLERANCE[nr]);
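This hunk and the lm95234 one apply the same transformation: clamp the user-supplied millidegree value to its representable range *first*, then divide, so the rounding addition inside DIV_ROUND_CLOSEST cannot overflow on extreme input. A user-space sketch with simplified stand-ins for the kernel macros (the real kernel versions also handle rounding of negative values, which this demonstration macro does not):

```c
#include <assert.h>
#include <limits.h>

/* Simplified user-space replicas of the kernel macros, for
 * non-negative dividends only (demonstration purposes). */
#define DIV_ROUND_CLOSEST(x, d) (((x) + ((d) / 2)) / (d))
#define clamp_val(v, lo, hi) \
	((v) < (lo) ? (lo) : ((v) > (hi) ? (hi) : (v)))

/* New order, as in the patch: clamp to the valid millidegree range
 * first, so the +500 rounding term can never overflow. With the old
 * order (divide first, clamp second) a value near LONG_MAX would
 * overflow inside DIV_ROUND_CLOSEST before the clamp ran. */
static long set_temp_new(long val)
{
	return DIV_ROUND_CLOSEST(clamp_val(val, 0L, 127000L), 1000L);
}
```

With the clamp applied first, even `LONG_MAX` input is reduced to 127000 millidegrees before any arithmetic, yielding the register limit of 127.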
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/i2c/busses/i2c-tegra.c
Changed
@@ -1804,9 +1804,9 @@
 	 * domain.
 	 *
 	 * VI I2C device shouldn't be marked as IRQ-safe because VI I2C won't
-	 * be used for atomic transfers.
+	 * be used for atomic transfers. ACPI device is not IRQ safe also.
 	 */
-	if (!IS_VI(i2c_dev))
+	if (!IS_VI(i2c_dev) && !has_acpi_companion(i2c_dev->dev))
 		pm_runtime_irq_safe(i2c_dev->dev);
 
 	pm_runtime_enable(i2c_dev->dev);
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/irqchip/irq-gic-v3-its.c
Changed
@@ -1096,6 +1096,7 @@
 					      struct its_cmd_block *cmd,
 					      struct its_cmd_desc *desc)
 {
+	struct its_vpe *vpe = valid_vpe(its, desc->its_vmapp_cmd.vpe);
 	unsigned long vpt_addr, vconf_addr;
 	u64 target;
 	bool alloc;
@@ -1108,6 +1109,13 @@
 		if (is_v4_1(its)) {
 			alloc = !atomic_dec_return(&desc->its_vmapp_cmd.vpe->vmapp_count);
 			its_encode_alloc(cmd, alloc);
+			/*
+			 * Unmapping a VPE is self-synchronizing on GICv4.1,
+			 * no need to issue a VSYNC.
+			 */
+			vpe = NULL;
+		} else if (is_v4(its)) {
+			vpe = NULL;
 		}
 
 		goto out;
@@ -1142,7 +1150,7 @@
 out:
 	its_fixup_cmd(cmd);
 
-	return valid_vpe(its, desc->its_vmapp_cmd.vpe);
+	return vpe;
 }
 
 static struct its_vpe *its_build_vmapti_cmd(struct its_node *its,
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/misc/fastrpc.c
Changed
@@ -1912,7 +1912,8 @@
 				      &args[0]);
 	if (err) {
 		dev_err(dev, "mmap error (len 0x%08llx)\n", buf->size);
-		goto err_invoke;
+		fastrpc_buf_free(buf);
+		return err;
 	}
 
 	/* update the buffer to be able to deallocate the memory on the DSP */
@@ -1950,8 +1951,6 @@
 
 err_assign:
 	fastrpc_req_munmap_impl(fl, buf);
-err_invoke:
-	fastrpc_buf_free(buf);
 
 	return err;
 }
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/misc/vmw_vmci/vmci_resource.c
Changed
@@ -144,7 +144,8 @@
 	spin_lock(&vmci_resource_table.lock);
 
 	hlist_for_each_entry(r, &vmci_resource_table.entries[idx], node) {
-		if (vmci_handle_is_equal(r->handle, resource->handle)) {
+		if (vmci_handle_is_equal(r->handle, resource->handle) &&
+		    resource->type == r->type) {
 			hlist_del_init_rcu(&r->node);
 			break;
 		}
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/mmc/core/mmc_test.c
Changed
@@ -3104,13 +3104,13 @@
 	test->buffer = kzalloc(BUFFER_SIZE, GFP_KERNEL);
 #ifdef CONFIG_HIGHMEM
 	test->highmem = alloc_pages(GFP_KERNEL | __GFP_HIGHMEM, BUFFER_ORDER);
+	if (!test->highmem) {
+		count = -ENOMEM;
+		goto free_test_buffer;
+	}
 #endif
 
-#ifdef CONFIG_HIGHMEM
-	if (test->buffer && test->highmem) {
-#else
 	if (test->buffer) {
-#endif
 		mutex_lock(&mmc_test_lock);
 		mmc_test_run(test, testcase);
 		mutex_unlock(&mmc_test_lock);
@@ -3118,6 +3118,7 @@
 
 #ifdef CONFIG_HIGHMEM
 	__free_pages(test->highmem, BUFFER_ORDER);
+free_test_buffer:
 #endif
 	kfree(test->buffer);
 	kfree(test);
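The mmc_test change converts the CONFIG_HIGHMEM allocation failure into a jump to a label that frees only what was already allocated. The underlying goto-unwind pattern can be sketched in user space (names below are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdlib.h>

static int freed_first;

/* Allocate two resources in order; on failure of the second, jump to
 * a label that releases only the earlier one, mirroring the
 * free_test_buffer label introduced in the patch. fail_second forces
 * the second allocation to fail for demonstration. */
static int setup_pair(size_t first_sz, size_t second_sz, int fail_second)
{
	void *first, *second;

	first = malloc(first_sz);
	if (!first)
		return -1;

	second = fail_second ? NULL : malloc(second_sz);
	if (!second)
		goto free_first;   /* unwind only what was set up */

	/* ... use both resources ... */
	free(second);
	free(first);
	return 0;

free_first:
	free(first);
	freed_first++;
	return -1;
}
```

Keeping a single reverse-order unwind path avoids the duplicated `#ifdef` branches the old mmc_test code needed.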
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/can/spi/mcp251x.c
Changed
@@ -753,7 +753,7 @@
 	int ret;
 
 	/* Force wakeup interrupt to wake device, but don't execute IST */
-	disable_irq(spi->irq);
+	disable_irq_nosync(spi->irq);
 	mcp251x_write_2regs(spi, CANINTE, CANINTE_WAKIE, CANINTF_WAKIF);
 
 	/* Wait for oscillator startup timer after wake up */
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/hisilicon/hns3/hnae3.h
Changed
@@ -607,6 +607,10 @@
  *   Execute debugfs read command.
  * request_flush_qb_config
  *   Request to update queue bonding configuration
+ * request_pfc_storm_config
+ *   Request to update pfc storm configuration
+ * get_pfc_storm_config
+ *   Get pfc storm config
  * query_fd_qb_state
  *   Query whether hw queue bonding enabled
  * set_tx_hwts_info
@@ -809,6 +813,9 @@
 	int (*set_phy_link_ksettings)(struct hnae3_handle *handle,
 				      const struct ethtool_link_ksettings *cmd);
 	void (*request_flush_qb_config)(struct hnae3_handle *handle);
+	void (*request_pfc_storm_config)(struct hnae3_handle *handle,
+					 bool enable);
+	int (*get_pfc_storm_config)(struct hnae3_handle *handle, bool *enable);
 	bool (*query_fd_qb_state)(struct hnae3_handle *handle);
 	bool (*set_tx_hwts_info)(struct hnae3_handle *handle,
 				 struct sk_buff *skb);
@@ -936,6 +943,7 @@
 	HNAE3_PFLAG_LIMIT_PROMISC,
 	HNAE3_PFLAG_FD_QB_ENABLE,
 	HNAE3_PFLAG_ROH_ARP_PROXY_ENABLE,
+	HNAE3_PFLAG_PFC_STORM_PREVENT_ENABLE,
 	HNAE3_PFLAG_MAX
 };
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
Changed
@@ -5600,6 +5600,18 @@
 			&handle->supported_pflags);
 		set_bit(HNAE3_PFLAG_ROH_ARP_PROXY_ENABLE, &handle->priv_flags);
 	}
+
+	if (handle->ae_algo->ops->get_pfc_storm_config) {
+		bool enable = true;
+		int ret = handle->ae_algo->ops->get_pfc_storm_config(handle,
+								     &enable);
+
+		set_bit(HNAE3_PFLAG_PFC_STORM_PREVENT_ENABLE,
+			&handle->supported_pflags);
+		if (!ret && enable)
+			set_bit(HNAE3_PFLAG_PFC_STORM_PREVENT_ENABLE,
+				&handle->priv_flags);
+	}
 }
 
 static void hns3_state_uninit(struct hnae3_handle *handle)
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
Changed
@@ -499,10 +499,22 @@
 		   enable ? "enable" : "disable");
 }
 
+static void hns3_update_pfc_storm_prevent_enable(struct net_device *netdev,
+						 bool enable)
+{
+	struct hnae3_handle *handle = hns3_get_handle(netdev);
+
+	if (!handle->ae_algo->ops->request_pfc_storm_config)
+		return;
+
+	handle->ae_algo->ops->request_pfc_storm_config(handle, enable);
+}
+
 static const struct hns3_pflag_desc hns3_priv_flags[HNAE3_PFLAG_MAX] = {
 	{ "limit_promisc",		hns3_update_limit_promisc_mode },
 	{ "qb_enable",			hns3_update_fd_qb_state },
 	{ "roh_arp_proxy_enable",	hns3_update_roh_arp_proxy_enable },
+	{ "pfc_storm_prevent_enable",	hns3_update_pfc_storm_prevent_enable },
 };
 
 static int hns3_get_sset_count(struct net_device *netdev, int stringset)
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
Changed
@@ -925,6 +925,15 @@
 	u8 rsv20;
 };
 
+struct hclge_pfc_storm_para_cmd {
+	__le32 dir;
+	__le32 enable;
+	__le32 period_ms;
+	__le32 times;
+	__le32 recovery_period_ms;
+	__le32 rsv;
+};
+
 struct hclge_hw;
 int hclge_cmd_send(struct hclge_hw *hw, struct hclge_desc *desc, int num);
 #endif
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ext.h
Changed
@@ -77,15 +77,6 @@
 	u8 rsv20;
 };
 
-struct hclge_pfc_storm_para_cmd {
-	__le32 dir;
-	__le32 enable;
-	__le32 period_ms;
-	__le32 times;
-	__le32 recovery_period_ms;
-	__le32 rsv;
-};
-
 struct hclge_notify_pkt_param_cmd {
 	__le32 cfg;
 	__le32 ipg;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
Changed
@@ -5498,6 +5498,86 @@
 	set_bit(HCLGE_VPORT_STATE_QB_CHANGE, &vport->state);
 }
 
+static int
+hclge_get_pfc_storm_prevent(struct hclge_dev *hdev, int dir, bool *enable)
+{
+	struct hclge_pfc_storm_para_cmd *para_cmd;
+	struct hclge_desc desc;
+	int ret;
+
+	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CFG_PAUSE_STORM_PARA, true);
+	para_cmd = (struct hclge_pfc_storm_para_cmd *)desc.data;
+	para_cmd->dir = cpu_to_le32(dir);
+	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+	if (ret)
+		return ret;
+
+	*enable = !!le32_to_cpu(para_cmd->enable);
+	return 0;
+}
+
+static int hclge_get_pfc_storm_config(struct hnae3_handle *handle, bool *enable)
+{
+	struct hclge_vport *vport = hclge_get_vport(handle);
+	struct hclge_dev *hdev = vport->back;
+	bool enable_tx, enable_rx;
+	int ret;
+
+	ret = hclge_get_pfc_storm_prevent(hdev, HCLGE_DIR_TX, &enable_tx);
+	if (ret) {
+		dev_err(&hdev->pdev->dev, "failed to get tx pfc storm prevent, ret=%d\n",
+			ret);
+		return ret;
+	}
+	ret = hclge_get_pfc_storm_prevent(hdev, HCLGE_DIR_RX, &enable_rx);
+	if (ret) {
+		dev_err(&hdev->pdev->dev, "failed to get rx pfc storm prevent, ret=%d\n",
+			ret);
+		return ret;
+	}
+
+	*enable = enable_tx || enable_rx;
+	return 0;
+}
+
+static int
+hclge_enable_pfc_storm_prevent(struct hclge_dev *hdev, int dir, bool enable)
+{
+	struct hclge_pfc_storm_para_cmd *para_cmd;
+	struct hclge_desc desc;
+
+	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CFG_PAUSE_STORM_PARA,
+				   false);
+	para_cmd = (struct hclge_pfc_storm_para_cmd *)desc.data;
+	para_cmd->dir = cpu_to_le32(dir);
+	para_cmd->enable = cpu_to_le32(enable);
+
+	return hclge_cmd_send(&hdev->hw, &desc, 1);
+}
+
+static void
+hclge_request_pfc_storm_config(struct hnae3_handle *handle, bool enable)
+{
+	struct hclge_vport *vport = hclge_get_vport(handle);
+	struct hclge_dev *hdev = vport->back;
+	int ret;
+
+	ret = hclge_enable_pfc_storm_prevent(hdev, HCLGE_DIR_TX, enable);
+	if (ret) {
+		dev_err(&hdev->pdev->dev, "failed to %s tx pfc storm prevent, ret=%d\n",
+			enable ? "enable" : "disable", ret);
+		return;
+	}
+
+	ret = hclge_enable_pfc_storm_prevent(hdev, HCLGE_DIR_RX, enable);
+	if (ret)
+		dev_err(&hdev->pdev->dev, "failed to %s rx pfc storm prevent, ret=%d\n",
+			enable ? "enable" : "disable", ret);
+	else
+		dev_info(&hdev->pdev->dev, "pfc storm prevent %s\n",
+			 enable ? "enabled" : "disabled");
+}
+
 static void hclge_sync_fd_state(struct hclge_dev *hdev)
 {
 	struct hclge_vport *vport = &hdev->vport[0];
@@ -10203,33 +10283,35 @@
 	return false;
 }
 
-int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en)
+static int __hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en)
 {
-	struct hclge_dev *hdev = vport->back;
 	bool need_en;
 	int ret;
 
-	mutex_lock(&hdev->vport_lock);
-
-	vport->req_vlan_fltr_en = request_en;
-
 	need_en = hclge_need_enable_vport_vlan_filter(vport);
-	if (need_en == vport->cur_vlan_fltr_en) {
-		mutex_unlock(&hdev->vport_lock);
+	if (need_en == vport->cur_vlan_fltr_en)
 		return 0;
-	}
 
 	ret = hclge_set_vport_vlan_filter(vport, need_en);
-	if (ret) {
-		mutex_unlock(&hdev->vport_lock);
+	if (ret)
 		return ret;
-	}
 
 	vport->cur_vlan_fltr_en = need_en;
 
+	return 0;
+}
+
+int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en)
+{
+	struct hclge_dev *hdev = vport->back;
+	int ret;
+
+	mutex_lock(&hdev->vport_lock);
+	vport->req_vlan_fltr_en = request_en;
+	ret = __hclge_enable_vport_vlan_filter(vport, request_en);
 	mutex_unlock(&hdev->vport_lock);
 
-	return 0;
+	return ret;
 }
 
 static int hclge_enable_vlan_filter(struct hnae3_handle *handle, bool enable)
@@ -11251,16 +11333,19 @@
 				       &vport->state))
 			continue;
 
-		ret = hclge_enable_vport_vlan_filter(vport,
-						     vport->req_vlan_fltr_en);
+		mutex_lock(&hdev->vport_lock);
+		ret = __hclge_enable_vport_vlan_filter(vport,
+						       vport->req_vlan_fltr_en);
 		if (ret) {
 			dev_err(&hdev->pdev->dev,
 				"failed to sync vlan filter state for vport%u, ret = %d\n",
 				vport->vport_id, ret);
 			set_bit(HCLGE_VPORT_STATE_VLAN_FLTR_CHANGE,
 				&vport->state);
+			mutex_unlock(&hdev->vport_lock);
 			return;
 		}
+		mutex_unlock(&hdev->vport_lock);
 	}
 }
 
@@ -12248,8 +12333,8 @@
 		dev_err(&hdev->pdev->dev, "fail to rebuild, ret=%d\n", ret);
 
 	hdev->reset_type = HNAE3_NONE_RESET;
-	clear_bit(HCLGE_STATE_RST_HANDLING, &hdev->state);
-	up(&hdev->reset_sem);
+	if (test_and_clear_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
+		up(&hdev->reset_sem);
 }
 
 static void hclge_clear_resetting_state(struct hclge_dev *hdev)
@@ -13510,6 +13595,8 @@
 	.set_promisc_mode = hclge_set_promisc_mode,
 	.request_update_promisc_mode = hclge_request_update_promisc_mode,
 	.request_flush_qb_config = hclge_flush_qb_config,
+	.request_pfc_storm_config = hclge_request_pfc_storm_config,
+	.get_pfc_storm_config = hclge_get_pfc_storm_config,
 	.query_fd_qb_state = hclge_query_fd_qb_state,
 	.set_loopback = hclge_set_loopback,
 	.start = hclge_ae_start,
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
Changed
@@ -347,6 +347,9 @@
 #define HCLGE_LINK_STATUS_DOWN	0
 #define HCLGE_LINK_STATUS_UP	1
 
+#define HCLGE_DIR_RX	0
+#define HCLGE_DIR_TX	1
+
 #define HCLGE_PG_NUM		4
 #define HCLGE_SCH_MODE_SP	0
 #define HCLGE_SCH_MODE_DWRR	1
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
Changed
@@ -487,7 +487,7 @@
 
 		ret = hclge_ptp_get_cycle(hdev);
 		if (ret)
-			return ret;
+			goto out;
 	}
 
 	ret = hclge_ptp_int_en(hdev, true);
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c
Changed
@@ -510,9 +510,9 @@
 static int hclge_fetch_pf_reg(struct hclge_dev *hdev, void *data,
 			      struct hnae3_knic_private_info *kinfo)
 {
-#define HCLGE_RING_REG_OFFSET		0x200
 #define HCLGE_RING_INT_REG_OFFSET	0x4
 
+	struct hnae3_queue *tqp;
 	int i, j, reg_num;
 	int data_num_sum;
 	u32 *reg = data;
@@ -533,10 +533,11 @@
 	reg_num = ARRAY_SIZE(ring_reg_addr_list);
 	for (j = 0; j < kinfo->num_tqps; j++) {
 		reg += hclge_reg_get_tlv(HCLGE_REG_TAG_RING, reg_num, reg);
+		tqp = kinfo->tqp[j];
 		for (i = 0; i < reg_num; i++)
-			*reg++ = hclge_read_dev(&hdev->hw,
-						ring_reg_addr_list[i] +
-						HCLGE_RING_REG_OFFSET * j);
+			*reg++ = readl_relaxed(tqp->io_base -
+					       HCLGE_TQP_REG_OFFSET +
+					       ring_reg_addr_list[i]);
 	}
 
 	data_num_sum += (reg_num + HCLGE_REG_TLV_SPACE) * kinfo->num_tqps;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
Changed
@@ -1841,8 +1841,8 @@
 			 ret);
 
 	hdev->reset_type = HNAE3_NONE_RESET;
-	clear_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state);
-	up(&hdev->reset_sem);
+	if (test_and_clear_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state))
+		up(&hdev->reset_sem);
 }
 
 static u32 hclgevf_get_fw_version(struct hnae3_handle *handle)
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c
Changed
@@ -123,40 +123,41 @@
 void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version,
 		      void *data)
 {
-#define HCLGEVF_RING_REG_OFFSET		0x200
 #define HCLGEVF_RING_INT_REG_OFFSET	0x4
 
 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
-	int i, j, reg_um;
+	struct hnae3_queue *tqp;
+	int i, j, reg_num;
 	u32 *reg = data;
 
 	*version = hdev->fw_version;
 	reg += hclgevf_reg_get_header(reg);
 
 	/* fetching per-VF registers values from VF PCIe register space */
-	reg_um = sizeof(cmdq_reg_addr_list) / sizeof(u32);
-	reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_CMDQ, reg_um, reg);
-	for (i = 0; i < reg_um; i++)
+	reg_num = sizeof(cmdq_reg_addr_list) / sizeof(u32);
+	reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_CMDQ, reg_num, reg);
+	for (i = 0; i < reg_num; i++)
 		*reg++ = hclgevf_read_dev(&hdev->hw, cmdq_reg_addr_list[i]);
 
-	reg_um = sizeof(common_reg_addr_list) / sizeof(u32);
-	reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_COMMON, reg_um, reg);
-	for (i = 0; i < reg_um; i++)
+	reg_num = sizeof(common_reg_addr_list) / sizeof(u32);
+	reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_COMMON, reg_num, reg);
+	for (i = 0; i < reg_num; i++)
 		*reg++ = hclgevf_read_dev(&hdev->hw, common_reg_addr_list[i]);
 
-	reg_um = sizeof(ring_reg_addr_list) / sizeof(u32);
+	reg_num = sizeof(ring_reg_addr_list) / sizeof(u32);
 	for (j = 0; j < hdev->num_tqps; j++) {
-		reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_RING, reg_um, reg);
-		for (i = 0; i < reg_um; i++)
-			*reg++ = hclgevf_read_dev(&hdev->hw,
-						  ring_reg_addr_list[i] +
-						  HCLGEVF_RING_REG_OFFSET * j);
+		reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_RING, reg_num, reg);
+		tqp = &hdev->htqp[j].q;
+		for (i = 0; i < reg_num; i++)
+			*reg++ = readl_relaxed(tqp->io_base -
					       HCLGEVF_TQP_REG_OFFSET +
					       ring_reg_addr_list[i]);
 	}
 
-	reg_um = sizeof(tqp_intr_reg_addr_list) / sizeof(u32);
+	reg_num = sizeof(tqp_intr_reg_addr_list) / sizeof(u32);
 	for (j = 0; j < hdev->num_msi_used - 1; j++) {
-		reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_TQP_INTR, reg_um, reg);
-		for (i = 0; i < reg_um; i++)
+		reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_TQP_INTR, reg_num, reg);
+		for (i = 0; i < reg_num; i++)
 			*reg++ = hclgevf_read_dev(&hdev->hw,
 						  tqp_intr_reg_addr_list[i] +
 						  HCLGEVF_RING_INT_REG_OFFSET * j);
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/huawei/bma/cdev_drv/bma_cdev.c
Changed
@@ -28,7 +28,7 @@
 #ifdef DRV_VERSION
 #define CDEV_VERSION MICRO_TO_STR(DRV_VERSION)
 #else
-#define CDEV_VERSION "0.3.7"
+#define CDEV_VERSION "0.3.8"
 #endif
 
 #define CDEV_DEFAULT_NUM 4
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/huawei/bma/edma_drv/bma_pci.h
Changed
@@ -71,7 +71,7 @@
 #ifdef DRV_VERSION
 #define BMA_VERSION MICRO_TO_STR(DRV_VERSION)
 #else
-#define BMA_VERSION "0.3.7"
+#define BMA_VERSION "0.3.8"
 #endif
 
 #ifdef CONFIG_ARM64
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/huawei/bma/edma_drv/edma_host.c
Changed
@@ -1319,47 +1319,15 @@
 	edma_host->pdev = bma_dev->bma_pci_dev->pdev;
 
-#ifdef EDMA_TIMER
-	#ifdef HAVE_TIMER_SETUP
-	timer_setup(&edma_host->timer, edma_host_timeout, 0);
-	#else
-	setup_timer(&edma_host->timer, edma_host_timeout,
-		    (unsigned long)edma_host);
-	#endif
-	(void)mod_timer(&edma_host->timer, jiffies_64 + TIMER_INTERVAL_CHECK);
-#ifdef USE_DMA
-	#ifdef HAVE_TIMER_SETUP
-	timer_setup(&edma_host->dma_timer, edma_host_dma_timeout, 0);
-
-	#else
-	setup_timer(&edma_host->dma_timer, edma_host_dma_timeout,
-		    (unsigned long)edma_host);
-	#endif
-	(void)mod_timer(&edma_host->dma_timer,
-			jiffies_64 + DMA_TIMER_INTERVAL_CHECK);
-#endif
-
-#else
-	init_completion(&edma_host->msg_ready);
-
-	edma_host->edma_thread =
-	    kthread_run(edma_host_thread, (void *)edma_host, "edma_host_msg");
-
-	if (IS_ERR(edma_host->edma_thread)) {
-		BMA_LOG(DLOG_ERROR, "kernel_run edma_host_msg failed\n");
-		return PTR_ERR(edma_host->edma_thread);
-	}
-#endif
-
 	edma_host->msg_send_buf = kmalloc(HOST_MAX_SEND_MBX_LEN, GFP_KERNEL);
 	if (!edma_host->msg_send_buf) {
 		BMA_LOG(DLOG_ERROR, "malloc msg_send_buf failed!");
 		ret = -ENOMEM;
-		goto failed1;
+		return ret;
 	}
 
 	edma_host->msg_send_write = 0;
-
+	/* init send_msg_lock before timer setup */
 	spin_lock_init(&edma_host->send_msg_lock);
 
 	tasklet_init(&edma_host->tasklet,
@@ -1392,6 +1360,38 @@
 	edma_host->h2b_state = H2BSTATE_IDLE;
 	edma_host->b2h_state = B2HSTATE_IDLE;
 
+#ifdef EDMA_TIMER
+	#ifdef HAVE_TIMER_SETUP
+	timer_setup(&edma_host->timer, edma_host_timeout, 0);
+	#else
+	setup_timer(&edma_host->timer, edma_host_timeout,
+		    (unsigned long)edma_host);
+	#endif
+	(void)mod_timer(&edma_host->timer, jiffies_64 + TIMER_INTERVAL_CHECK);
+#ifdef USE_DMA
+	#ifdef HAVE_TIMER_SETUP
+	timer_setup(&edma_host->dma_timer, edma_host_dma_timeout, 0);
+
+	#else
+	setup_timer(&edma_host->dma_timer, edma_host_dma_timeout,
+		    (unsigned long)edma_host);
+	#endif
+	(void)mod_timer(&edma_host->dma_timer,
+			jiffies_64 + DMA_TIMER_INTERVAL_CHECK);
+#endif
+
+#else
+	init_completion(&edma_host->msg_ready);
+
+	edma_host->edma_thread =
+	    kthread_run(edma_host_thread, (void *)edma_host, "edma_host_msg");
+
+	if (IS_ERR(edma_host->edma_thread)) {
+		BMA_LOG(DLOG_ERROR, "kernel_run edma_host_msg failed\n");
+		return PTR_ERR(edma_host->edma_thread);
+	}
+#endif
+
 	#ifdef HAVE_TIMER_SETUP
 	timer_setup(&edma_host->heartbeat_timer,
 		    edma_host_heartbeat_timer, 0);
@@ -1415,18 +1415,6 @@
 	BMA_LOG(DLOG_ERROR, "thread ok\n");
 #endif
 	return 0;
-
-failed1:
-#ifdef EDMA_TIMER
-	(void)del_timer_sync(&edma_host->timer);
-#ifdef USE_DMA
-	(void)del_timer_sync(&edma_host->dma_timer);
-#endif
-#else
-	kthread_stop(edma_host->edma_thread);
-	complete(&edma_host->msg_ready);
-#endif
-	return ret;
 }
 
 void edma_host_cleanup(struct edma_host_s *edma_host)
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/huawei/bma/kbox_drv/kbox_include.h
Changed
@@ -23,7 +23,7 @@
 #ifdef DRV_VERSION
 #define KBOX_VERSION MICRO_TO_STR(DRV_VERSION)
 #else
-#define KBOX_VERSION "0.3.7"
+#define KBOX_VERSION "0.3.8"
 #endif
 
 #define UNUSED(x) (x = x)
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/huawei/bma/veth_drv/veth_hb.h
Changed
@@ -31,7 +31,7 @@
 #ifdef DRV_VERSION
 #define VETH_VERSION MICRO_TO_STR(DRV_VERSION)
 #else
-#define VETH_VERSION "0.3.7"
+#define VETH_VERSION "0.3.8"
 #endif
 
 #define MODULE_NAME "veth"
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
Changed
@@ -2369,7 +2369,8 @@
 	if (flush)
 		mlx5e_shampo_flush_skb(rq, cqe, match);
 free_hd_entry:
-	mlx5e_free_rx_shampo_hd_entry(rq, header_index);
+	if (likely(head_size))
+		mlx5e_free_rx_shampo_hd_entry(rq, header_index);
 mpwrq_cqe_out:
 	if (likely(wi->consumed_strides < rq->mpwqe.num_strides))
 		return;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
Changed
@@ -319,7 +319,7 @@
 		return -EPERM;
 
 	mutex_lock(&esw->state_lock);
-	if (esw->mode != MLX5_ESWITCH_LEGACY) {
+	if (esw->mode != MLX5_ESWITCH_LEGACY || !mlx5_esw_is_fdb_created(esw)) {
 		err = -EOPNOTSUPP;
 		goto out;
 	}
@@ -339,7 +339,7 @@
 	if (!mlx5_esw_allowed(esw))
 		return -EPERM;
 
-	if (esw->mode != MLX5_ESWITCH_LEGACY)
+	if (esw->mode != MLX5_ESWITCH_LEGACY || !mlx5_esw_is_fdb_created(esw))
 		return -EOPNOTSUPP;
 
 	*setting = esw->fdb_table.legacy.vepa_uplink_rule ? 1 : 0;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/ethernet/microsoft/mana/mana_en.c
Changed
@@ -1774,7 +1774,6 @@
 static int mana_cq_handler(void *context, struct gdma_queue *gdma_queue)
 {
 	struct mana_cq *cq = context;
-	u8 arm_bit;
 	int w;
 
 	WARN_ON_ONCE(cq->gdma_cq != gdma_queue);
@@ -1785,16 +1784,23 @@
 		mana_poll_tx_cq(cq);
 
 	w = cq->work_done;
-
-	if (w < cq->budget &&
-	    napi_complete_done(&cq->napi, w)) {
-		arm_bit = SET_ARM_BIT;
-	} else {
-		arm_bit = 0;
+	cq->work_done_since_doorbell += w;
+
+	if (w < cq->budget) {
+		mana_gd_ring_cq(gdma_queue, SET_ARM_BIT);
+		cq->work_done_since_doorbell = 0;
+		napi_complete_done(&cq->napi, w);
+	} else if (cq->work_done_since_doorbell >
+		   cq->gdma_cq->queue_size / COMP_ENTRY_SIZE * 4) {
+		/* MANA hardware requires at least one doorbell ring every 8
+		 * wraparounds of CQ even if there is no need to arm the CQ.
+		 * This driver rings the doorbell as soon as we have exceeded
+		 * 4 wraparounds.
+		 */
+		mana_gd_ring_cq(gdma_queue, 0);
+		cq->work_done_since_doorbell = 0;
 	}
 
-	mana_gd_ring_cq(gdma_queue, arm_bit);
-
 	return w;
 }
@@ -1848,10 +1854,12 @@
 	for (i = 0; i < apc->num_queues; i++) {
 		napi = &apc->tx_qp[i].tx_cq.napi;
-		napi_synchronize(napi);
-		napi_disable(napi);
-		netif_napi_del(napi);
-
+		if (apc->tx_qp[i].txq.napi_initialized) {
+			napi_synchronize(napi);
+			napi_disable(napi);
+			netif_napi_del(napi);
+			apc->tx_qp[i].txq.napi_initialized = false;
+		}
 		mana_destroy_wq_obj(apc, GDMA_SQ, apc->tx_qp[i].tx_object);
 
 		mana_deinit_cq(apc, &apc->tx_qp[i].tx_cq);
@@ -1907,6 +1915,7 @@
 		txq->ndev = net;
 		txq->net_txq = netdev_get_tx_queue(net, i);
 		txq->vp_offset = apc->tx_vp_offset;
+		txq->napi_initialized = false;
 		skb_queue_head_init(&txq->pending_skbs);
 
 		memset(&spec, 0, sizeof(spec));
@@ -1973,6 +1982,7 @@
 		netif_napi_add_tx(net, &cq->napi, mana_poll);
 		napi_enable(&cq->napi);
+		txq->napi_initialized = true;
 
 		mana_gd_ring_cq(cq->gdma_cq, SET_ARM_BIT);
 	}
@@ -1984,7 +1994,7 @@
 }
 
 static void mana_destroy_rxq(struct mana_port_context *apc,
-			     struct mana_rxq *rxq, bool validate_state)
+			     struct mana_rxq *rxq, bool napi_initialized)
 {
 	struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
@@ -1999,15 +2009,15 @@
 	napi = &rxq->rx_cq.napi;
 
-	if (validate_state)
+	if (napi_initialized) {
 		napi_synchronize(napi);
-
-	napi_disable(napi);
+		napi_disable(napi);
+		netif_napi_del(napi);
+	}
 	xdp_rxq_info_unreg(&rxq->xdp_rxq);
 
-	netif_napi_del(napi);
-
 	mana_destroy_wq_obj(apc, GDMA_RQ, rxq->rxobj);
 
 	mana_deinit_cq(apc, &rxq->rx_cq);
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/phy/phy_device.c
Changed
@@ -3164,11 +3164,13 @@
 		err = of_phy_led(phydev, led);
 		if (err) {
 			of_node_put(led);
+			of_node_put(leds);
 			phy_leds_unregister(phydev);
 			return err;
 		}
 	}
 
+	of_node_put(leds);
 	return 0;
 }
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/wireless/ath/ath12k/mac.c
Changed
@@ -3355,6 +3355,11 @@
 	ath12k_peer_assoc_prepare(ar, vif, sta, &peer_arg, reassoc);
 
+	if (peer_arg.peer_nss < 1) {
+		ath12k_warn(ar->ab,
+			    "invalid peer NSS %d\n", peer_arg.peer_nss);
+		return -EINVAL;
+	}
 	ret = ath12k_wmi_send_peer_assoc_cmd(ar, &peer_arg);
 	if (ret) {
 		ath12k_warn(ar->ab, "failed to run peer assoc for STA %pM vdev %i: %d\n",
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/wireless/marvell/mwifiex/main.h
Changed
@@ -1290,6 +1290,9 @@ for (i = 0; i < adapter->priv_num; i++) { if (adapter->priv[i]) { + if (adapter->priv[i]->bss_mode == NL80211_IFTYPE_UNSPECIFIED) + continue; + if ((adapter->priv[i]->bss_num == bss_num) && (adapter->priv[i]->bss_type == bss_type)) break;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/net/wireless/realtek/rtw88/usb.c
Changed
@@ -742,7 +742,6 @@ static int rtw_usb_init_rx(struct rtw_dev *rtwdev) { struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev); - int i; rtwusb->rxwq = create_singlethread_workqueue("rtw88_usb: rx wq"); if (!rtwusb->rxwq) { @@ -754,13 +753,19 @@ INIT_WORK(&rtwusb->rx_work, rtw_usb_rx_handler); + return 0; +} + +static void rtw_usb_setup_rx(struct rtw_dev *rtwdev) +{ + struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev); + int i; + for (i = 0; i < RTW_USB_RXCB_NUM; i++) { struct rx_usb_ctrl_block *rxcb = &rtwusb->rx_cb[i]; rtw_usb_rx_resubmit(rtwusb, rxcb); } - - return 0; } static void rtw_usb_deinit_rx(struct rtw_dev *rtwdev) @@ -897,6 +902,8 @@ goto err_destroy_rxwq; } + rtw_usb_setup_rx(rtwdev); + return 0; err_destroy_rxwq:
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/nvme/target/tcp.c
Changed
@@ -1858,8 +1858,10 @@ } queue->nr_cmds = sq->size * 2; - if (nvmet_tcp_alloc_cmds(queue)) + if (nvmet_tcp_alloc_cmds(queue)) { + queue->nr_cmds = 0; return NVME_SC_INTERNAL; + } return 0; }
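The nvmet-tcp fix pairs the failed allocation with a reset of `nr_cmds`, so later teardown does not iterate over commands that were never allocated. A simplified sketch of that invariant (struct and sizes are illustrative, not the nvmet types):

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified queue mirroring the nvmet-tcp fix: if allocating the command
 * array fails, nr_cmds must be reset so cleanup code does not walk a
 * non-existent array. */
struct queue {
	int nr_cmds;
	void *cmds;
};

static int alloc_cmds(struct queue *q, int want, int simulate_oom)
{
	q->nr_cmds = want * 2;
	q->cmds = simulate_oom ? NULL : calloc((size_t)q->nr_cmds, 64);
	if (!q->cmds) {
		q->nr_cmds = 0;  /* keep state consistent with the failure */
		return -1;
	}
	return 0;
}
```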
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/pci/hotplug/pnv_php.c
Changed
@@ -39,7 +39,6 @@ bool disable_device) { struct pci_dev *pdev = php_slot->pdev; - int irq = php_slot->irq; u16 ctrl; if (php_slot->irq > 0) { @@ -58,7 +57,7 @@ php_slot->wq = NULL; } - if (disable_device || irq > 0) { + if (disable_device) { if (pdev->msix_enabled) pci_disable_msix(pdev); else if (pdev->msi_enabled)
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/pci/pci.c
Changed
@@ -5737,10 +5737,12 @@ { struct pci_dev *dev; + pci_dev_lock(bus->self); list_for_each_entry(dev, &bus->devices, bus_list) { - pci_dev_lock(dev); if (dev->subordinate) pci_bus_lock(dev->subordinate); + else + pci_dev_lock(dev); } } @@ -5752,8 +5754,10 @@ list_for_each_entry(dev, &bus->devices, bus_list) { if (dev->subordinate) pci_bus_unlock(dev->subordinate); - pci_dev_unlock(dev); + else + pci_dev_unlock(dev); } + pci_dev_unlock(bus->self); } /* Return 1 on successful lock, 0 on contention */ @@ -5761,15 +5765,15 @@ { struct pci_dev *dev; + if (!pci_dev_trylock(bus->self)) + return 0; + list_for_each_entry(dev, &bus->devices, bus_list) { - if (!pci_dev_trylock(dev)) - goto unlock; if (dev->subordinate) { - if (!pci_bus_trylock(dev->subordinate)) { - pci_dev_unlock(dev); + if (!pci_bus_trylock(dev->subordinate)) goto unlock; - } - } + } else if (!pci_dev_trylock(dev)) + goto unlock; } return 1; @@ -5777,8 +5781,10 @@ list_for_each_entry_continue_reverse(dev, &bus->devices, bus_list) { if (dev->subordinate) pci_bus_unlock(dev->subordinate); - pci_dev_unlock(dev); + else + pci_dev_unlock(dev); } + pci_dev_unlock(bus->self); return 0; } @@ -5810,9 +5816,10 @@ list_for_each_entry(dev, &slot->bus->devices, bus_list) { if (!dev->slot || dev->slot != slot) continue; - pci_dev_lock(dev); if (dev->subordinate) pci_bus_lock(dev->subordinate); + else + pci_dev_lock(dev); } } @@ -5838,14 +5845,13 @@ list_for_each_entry(dev, &slot->bus->devices, bus_list) { if (!dev->slot || dev->slot != slot) continue; - if (!pci_dev_trylock(dev)) - goto unlock; if (dev->subordinate) { if (!pci_bus_trylock(dev->subordinate)) { pci_dev_unlock(dev); goto unlock; } - } + } else if (!pci_dev_trylock(dev)) + goto unlock; } return 1; @@ -5856,7 +5862,8 @@ continue; if (dev->subordinate) pci_bus_unlock(dev->subordinate); - pci_dev_unlock(dev); + else + pci_dev_unlock(dev); } return 0; }
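The pci.c rework keeps the all-or-nothing behaviour of `pci_bus_trylock()`: on contention, every lock taken so far is released in reverse order before reporting failure. A userspace sketch of that pattern with POSIX mutexes (not the kernel's `pci_dev_trylock()`):

```c
#include <assert.h>
#include <pthread.h>

/* Sketch of the trylock-with-rollback pattern: try each lock in order
 * and, on contention, release in reverse order everything taken so far. */
static int trylock_all(pthread_mutex_t *locks, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (pthread_mutex_trylock(&locks[i]) != 0) {
			while (--i >= 0)
				pthread_mutex_unlock(&locks[i]);
			return 0;   /* contention: caller retries later */
		}
	}
	return 1;                   /* all locks held */
}

static void unlock_all(pthread_mutex_t *locks, int n)
{
	while (--n >= 0)
		pthread_mutex_unlock(&locks[n]);
}
```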
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/platform/x86/intel/ifs/core.c
Changed
@@ -33,6 +33,7 @@ static const struct ifs_test_caps scan_test = { .integrity_cap_bit = MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT, .test_num = IFS_TYPE_SAF, + .image_suffix = "scan", }; static const struct ifs_test_caps array_test = { @@ -40,9 +41,32 @@ .test_num = IFS_TYPE_ARRAY_BIST, }; +static const struct ifs_test_msrs scan_msrs = { + .copy_hashes = MSR_COPY_SCAN_HASHES, + .copy_hashes_status = MSR_SCAN_HASHES_STATUS, + .copy_chunks = MSR_AUTHENTICATE_AND_COPY_CHUNK, + .copy_chunks_status = MSR_CHUNKS_AUTHENTICATION_STATUS, + .test_ctrl = MSR_SAF_CTRL, +}; + +static const struct ifs_test_msrs sbaf_msrs = { + .copy_hashes = MSR_COPY_SBAF_HASHES, + .copy_hashes_status = MSR_SBAF_HASHES_STATUS, + .copy_chunks = MSR_AUTHENTICATE_AND_COPY_SBAF_CHUNK, + .copy_chunks_status = MSR_SBAF_CHUNKS_AUTHENTICATION_STATUS, + .test_ctrl = MSR_SBAF_CTRL, +}; + +static const struct ifs_test_caps sbaf_test = { + .integrity_cap_bit = MSR_INTEGRITY_CAPS_SBAF_BIT, + .test_num = IFS_TYPE_SBAF, + .image_suffix = "sbft", +}; + static struct ifs_device ifs_devices = { [IFS_TYPE_SAF] = { .test_caps = &scan_test, + .test_msrs = &scan_msrs, .misc = { .name = "intel_ifs_0", .minor = MISC_DYNAMIC_MINOR, @@ -57,6 +81,15 @@ .groups = plat_ifs_array_groups, }, }, + [IFS_TYPE_SBAF] = { + .test_caps = &sbaf_test, + .test_msrs = &sbaf_msrs, + .misc = { + .name = "intel_ifs_2", + .minor = MISC_DYNAMIC_MINOR, + .groups = plat_ifs_groups, + }, + }, }; #define IFS_NUMTESTS ARRAY_SIZE(ifs_devices)
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/platform/x86/intel/ifs/ifs.h
Changed
@@ -126,11 +126,40 @@ * The driver does not make use of this, it only tests one core at a time. * * .. #f1 https://github.com/intel/TBD + * + * + * Structural Based Functional Test at Field (SBAF): + * ------------------------------------------------- + * + * SBAF is a new type of testing that provides comprehensive core test + * coverage complementing Scan at Field (SAF) testing. SBAF mimics the + * manufacturing screening environment and leverages the same test suite. + * It makes use of Design For Test (DFT) observation sites and features + * to maximize coverage in minimum time. + * + * Similar to the SAF test, SBAF isolates the core under test from the + * rest of the system during execution. Upon completion, the core + * seamlessly resets to its pre-test state and resumes normal operation. + * Any machine checks or hangs encountered during the test are confined to + * the isolated core, preventing disruption to the overall system. + * + * Like the SAF test, the SBAF test is also divided into multiple batches, + * and each batch test can take hundreds of milliseconds (100-200 ms) to + * complete. If such a lengthy interruption is undesirable, it is + * recommended to relocate the time-sensitive applications to other cores. 
*/ #include <linux/device.h> #include <linux/miscdevice.h> #define MSR_ARRAY_BIST 0x00000105 + +#define MSR_COPY_SBAF_HASHES 0x000002b8 +#define MSR_SBAF_HASHES_STATUS 0x000002b9 +#define MSR_AUTHENTICATE_AND_COPY_SBAF_CHUNK 0x000002ba +#define MSR_SBAF_CHUNKS_AUTHENTICATION_STATUS 0x000002bb +#define MSR_ACTIVATE_SBAF 0x000002bc +#define MSR_SBAF_STATUS 0x000002bd + #define MSR_COPY_SCAN_HASHES 0x000002c2 #define MSR_SCAN_HASHES_STATUS 0x000002c3 #define MSR_AUTHENTICATE_AND_COPY_CHUNK 0x000002c4 @@ -140,6 +169,7 @@ #define MSR_ARRAY_TRIGGER 0x000002d6 #define MSR_ARRAY_STATUS 0x000002d7 #define MSR_SAF_CTRL 0x000004f0 +#define MSR_SBAF_CTRL 0x000004f8 #define SCAN_NOT_TESTED 0 #define SCAN_TEST_PASS 1 @@ -147,6 +177,7 @@ #define IFS_TYPE_SAF 0 #define IFS_TYPE_ARRAY_BIST 1 +#define IFS_TYPE_SBAF 2 #define ARRAY_GEN0 0 #define ARRAY_GEN1 1 @@ -196,7 +227,8 @@ u16 valid_chunks; u16 total_chunks; u32 error_code :8; - u32 rsvd2 :24; + u32 rsvd2 :8; + u32 max_bundle :16; }; }; @@ -253,6 +285,34 @@ }; }; +/* MSR_ACTIVATE_SBAF bit fields */ +union ifs_sbaf { + u64 data; + struct { + u32 bundle_idx :9; + u32 rsvd1 :5; + u32 pgm_idx :2; + u32 rsvd2 :16; + u32 delay :31; + u32 sigmce :1; + }; +}; + +/* MSR_SBAF_STATUS bit fields */ +union ifs_sbaf_status { + u64 data; + struct { + u32 bundle_idx :9; + u32 rsvd1 :5; + u32 pgm_idx :2; + u32 rsvd2 :16; + u32 error_code :8; + u32 rsvd3 :21; + u32 test_fail :1; + u32 sbaf_status :2; + }; +}; + /* * Driver populated error-codes * 0xFD: Test timed out before completing all the chunks. 
@@ -261,9 +321,28 @@ #define IFS_SW_TIMEOUT 0xFD #define IFS_SW_PARTIAL_COMPLETION 0xFE +#define IFS_SUFFIX_SZ 5 + struct ifs_test_caps { int integrity_cap_bit; int test_num; + char image_suffix[IFS_SUFFIX_SZ]; +}; + +/** + * struct ifs_test_msrs - MSRs used in IFS tests + * @copy_hashes: Copy test hash data + * @copy_hashes_status: Status of copied test hash data + * @copy_chunks: Copy chunks of the test data + * @copy_chunks_status: Status of the copied test data chunks + * @test_ctrl: Control the test attributes + */ +struct ifs_test_msrs { + u32 copy_hashes; + u32 copy_hashes_status; + u32 copy_chunks; + u32 copy_chunks_status; + u32 test_ctrl; }; /** @@ -278,6 +357,7 @@ * @generation: IFS test generation enumerated by hardware * @chunk_size: size of a test chunk * @array_gen: test generation of array test + * @max_bundle: maximum bundle index */ struct ifs_data { int loaded_version; @@ -290,6 +370,7 @@ u32 generation; u32 chunk_size; u32 array_gen; + u32 max_bundle; }; struct ifs_work { @@ -299,6 +380,7 @@ struct ifs_device { const struct ifs_test_caps *test_caps; + const struct ifs_test_msrs *test_msrs; struct ifs_data rw_data; struct miscdevice misc; }; @@ -319,6 +401,14 @@ return d->test_caps; } +static inline const struct ifs_test_msrs *ifs_get_test_msrs(struct device *dev) +{ + struct miscdevice *m = dev_get_drvdata(dev); + struct ifs_device *d = container_of(m, struct ifs_device, misc); + + return d->test_msrs; +} + extern bool *ifs_pkg_auth; int ifs_load_firmware(struct device *dev); int do_core_test(int cpu, struct device *dev);
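The new status unions in ifs.h rely on the compiler packing bitfields in declaration order within the 64-bit word. A userspace mirror of the `MSR_SBAF_STATUS` layout, assuming GCC/Clang on little-endian x86-64 (the environment the kernel itself targets here; bitfield packing is otherwise implementation-defined):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace mirror of the MSR_SBAF_STATUS bitfield layout added above.
 * Matches GCC/Clang packing on little-endian x86-64: the first five
 * fields occupy bits 0-31, the rest bits 32-63. */
union sbaf_status {
	uint64_t data;
	struct {
		uint32_t bundle_idx  :9;
		uint32_t rsvd1       :5;
		uint32_t pgm_idx     :2;
		uint32_t rsvd2       :16;
		uint32_t error_code  :8;
		uint32_t rsvd3       :21;
		uint32_t test_fail   :1;
		uint32_t sbaf_status :2;
	};
};
```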
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/platform/x86/intel/ifs/load.c
Changed
@@ -118,15 +118,17 @@ union ifs_scan_hashes_status hashes_status; union ifs_chunks_auth_status chunk_status; struct device *dev = local_work->dev; + const struct ifs_test_msrs *msrs; int i, num_chunks, chunk_size; struct ifs_data *ifsd; u64 linear_addr, base; u32 err_code; ifsd = ifs_get_data(dev); + msrs = ifs_get_test_msrs(dev); /* run scan hash copy */ - wrmsrl(MSR_COPY_SCAN_HASHES, ifs_hash_ptr); - rdmsrl(MSR_SCAN_HASHES_STATUS, hashes_status.data); + wrmsrl(msrs->copy_hashes, ifs_hash_ptr); + rdmsrl(msrs->copy_hashes_status, hashes_status.data); /* enumerate the scan image information */ num_chunks = hashes_status.num_chunks; @@ -147,8 +149,8 @@ linear_addr = base + i * chunk_size; linear_addr |= i; - wrmsrl(MSR_AUTHENTICATE_AND_COPY_CHUNK, linear_addr); - rdmsrl(MSR_CHUNKS_AUTHENTICATION_STATUS, chunk_status.data); + wrmsrl(msrs->copy_chunks, linear_addr); + rdmsrl(msrs->copy_chunks_status, chunk_status.data); ifsd->valid_chunks = chunk_status.valid_chunks; err_code = chunk_status.error_code; @@ -180,6 +182,7 @@ union ifs_scan_hashes_status_gen2 hashes_status; union ifs_chunks_auth_status_gen2 chunk_status; u32 err_code, valid_chunks, total_chunks; + const struct ifs_test_msrs *msrs; int i, num_chunks, chunk_size; union meta_data *ifs_meta; int starting_chunk_nr; @@ -189,10 +192,11 @@ int retry_count; ifsd = ifs_get_data(dev); + msrs = ifs_get_test_msrs(dev); if (need_copy_scan_hashes(ifsd)) { - wrmsrl(MSR_COPY_SCAN_HASHES, ifs_hash_ptr); - rdmsrl(MSR_SCAN_HASHES_STATUS, hashes_status.data); + wrmsrl(msrs->copy_hashes, ifs_hash_ptr); + rdmsrl(msrs->copy_hashes_status, hashes_status.data); /* enumerate the scan image information */ chunk_size = hashes_status.chunk_size * SZ_1K; @@ -212,8 +216,8 @@ } if (ifsd->generation >= IFS_GEN_STRIDE_AWARE) { - wrmsrl(MSR_SAF_CTRL, INVALIDATE_STRIDE); - rdmsrl(MSR_CHUNKS_AUTHENTICATION_STATUS, chunk_status.data); + wrmsrl(msrs->test_ctrl, INVALIDATE_STRIDE); + rdmsrl(msrs->copy_chunks_status, chunk_status.data); if 
(chunk_status.valid_chunks != 0) { dev_err(dev, "Couldn't invalidate installed stride - %d\n", chunk_status.valid_chunks); @@ -233,8 +237,10 @@ chunk_table[0] = starting_chunk_nr + i; chunk_table[1] = linear_addr; do { - wrmsrl(MSR_AUTHENTICATE_AND_COPY_CHUNK, (u64)chunk_table); - rdmsrl(MSR_CHUNKS_AUTHENTICATION_STATUS, chunk_status.data); + local_irq_disable(); + wrmsrl(msrs->copy_chunks, (u64)chunk_table); + local_irq_enable(); + rdmsrl(msrs->copy_chunks_status, chunk_status.data); err_code = chunk_status.error_code; } while (err_code == AUTH_INTERRUPTED_ERROR && --retry_count); @@ -255,20 +261,22 @@ return -EIO; } ifsd->valid_chunks = valid_chunks; + ifsd->max_bundle = chunk_status.max_bundle; return 0; } static int validate_ifs_metadata(struct device *dev) { + const struct ifs_test_caps *test = ifs_get_test_caps(dev); struct ifs_data *ifsd = ifs_get_data(dev); union meta_data *ifs_meta; char test_file[64]; int ret = -EINVAL; - snprintf(test_file, sizeof(test_file), "%02x-%02x-%02x-%02x.scan", + snprintf(test_file, sizeof(test_file), "%02x-%02x-%02x-%02x.%s", boot_cpu_data.x86, boot_cpu_data.x86_model, - boot_cpu_data.x86_stepping, ifsd->cur_batch); + boot_cpu_data.x86_stepping, ifsd->cur_batch, test->image_suffix); ifs_meta = (union meta_data *)find_meta_data(ifs_header_ptr, META_TYPE_IFS); if (!ifs_meta) { @@ -298,6 +306,12 @@ return ret; } + if (ifs_meta->test_type != test->test_num) { + dev_warn(dev, "Metadata test_type %d mismatches with device type\n", + ifs_meta->test_type); + return ret; + } + return 0; } @@ -383,11 +397,11 @@ unsigned int expected_size; const struct firmware *fw; char scan_path[64]; - int ret = -EINVAL; + int ret; - snprintf(scan_path, sizeof(scan_path), "intel/ifs_%d/%02x-%02x-%02x-%02x.scan", + snprintf(scan_path, sizeof(scan_path), "intel/ifs_%d/%02x-%02x-%02x-%02x.%s", test->test_num, boot_cpu_data.x86, boot_cpu_data.x86_model, - boot_cpu_data.x86_stepping, ifsd->cur_batch); + boot_cpu_data.x86_stepping, ifsd->cur_batch, 
test->image_suffix); ret = request_firmware_direct(&fw, scan_path, dev); if (ret) {
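The loader now builds the firmware file name from the per-test `image_suffix` rather than a hard-coded `.scan`. A standalone sketch of the same `snprintf()` format; the family/model/stepping/batch values used in the tests are made up for illustration:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Compose the IFS firmware path the way the patched loader does:
 * intel/ifs_<test>/<family>-<model>-<stepping>-<batch>.<suffix> */
static void ifs_image_path(char *buf, size_t len, int test_num,
			   unsigned family, unsigned model,
			   unsigned stepping, unsigned batch,
			   const char *suffix)
{
	snprintf(buf, len, "intel/ifs_%d/%02x-%02x-%02x-%02x.%s",
		 test_num, family, model, stepping, batch, suffix);
}
```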
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/platform/x86/intel/ifs/runtest.c
Changed
@@ -23,6 +23,19 @@ /* Max retries on the same chunk */ #define MAX_IFS_RETRIES 5 +struct run_params { + struct ifs_data *ifsd; + union ifs_scan *activate; + union ifs_status status; +}; + +struct sbaf_run_params { + struct ifs_data *ifsd; + int *retry_cnt; + union ifs_sbaf *activate; + union ifs_sbaf_status status; +}; + /* * Number of TSC cycles that a logical CPU will wait for the other * logical CPU on the core in the WRMSR(ACTIVATE_SCAN). @@ -63,6 +76,19 @@ static void message_not_tested(struct device *dev, int cpu, union ifs_status status) { + struct ifs_data *ifsd = ifs_get_data(dev); + + /* + * control_error is set when the microcode runs into a problem + * loading the image from the reserved BIOS memory, or it has + * been corrupted. Reloading the image may fix this issue. + */ + if (status.control_error) { + dev_warn(dev, "CPU(s) %*pbl: Scan controller error. Batch: %02x version: 0x%x\n", + cpumask_pr_args(cpu_smt_mask(cpu)), ifsd->cur_batch, ifsd->loaded_version); + return; + } + if (status.error_code < ARRAY_SIZE(scan_test_status)) { dev_info(dev, "CPU(s) %*pbl: SCAN operation did not start. %s\n", cpumask_pr_args(cpu_smt_mask(cpu)), @@ -85,16 +111,6 @@ struct ifs_data *ifsd = ifs_get_data(dev); /* - * control_error is set when the microcode runs into a problem - * loading the image from the reserved BIOS memory, or it has - * been corrupted. Reloading the image may fix this issue. - */ - if (status.control_error) { - dev_err(dev, "CPU(s) %*pbl: could not execute from loaded scan image. Batch: %02x version: 0x%x\n", - cpumask_pr_args(cpu_smt_mask(cpu)), ifsd->cur_batch, ifsd->loaded_version); - } - - /* * signature_error is set when the output from the scan chains does not * match the expected signature. This might be a transient problem (e.g. * due to a bit flip from an alpha particle or neutron). 
If the problem @@ -134,19 +150,57 @@ return false; } +#define SPINUNIT 100 /* 100 nsec */ +static atomic_t array_cpus_in; +static atomic_t scan_cpus_in; +static atomic_t sbaf_cpus_in; + +/* + * Simplified cpu sibling rendezvous loop based on microcode loader __wait_for_cpus() + */ +static void wait_for_sibling_cpu(atomic_t *t, long long timeout) +{ + int cpu = smp_processor_id(); + const struct cpumask *smt_mask = cpu_smt_mask(cpu); + int all_cpus = cpumask_weight(smt_mask); + + atomic_inc(t); + while (atomic_read(t) < all_cpus) { + if (timeout < SPINUNIT) + return; + ndelay(SPINUNIT); + timeout -= SPINUNIT; + touch_nmi_watchdog(); + } +} + /* * Execute the scan. Called "simultaneously" on all threads of a core * at high priority using the stop_cpus mechanism. */ static int doscan(void *data) { - int cpu = smp_processor_id(); - u64 *msrs = data; + int cpu = smp_processor_id(), start, stop; + struct run_params *params = data; + union ifs_status status; + struct ifs_data *ifsd; int first; + ifsd = params->ifsd; + + if (ifsd->generation) { + start = params->activate->gen2.start; + stop = params->activate->gen2.stop; + } else { + start = params->activate->gen0.start; + stop = params->activate->gen0.stop; + } + /* Only the first logical CPU on a core reports result */ first = cpumask_first(cpu_smt_mask(cpu)); + wait_for_sibling_cpu(&scan_cpus_in, NSEC_PER_SEC); + /* * This WRMSR will wait for other HT threads to also write * to this MSR (at most for activate.delay cycles). Then it @@ -155,12 +209,14 @@ * take up to 200 milliseconds (in the case where all chunks * are processed in a single pass) before it retires. 
*/ - wrmsrl(MSR_ACTIVATE_SCAN, msrs[0]); + wrmsrl(MSR_ACTIVATE_SCAN, params->activate->data); + rdmsrl(MSR_SCAN_STATUS, status.data); - if (cpu == first) { - /* Pass back the result of the scan */ - rdmsrl(MSR_SCAN_STATUS, msrs[1]); - } + trace_ifs_status(ifsd->cur_batch, start, stop, status.data); + + /* Pass back the result of the scan */ + if (cpu == first) + params->status = status; return 0; } @@ -179,7 +235,7 @@ struct ifs_data *ifsd; int to_start, to_stop; int status_chunk; - u64 msrvals[2]; + struct run_params params; int retries; ifsd = ifs_get_data(dev); @@ -190,6 +246,8 @@ to_start = 0; to_stop = ifsd->valid_chunks - 1; + params.ifsd = ifs_get_data(dev); + if (ifsd->generation) { activate.gen2.start = to_start; activate.gen2.stop = to_stop; @@ -207,12 +265,11 @@ break; } - msrvals[0] = activate.data; - stop_core_cpuslocked(cpu, doscan, msrvals); - - status.data = msrvals[1]; + params.activate = &activate; + atomic_set(&scan_cpus_in, 0); + stop_core_cpuslocked(cpu, doscan, &params); - trace_ifs_status(cpu, to_start, to_stop, status.data); + status = params.status; /* Some cases can be retried, give up for others */ if (!can_restart(status)) @@ -239,10 +296,10 @@ /* Update status for this core */ ifsd->scan_details = status.data; - if (status.control_error || status.signature_error) { + if (status.signature_error) { ifsd->status = SCAN_TEST_FAIL; message_fail(dev, cpu, status); - } else if (status.error_code) { + } else if (status.control_error || status.error_code) { ifsd->status = SCAN_NOT_TESTED; message_not_tested(dev, cpu, status); } else { @@ -250,34 +307,14 @@ } } -#define SPINUNIT 100 /* 100 nsec */ -static atomic_t array_cpus_out; - -/* - * Simplified cpu sibling rendezvous loop based on microcode loader __wait_for_cpus() - */ -static void wait_for_sibling_cpu(atomic_t *t, long long timeout) -{ - int cpu = smp_processor_id(); - const struct cpumask *smt_mask = cpu_smt_mask(cpu); - int all_cpus = cpumask_weight(smt_mask);
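The relocated `wait_for_sibling_cpu()` is a bounded spin rendezvous: each sibling increments a shared counter and waits until all threads of the core have arrived or the time budget runs out. A userspace approximation with C11 atomics; it returns a result for testability (the kernel version returns void and lets the status check catch stragglers), and the `ndelay()`/`touch_nmi_watchdog()` calls are reduced to a comment:

```c
#include <assert.h>
#include <stdatomic.h>

#define SPINUNIT 100 /* nanoseconds, as in the driver */

/* Userspace sketch of wait_for_sibling_cpu(): returns 1 on rendezvous,
 * 0 when the timeout budget is exhausted before everyone arrives. */
static int wait_for_siblings(atomic_int *t, int all_cpus, long long timeout)
{
	atomic_fetch_add(t, 1);
	while (atomic_load(t) < all_cpus) {
		if (timeout < SPINUNIT)
			return 0;
		/* in the kernel: ndelay(SPINUNIT); touch_nmi_watchdog(); */
		timeout -= SPINUNIT;
	}
	return 1;
}
```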
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/ptp/ptp_hisi.c
Changed
@@ -15,10 +15,6 @@ #include <linux/net_tstamp.h> #include <linux/debugfs.h> -#ifndef PTP_CLOCK_NAME_LEN -#define PTP_CLOCK_NAME_LEN 32 -#endif - #define HISI_PTP_VERSION "22.10.2" #define HISI_PTP_NAME "hisi_ptp" @@ -588,7 +584,7 @@ { dev_info(ptp->ptp_tx->dev, "register ptp clock\n"); - snprintf(ptp->info.name, PTP_CLOCK_NAME_LEN, "%s", HISI_PTP_NAME); + snprintf(ptp->info.name, sizeof(ptp->info.name), "%s", HISI_PTP_NAME); ptp->info.owner = THIS_MODULE; ptp->info.adjfine = hisi_ptp_adjfine; ptp->info.adjtime = hisi_ptp_adjtime;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/scsi/aacraid/comminit.c
Changed
@@ -642,6 +642,7 @@ if (aac_comm_init(dev)<0){ kfree(dev->queues); + dev->queues = NULL; return NULL; } /* @@ -649,6 +650,7 @@ */ if (aac_fib_setup(dev) < 0) { kfree(dev->queues); + dev->queues = NULL; return NULL; }
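The aacraid fix NULLs `dev->queues` right after freeing it, so a later cleanup path cannot double-free the same buffer. The general pattern in miniature (types invented for the sketch; `free(NULL)` is defined to be a no-op):

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for the adapter structure in the patch. */
struct adapter { int *queues; };

static void release_queues(struct adapter *dev)
{
	free(dev->queues);
	dev->queues = NULL;   /* make repeated release calls harmless */
}
```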
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/scsi/lpfc/lpfc_els.c
Changed
@@ -7304,13 +7304,13 @@ } mbox->vport = phba->pport; mbox->ctx_ndlp = (struct lpfc_rdp_context *)rdp_context; - - rc = lpfc_sli_issue_mbox_wait(phba, mbox, 30); + rc = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_SLI4_CONFIG_TMO); if (rc == MBX_NOT_FINISHED) { rc = 1; goto error; } - + if (rc == MBX_TIMEOUT) + goto error; if (phba->sli_rev == LPFC_SLI_REV4) mp = (struct lpfc_dmabuf *)(mbox->ctx_buf); else @@ -7364,7 +7364,10 @@ } mbox->ctx_ndlp = (struct lpfc_rdp_context *)rdp_context; - rc = lpfc_sli_issue_mbox_wait(phba, mbox, 30); + rc = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_SLI4_CONFIG_TMO); + + if (rc == MBX_TIMEOUT) + goto error; if (bf_get(lpfc_mqe_status, &mbox->u.mqe)) { rc = 1; goto error; @@ -7375,8 +7378,10 @@ DMP_SFF_PAGE_A2_SIZE); error: - mbox->ctx_buf = mpsave; - lpfc_mbox_rsrc_cleanup(phba, mbox, MBOX_THD_UNLOCKED); + if (mbox->mbox_flag & LPFC_MBX_WAKE) { + mbox->ctx_buf = mpsave; + lpfc_mbox_rsrc_cleanup(phba, mbox, MBOX_THD_UNLOCKED); + } return rc;
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/spi/spi-rockchip.c
Changed
@@ -974,14 +974,16 @@ { int ret; struct spi_controller *ctlr = dev_get_drvdata(dev); - struct rockchip_spi *rs = spi_controller_get_devdata(ctlr); ret = spi_controller_suspend(ctlr); if (ret < 0) return ret; - clk_disable_unprepare(rs->spiclk); - clk_disable_unprepare(rs->apb_pclk); + ret = pm_runtime_force_suspend(dev); + if (ret < 0) { + spi_controller_resume(ctlr); + return ret; + } pinctrl_pm_select_sleep_state(dev); @@ -992,25 +994,14 @@ { int ret; struct spi_controller *ctlr = dev_get_drvdata(dev); - struct rockchip_spi *rs = spi_controller_get_devdata(ctlr); pinctrl_pm_select_default_state(dev); - ret = clk_prepare_enable(rs->apb_pclk); + ret = pm_runtime_force_resume(dev); if (ret < 0) return ret; - ret = clk_prepare_enable(rs->spiclk); - if (ret < 0) - clk_disable_unprepare(rs->apb_pclk); - - ret = spi_controller_resume(ctlr); - if (ret < 0) { - clk_disable_unprepare(rs->spiclk); - clk_disable_unprepare(rs->apb_pclk); - } - - return 0; + return spi_controller_resume(ctlr); } #endif /* CONFIG_PM_SLEEP */
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/tty/serial/sc16is7xx.c
Changed
@@ -545,6 +545,8 @@ SC16IS7XX_MCR_CLKSEL_BIT, prescaler == 1 ? 0 : SC16IS7XX_MCR_CLKSEL_BIT); + mutex_lock(&one->efr_lock); + /* Open the LCR divisors for configuration */ sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, SC16IS7XX_LCR_CONF_MODE_A); @@ -558,6 +560,8 @@ /* Put LCR back to the normal mode */ sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, lcr); + mutex_unlock(&one->efr_lock); + return DIV_ROUND_CLOSEST((clk / prescaler) / 16, div); }
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/ufs/core/ufshcd.c
Changed
@@ -10119,7 +10119,8 @@ blk_mq_destroy_queue(hba->tmf_queue); blk_put_queue(hba->tmf_queue); blk_mq_free_tag_set(&hba->tmf_tag_set); - scsi_remove_host(hba->host); + if (hba->scsi_host_added) + scsi_remove_host(hba->host); /* disable interrupts */ ufshcd_disable_intr(hba, hba->intr_mask); ufshcd_hba_stop(hba); @@ -10391,6 +10392,7 @@ dev_err(hba->dev, "scsi_add_host failed\n"); goto out_disable; } + hba->scsi_host_added = true; } hba->tmf_tag_set = (struct blk_mq_tag_set) { @@ -10472,7 +10474,8 @@ free_tmf_tag_set: blk_mq_free_tag_set(&hba->tmf_tag_set); out_remove_scsi_host: - scsi_remove_host(hba->host); + if (hba->scsi_host_added) + scsi_remove_host(hba->host); out_disable: hba->is_irq_enabled = false; ufshcd_hba_exit(hba);
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/uio/uio_hv_generic.c
Changed
@@ -104,10 +104,11 @@ /* * Callback from vmbus_event when channel is rescinded. + * It is meant for rescind of primary channels only. */ static void hv_uio_rescind(struct vmbus_channel *channel) { - struct hv_device *hv_dev = channel->primary_channel->device_obj; + struct hv_device *hv_dev = channel->device_obj; struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev); /*
View file
_service:recompress:tar_scm:kernel.tar.gz/drivers/usb/dwc3/dwc3-st.c
Changed
@@ -219,10 +219,8 @@ dwc3_data->regmap = regmap; res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "syscfg-reg"); - if (!res) { - ret = -ENXIO; - goto undo_platform_dev_alloc; - } + if (!res) + return -ENXIO; dwc3_data->syscfg_reg_off = res->start; @@ -233,8 +231,7 @@ devm_reset_control_get_exclusive(dev, "powerdown"); if (IS_ERR(dwc3_data->rstc_pwrdn)) { dev_err(&pdev->dev, "could not get power controller\n"); - ret = PTR_ERR(dwc3_data->rstc_pwrdn); - goto undo_platform_dev_alloc; + return PTR_ERR(dwc3_data->rstc_pwrdn); } /* Manage PowerDown */ @@ -300,8 +297,6 @@ reset_control_assert(dwc3_data->rstc_rst); undo_powerdown: reset_control_assert(dwc3_data->rstc_pwrdn); -undo_platform_dev_alloc: - platform_device_put(pdev); return ret; }
View file
_service:recompress:tar_scm:kernel.tar.gz/fs/btrfs/ctree.h
Changed
@@ -445,7 +445,6 @@ void *filldir_buf; u64 last_index; struct extent_state *llseek_cached_state; - bool fsync_skip_inode_lock; }; static inline u32 BTRFS_LEAF_DATA_SIZE(const struct btrfs_fs_info *info)
View file
_service:recompress:tar_scm:kernel.tar.gz/fs/btrfs/extent-tree.c
Changed
@@ -124,11 +124,6 @@ if (!path) return -ENOMEM; - if (!trans) { - path->skip_locking = 1; - path->search_commit_root = 1; - } - search_again: key.objectid = bytenr; key.offset = offset; @@ -164,11 +159,7 @@ btrfs_err(fs_info, "unexpected extent item size, has %u expect >= %zu", item_size, sizeof(*ei)); - if (trans) - btrfs_abort_transaction(trans, ret); - else - btrfs_handle_fs_error(fs_info, ret, NULL); - + btrfs_abort_transaction(trans, ret); goto out_free; } @@ -189,9 +180,6 @@ ret = 0; } - if (!trans) - goto out; - delayed_refs = &trans->transaction->delayed_refs; spin_lock(&delayed_refs->lock); head = btrfs_find_delayed_ref_head(delayed_refs, bytenr); @@ -231,7 +219,7 @@ mutex_unlock(&head->mutex); } spin_unlock(&delayed_refs->lock); -out: + WARN_ON(num_refs == 0); if (refs) *refs = num_refs; @@ -5162,7 +5150,6 @@ eb->start, level, 1, &wc->refs[level], &wc->flags[level]); - BUG_ON(ret == -ENOMEM); if (ret) return ret; BUG_ON(wc->refs[level] == 0); @@ -5518,7 +5505,10 @@ ret = btrfs_dec_ref(trans, root, eb, 1); else ret = btrfs_dec_ref(trans, root, eb, 0); - BUG_ON(ret); /* -ENOMEM */ + if (ret) { + btrfs_abort_transaction(trans, ret); + return ret; + } if (is_fstree(root->root_key.objectid)) { ret = btrfs_qgroup_trace_leaf_items(trans, eb); if (ret) {
View file
_service:recompress:tar_scm:kernel.tar.gz/fs/btrfs/file.c
Changed
@@ -1543,13 +1543,6 @@ if (IS_ERR_OR_NULL(dio)) { err = PTR_ERR_OR_ZERO(dio); } else { - struct btrfs_file_private stack_private = { 0 }; - struct btrfs_file_private *private; - const bool have_private = (file->private_data != NULL); - - if (!have_private) - file->private_data = &stack_private; - /* * If we have a synchoronous write, we must make sure the fsync * triggered by the iomap_dio_complete() call below doesn't @@ -1558,13 +1551,10 @@ * partial writes due to the input buffer (or parts of it) not * being already faulted in. */ - private = file->private_data; - private->fsync_skip_inode_lock = true; + ASSERT(current->journal_info == NULL); + current->journal_info = BTRFS_TRANS_DIO_WRITE_STUB; err = iomap_dio_complete(dio); - private->fsync_skip_inode_lock = false; - - if (!have_private) - file->private_data = NULL; + current->journal_info = NULL; } /* No increment (+=) because iomap returns a cumulative value. */ @@ -1796,7 +1786,6 @@ */ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) { - struct btrfs_file_private *private = file->private_data; struct dentry *dentry = file_dentry(file); struct inode *inode = d_inode(dentry); struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); @@ -1806,7 +1795,13 @@ int ret = 0, err; u64 len; bool full_sync; - const bool skip_ilock = (private ? private->fsync_skip_inode_lock : false); + bool skip_ilock = false; + + if (current->journal_info == BTRFS_TRANS_DIO_WRITE_STUB) { + skip_ilock = true; + current->journal_info = NULL; + lockdep_assert_held(&inode->i_rwsem); + } trace_btrfs_sync_file(file, datasync);
View file
_service:recompress:tar_scm:kernel.tar.gz/fs/btrfs/transaction.h
Changed
@@ -12,6 +12,12 @@ #include "ctree.h" #include "misc.h" +/* + * Signal that a direct IO write is in progress, to avoid deadlock for sync + * direct IO writes when fsync is called during the direct IO write path. + */ +#define BTRFS_TRANS_DIO_WRITE_STUB ((void *) 1) + /* Radix-tree tag for roots that are part of the trasaction. */ #define BTRFS_ROOT_TRANS_TAG 0
View file
_service:recompress:tar_scm:kernel.tar.gz/fs/ext4/balloc.c
Changed
@@ -545,7 +545,8 @@ trace_ext4_read_block_bitmap_load(sb, block_group, ignore_locked); ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO | (ignore_locked ? REQ_RAHEAD : 0), - ext4_end_bitmap_read); + ext4_end_bitmap_read, + ext4_simulate_fail(sb, EXT4_SIM_BBITMAP_EIO)); return bh; verify: err = ext4_validate_block_bitmap(sb, desc, block_group, bh); @@ -569,7 +570,6 @@ if (!desc) return -EFSCORRUPTED; wait_on_buffer(bh); - ext4_simulate_fail_bh(sb, bh, EXT4_SIM_BBITMAP_EIO); if (!buffer_uptodate(bh)) { ext4_error_err(sb, EIO, "Cannot read block bitmap - " "block_group = %u, block_bitmap = %llu",
View file
_service:recompress:tar_scm:kernel.tar.gz/fs/ext4/ext4.h
Changed
@@ -1866,14 +1866,6 @@ return false; } -static inline void ext4_simulate_fail_bh(struct super_block *sb, - struct buffer_head *bh, - unsigned long code) -{ - if (!IS_ERR(bh) && ext4_simulate_fail(sb, code)) - clear_buffer_uptodate(bh); -} - /* * Error number codes for s_{first,last}_error_errno * @@ -3092,9 +3084,9 @@ extern struct buffer_head *ext4_sb_bread_unmovable(struct super_block *sb, sector_t block); extern void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags, - bh_end_io_t *end_io); + bh_end_io_t *end_io, bool simu_fail); extern int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, - bh_end_io_t *end_io); + bh_end_io_t *end_io, bool simu_fail); extern int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait); extern void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block); extern int ext4_seq_options_show(struct seq_file *seq, void *offset);
View file
_service:recompress:tar_scm:kernel.tar.gz/fs/ext4/extents.c
Changed
@@ -564,7 +564,7 @@ if (!bh_uptodate_or_lock(bh)) { trace_ext4_ext_load_extent(inode, pblk, _RET_IP_); - err = ext4_read_bh(bh, 0, NULL); + err = ext4_read_bh(bh, 0, NULL, false); if (err < 0) goto errout; }
View file
_service:recompress:tar_scm:kernel.tar.gz/fs/ext4/ialloc.c
Changed
@@ -194,8 +194,9 @@ * submit the buffer_head for reading */ trace_ext4_load_inode_bitmap(sb, block_group); - ext4_read_bh(bh, REQ_META | REQ_PRIO, ext4_end_bitmap_read); - ext4_simulate_fail_bh(sb, bh, EXT4_SIM_IBITMAP_EIO); + ext4_read_bh(bh, REQ_META | REQ_PRIO, + ext4_end_bitmap_read, + ext4_simulate_fail(sb, EXT4_SIM_IBITMAP_EIO)); if (!buffer_uptodate(bh)) { put_bh(bh); ext4_error_err(sb, EIO, "Cannot read inode bitmap - "
View file
_service:recompress:tar_scm:kernel.tar.gz/fs/ext4/indirect.c
Changed
@@ -170,7 +170,7 @@ } if (!bh_uptodate_or_lock(bh)) { - if (ext4_read_bh(bh, 0, NULL) < 0) { + if (ext4_read_bh(bh, 0, NULL, false) < 0) { put_bh(bh); goto failure; }
View file
_service:recompress:tar_scm:kernel.tar.gz/fs/ext4/inode.c
Changed
@@ -5083,10 +5083,10 @@ * Read the block from disk. */ trace_ext4_load_inode(sb, ino); - ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO, NULL); + ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO, NULL, + ext4_simulate_fail(sb, EXT4_SIM_INODE_EIO)); blk_finish_plug(&plug); wait_on_buffer(bh); - ext4_simulate_fail_bh(sb, bh, EXT4_SIM_INODE_EIO); if (!buffer_uptodate(bh)) { if (ret_block) *ret_block = block;
_service:recompress:tar_scm:kernel.tar.gz/fs/ext4/mmp.c
Changed
@@ -94,7 +94,7 @@ } lock_buffer(*bh); - ret = ext4_read_bh(*bh, REQ_META | REQ_PRIO, NULL); + ret = ext4_read_bh(*bh, REQ_META | REQ_PRIO, NULL, false); if (ret) goto warn_exit;
_service:recompress:tar_scm:kernel.tar.gz/fs/ext4/move_extent.c
Changed
@@ -221,7 +221,7 @@ for (i = 0; i < nr; i++) { bh = arr[i]; if (!bh_uptodate_or_lock(bh)) { - err = ext4_read_bh(bh, 0, NULL); + err = ext4_read_bh(bh, 0, NULL, false); if (err) return err; }
_service:recompress:tar_scm:kernel.tar.gz/fs/ext4/resize.c
Changed
@@ -1301,7 +1301,7 @@ if (unlikely(!bh)) return NULL; if (!bh_uptodate_or_lock(bh)) { - if (ext4_read_bh(bh, 0, NULL) < 0) { + if (ext4_read_bh(bh, 0, NULL, false) < 0) { brelse(bh); return NULL; }
_service:recompress:tar_scm:kernel.tar.gz/fs/ext4/super.c
Changed
@@ -185,8 +185,14 @@ static inline void __ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, - bh_end_io_t *end_io) + bh_end_io_t *end_io, bool simu_fail) { + if (simu_fail) { + clear_buffer_uptodate(bh); + unlock_buffer(bh); + return; + } + /* * buffer's verified bit is no longer valid after reading from * disk again due to write out error, clear it to make sure we @@ -200,7 +206,7 @@ } void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags, - bh_end_io_t *end_io) + bh_end_io_t *end_io, bool simu_fail) { BUG_ON(!buffer_locked(bh)); @@ -208,10 +214,11 @@ unlock_buffer(bh); return; } - __ext4_read_bh(bh, op_flags, end_io); + __ext4_read_bh(bh, op_flags, end_io, simu_fail); } -int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, bh_end_io_t *end_io) +int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, + bh_end_io_t *end_io, bool simu_fail) { BUG_ON(!buffer_locked(bh)); @@ -220,7 +227,7 @@ return 0; } - __ext4_read_bh(bh, op_flags, end_io); + __ext4_read_bh(bh, op_flags, end_io, simu_fail); wait_on_buffer(bh); if (buffer_uptodate(bh)) @@ -232,10 +239,10 @@ { lock_buffer(bh); if (!wait) { - ext4_read_bh_nowait(bh, op_flags, NULL); + ext4_read_bh_nowait(bh, op_flags, NULL, false); return 0; } - return ext4_read_bh(bh, op_flags, NULL); + return ext4_read_bh(bh, op_flags, NULL, false); } /* @@ -283,7 +290,7 @@ if (likely(bh)) { if (trylock_buffer(bh)) - ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL); + ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL, false); brelse(bh); } }
_service:recompress:tar_scm:kernel.tar.gz/fs/libfs.c
Changed
@@ -239,17 +239,22 @@ }; EXPORT_SYMBOL(simple_dir_inode_operations); -static void offset_set(struct dentry *dentry, u32 offset) +/* 0 is '.', 1 is '..', so always start with offset 2 or more */ +enum { + DIR_OFFSET_MIN = 2, +}; + +static void offset_set(struct dentry *dentry, long offset) { - dentry->d_fsdata = (void *)((uintptr_t)(offset)); + dentry->d_fsdata = (void *)offset; } -static u32 dentry2offset(struct dentry *dentry) +static long dentry2offset(struct dentry *dentry) { - return (u32)((uintptr_t)(dentry->d_fsdata)); + return (long)dentry->d_fsdata; } -static struct lock_class_key simple_offset_xa_lock; +static struct lock_class_key simple_offset_lock_class; /** * simple_offset_init - initialize an offset_ctx @@ -258,11 +263,9 @@ */ void simple_offset_init(struct offset_ctx *octx) { - xa_init_flags(&octx->xa, XA_FLAGS_ALLOC1); - lockdep_set_class(&octx->xa.xa_lock, &simple_offset_xa_lock); - - /* 0 is '.', 1 is '..', so always start with offset 2 */ - octx->next_offset = 2; + mt_init_flags(&octx->mt, MT_FLAGS_ALLOC_RANGE); + lockdep_set_class(&octx->mt.ma_lock, &simple_offset_lock_class); + octx->next_offset = DIR_OFFSET_MIN; } /** @@ -270,20 +273,19 @@ * @octx: directory offset ctx to be updated * @dentry: new dentry being added * - * Returns zero on success. @so_ctx and the dentry offset are updated. + * Returns zero on success. @octx and the dentry's offset are updated. * Otherwise, a negative errno value is returned. 
*/ int simple_offset_add(struct offset_ctx *octx, struct dentry *dentry) { - static const struct xa_limit limit = XA_LIMIT(2, U32_MAX); - u32 offset; + unsigned long offset; int ret; if (dentry2offset(dentry) != 0) return -EBUSY; - ret = xa_alloc_cyclic(&octx->xa, &offset, dentry, limit, - &octx->next_offset, GFP_KERNEL); + ret = mtree_alloc_cyclic(&octx->mt, &offset, dentry, DIR_OFFSET_MIN, + LONG_MAX, &octx->next_offset, GFP_KERNEL); if (ret < 0) return ret; @@ -299,17 +301,49 @@ */ void simple_offset_remove(struct offset_ctx *octx, struct dentry *dentry) { - u32 offset; + long offset; offset = dentry2offset(dentry); if (offset == 0) return; - xa_erase(&octx->xa, offset); + mtree_erase(&octx->mt, offset); offset_set(dentry, 0); } /** + * simple_offset_empty - Check if a dentry can be unlinked + * @dentry: dentry to be tested + * + * Returns 0 if @dentry is a non-empty directory; otherwise returns 1. + */ +int simple_offset_empty(struct dentry *dentry) +{ + struct inode *inode = d_inode(dentry); + struct offset_ctx *octx; + struct dentry *child; + unsigned long index; + int ret = 1; + + if (!inode || !S_ISDIR(inode->i_mode)) + return ret; + + index = DIR_OFFSET_MIN; + octx = inode->i_op->get_offset_ctx(inode); + mt_for_each(&octx->mt, child, index, LONG_MAX) { + spin_lock(&child->d_lock); + if (simple_positive(child)) { + spin_unlock(&child->d_lock); + ret = 0; + break; + } + spin_unlock(&child->d_lock); + } + + return ret; +} + +/** * simple_offset_rename_exchange - exchange rename with directory offsets * @old_dir: parent of dentry being moved * @old_dentry: dentry being moved @@ -326,8 +360,8 @@ { struct offset_ctx *old_ctx = old_dir->i_op->get_offset_ctx(old_dir); struct offset_ctx *new_ctx = new_dir->i_op->get_offset_ctx(new_dir); - u32 old_index = dentry2offset(old_dentry); - u32 new_index = dentry2offset(new_dentry); + long old_index = dentry2offset(old_dentry); + long new_index = dentry2offset(new_dentry); int ret; simple_offset_remove(old_ctx, 
old_dentry); @@ -353,9 +387,9 @@ out_restore: offset_set(old_dentry, old_index); - xa_store(&old_ctx->xa, old_index, old_dentry, GFP_KERNEL); + mtree_store(&old_ctx->mt, old_index, old_dentry, GFP_KERNEL); offset_set(new_dentry, new_index); - xa_store(&new_ctx->xa, new_index, new_dentry, GFP_KERNEL); + mtree_store(&new_ctx->mt, new_index, new_dentry, GFP_KERNEL); return ret; } @@ -368,7 +402,15 @@ */ void simple_offset_destroy(struct offset_ctx *octx) { - xa_destroy(&octx->xa); + mtree_destroy(&octx->mt); +} + +static int offset_dir_open(struct inode *inode, struct file *file) +{ + struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode); + + file->private_data = (void *)ctx->next_offset; + return 0; } /** @@ -384,6 +426,9 @@ */ static loff_t offset_dir_llseek(struct file *file, loff_t offset, int whence) { + struct inode *inode = file->f_inode; + struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode); + switch (whence) { case SEEK_CUR: offset += file->f_pos; @@ -397,16 +442,18 @@ } /* In this case, ->private_data is protected by f_pos_lock */ - file->private_data = NULL; - return vfs_setpos(file, offset, U32_MAX); + if (!offset) + file->private_data = (void *)ctx->next_offset; + return vfs_setpos(file, offset, LONG_MAX); } -static struct dentry *offset_find_next(struct xa_state *xas) +static struct dentry *offset_find_next(struct offset_ctx *octx, loff_t offset) { + MA_STATE(mas, &octx->mt, offset, offset); struct dentry *child, *found = NULL; rcu_read_lock(); - child = xas_next_entry(xas, U32_MAX); + child = mas_find(&mas, LONG_MAX); if (!child) goto out; spin_lock(&child->d_lock); @@ -420,33 +467,36 @@ static bool offset_dir_emit(struct dir_context *ctx, struct dentry *dentry) { - u32 offset = dentry2offset(dentry); struct inode *inode = d_inode(dentry); + long offset = dentry2offset(dentry);
_service:recompress:tar_scm:kernel.tar.gz/fs/nfsd/nfs4callback.c
Changed
@@ -1202,6 +1202,7 @@ ret = false; break; case -NFS4ERR_DELAY: + cb->cb_seq_status = 1; if (!rpc_restart_call(task)) goto out;
_service:recompress:tar_scm:kernel.tar.gz/fs/nilfs2/recovery.c
Changed
@@ -709,6 +709,33 @@ } /** + * nilfs_abort_roll_forward - cleaning up after a failed rollforward recovery + * @nilfs: nilfs object + */ +static void nilfs_abort_roll_forward(struct the_nilfs *nilfs) +{ + struct nilfs_inode_info *ii, *n; + LIST_HEAD(head); + + /* Abandon inodes that have read recovery data */ + spin_lock(&nilfs->ns_inode_lock); + list_splice_init(&nilfs->ns_dirty_files, &head); + spin_unlock(&nilfs->ns_inode_lock); + if (list_empty(&head)) + return; + + set_nilfs_purging(nilfs); + list_for_each_entry_safe(ii, n, &head, i_dirty) { + spin_lock(&nilfs->ns_inode_lock); + list_del_init(&ii->i_dirty); + spin_unlock(&nilfs->ns_inode_lock); + + iput(&ii->vfs_inode); + } + clear_nilfs_purging(nilfs); +} + +/** * nilfs_salvage_orphan_logs - salvage logs written after the latest checkpoint * @nilfs: nilfs object * @sb: super block instance @@ -766,15 +793,19 @@ if (unlikely(err)) { nilfs_err(sb, "error %d writing segment for recovery", err); - goto failed; + goto put_root; } nilfs_finish_roll_forward(nilfs, ri); } - failed: +put_root: nilfs_put_root(root); return err; + +failed: + nilfs_abort_roll_forward(nilfs); + goto put_root; } /**
_service:recompress:tar_scm:kernel.tar.gz/fs/nilfs2/sysfs.c
Changed
@@ -836,9 +836,15 @@ struct the_nilfs *nilfs, char *buf) { - struct nilfs_super_block **sbp = nilfs->ns_sbp; - u32 major = le32_to_cpu(sbp[0]->s_rev_level); - u16 minor = le16_to_cpu(sbp[0]->s_minor_rev_level); + struct nilfs_super_block *raw_sb; + u32 major; + u16 minor; + + down_read(&nilfs->ns_sem); + raw_sb = nilfs->ns_sbp[0]; + major = le32_to_cpu(raw_sb->s_rev_level); + minor = le16_to_cpu(raw_sb->s_minor_rev_level); + up_read(&nilfs->ns_sem); return sysfs_emit(buf, "%d.%d\n", major, minor); } @@ -856,8 +862,13 @@ struct the_nilfs *nilfs, char *buf) { - struct nilfs_super_block **sbp = nilfs->ns_sbp; - u64 dev_size = le64_to_cpu(sbp[0]->s_dev_size); + struct nilfs_super_block *raw_sb; + u64 dev_size; + + down_read(&nilfs->ns_sem); + raw_sb = nilfs->ns_sbp[0]; + dev_size = le64_to_cpu(raw_sb->s_dev_size); + up_read(&nilfs->ns_sem); return sysfs_emit(buf, "%llu\n", dev_size); } @@ -879,9 +890,15 @@ struct the_nilfs *nilfs, char *buf) { - struct nilfs_super_block **sbp = nilfs->ns_sbp; + struct nilfs_super_block *raw_sb; + ssize_t len; - return sysfs_emit(buf, "%pUb\n", sbp[0]->s_uuid); + down_read(&nilfs->ns_sem); + raw_sb = nilfs->ns_sbp[0]; + len = sysfs_emit(buf, "%pUb\n", raw_sb->s_uuid); + up_read(&nilfs->ns_sem); + + return len; } static @@ -889,10 +906,16 @@ struct the_nilfs *nilfs, char *buf) { - struct nilfs_super_block **sbp = nilfs->ns_sbp; + struct nilfs_super_block *raw_sb; + ssize_t len; + + down_read(&nilfs->ns_sem); + raw_sb = nilfs->ns_sbp[0]; + len = scnprintf(buf, sizeof(raw_sb->s_volume_name), "%s\n", + raw_sb->s_volume_name); + up_read(&nilfs->ns_sem); - return scnprintf(buf, sizeof(sbp[0]->s_volume_name), "%s\n", - sbp[0]->s_volume_name); + return len; } static const char dev_readme_str =
_service:recompress:tar_scm:kernel.tar.gz/fs/proc/proc_sysctl.c
Changed
@@ -480,12 +480,10 @@ make_empty_dir_inode(inode); } + inode->i_uid = GLOBAL_ROOT_UID; + inode->i_gid = GLOBAL_ROOT_GID; if (root->set_ownership) root->set_ownership(head, table, &inode->i_uid, &inode->i_gid); - else { - inode->i_uid = GLOBAL_ROOT_UID; - inode->i_gid = GLOBAL_ROOT_GID; - } return inode; }
_service:recompress:tar_scm:kernel.tar.gz/fs/smb/client/smb2inode.c
Changed
@@ -1105,6 +1105,8 @@ co, DELETE, SMB2_OP_RENAME, cfile, source_dentry); if (rc == -EINVAL) { cifs_dbg(FYI, "invalid lease key, resending request without lease"); + cifs_get_writable_path(tcon, from_name, + FIND_WR_WITH_DELETE, &cfile); rc = smb2_set_path_attr(xid, tcon, from_name, to_name, cifs_sb, co, DELETE, SMB2_OP_RENAME, cfile, NULL); } @@ -1148,6 +1150,7 @@ cfile, NULL, NULL, dentry); if (rc == -EINVAL) { cifs_dbg(FYI, "invalid lease key, resending request without lease"); + cifs_get_writable_path(tcon, full_path, FIND_WR_ANY, &cfile); rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, &oparms, &in_iov, &(int){SMB2_OP_SET_EOF}, 1,
_service:recompress:tar_scm:kernel.tar.gz/fs/smb/client/smb2pdu.c
Changed
@@ -4431,7 +4431,7 @@ * If we want to do a RDMA write, fill in and append * smbd_buffer_descriptor_v1 to the end of read request */ - if (smb3_use_rdma_offload(io_parms)) { + if (rdata && smb3_use_rdma_offload(io_parms)) { struct smbd_buffer_descriptor_v1 *v1; bool need_invalidate = server->dialect == SMB30_PROT_ID;
_service:recompress:tar_scm:kernel.tar.gz/fs/smb/server/smb2pdu.c
Changed
@@ -1681,6 +1681,8 @@ rc = ksmbd_session_register(conn, sess); if (rc) goto out_err; + + conn->binding = false; } else if (conn->dialect >= SMB30_PROT_ID && (server_conf.flags & KSMBD_GLOBAL_FLAG_SMB3_MULTICHANNEL) && req->Flags & SMB2_SESSION_REQ_FLAG_BINDING) { @@ -1759,6 +1761,8 @@ sess = NULL; goto out_err; } + + conn->binding = false; } work->sess = sess;
_service:recompress:tar_scm:kernel.tar.gz/fs/udf/super.c
Changed
@@ -1080,12 +1080,19 @@ struct udf_part_map *map; struct udf_sb_info *sbi = UDF_SB(sb); struct partitionHeaderDesc *phd; + u32 sum; int err; map = &sbi->s_partmaps[p_index]; map->s_partition_len = le32_to_cpu(p->partitionLength); /* blocks */ map->s_partition_root = le32_to_cpu(p->partitionStartingLocation); + if (check_add_overflow(map->s_partition_root, map->s_partition_len, + &sum)) { + udf_err(sb, "Partition %d has invalid location %u + %u\n", + p_index, map->s_partition_root, map->s_partition_len); + return -EFSCORRUPTED; + } if (p->accessType == cpu_to_le32(PD_ACCESS_TYPE_READ_ONLY)) map->s_partition_flags |= UDF_PART_FLAG_READ_ONLY; @@ -1141,6 +1148,14 @@ bitmap->s_extPosition = le32_to_cpu( phd->unallocSpaceBitmap.extPosition); map->s_partition_flags |= UDF_PART_FLAG_UNALLOC_BITMAP; + /* Check whether math over bitmap won't overflow. */ + if (check_add_overflow(map->s_partition_len, + (__u32)(sizeof(struct spaceBitmapDesc) << 3), + &sum)) { + udf_err(sb, "Partition %d is too long (%u)\n", p_index, + map->s_partition_len); + return -EFSCORRUPTED; + } udf_debug("unallocSpaceBitmap (part %d) @ %u\n", p_index, bitmap->s_extPosition); }
_service:recompress:tar_scm:kernel.tar.gz/include/linux/filter.h
Changed
@@ -853,10 +853,11 @@ return 0; } -static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr) +static inline int __must_check +bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr) { set_vm_flush_reset_perms(hdr); - set_memory_rox((unsigned long)hdr, hdr->size >> PAGE_SHIFT); + return set_memory_rox((unsigned long)hdr, hdr->size >> PAGE_SHIFT); } int sk_filter_trim_cap(struct sock *sk, struct sk_buff *skb, unsigned int cap); @@ -916,6 +917,7 @@ bool bpf_jit_supports_subprog_tailcalls(void); bool bpf_jit_supports_kfunc_call(void); bool bpf_jit_supports_far_kfunc_call(void); +u64 bpf_arch_uaddress_limit(void); bool bpf_helper_changes_pkt_data(void *func); static inline bool bpf_dump_raw_ok(const struct cred *cred)
_service:recompress:tar_scm:kernel.tar.gz/include/linux/fs.h
Changed
@@ -43,6 +43,7 @@ #include <linux/cred.h> #include <linux/mnt_idmapping.h> #include <linux/slab.h> +#include <linux/maple_tree.h> #ifdef CONFIG_BPF_READAHEAD #include <linux/tracepoint-defs.h> #endif @@ -3255,13 +3256,23 @@ const void __user *from, size_t count); struct offset_ctx { - struct xarray xa; - u32 next_offset; + KABI_REPLACE(struct xarray xa, struct maple_tree mt) +#if BITS_PER_LONG == 32 + KABI_REPLACE(u32 next_offset, unsigned long next_offset) +#else +#ifdef __GENKSYMS__ + u32 next_offset; + /* 4 bytes hole */ +#else + unsigned long next_offset; +#endif +#endif }; void simple_offset_init(struct offset_ctx *octx); int simple_offset_add(struct offset_ctx *octx, struct dentry *dentry); void simple_offset_remove(struct offset_ctx *octx, struct dentry *dentry); +int simple_offset_empty(struct dentry *dentry); int simple_offset_rename_exchange(struct inode *old_dir, struct dentry *old_dentry, struct inode *new_dir,
_service:recompress:tar_scm:kernel.tar.gz/include/linux/maple_tree.h
Changed
@@ -171,6 +171,7 @@ #define MT_FLAGS_LOCK_IRQ 0x100 #define MT_FLAGS_LOCK_BH 0x200 #define MT_FLAGS_LOCK_EXTERN 0x300 +#define MT_FLAGS_ALLOC_WRAPPED 0x0800 #define MAPLE_HEIGHT_MAX 31 @@ -319,6 +320,9 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp, void *entry, unsigned long size, unsigned long min, unsigned long max, gfp_t gfp); +int mtree_alloc_cyclic(struct maple_tree *mt, unsigned long *startp, + void *entry, unsigned long range_lo, unsigned long range_hi, + unsigned long *next, gfp_t gfp); int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp, void *entry, unsigned long size, unsigned long min, unsigned long max, gfp_t gfp); @@ -499,6 +503,9 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min); void *mas_find_range_rev(struct ma_state *mas, unsigned long max); int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp); +int mas_alloc_cyclic(struct ma_state *mas, unsigned long *startp, + void *entry, unsigned long range_lo, unsigned long range_hi, + unsigned long *next, gfp_t gfp); bool mas_nomem(struct ma_state *mas, gfp_t gfp); void mas_pause(struct ma_state *mas);
_service:recompress:tar_scm:kernel.tar.gz/include/net/mana/mana.h
Changed
@@ -97,6 +97,8 @@ atomic_t pending_sends; + bool napi_initialized; + struct mana_stats_tx stats; }; @@ -274,6 +276,7 @@ /* NAPI data */ struct napi_struct napi; int work_done; + int work_done_since_doorbell; int budget; };
_service:recompress:tar_scm:kernel.tar.gz/include/trace/events/intel_ifs.h
Changed
@@ -10,31 +10,58 @@ TRACE_EVENT(ifs_status, - TP_PROTO(int cpu, int start, int stop, u64 status), + TP_PROTO(int batch, int start, int stop, u64 status), - TP_ARGS(cpu, start, stop, status), + TP_ARGS(batch, start, stop, status), TP_STRUCT__entry( + __field( int, batch ) __field( u64, status ) - __field( int, cpu ) __field( u16, start ) __field( u16, stop ) ), TP_fast_assign( - __entry->cpu = cpu; + __entry->batch = batch; __entry->start = start; __entry->stop = stop; __entry->status = status; ), - TP_printk("cpu: %d, start: %.4x, stop: %.4x, status: %.16llx", - __entry->cpu, + TP_printk("batch: 0x%.2x, start: 0x%.4x, stop: 0x%.4x, status: 0x%.16llx", + __entry->batch, __entry->start, __entry->stop, __entry->status) ); +TRACE_EVENT(ifs_sbaf, + + TP_PROTO(int batch, union ifs_sbaf activate, union ifs_sbaf_status status), + + TP_ARGS(batch, activate, status), + + TP_STRUCT__entry( + __field( u64, status ) + __field( int, batch ) + __field( u16, bundle ) + __field( u16, pgm ) + ), + + TP_fast_assign( + __entry->status = status.data; + __entry->batch = batch; + __entry->bundle = activate.bundle_idx; + __entry->pgm = activate.pgm_idx; + ), + + TP_printk("batch: 0x%.2x, bundle_idx: 0x%.4x, pgm_idx: 0x%.4x, status: 0x%.16llx", + __entry->batch, + __entry->bundle, + __entry->pgm, + __entry->status) +); + #endif /* _TRACE_IFS_H */ /* This part must be outside protection */
_service:recompress:tar_scm:kernel.tar.gz/kernel/bpf/core.c
Changed
@@ -2906,6 +2906,15 @@ return false; } +u64 __weak bpf_arch_uaddress_limit(void) +{ +#if defined(CONFIG_64BIT) && defined(CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE) + return TASK_SIZE; +#else + return 0; +#endif +} + /* To execute LD_ABS/LD_IND instructions __bpf_prog_run() may call * skb_copy_bits(), so provide a weak definition of it for NET-less config. */
_service:recompress:tar_scm:kernel.tar.gz/kernel/bpf/verifier.c
Changed
@@ -18942,6 +18942,36 @@ continue; } + /* Make it impossible to de-reference a userspace address */ + if (BPF_CLASS(insn->code) == BPF_LDX && + (BPF_MODE(insn->code) == BPF_PROBE_MEM || + BPF_MODE(insn->code) == BPF_PROBE_MEMSX)) { + struct bpf_insn *patch = &insn_buf[0]; + u64 uaddress_limit = bpf_arch_uaddress_limit(); + + if (!uaddress_limit) + continue; + + *patch++ = BPF_MOV64_REG(BPF_REG_AX, insn->src_reg); + if (insn->off) + *patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_AX, insn->off); + *patch++ = BPF_ALU64_IMM(BPF_RSH, BPF_REG_AX, 32); + *patch++ = BPF_JMP_IMM(BPF_JLE, BPF_REG_AX, uaddress_limit >> 32, 2); + *patch++ = *insn; + *patch++ = BPF_JMP_IMM(BPF_JA, 0, 0, 1); + *patch++ = BPF_MOV64_IMM(insn->dst_reg, 0); + + cnt = patch - insn_buf; + new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt); + if (!new_prog) + return -ENOMEM; + + delta += cnt - 1; + env->prog = prog = new_prog; + insn = new_prog->insnsi + i + delta; + continue; + } + /* Implement LD_ABS and LD_IND with a rewrite, if supported by the program type. */ if (BPF_CLASS(insn->code) == BPF_LD && (BPF_MODE(insn->code) == BPF_ABS ||
_service:recompress:tar_scm:kernel.tar.gz/kernel/locking/rtmutex.c
Changed
@@ -1624,6 +1624,7 @@ } static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock, + struct rt_mutex_base *lock, struct rt_mutex_waiter *w) { /* @@ -1636,10 +1637,10 @@ if (build_ww_mutex() && w->ww_ctx) return; - /* - * Yell loudly and stop the task right here. - */ + raw_spin_unlock_irq(&lock->wait_lock); + WARN(1, "rtmutex deadlock detected\n"); + while (1) { set_current_state(TASK_INTERRUPTIBLE); schedule(); @@ -1693,7 +1694,7 @@ } else { __set_current_state(TASK_RUNNING); remove_waiter(lock, waiter); - rt_mutex_handle_deadlock(ret, chwalk, waiter); + rt_mutex_handle_deadlock(ret, chwalk, lock, waiter); } /*
_service:recompress:tar_scm:kernel.tar.gz/kernel/sched/core.c
Changed
@@ -9711,6 +9711,22 @@ return 0; } +static inline void sched_smt_present_inc(int cpu) +{ +#ifdef CONFIG_SCHED_SMT + if (cpumask_weight(cpu_smt_mask(cpu)) == 2) + static_branch_inc_cpuslocked(&sched_smt_present); +#endif +} + +static inline void sched_smt_present_dec(int cpu) +{ +#ifdef CONFIG_SCHED_SMT + if (cpumask_weight(cpu_smt_mask(cpu)) == 2) + static_branch_dec_cpuslocked(&sched_smt_present); +#endif +} + int sched_cpu_activate(unsigned int cpu) { struct rq *rq = cpu_rq(cpu); @@ -9722,13 +9738,10 @@ */ balance_push_set(cpu, false); -#ifdef CONFIG_SCHED_SMT /* * When going up, increment the number of cores with SMT present. */ - if (cpumask_weight(cpu_smt_mask(cpu)) == 2) - static_branch_inc_cpuslocked(&sched_smt_present); -#endif + sched_smt_present_inc(cpu); set_cpu_active(cpu, true); #ifdef CONFIG_QOS_SCHED_SMART_GRID tg_update_affinity_domains(cpu, 1); @@ -9800,13 +9813,12 @@ } rq_unlock_irqrestore(rq, &rf); -#ifdef CONFIG_SCHED_SMT /* * When going down, decrement the number of cores with SMT present. */ - if (cpumask_weight(cpu_smt_mask(cpu)) == 2) - static_branch_dec_cpuslocked(&sched_smt_present); + sched_smt_present_dec(cpu); +#ifdef CONFIG_SCHED_SMT sched_core_cpu_deactivate(cpu); #endif @@ -9816,6 +9828,7 @@ sched_update_numa(cpu, false); ret = cpuset_cpu_inactive(cpu); if (ret) { + sched_smt_present_inc(cpu); balance_push_set(cpu, false); set_cpu_active(cpu, true); sched_update_numa(cpu, true);
_service:recompress:tar_scm:kernel.tar.gz/kernel/sched/fair.c
Changed
@@ -8962,12 +8962,27 @@ DEFINE_STATIC_KEY_FALSE(__dynamic_affinity_switch); -static int __init dynamic_affinity_switch_setup(char *__unused) +static int __init dynamic_affinity_switch_setup(char *str) { - static_branch_enable(&__dynamic_affinity_switch); - return 1; + int ret = 1; + + if (!str) + goto out; + + if (!strcmp(str, "enable")) + static_branch_enable(&__dynamic_affinity_switch); + else if (!strcmp(str, "disable")) + static_branch_disable(&__dynamic_affinity_switch); + else + ret = 0; + +out: + if (!ret) + pr_warn("Unable to parse dynamic_affinity=\n"); + + return ret; } -__setup("dynamic_affinity", dynamic_affinity_switch_setup); +__setup("dynamic_affinity=", dynamic_affinity_switch_setup); static inline bool prefer_cpus_valid(struct task_struct *p) {
_service:recompress:tar_scm:kernel.tar.gz/kernel/trace/trace_osnoise.c
Changed
@@ -228,6 +228,11 @@ return this_cpu_ptr(&per_cpu_osnoise_var); } +/* + * Protect the interface. + */ +static struct mutex interface_lock; + #ifdef CONFIG_TIMERLAT_TRACER /* * Runtime information for the timer mode. @@ -259,14 +264,20 @@ { struct timerlat_variables *tlat_var; int cpu; + + /* Synchronize with the timerlat interfaces */ + mutex_lock(&interface_lock); /* * So far, all the values are initialized as 0, so * zeroing the structure is perfect. */ for_each_cpu(cpu, cpu_online_mask) { tlat_var = per_cpu_ptr(&per_cpu_timerlat_var, cpu); + if (tlat_var->kthread) + hrtimer_cancel(&tlat_var->timer); memset(tlat_var, 0, sizeof(*tlat_var)); } + mutex_unlock(&interface_lock); } #else /* CONFIG_TIMERLAT_TRACER */ #define tlat_var_reset() do {} while (0) @@ -332,11 +343,6 @@ #endif /* - * Protect the interface. - */ -static struct mutex interface_lock; - -/* * Tracer data. */ static struct osnoise_data { @@ -1612,6 +1618,7 @@ static struct cpumask osnoise_cpumask; static struct cpumask save_cpumask; +static struct cpumask kthread_cpumask; /* * osnoise_sleep - sleep until the next period @@ -1675,6 +1682,7 @@ */ mutex_lock(&interface_lock); this_cpu_osn_var()->kthread = NULL; + cpumask_clear_cpu(smp_processor_id(), &kthread_cpumask); mutex_unlock(&interface_lock); return 1; @@ -1947,9 +1955,10 @@ kthread = per_cpu(per_cpu_osnoise_var, cpu).kthread; if (kthread) { - if (test_bit(OSN_WORKLOAD, &osnoise_options)) { + if (cpumask_test_and_clear_cpu(cpu, &kthread_cpumask) && + !WARN_ON(!test_bit(OSN_WORKLOAD, &osnoise_options))) { kthread_stop(kthread); - } else { + } else if (!WARN_ON(test_bit(OSN_WORKLOAD, &osnoise_options))) { /* * This is a user thread waiting on the timerlat_fd. 
We need * to close all users, and the best way to guarantee this is @@ -2021,6 +2030,7 @@ } per_cpu(per_cpu_osnoise_var, cpu).kthread = kthread; + cpumask_set_cpu(cpu, &kthread_cpumask); return 0; } @@ -2048,8 +2058,16 @@ */ cpumask_and(current_mask, cpu_online_mask, &osnoise_cpumask); - for_each_possible_cpu(cpu) + for_each_possible_cpu(cpu) { + if (cpumask_test_and_clear_cpu(cpu, &kthread_cpumask)) { + struct task_struct *kthread; + + kthread = per_cpu(per_cpu_osnoise_var, cpu).kthread; + if (!WARN_ON(!kthread)) + kthread_stop(kthread); + } per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL; + } for_each_cpu(cpu, current_mask) { retval = start_kthread(cpu); @@ -2579,7 +2597,8 @@ osn_var = per_cpu_ptr(&per_cpu_osnoise_var, cpu); tlat_var = per_cpu_ptr(&per_cpu_timerlat_var, cpu); - hrtimer_cancel(&tlat_var->timer); + if (tlat_var->kthread) + hrtimer_cancel(&tlat_var->timer); memset(tlat_var, 0, sizeof(*tlat_var)); osn_var->sampling = 0;
_service:recompress:tar_scm:kernel.tar.gz/lib/maple_tree.c
Changed
@@ -4337,6 +4337,56 @@ } +/** + * mas_alloc_cyclic() - Internal call to find somewhere to store an entry + * @mas: The maple state. + * @startp: Pointer to ID. + * @range_lo: Lower bound of range to search. + * @range_hi: Upper bound of range to search. + * @entry: The entry to store. + * @next: Pointer to next ID to allocate. + * @gfp: The GFP_FLAGS to use for allocations. + * + * Return: 0 if the allocation succeeded without wrapping, 1 if the + * allocation succeeded after wrapping, or -EBUSY if there are no + * free entries. + */ +int mas_alloc_cyclic(struct ma_state *mas, unsigned long *startp, + void *entry, unsigned long range_lo, unsigned long range_hi, + unsigned long *next, gfp_t gfp) +{ + unsigned long min = range_lo; + int ret = 0; + + range_lo = max(min, *next); + ret = mas_empty_area(mas, range_lo, range_hi, 1); + if ((mas->tree->ma_flags & MT_FLAGS_ALLOC_WRAPPED) && ret == 0) { + mas->tree->ma_flags &= ~MT_FLAGS_ALLOC_WRAPPED; + ret = 1; + } + if (ret < 0 && range_lo > min) { + ret = mas_empty_area(mas, min, range_hi, 1); + if (ret == 0) + ret = 1; + } + if (ret < 0) + return ret; + + do { + mas_insert(mas, entry); + } while (mas_nomem(mas, gfp)); + if (mas_is_err(mas)) + return xa_err(mas->node); + + *startp = mas->index; + *next = *startp + 1; + if (*next == 0) + mas->tree->ma_flags |= MT_FLAGS_ALLOC_WRAPPED; + + return ret; +} +EXPORT_SYMBOL(mas_alloc_cyclic); + static __always_inline void mas_rewalk(struct ma_state *mas, unsigned long index) { retry: @@ -6490,6 +6540,49 @@ } EXPORT_SYMBOL(mtree_alloc_range); +/** + * mtree_alloc_cyclic() - Find somewhere to store this entry in the tree. + * @mt: The maple tree. + * @startp: Pointer to ID. + * @range_lo: Lower bound of range to search. + * @range_hi: Upper bound of range to search. + * @entry: The entry to store. + * @next: Pointer to next ID to allocate. + * @gfp: The GFP_FLAGS to use for allocations. 
+ * + * Finds an empty entry in @mt after @next, stores the new index into + * the @id pointer, stores the entry at that index, then updates @next. + * + * @mt must be initialized with the MT_FLAGS_ALLOC_RANGE flag. + * + * Context: Any context. Takes and releases the mt.lock. May sleep if + * the @gfp flags permit. + * + * Return: 0 if the allocation succeeded without wrapping, 1 if the + * allocation succeeded after wrapping, -ENOMEM if memory could not be + * allocated, -EINVAL if @mt cannot be used, or -EBUSY if there are no + * free entries. + */ +int mtree_alloc_cyclic(struct maple_tree *mt, unsigned long *startp, + void *entry, unsigned long range_lo, unsigned long range_hi, + unsigned long *next, gfp_t gfp) +{ + int ret; + + MA_STATE(mas, mt, 0, 0); + + if (!mt_is_alloc(mt)) + return -EINVAL; + if (WARN_ON_ONCE(mt_is_reserved(entry))) + return -EINVAL; + mtree_lock(mt); + ret = mas_alloc_cyclic(&mas, startp, entry, range_lo, range_hi, + next, gfp); + mtree_unlock(mt); + return ret; +} +EXPORT_SYMBOL(mtree_alloc_cyclic); + int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp, void *entry, unsigned long size, unsigned long min, unsigned long max, gfp_t gfp)
_service:recompress:tar_scm:kernel.tar.gz/lib/test_maple_tree.c
Changed
@@ -3599,6 +3599,45 @@ mas_unlock(&mas); } +static noinline void __init alloc_cyclic_testing(struct maple_tree *mt) +{ + unsigned long location; + unsigned long next; + int ret = 0; + MA_STATE(mas, mt, 0, 0); + + next = 0; + mtree_lock(mt); + for (int i = 0; i < 100; i++) { + mas_alloc_cyclic(&mas, &location, mt, 2, ULONG_MAX, &next, GFP_KERNEL); + MAS_BUG_ON(&mas, i != location - 2); + MAS_BUG_ON(&mas, mas.index != location); + MAS_BUG_ON(&mas, mas.last != location); + MAS_BUG_ON(&mas, i != next - 3); + } + + mtree_unlock(mt); + mtree_destroy(mt); + next = 0; + mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE); + for (int i = 0; i < 100; i++) { + mtree_alloc_cyclic(mt, &location, mt, 2, ULONG_MAX, &next, GFP_KERNEL); + MT_BUG_ON(mt, i != location - 2); + MT_BUG_ON(mt, i != next - 3); + MT_BUG_ON(mt, mtree_load(mt, location) != mt); + } + + mtree_destroy(mt); + /* Overflow test */ + next = ULONG_MAX - 1; + ret = mtree_alloc_cyclic(mt, &location, mt, 2, ULONG_MAX, &next, GFP_KERNEL); + MT_BUG_ON(mt, ret != 0); + ret = mtree_alloc_cyclic(mt, &location, mt, 2, ULONG_MAX, &next, GFP_KERNEL); + MT_BUG_ON(mt, ret != 0); + ret = mtree_alloc_cyclic(mt, &location, mt, 2, ULONG_MAX, &next, GFP_KERNEL); + MT_BUG_ON(mt, ret != 1); +} + static DEFINE_MTREE(tree); static int __init maple_tree_seed(void) { @@ -3880,6 +3919,11 @@ check_state_handling(&tree); mtree_destroy(&tree); + mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); + alloc_cyclic_testing(&tree); + mtree_destroy(&tree); + + #if defined(BENCH) skip: #endif
_service:recompress:tar_scm:kernel.tar.gz/mm/memory.c
Changed
@@ -920,8 +920,11 @@ * We have a prealloc page, all good! Take it * over and copy the page & arm it. */ + + if (copy_mc_user_highpage(&new_folio->page, page, addr, src_vma)) + return -EHWPOISON; + *prealloc = NULL; - copy_user_highpage(&new_folio->page, page, addr, src_vma); __folio_mark_uptodate(new_folio); folio_add_new_anon_rmap(new_folio, dst_vma, addr, RMAP_EXCLUSIVE); folio_add_lru_vma(new_folio, dst_vma); @@ -1161,8 +1164,9 @@ /* * If we need a pre-allocated page for this pte, drop the * locks, allocate, and try again. + * If copy failed due to hwpoison in source page, break out. */ - if (unlikely(ret == -EAGAIN)) + if (unlikely(ret == -EAGAIN || ret == -EHWPOISON)) break; if (unlikely(prealloc)) { /* @@ -1192,7 +1196,7 @@ goto out; } entry.val = 0; - } else if (ret == -EBUSY) { + } else if (ret == -EBUSY || unlikely(ret == -EHWPOISON)) { goto out; } else if (ret == -EAGAIN) { prealloc = folio_prealloc(src_mm, src_vma, addr, false); @@ -5054,10 +5058,14 @@ if (ret & VM_FAULT_DONE_COW) return ret; - copy_user_highpage(vmf->cow_page, vmf->page, vmf->address, vma); + if (copy_mc_user_highpage(vmf->cow_page, vmf->page, vmf->address, vma)) { + ret = VM_FAULT_HWPOISON; + goto unlock; + } __folio_mark_uptodate(folio); ret |= finish_fault(vmf); +unlock: unlock_page(vmf->page); put_page(vmf->page); if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
View file
_service:recompress:tar_scm:kernel.tar.gz/mm/shmem.c
Changed
@@ -3543,7 +3543,7 @@ static int shmem_rmdir(struct inode *dir, struct dentry *dentry) { - if (!simple_empty(dentry)) + if (!simple_offset_empty(dentry)) return -ENOTEMPTY; drop_nlink(d_inode(dentry)); @@ -3600,7 +3600,7 @@ return simple_offset_rename_exchange(old_dir, old_dentry, new_dir, new_dentry); - if (!simple_empty(new_dentry)) + if (!simple_offset_empty(new_dentry)) return -ENOTEMPTY; if (flags & RENAME_WHITEOUT) {
View file
_service:recompress:tar_scm:kernel.tar.gz/mm/userfaultfd.c
Changed
@@ -704,21 +704,23 @@ } dst_pmdval = pmdp_get_lockless(dst_pmd); - /* - * If the dst_pmd is mapped as THP don't - * override it and just be strict. - */ - if (unlikely(pmd_trans_huge(dst_pmdval))) { - err = -EEXIST; - break; - } if (unlikely(pmd_none(dst_pmdval)) && unlikely(__pte_alloc(dst_mm, dst_pmd))) { err = -ENOMEM; break; } - /* If an huge pmd materialized from under us fail */ - if (unlikely(pmd_trans_huge(*dst_pmd))) { + dst_pmdval = pmdp_get_lockless(dst_pmd); + /* + * If the dst_pmd is THP don't override it and just be strict. + * (This includes the case where the PMD used to be THP and + * changed back to none after __pte_alloc().) + */ + if (unlikely(!pmd_present(dst_pmdval) || pmd_trans_huge(dst_pmdval) || + pmd_devmap(dst_pmdval))) { + err = -EEXIST; + break; + } + if (unlikely(pmd_bad(dst_pmdval))) { err = -EFAULT; break; }
View file
_service:recompress:tar_scm:kernel.tar.gz/net/can/bcm.c
Changed
@@ -1428,6 +1428,12 @@ /* remove device reference, if this is our bound device */ if (bo->bound && bo->ifindex == dev->ifindex) { +#if IS_ENABLED(CONFIG_PROC_FS) + if (sock_net(sk)->can.bcmproc_dir && bo->bcm_proc_read) { + remove_proc_entry(bo->procname, sock_net(sk)->can.bcmproc_dir); + bo->bcm_proc_read = NULL; + } +#endif bo->bound = 0; bo->ifindex = 0; notify_enodev = 1;
View file
_service:recompress:tar_scm:kernel.tar.gz/net/core/net-sysfs.c
Changed
@@ -216,7 +216,7 @@ if (!rtnl_trylock()) return restart_syscall(); - if (netif_running(netdev) && netif_device_present(netdev)) { + if (netif_running(netdev)) { struct ethtool_link_ksettings cmd; if (!__ethtool_get_link_ksettings(netdev, &cmd))
View file
_service:recompress:tar_scm:kernel.tar.gz/net/core/pktgen.c
Changed
@@ -170,6 +170,7 @@ #include <linux/uaccess.h> #include <asm/dma.h> #include <asm/div64.h> /* do_div */ +#include <linux/cpu.h> #define VERSION "2.75" #define IP_NAME_SZ 32 @@ -3605,7 +3606,7 @@ struct pktgen_dev *pkt_dev = NULL; int cpu = t->cpu; - WARN_ON(smp_processor_id() != cpu); + WARN_ON_ONCE(smp_processor_id() != cpu); init_waitqueue_head(&t->queue); complete(&t->start_done); @@ -3941,6 +3942,7 @@ goto remove; } + cpus_read_lock(); for_each_online_cpu(cpu) { int err; @@ -3949,6 +3951,7 @@ pr_warn("Cannot create thread for cpu %d (%d)\n", cpu, err); } + cpus_read_unlock(); if (list_empty(&pn->pktgen_threads)) { pr_err("Initialization failed for all threads\n");
View file
_service:recompress:tar_scm:kernel.tar.gz/net/ethtool/ioctl.c
Changed
@@ -438,6 +438,9 @@ if (!dev->ethtool_ops->get_link_ksettings) return -EOPNOTSUPP; + if (!netif_device_present(dev)) + return -ENODEV; + memset(link_ksettings, 0, sizeof(*link_ksettings)); return dev->ethtool_ops->get_link_ksettings(dev, link_ksettings); }
View file
_service:recompress:tar_scm:kernel.tar.gz/net/mptcp/pm_netlink.c
Changed
@@ -841,7 +841,7 @@ mptcp_close_ssk(sk, ssk, subflow); spin_lock_bh(&msk->pm.lock); - removed = true; + removed |= subflow->request_join; if (rm_type == MPTCP_MIB_RMSUBFLOW) __MPTCP_INC_STATS(sock_net(sk), rm_type); } @@ -855,7 +855,11 @@ if (!mptcp_pm_is_kernel(msk)) continue; - if (rm_type == MPTCP_MIB_RMADDR) { + if (rm_type == MPTCP_MIB_RMADDR && rm_id && + !WARN_ON_ONCE(msk->pm.add_addr_accepted == 0)) { + /* Note: if the subflow has been closed before, this + * add_addr_accepted counter will not be decremented. + */ msk->pm.add_addr_accepted--; WRITE_ONCE(msk->pm.accept_addr, true); } else if (rm_type == MPTCP_MIB_RMSUBFLOW) {
View file
_service:recompress:tar_scm:kernel.tar.gz/net/sched/sch_netem.c
Changed
@@ -446,12 +446,10 @@ struct netem_sched_data *q = qdisc_priv(sch); /* We don't fill cb now as skb_unshare() may invalidate it */ struct netem_skb_cb *cb; - struct sk_buff *skb2; + struct sk_buff *skb2 = NULL; struct sk_buff *segs = NULL; unsigned int prev_len = qdisc_pkt_len(skb); int count = 1; - int rc = NET_XMIT_SUCCESS; - int rc_drop = NET_XMIT_DROP; /* Do not fool qdisc_drop_all() */ skb->prev = NULL; @@ -480,19 +478,11 @@ skb_orphan_partial(skb); /* - * If we need to duplicate packet, then re-insert at top of the - * qdisc tree, since parent queuer expects that only one - * skb will be queued. + * If we need to duplicate packet, then clone it before + * original is modified. */ - if (count > 1 && (skb2 = skb_clone(skb, GFP_ATOMIC)) != NULL) { - struct Qdisc *rootq = qdisc_root_bh(sch); - u32 dupsave = q->duplicate; /* prevent duplicating a dup... */ - - q->duplicate = 0; - rootq->enqueue(skb2, rootq, to_free); - q->duplicate = dupsave; - rc_drop = NET_XMIT_SUCCESS; - } + if (count > 1) + skb2 = skb_clone(skb, GFP_ATOMIC); /* * Randomized packet corruption. @@ -504,7 +494,8 @@ if (skb_is_gso(skb)) { skb = netem_segment(skb, sch, to_free); if (!skb) - return rc_drop; + goto finish_segs; + segs = skb->next; skb_mark_not_on_list(skb); qdisc_skb_cb(skb)->pkt_len = skb->len; @@ -530,7 +521,24 @@ /* re-link segs, so that qdisc_drop_all() frees them all */ skb->next = segs; qdisc_drop_all(skb, sch, to_free); - return rc_drop; + if (skb2) + __qdisc_drop(skb2, to_free); + return NET_XMIT_DROP; + } + + /* + * If doing duplication then re-insert at top of the + * qdisc tree, since parent queuer expects that only one + * skb will be queued. + */ + if (skb2) { + struct Qdisc *rootq = qdisc_root_bh(sch); + u32 dupsave = q->duplicate; /* prevent duplicating a dup... */ + + q->duplicate = 0; + rootq->enqueue(skb2, rootq, to_free); + q->duplicate = dupsave; + skb2 = NULL; } qdisc_qstats_backlog_inc(sch, skb); @@ -601,9 +609,12 @@ } finish_segs: + if (skb2) + __qdisc_drop(skb2, to_free); + if (segs) { unsigned int len, last_len; - int nb; + int rc, nb; len = skb ? skb->len : 0; nb = skb ? 1 : 0;
View file
_service:recompress:tar_scm:kernel.tar.gz/security/apparmor/apparmorfs.c
Changed
@@ -1698,6 +1698,10 @@ struct aa_profile *p; p = aa_deref_parent(profile); dent = prof_dir(p); + if (!dent) { + error = -ENOENT; + goto fail2; + } /* adding to parent that previously didn't have children */ dent = aafs_create_dir("profiles", dent); if (IS_ERR(dent))
View file
_service:recompress:tar_scm:kernel.tar.gz/security/selinux/hooks.c
Changed
@@ -6543,8 +6543,8 @@ */ static int selinux_inode_setsecctx(struct dentry *dentry, void *ctx, u32 ctxlen) { - return __vfs_setxattr_noperm(&nop_mnt_idmap, dentry, XATTR_NAME_SELINUX, - ctx, ctxlen, 0); + return __vfs_setxattr_locked(&nop_mnt_idmap, dentry, XATTR_NAME_SELINUX, + ctx, ctxlen, 0, NULL); } static int selinux_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen)
View file
_service:recompress:tar_scm:kernel.tar.gz/security/smack/smack_lsm.c
Changed
@@ -4770,8 +4770,8 @@ static int smack_inode_setsecctx(struct dentry *dentry, void *ctx, u32 ctxlen) { - return __vfs_setxattr_noperm(&nop_mnt_idmap, dentry, XATTR_NAME_SMACK, - ctx, ctxlen, 0); + return __vfs_setxattr_locked(&nop_mnt_idmap, dentry, XATTR_NAME_SMACK, + ctx, ctxlen, 0, NULL); } static int smack_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen)
View file
_service:recompress:tar_scm:kernel.tar.gz/sound/soc/meson/axg-card.c
Changed
@@ -104,7 +104,7 @@ int *index) { struct meson_card *priv = snd_soc_card_get_drvdata(card); - struct snd_soc_dai_link *pad = &card->dai_link[*index]; + struct snd_soc_dai_link *pad; struct snd_soc_dai_link *lb; struct snd_soc_dai_link_component *dlc; int ret; @@ -114,6 +114,7 @@ if (ret) return ret; + pad = &card->dai_link[*index]; lb = &card->dai_link[*index + 1]; lb->name = devm_kasprintf(card->dev, GFP_KERNEL, "%s-lb", pad->name);
View file
_service:recompress:tar_scm:kernel.tar.gz/sound/soc/soc-dapm.c
Changed
@@ -4018,6 +4018,7 @@ case SND_SOC_DAPM_POST_PMD: kfree(substream->runtime); + substream->runtime = NULL; break; default:
View file
_service:recompress:tar_scm:kernel.tar.gz/tools/testing/selftests/Makefile
Changed
@@ -18,6 +18,7 @@ TARGETS += drivers/s390x/uvdevice TARGETS += drivers/net/bonding TARGETS += drivers/net/team +TARGETS += drivers/platform/x86/intel/ifs TARGETS += efivarfs TARGETS += exec TARGETS += fchmodat2
View file
_service:recompress:tar_scm:kernel.tar.gz/tools/testing/selftests/drivers/platform
Added
+(directory)
View file
_service:recompress:tar_scm:kernel.tar.gz/tools/testing/selftests/drivers/platform/x86
Added
+(directory)
View file
_service:recompress:tar_scm:kernel.tar.gz/tools/testing/selftests/drivers/platform/x86/intel
Added
+(directory)
View file
_service:recompress:tar_scm:kernel.tar.gz/tools/testing/selftests/drivers/platform/x86/intel/ifs
Added
+(directory)
View file
_service:recompress:tar_scm:kernel.tar.gz/tools/testing/selftests/drivers/platform/x86/intel/ifs/Makefile
Added
@@ -0,0 +1,6 @@ +# SPDX-License-Identifier: GPL-2.0 +# Makefile for ifs(In Field Scan) selftests + +TEST_PROGS := test_ifs.sh + +include ../../../../../lib.mk
View file
_service:recompress:tar_scm:kernel.tar.gz/tools/testing/selftests/drivers/platform/x86/intel/ifs/test_ifs.sh
Added
@@ -0,0 +1,494 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 +# +# Test the functionality of the Intel IFS(In Field Scan) driver. +# + +# Matched with kselftest framework: tools/testing/selftests/kselftest.h +readonly KSFT_PASS=0 +readonly KSFT_FAIL=1 +readonly KSFT_XFAIL=2 +readonly KSFT_SKIP=4 + +readonly CPU_SYSFS="/sys/devices/system/cpu" +readonly CPU_OFFLINE_SYSFS="${CPU_SYSFS}/offline" +readonly IMG_PATH="/lib/firmware/intel/ifs_0" +readonly IFS_SCAN_MODE="0" +readonly IFS_ARRAY_BIST_SCAN_MODE="1" +readonly IFS_PATH="/sys/devices/virtual/misc/intel_ifs" +readonly IFS_SCAN_SYSFS_PATH="${IFS_PATH}_${IFS_SCAN_MODE}" +readonly IFS_ARRAY_BIST_SYSFS_PATH="${IFS_PATH}_${IFS_ARRAY_BIST_SCAN_MODE}" +readonly RUN_TEST="run_test" +readonly STATUS="status" +readonly DETAILS="details" +readonly STATUS_PASS="pass" +readonly PASS="PASS" +readonly FAIL="FAIL" +readonly INFO="INFO" +readonly XFAIL="XFAIL" +readonly SKIP="SKIP" +readonly IFS_NAME="intel_ifs" +readonly ALL="all" +readonly SIBLINGS="siblings" + +# Matches arch/x86/include/asm/intel-family.h and +# drivers/platform/x86/intel/ifs/core.c requirement as follows +readonly SAPPHIRERAPIDS_X="8f" +readonly EMERALDRAPIDS_X="cf" + +readonly INTEL_FAM6="06" + +LOOP_TIMES=3 +FML="" +MODEL="" +STEPPING="" +CPU_FMS="" +TRUE="true" +FALSE="false" +RESULT=$KSFT_PASS +IMAGE_NAME="" +INTERVAL_TIME=1 +OFFLINE_CPUS="" +# For IFS cleanup tags +ORIGIN_IFS_LOADED="" +IFS_IMAGE_NEED_RESTORE=$FALSE +IFS_LOG="/tmp/ifs_logs.$$" +RANDOM_CPU="" +DEFAULT_IMG_ID="" + +append_log() +{ + echo -e "$1" | tee -a "$IFS_LOG" +} + +online_offline_cpu_list() +{ + local on_off=$1 + local target_cpus=$2 + local cpu="" + local cpu_start="" + local cpu_end="" + local i="" + + if [[ -n "$target_cpus" ]]; then + for cpu in $(echo "$target_cpus" | tr ',' ' '); do + if [[ "$cpu" == *"-"* ]]; then + cpu_start="" + cpu_end="" + i="" + cpu_start=$(echo "$cpu" | cut -d "-" -f 1) + cpu_end=$(echo "$cpu" | cut -d "-" -f 2) + for((i=cpu_start;i<=cpu_end;i++)); do + 
append_log "[$INFO] echo $on_off > \ +${CPU_SYSFS}/cpu${i}/online" + echo "$on_off" > "$CPU_SYSFS"/cpu"$i"/online + done + else + set_target_cpu "$on_off" "$cpu" + fi + done + fi +} + +ifs_scan_result_summary() +{ + local failed_info pass_num skip_num fail_num + + if [[ -e "$IFS_LOG" ]]; then + failed_info=$(grep ^"\[${FAIL}\]" "$IFS_LOG") + fail_num=$(grep -c ^"\[${FAIL}\]" "$IFS_LOG") + skip_num=$(grep -c ^"\[${SKIP}\]" "$IFS_LOG") + pass_num=$(grep -c ^"\[${PASS}\]" "$IFS_LOG") + + if [[ "$fail_num" -ne 0 ]]; then + RESULT=$KSFT_FAIL + echo "[$INFO] IFS test failure summary:" + echo "$failed_info" + elif [[ "$skip_num" -ne 0 ]]; then + RESULT=$KSFT_SKIP + fi + echo "[$INFO] IFS test pass:$pass_num, skip:$skip_num, fail:$fail_num" + else + echo "[$INFO] No file $IFS_LOG for IFS scan summary" + fi +} + +ifs_cleanup() +{ + echo "[$INFO] Restore environment after IFS test" + + # Restore ifs origin image if origin image backup step is needed + [[ "$IFS_IMAGE_NEED_RESTORE" == "$TRUE" ]] && { + mv -f "$IMG_PATH"/"$IMAGE_NAME"_origin "$IMG_PATH"/"$IMAGE_NAME" + } + + # Restore the CPUs to the state before testing + [[ -z "$OFFLINE_CPUS" ]] || online_offline_cpu_list "0" "$OFFLINE_CPUS" + + lsmod | grep -q "$IFS_NAME" && [[ "$ORIGIN_IFS_LOADED" == "$FALSE" ]] && { + echo "[$INFO] modprobe -r $IFS_NAME" + modprobe -r "$IFS_NAME" + } + + ifs_scan_result_summary + [[ -e "$IFS_LOG" ]] && rm -rf "$IFS_LOG" + + echo "[RESULT] IFS test exit with $RESULT" + exit "$RESULT" +} + +do_cmd() +{ + local cmd=$* + local ret="" + + append_log "[$INFO] $cmd" + eval "$cmd" + ret=$? + if [[ $ret -ne 0 ]]; then + append_log "[$FAIL] $cmd failed. 
Return code is $ret" + RESULT=$KSFT_XFAIL + ifs_cleanup + fi +} + +test_exit() +{ + local info=$1 + RESULT=$2 + + declare -A EXIT_MAP + EXIT_MAP[$KSFT_PASS]=$PASS + EXIT_MAP[$KSFT_FAIL]=$FAIL + EXIT_MAP[$KSFT_XFAIL]=$XFAIL + EXIT_MAP[$KSFT_SKIP]=$SKIP + + append_log "[${EXIT_MAP[$RESULT]}] $info" + ifs_cleanup +} + +online_all_cpus() +{ + local off_cpus="" + + OFFLINE_CPUS=$(cat "$CPU_OFFLINE_SYSFS") + online_offline_cpu_list "1" "$OFFLINE_CPUS" + + off_cpus=$(cat "$CPU_OFFLINE_SYSFS") + if [[ -z "$off_cpus" ]]; then + append_log "[$INFO] All CPUs are online." + else + append_log "[$XFAIL] There is offline cpu:$off_cpus after online all cpu!" + RESULT=$KSFT_XFAIL + ifs_cleanup + fi +} + +get_cpu_fms() +{ + FML=$(grep -m 1 "family" /proc/cpuinfo | awk -F ":" '{printf "%02x",$2;}') + MODEL=$(grep -m 1 "model" /proc/cpuinfo | awk -F ":" '{printf "%02x",$2;}') + STEPPING=$(grep -m 1 "stepping" /proc/cpuinfo | awk -F ":" '{printf "%02x",$2;}') + CPU_FMS="${FML}-${MODEL}-${STEPPING}" +} + +check_cpu_ifs_support_interval_time() +{ + get_cpu_fms + + if [[ "$FML" != "$INTEL_FAM6" ]]; then
View file
_service:tar_scm:SOURCE
Changed
@@ -1,1 +1,1 @@ -6.6.0-44.0.0 +6.6.0-46.0.0