openEuler:Mainline / snappy
Note: the diffs of some files were truncated because they were too big.
Changes of Revision 5
View file
_service:tar_scm:snappy.spec
Changed
@@ -1,6 +1,6 @@
 Name: snappy
-Version: 1.1.9
-Release: 2
+Version: 1.1.10
+Release: 1
 Summary: A fast compressor/decompressor
 License: BSD
 URL: https://github.com/google/snappy
@@ -8,7 +8,6 @@
 Source1: snappy.pc
 Patch0: remove-dependency-on-google-benchmark-and-gmock.patch
-Patch1: fix-the-AdvanceToNextTag-fails-to-be-compiled-without-inline.patch
 Patch2: add-option-to-enable-rtti-set-default-to-current-ben.patch
 BuildRequires: gcc-c++ make gtest-devel cmake
@@ -70,6 +69,11 @@
 %doc NEWS README.md
 
 %changelog
+* Mon Jul 3 2023 dillon chen<dillon.chen@gmail.com> - 1.1.10-1
+- update version to 1.1.10
+- Removed patch1(inline.patch) as it's no longer required.
+- repatch patch2(snappy-stubs-internal.h)
+
 * Wed Jun 22 2022 wangzengliang<wangzengliang1@huawei.com> - 1.1.9-2
 - DESC: add option to enable rtti set default to current
View file
_service:tar_scm:add-option-to-enable-rtti-set-default-to-current-ben.patch
Changed
@@ -61,12 +61,11 @@ @@ -100,7 +100,7 @@ // Inlining hints. - #ifdef HAVE_ATTRIBUTE_ALWAYS_INLINE + #if HAVE_ATTRIBUTE_ALWAYS_INLINE -#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE __attribute__((always_inline)) +#define SNAPPY_ATTRIBUTE_ALWAYS_INLINE #else #define SNAPPY_ATTRIBUTE_ALWAYS_INLINE - #endif + #endif // HAVE_ATTRIBUTE_ALWAYS_INLINE -- 2.24.4 -
View file
_service:tar_scm:fix-the-AdvanceToNextTag-fails-to-be-compiled-without-inline.patch
Deleted
@@ -1,26 +0,0 @@ -From 581af0c0a819da2214466e4d30416616966e781d Mon Sep 17 00:00:00 2001 -From: hanxinke <hanxinke@huawei.com> -Date: Tue, 7 Dec 2021 15:47:14 +0800 -Subject: PATCH fix the AdvanceToNextTag fails to be compiled without inline - -Signed-off-by: hanxinke <hanxinke@huawei.com> ---- - snappy.cc | 2 +- - 1 file changed, 1 insertion(+), 1 deletion(-) - -diff --git a/snappy.cc b/snappy.cc -index 79dc0e8..51157be 100644 ---- a/snappy.cc -+++ b/snappy.cc -@@ -1014,7 +1014,7 @@ void MemMove(ptrdiff_t dst, const void* src, size_t size) { - } - - SNAPPY_ATTRIBUTE_ALWAYS_INLINE --size_t AdvanceToNextTag(const uint8_t** ip_p, size_t* tag) { -+inline size_t AdvanceToNextTag(const uint8_t** ip_p, size_t* tag) { - const uint8_t*& ip = *ip_p; - // This section is crucial for the throughput of the decompression loop. - // The latency of an iteration is fundamentally constrained by the --- -1.8.3.1 -
View file
_service:tar_scm:snappy-1.1.10.tar.gz/.github
Added
+(directory)
View file
_service:tar_scm:snappy-1.1.10.tar.gz/.github/workflows
Added
+(directory)
View file
_service:tar_scm:snappy-1.1.10.tar.gz/.github/workflows/build.yml
Added
@@ -0,0 +1,135 @@ +# Copyright 2021 Google Inc. All Rights Reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above +# copyright notice, this list of conditions and the following disclaimer +# in the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Google Inc. nor the names of its +# contributors may be used to endorse or promote products derived from +# this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +name: ci +on: push, pull_request + +permissions: + contents: read + +jobs: + build-and-test: + name: >- + CI + ${{ matrix.os }} + ${{ matrix.cpu_level }} + ${{ matrix.compiler }} + ${{ matrix.optimized && 'release' || 'debug' }} + runs-on: ${{ matrix.os }} + strategy: + fail-fast: false + matrix: + compiler: clang, gcc, msvc + os: ubuntu-latest, macos-latest, windows-latest + cpu_level: baseline, avx, avx2 + optimized: true, false + exclude: + # MSVC only works on Windows. + - os: ubuntu-latest + compiler: msvc + - os: macos-latest + compiler: msvc + # GitHub servers seem to run on pre-Haswell CPUs. Attempting to use AVX2 + # results in crashes. + - os: macos-latest + cpu_level: avx2 + # Not testing with GCC on macOS. + - os: macos-latest + compiler: gcc + # Only testing with MSVC on Windows. 
+ - os: windows-latest + compiler: clang + - os: windows-latest + compiler: gcc + include: + - compiler: clang + CC: clang + CXX: clang++ + - compiler: gcc + CC: gcc + CXX: g++ + - compiler: msvc + CC: + CXX: + + env: + CMAKE_BUILD_DIR: ${{ github.workspace }}/build + CMAKE_BUILD_TYPE: ${{ matrix.optimized && 'RelWithDebInfo' || 'Debug' }} + CC: ${{ matrix.CC }} + CXX: ${{ matrix.CXX }} + SNAPPY_REQUIRE_AVX: ${{ matrix.cpu_level == 'baseline' && '0' || '1' }} + SNAPPY_REQUIRE_AVX2: ${{ matrix.cpu_level == 'avx2' && '1' || '0' }} + SNAPPY_FUZZING_BUILD: >- + ${{ (startsWith(matrix.os, 'ubuntu') && matrix.compiler == 'clang' && + !matrix.optimized) && '1' || '0' }} + BINARY_SUFFIX: ${{ startsWith(matrix.os, 'windows') && '.exe' || '' }} + BINARY_PATH: >- + ${{ format( + startsWith(matrix.os, 'windows') && '{0}\build\{1}\' || '{0}/build/', + github.workspace, + matrix.optimized && 'RelWithDebInfo' || 'Debug') }} + + steps: + - uses: actions/checkout@v2 + with: + submodules: true + + - name: Generate build config + run: >- + cmake -S "${{ github.workspace }}" -B "${{ env.CMAKE_BUILD_DIR }}" + -DCMAKE_BUILD_TYPE=${{ env.CMAKE_BUILD_TYPE }} + -DCMAKE_INSTALL_PREFIX=${{ runner.temp }}/install_test/ + -DSNAPPY_FUZZING_BUILD=${{ env.SNAPPY_FUZZING_BUILD }} + -DSNAPPY_REQUIRE_AVX=${{ env.SNAPPY_REQUIRE_AVX }} + -DSNAPPY_REQUIRE_AVX2=${{ env.SNAPPY_REQUIRE_AVX2 }} + + - name: Build + run: >- + cmake --build "${{ env.CMAKE_BUILD_DIR }}" + --config "${{ env.CMAKE_BUILD_TYPE }}" + + - name: Run C++ API Tests + run: ${{ env.BINARY_PATH }}snappy_unittest${{ env.BINARY_SUFFIX }} + + - name: Run Compression Fuzzer + if: ${{ env.SNAPPY_FUZZING_BUILD == '1' }} + run: >- + ${{ env.BINARY_PATH }}snappy_compress_fuzzer${{ env.BINARY_SUFFIX }} + -runs=1000 -close_fd_mask=3 + + - name: Run Decompression Fuzzer + if: ${{ env.SNAPPY_FUZZING_BUILD == '1' }} + run: >- + ${{ env.BINARY_PATH }}snappy_uncompress_fuzzer${{ env.BINARY_SUFFIX }} + -runs=1000 -close_fd_mask=3 + + - name: Run Benchmarks + run: ${{ env.BINARY_PATH }}snappy_benchmark${{ env.BINARY_SUFFIX }} + + - name: Test CMake installation + run: cmake --build "${{ env.CMAKE_BUILD_DIR }}" --target install
View file
_service:tar_scm:snappy-1.1.9.tar.gz/CMakeLists.txt -> _service:tar_scm:snappy-1.1.10.tar.gz/CMakeLists.txt
Changed
@@ -27,7 +27,7 @@
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 cmake_minimum_required(VERSION 3.1)
-project(Snappy VERSION 1.1.9 LANGUAGES C CXX)
+project(Snappy VERSION 1.1.10 LANGUAGES C CXX)
 
 # C++ standard can be overridden when this is used as a sub-project.
 if(NOT CMAKE_CXX_STANDARD)
@@ -175,9 +175,31 @@
 check_cxx_source_compiles("
 #include <immintrin.h>
 int main() {
+  return _mm_crc32_u32(0, 1);
+}" SNAPPY_HAVE_X86_CRC32)
+
+check_cxx_source_compiles("
+#include <arm_neon.h>
+#include <arm_acle.h>
+int main() {
+  return __crc32cw(0, 1);
+}" SNAPPY_HAVE_NEON_CRC32)
+
+check_cxx_source_compiles("
+#include <immintrin.h>
+int main() {
   return _bzhi_u32(0, 1);
 }" SNAPPY_HAVE_BMI2)
 
+check_cxx_source_compiles("
+#include <arm_neon.h>
+int main() {
+  uint8_t val = 3, dup[8];
+  uint8x16_t v = vld1q_dup_u8(&val);
+  vst1q_u8(dup, v);
+  return 0;
+}" SNAPPY_HAVE_NEON)
+
 include(CheckSymbolExists)
 check_symbol_exists("mmap" "sys/mman.h" HAVE_FUNC_MMAP)
 check_symbol_exists("sysconf" "unistd.h" HAVE_FUNC_SYSCONF)
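The new compile probes above only set CMake variables (SNAPPY_HAVE_X86_CRC32, SNAPPY_HAVE_NEON_CRC32, SNAPPY_HAVE_NEON); they reach the C++ sources through the #cmakedefine01 entries added to cmake/config.h.in further down. A minimal illustrative sketch, not taken from the Snappy sources, of how such a probe-driven macro is typically consumed (the helper name and fallback constant are made up for illustration):

// Sketch: consuming a CMake-detected feature macro. SNAPPY_HAVE_X86_CRC32 is
// assumed to be defined to 0 or 1 by the generated config header, mirroring
// the check added above.
#include <cstdint>

#if defined(SNAPPY_HAVE_X86_CRC32) && SNAPPY_HAVE_X86_CRC32
#include <immintrin.h>  // _mm_crc32_u32 (SSE4.2)
#endif

// Mixes 4 input bytes into a hash value, using the hardware CRC32C
// instruction when the build detected it and a multiplicative hash otherwise.
inline uint32_t MixBytes(uint32_t bytes) {
#if defined(SNAPPY_HAVE_X86_CRC32) && SNAPPY_HAVE_X86_CRC32
  return _mm_crc32_u32(bytes, 0u);
#else
  return (0x1e35a7bdu * bytes) >> 16;  // arbitrary multiplicative fallback
#endif
}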
View file
_service:tar_scm:snappy-1.1.9.tar.gz/CONTRIBUTING.md -> _service:tar_scm:snappy-1.1.10.tar.gz/CONTRIBUTING.md
Changed
@@ -3,30 +3,10 @@ We'd love to accept your patches and contributions to this project. There are just a few small guidelines you need to follow. -## Project Goals - -In addition to the aims listed at the top of the README(README.md) Snappy -explicitly supports the following: - -1. C++11 -2. Clang (gcc and MSVC are best-effort). -3. Low level optimizations (e.g. assembly or equivalent intrinsics) for: - 1. x86(https://en.wikipedia.org/wiki/X86) - 2. x86-64(https://en.wikipedia.org/wiki/X86-64) - 3. ARMv7 (32-bit) - 4. ARMv8 (AArch64) -4. Supports only the Snappy compression scheme as described in - format_description.txt(format_description.txt). -5. CMake for building - -Changes adding features or dependencies outside of the core area of focus listed -above might not be accepted. If in doubt post a message to the -Snappy discussion mailing list(https://groups.google.com/g/snappy-compression). - ## Contributor License Agreement Contributions to this project must be accompanied by a Contributor License -Agreement. You (or your employer) retain the copyright to your contribution, +Agreement. You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project. Head over to <https://cla.developers.google.com/> to see your current agreements on file or to sign a new one. @@ -35,12 +15,17 @@ (even if it was for a different project), you probably don't need to do it again. -## Code reviews +## Code Reviews All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help(https://help.github.com/articles/about-pull-requests/) for more information on using pull requests. -Please make sure that all the automated checks (CLA, AppVeyor, Travis) pass for -your pull requests. Pull requests whose checks fail may be ignored. +See the README(README.md#contributing-to-the-snappy-project) for areas +where we are likely to accept external contributions. + +## Community Guidelines + +This project follows Google's Open Source Community +Guidelines(https://opensource.google/conduct/).
View file
_service:tar_scm:snappy-1.1.9.tar.gz/NEWS -> _service:tar_scm:snappy-1.1.10.tar.gz/NEWS
Changed
@@ -1,3 +1,9 @@
+Snappy v1.1.10, Mar 8th 2023:
+
+ * Performance improvements
+
+ * Compilation fixes for various environments
+
 Snappy v1.1.9, May 4th 2021:
 
  * Performance improvements.
View file
_service:tar_scm:snappy-1.1.9.tar.gz/README.md -> _service:tar_scm:snappy-1.1.10.tar.gz/README.md
Changed
@@ -1,7 +1,6 @@ Snappy, a fast compressor/decompressor. -!Build Status(https://travis-ci.org/google/snappy.svg?branch=master)(https://travis-ci.org/google/snappy) -!Build status(https://ci.appveyor.com/api/projects/status/t9nubcqkwo8rw8yn/branch/master?svg=true)(https://ci.appveyor.com/project/pwnall/leveldb) +!Build Status(https://github.com/google/snappy/actions/workflows/build.yml/badge.svg)(https://github.com/google/snappy/actions/workflows/build.yml) Introduction ============ @@ -132,6 +131,32 @@ baddata1-3.snappy are not intended as benchmarks; they are used to verify correctness in the presence of corrupted data in the unit test.) +Contributing to the Snappy Project +================================== + +In addition to the aims listed at the top of the README(README.md) Snappy +explicitly supports the following: + +1. C++11 +2. Clang (gcc and MSVC are best-effort). +3. Low level optimizations (e.g. assembly or equivalent intrinsics) for: + 1. x86(https://en.wikipedia.org/wiki/X86) + 2. x86-64(https://en.wikipedia.org/wiki/X86-64) + 3. ARMv7 (32-bit) + 4. ARMv8 (AArch64) +4. Supports only the Snappy compression scheme as described in + format_description.txt(format_description.txt). +5. CMake for building + +Changes adding features or dependencies outside of the core area of focus listed +above might not be accepted. If in doubt post a message to the +Snappy discussion mailing list(https://groups.google.com/g/snappy-compression). + +We are unlikely to accept contributions to the build configuration files, such +as `CMakeLists.txt`. We are focused on maintaining a build configuration that +allows us to test that the project works in a few supported configurations +inside Google. We are not currently interested in supporting other requirements, +such as different operating systems, compilers, or build systems. Contact =======
View file
_service:tar_scm:snappy-1.1.9.tar.gz/cmake/config.h.in -> _service:tar_scm:snappy-1.1.10.tar.gz/cmake/config.h.in
Changed
@@ -2,55 +2,65 @@ #define THIRD_PARTY_SNAPPY_OPENSOURCE_CMAKE_CONFIG_H_ /* Define to 1 if the compiler supports __attribute__((always_inline)). */ -#cmakedefine HAVE_ATTRIBUTE_ALWAYS_INLINE 1 +#cmakedefine01 HAVE_ATTRIBUTE_ALWAYS_INLINE /* Define to 1 if the compiler supports __builtin_ctz and friends. */ -#cmakedefine HAVE_BUILTIN_CTZ 1 +#cmakedefine01 HAVE_BUILTIN_CTZ /* Define to 1 if the compiler supports __builtin_expect. */ -#cmakedefine HAVE_BUILTIN_EXPECT 1 +#cmakedefine01 HAVE_BUILTIN_EXPECT /* Define to 1 if you have a definition for mmap() in <sys/mman.h>. */ -#cmakedefine HAVE_FUNC_MMAP 1 +#cmakedefine01 HAVE_FUNC_MMAP /* Define to 1 if you have a definition for sysconf() in <unistd.h>. */ -#cmakedefine HAVE_FUNC_SYSCONF 1 +#cmakedefine01 HAVE_FUNC_SYSCONF /* Define to 1 if you have the `lzo2' library (-llzo2). */ -#cmakedefine HAVE_LIBLZO2 1 +#cmakedefine01 HAVE_LIBLZO2 /* Define to 1 if you have the `z' library (-lz). */ -#cmakedefine HAVE_LIBZ 1 +#cmakedefine01 HAVE_LIBZ /* Define to 1 if you have the `lz4' library (-llz4). */ -#cmakedefine HAVE_LIBLZ4 1 +#cmakedefine01 HAVE_LIBLZ4 /* Define to 1 if you have the <sys/mman.h> header file. */ -#cmakedefine HAVE_SYS_MMAN_H 1 +#cmakedefine01 HAVE_SYS_MMAN_H /* Define to 1 if you have the <sys/resource.h> header file. */ -#cmakedefine HAVE_SYS_RESOURCE_H 1 +#cmakedefine01 HAVE_SYS_RESOURCE_H /* Define to 1 if you have the <sys/time.h> header file. */ -#cmakedefine HAVE_SYS_TIME_H 1 +#cmakedefine01 HAVE_SYS_TIME_H /* Define to 1 if you have the <sys/uio.h> header file. */ -#cmakedefine HAVE_SYS_UIO_H 1 +#cmakedefine01 HAVE_SYS_UIO_H /* Define to 1 if you have the <unistd.h> header file. */ -#cmakedefine HAVE_UNISTD_H 1 +#cmakedefine01 HAVE_UNISTD_H /* Define to 1 if you have the <windows.h> header file. */ -#cmakedefine HAVE_WINDOWS_H 1 +#cmakedefine01 HAVE_WINDOWS_H /* Define to 1 if you target processors with SSSE3+ and have <tmmintrin.h>. */ #cmakedefine01 SNAPPY_HAVE_SSSE3 +/* Define to 1 if you target processors with SSE4.2 and have <crc32intrin.h>. */ +#cmakedefine01 SNAPPY_HAVE_X86_CRC32 + /* Define to 1 if you target processors with BMI2+ and have <bmi2intrin.h>. */ #cmakedefine01 SNAPPY_HAVE_BMI2 +/* Define to 1 if you target processors with NEON and have <arm_neon.h>. */ +#cmakedefine01 SNAPPY_HAVE_NEON + +/* Define to 1 if you have <arm_neon.h> and <arm_acle.h> and want to optimize + compression speed by using __crc32cw from <arm_acle.h>. */ +#cmakedefine01 SNAPPY_HAVE_NEON_CRC32 + /* Define to 1 if your processor stores words with the most significant byte first (like Motorola and SPARC, unlike Intel and VAX). */ -#cmakedefine SNAPPY_IS_BIG_ENDIAN 1 +#cmakedefine01 SNAPPY_IS_BIG_ENDIAN #endif // THIRD_PARTY_SNAPPY_OPENSOURCE_CMAKE_CONFIG_H_
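The switch from "#cmakedefine FOO 1" to "#cmakedefine01 FOO" is what allows the source changes in the following files to replace "#ifdef FOO" with "#if FOO": every macro is now always defined, to either 0 or 1. A small illustrative sketch of the difference, using hypothetical consumer code rather than anything from the package:

// With "#cmakedefine HAVE_SYS_MMAN_H 1", the macro is either defined to 1 or
// not defined at all, so it must be tested with #ifdef or #if defined().
// With "#cmakedefine01 HAVE_SYS_MMAN_H", the macro is always defined (to 0 or
// 1), so plain "#if HAVE_SYS_MMAN_H" works and a misspelled macro name is
// caught by -Wundef instead of silently selecting the #else branch.
#include "config.h"  // assumed to be generated from config.h.in by CMake

#if HAVE_SYS_MMAN_H
#include <sys/mman.h>
#endif

int HasMmap() {
#if HAVE_SYS_MMAN_H
  return 1;  // mmap()-based code paths can be compiled in
#else
  return 0;  // portable fallback
#endif
}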
View file
_service:tar_scm:snappy-1.1.9.tar.gz/snappy-internal.h -> _service:tar_scm:snappy-1.1.10.tar.gz/snappy-internal.h
Changed
@@ -33,9 +33,84 @@ #include "snappy-stubs-internal.h" +#if SNAPPY_HAVE_SSSE3 +// Please do not replace with <x86intrin.h> or with headers that assume more +// advanced SSE versions without checking with all the OWNERS. +#include <emmintrin.h> +#include <tmmintrin.h> +#endif + +#if SNAPPY_HAVE_NEON +#include <arm_neon.h> +#endif + +#if SNAPPY_HAVE_SSSE3 || SNAPPY_HAVE_NEON +#define SNAPPY_HAVE_VECTOR_BYTE_SHUFFLE 1 +#else +#define SNAPPY_HAVE_VECTOR_BYTE_SHUFFLE 0 +#endif + namespace snappy { namespace internal { +#if SNAPPY_HAVE_VECTOR_BYTE_SHUFFLE +#if SNAPPY_HAVE_SSSE3 +using V128 = __m128i; +#elif SNAPPY_HAVE_NEON +using V128 = uint8x16_t; +#endif + +// Load 128 bits of integer data. `src` must be 16-byte aligned. +inline V128 V128_Load(const V128* src); + +// Load 128 bits of integer data. `src` does not need to be aligned. +inline V128 V128_LoadU(const V128* src); + +// Store 128 bits of integer data. `dst` does not need to be aligned. +inline void V128_StoreU(V128* dst, V128 val); + +// Shuffle packed 8-bit integers using a shuffle mask. +// Each packed integer in the shuffle mask must be in 0,16). +inline V128 V128_Shuffle(V128 input, V128 shuffle_mask); + +// Constructs V128 with 16 chars |c|. +inline V128 V128_DupChar(char c); + +#if SNAPPY_HAVE_SSSE3 +inline V128 V128_Load(const V128* src) { return _mm_load_si128(src); } + +inline V128 V128_LoadU(const V128* src) { return _mm_loadu_si128(src); } + +inline void V128_StoreU(V128* dst, V128 val) { _mm_storeu_si128(dst, val); } + +inline V128 V128_Shuffle(V128 input, V128 shuffle_mask) { + return _mm_shuffle_epi8(input, shuffle_mask); +} + +inline V128 V128_DupChar(char c) { return _mm_set1_epi8(c); } + +#elif SNAPPY_HAVE_NEON +inline V128 V128_Load(const V128* src) { + return vld1q_u8(reinterpret_cast<const uint8_t*>(src)); +} + +inline V128 V128_LoadU(const V128* src) { + return vld1q_u8(reinterpret_cast<const uint8_t*>(src)); +} + +inline void V128_StoreU(V128* dst, V128 val) { + vst1q_u8(reinterpret_cast<uint8_t*>(dst), val); +} + +inline V128 V128_Shuffle(V128 input, V128 shuffle_mask) { + assert(vminvq_u8(shuffle_mask) >= 0 && vmaxvq_u8(shuffle_mask) <= 15); + return vqtbl1q_u8(input, shuffle_mask); +} + +inline V128 V128_DupChar(char c) { return vdupq_n_u8(c); } +#endif +#endif // SNAPPY_HAVE_VECTOR_BYTE_SHUFFLE + // Working memory performs a single allocation to hold all scratch space // required for compression. class WorkingMemory { @@ -95,8 +170,9 @@ // loading from s2 + n. // // Separate implementation for 64-bit, little-endian cpus. -#if !defined(SNAPPY_IS_BIG_ENDIAN) && \ - (defined(__x86_64__) || defined(_M_X64) || defined(ARCH_PPC) || defined(ARCH_ARM)) +#if !SNAPPY_IS_BIG_ENDIAN && \ + (defined(__x86_64__) || defined(_M_X64) || defined(ARCH_PPC) || \ + defined(ARCH_ARM)) static inline std::pair<size_t, bool> FindMatchLength(const char* s1, const char* s2, const char* s2_limit, @@ -154,8 +230,9 @@ uint64_t xorval = a1 ^ a2; int shift = Bits::FindLSBSetNonZero64(xorval); size_t matched_bytes = shift >> 3; + uint64_t a3 = UNALIGNED_LOAD64(s2 + 4); #ifndef __x86_64__ - *data = UNALIGNED_LOAD64(s2 + matched_bytes); + a2 = static_cast<uint32_t>(xorval) == 0 ? a3 : a2; #else // Ideally this would just be // @@ -166,13 +243,13 @@ // use a conditional move (it's tuned to cut data dependencies). In this // case there is a longer parallel chain anyway AND this will be fairly // unpredictable. 
- uint64_t a3 = UNALIGNED_LOAD64(s2 + 4); asm("testl %k2, %k2\n\t" "cmovzq %1, %0\n\t" : "+r"(a2) - : "r"(a3), "r"(xorval)); - *data = a2 >> (shift & (3 * 8)); + : "r"(a3), "r"(xorval) + : "cc"); #endif + *data = a2 >> (shift & (3 * 8)); return std::pair<size_t, bool>(matched_bytes, true); } else { matched = 8; @@ -194,16 +271,17 @@ uint64_t xorval = a1 ^ a2; int shift = Bits::FindLSBSetNonZero64(xorval); size_t matched_bytes = shift >> 3; + uint64_t a3 = UNALIGNED_LOAD64(s2 + 4); #ifndef __x86_64__ - *data = UNALIGNED_LOAD64(s2 + matched_bytes); + a2 = static_cast<uint32_t>(xorval) == 0 ? a3 : a2; #else - uint64_t a3 = UNALIGNED_LOAD64(s2 + 4); asm("testl %k2, %k2\n\t" "cmovzq %1, %0\n\t" : "+r"(a2) - : "r"(a3), "r"(xorval)); - *data = a2 >> (shift & (3 * 8)); + : "r"(a3), "r"(xorval) + : "cc"); #endif + *data = a2 >> (shift & (3 * 8)); matched += matched_bytes; assert(matched >= 8); return std::pair<size_t, bool>(matched, false);
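Besides hoisting the UNALIGNED_LOAD64(s2 + 4) load out of the architecture-specific branch, the hunk above adds a "cc" clobber to the inline assembly. A standalone sketch of that pattern, assuming x86-64 with GCC/Clang extended asm (the wrapper function and its name are illustrative only):

// The "testl" instruction writes the EFLAGS register, so the asm statement
// must declare "cc" as clobbered; without it the compiler may assume flags it
// computed before the asm block are still valid afterwards.
#include <cstdint>

inline uint64_t SelectIfLowWordZero(uint64_t fallback, uint64_t replacement,
                                    uint64_t selector) {
  uint64_t result = fallback;
  asm("testl %k2, %k2\n\t"  // sets ZF from the low 32 bits of `selector`
      "cmovzq %1, %0\n\t"   // result = replacement when ZF is set
      : "+r"(result)
      : "r"(replacement), "r"(selector)
      : "cc");              // EFLAGS is clobbered by testl
  return result;
}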
View file
_service:tar_scm:snappy-1.1.9.tar.gz/snappy-stubs-internal.h -> _service:tar_scm:snappy-1.1.10.tar.gz/snappy-stubs-internal.h
Changed
@@ -31,7 +31,7 @@ #ifndef THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_INTERNAL_H_ #define THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_INTERNAL_H_ -#ifdef HAVE_CONFIG_H +#if HAVE_CONFIG_H #include "config.h" #endif @@ -43,11 +43,11 @@ #include <limits> #include <string> -#ifdef HAVE_SYS_MMAN_H +#if HAVE_SYS_MMAN_H #include <sys/mman.h> #endif -#ifdef HAVE_UNISTD_H +#if HAVE_UNISTD_H #include <unistd.h> #endif @@ -90,20 +90,20 @@ #define ARRAYSIZE(a) int{sizeof(a) / sizeof(*(a))} // Static prediction hints. -#ifdef HAVE_BUILTIN_EXPECT +#if HAVE_BUILTIN_EXPECT #define SNAPPY_PREDICT_FALSE(x) (__builtin_expect(x, 0)) #define SNAPPY_PREDICT_TRUE(x) (__builtin_expect(!!(x), 1)) #else #define SNAPPY_PREDICT_FALSE(x) x #define SNAPPY_PREDICT_TRUE(x) x -#endif +#endif // HAVE_BUILTIN_EXPECT // Inlining hints. -#ifdef HAVE_ATTRIBUTE_ALWAYS_INLINE +#if HAVE_ATTRIBUTE_ALWAYS_INLINE #define SNAPPY_ATTRIBUTE_ALWAYS_INLINE __attribute__((always_inline)) #else #define SNAPPY_ATTRIBUTE_ALWAYS_INLINE -#endif +#endif // HAVE_ATTRIBUTE_ALWAYS_INLINE // Stubbed version of ABSL_FLAG. // @@ -171,27 +171,42 @@ public: // Functions to do unaligned loads and stores in little-endian order. static inline uint16_t Load16(const void *ptr) { - const uint8_t* const buffer = reinterpret_cast<const uint8_t*>(ptr); - // Compiles to a single mov/str on recent clang and gcc. +#if SNAPPY_IS_BIG_ENDIAN + const uint8_t* const buffer = reinterpret_cast<const uint8_t*>(ptr); return (static_cast<uint16_t>(buffer0)) | (static_cast<uint16_t>(buffer1) << 8); +#else + // memcpy() turns into a single instruction early in the optimization + // pipeline (relatively to a series of byte accesses). So, using memcpy + // instead of byte accesses may lead to better decisions in more stages of + // the optimization pipeline. + uint16_t value; + std::memcpy(&value, ptr, 2); + return value; +#endif } static inline uint32_t Load32(const void *ptr) { - const uint8_t* const buffer = reinterpret_cast<const uint8_t*>(ptr); - // Compiles to a single mov/str on recent clang and gcc. +#if SNAPPY_IS_BIG_ENDIAN + const uint8_t* const buffer = reinterpret_cast<const uint8_t*>(ptr); return (static_cast<uint32_t>(buffer0)) | (static_cast<uint32_t>(buffer1) << 8) | (static_cast<uint32_t>(buffer2) << 16) | (static_cast<uint32_t>(buffer3) << 24); +#else + // See Load16() for the rationale of using memcpy(). + uint32_t value; + std::memcpy(&value, ptr, 4); + return value; +#endif } static inline uint64_t Load64(const void *ptr) { - const uint8_t* const buffer = reinterpret_cast<const uint8_t*>(ptr); - // Compiles to a single mov/str on recent clang and gcc. +#if SNAPPY_IS_BIG_ENDIAN + const uint8_t* const buffer = reinterpret_cast<const uint8_t*>(ptr); return (static_cast<uint64_t>(buffer0)) | (static_cast<uint64_t>(buffer1) << 8) | (static_cast<uint64_t>(buffer2) << 16) | @@ -200,30 +215,44 @@ (static_cast<uint64_t>(buffer5) << 40) | (static_cast<uint64_t>(buffer6) << 48) | (static_cast<uint64_t>(buffer7) << 56); +#else + // See Load16() for the rationale of using memcpy(). + uint64_t value; + std::memcpy(&value, ptr, 8); + return value; +#endif } static inline void Store16(void *dst, uint16_t value) { - uint8_t* const buffer = reinterpret_cast<uint8_t*>(dst); - // Compiles to a single mov/str on recent clang and gcc. +#if SNAPPY_IS_BIG_ENDIAN + uint8_t* const buffer = reinterpret_cast<uint8_t*>(dst); buffer0 = static_cast<uint8_t>(value); buffer1 = static_cast<uint8_t>(value >> 8); +#else + // See Load16() for the rationale of using memcpy(). 
+ std::memcpy(dst, &value, 2); +#endif } static void Store32(void *dst, uint32_t value) { - uint8_t* const buffer = reinterpret_cast<uint8_t*>(dst); - // Compiles to a single mov/str on recent clang and gcc. +#if SNAPPY_IS_BIG_ENDIAN + uint8_t* const buffer = reinterpret_cast<uint8_t*>(dst); buffer0 = static_cast<uint8_t>(value); buffer1 = static_cast<uint8_t>(value >> 8); buffer2 = static_cast<uint8_t>(value >> 16); buffer3 = static_cast<uint8_t>(value >> 24); +#else + // See Load16() for the rationale of using memcpy(). + std::memcpy(dst, &value, 4); +#endif } static void Store64(void* dst, uint64_t value) { - uint8_t* const buffer = reinterpret_cast<uint8_t*>(dst); - // Compiles to a single mov/str on recent clang and gcc. +#if SNAPPY_IS_BIG_ENDIAN + uint8_t* const buffer = reinterpret_cast<uint8_t*>(dst); buffer0 = static_cast<uint8_t>(value); buffer1 = static_cast<uint8_t>(value >> 8); buffer2 = static_cast<uint8_t>(value >> 16); @@ -232,14 +261,18 @@ buffer5 = static_cast<uint8_t>(value >> 40); buffer6 = static_cast<uint8_t>(value >> 48); buffer7 = static_cast<uint8_t>(value >> 56); +#else + // See Load16() for the rationale of using memcpy(). + std::memcpy(dst, &value, 8); +#endif } static inline constexpr bool IsLittleEndian() { -#if defined(SNAPPY_IS_BIG_ENDIAN) +#if SNAPPY_IS_BIG_ENDIAN return false; #else return true; -#endif // defined(SNAPPY_IS_BIG_ENDIAN) +#endif // SNAPPY_IS_BIG_ENDIAN } }; @@ -265,7 +298,7 @@ void operator=(const Bits&); }; -#if defined(HAVE_BUILTIN_CTZ) +#if HAVE_BUILTIN_CTZ inline int Bits::Log2FloorNonZero(uint32_t n) { assert(n != 0); @@ -354,7 +387,7 @@ #endif // End portable versions. -#if defined(HAVE_BUILTIN_CTZ) +#if HAVE_BUILTIN_CTZ inline int Bits::FindLSBSetNonZero64(uint64_t n) { assert(n != 0); @@ -388,7 +421,7 @@ } } -#endif // End portable version. +#endif // HAVE_BUILTIN_CTZ // Variable-length integer encoding. class Varint {
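The little-endian fast paths introduced above rely on a well-known property of std::memcpy with a compile-time-constant size. A minimal standalone sketch of the idiom, with illustrative helper names rather than the package's own API:

// std::memcpy with a constant size is recognized by GCC, Clang, and MSVC and
// collapses to a single (possibly unaligned) load or store, while avoiding
// the undefined behaviour of dereferencing a misaligned uint32_t*.
#include <cstdint>
#include <cstring>

inline uint32_t Load32LE(const void* ptr) {
  uint32_t value;
  std::memcpy(&value, ptr, sizeof(value));  // one mov/ldr after optimization
  return value;
}

inline void Store32LE(void* dst, uint32_t value) {
  std::memcpy(dst, &value, sizeof(value));
}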
View file
_service:tar_scm:snappy-1.1.9.tar.gz/snappy-test.cc -> _service:tar_scm:snappy-1.1.10.tar.gz/snappy-test.cc
Changed
@@ -151,7 +151,7 @@
 #pragma warning(pop)
 #endif
 
-#ifdef HAVE_LIBZ
+#if HAVE_LIBZ
 
 ZLib::ZLib()
     : comp_init_(false),
View file
_service:tar_scm:snappy-1.1.9.tar.gz/snappy-test.h -> _service:tar_scm:snappy-1.1.10.tar.gz/snappy-test.h
Changed
@@ -31,25 +31,25 @@ #ifndef THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_TEST_H_ #define THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_TEST_H_ -#ifdef HAVE_CONFIG_H +#if HAVE_CONFIG_H #include "config.h" #endif #include "snappy-stubs-internal.h" -#ifdef HAVE_SYS_MMAN_H +#if HAVE_SYS_MMAN_H #include <sys/mman.h> #endif -#ifdef HAVE_SYS_RESOURCE_H +#if HAVE_SYS_RESOURCE_H #include <sys/resource.h> #endif -#ifdef HAVE_SYS_TIME_H +#if HAVE_SYS_TIME_H #include <sys/time.h> #endif -#ifdef HAVE_WINDOWS_H +#if HAVE_WINDOWS_H // Needed to be able to use std::max without workarounds in the source code. // https://support.microsoft.com/en-us/help/143208/prb-using-stl-in-windows-program-can-cause-min-max-conflicts #define NOMINMAX @@ -58,15 +58,15 @@ #define InitGoogle(argv0, argc, argv, remove_flags) ((void)(0)) -#ifdef HAVE_LIBZ +#if HAVE_LIBZ #include "zlib.h" #endif -#ifdef HAVE_LIBLZO2 +#if HAVE_LIBLZO2 #include "lzo/lzo1x.h" #endif -#ifdef HAVE_LIBLZ4 +#if HAVE_LIBLZ4 #include "lz4.h" #endif @@ -216,7 +216,7 @@ #define CHECK_GT(a, b) CRASH_UNLESS((a) > (b)) #define CHECK_OK(cond) (cond).ok() -#ifdef HAVE_LIBZ +#if HAVE_LIBZ // Object-oriented wrapper around zlib. class ZLib {
View file
_service:tar_scm:snappy-1.1.9.tar.gz/snappy.cc -> _service:tar_scm:snappy-1.1.10.tar.gz/snappy.cc
Changed
@@ -29,18 +29,6 @@ #include "snappy-internal.h" #include "snappy-sinksource.h" #include "snappy.h" - -#if !defined(SNAPPY_HAVE_SSSE3) -// __SSSE3__ is defined by GCC and Clang. Visual Studio doesn't target SIMD -// support between SSE2 and AVX (so SSSE3 instructions require AVX support), and -// defines __AVX__ when AVX support is available. -#if defined(__SSSE3__) || defined(__AVX__) -#define SNAPPY_HAVE_SSSE3 1 -#else -#define SNAPPY_HAVE_SSSE3 0 -#endif -#endif // !defined(SNAPPY_HAVE_SSSE3) - #if !defined(SNAPPY_HAVE_BMI2) // __BMI2__ is defined by GCC and Clang. Visual Studio doesn't target BMI2 // specifically, but it does define __AVX2__ when AVX2 support is available. @@ -56,16 +44,34 @@ #endif #endif // !defined(SNAPPY_HAVE_BMI2) -#if SNAPPY_HAVE_SSSE3 -// Please do not replace with <x86intrin.h>. or with headers that assume more -// advanced SSE versions without checking with all the OWNERS. -#include <tmmintrin.h> +#if !defined(SNAPPY_HAVE_X86_CRC32) +#if defined(__SSE4_2__) +#define SNAPPY_HAVE_X86_CRC32 1 +#else +#define SNAPPY_HAVE_X86_CRC32 0 #endif +#endif // !defined(SNAPPY_HAVE_X86_CRC32) -#if SNAPPY_HAVE_BMI2 +#if !defined(SNAPPY_HAVE_NEON_CRC32) +#if SNAPPY_HAVE_NEON && defined(__ARM_FEATURE_CRC32) +#define SNAPPY_HAVE_NEON_CRC32 1 +#else +#define SNAPPY_HAVE_NEON_CRC32 0 +#endif +#endif // !defined(SNAPPY_HAVE_NEON_CRC32) + +#if SNAPPY_HAVE_BMI2 || SNAPPY_HAVE_X86_CRC32 // Please do not replace with <x86intrin.h>. or with headers that assume more // advanced SSE versions without checking with all the OWNERS. #include <immintrin.h> +#elif SNAPPY_HAVE_NEON_CRC32 +#include <arm_acle.h> +#endif + +#if defined(__GNUC__) +#define SNAPPY_PREFETCH(ptr) __builtin_prefetch(ptr, 0, 3) +#else +#define SNAPPY_PREFETCH(ptr) (void)(ptr) #endif #include <algorithm> @@ -91,6 +97,14 @@ using internal::COPY_4_BYTE_OFFSET; using internal::kMaximumTagLength; using internal::LITERAL; +#if SNAPPY_HAVE_VECTOR_BYTE_SHUFFLE +using internal::V128; +using internal::V128_Load; +using internal::V128_LoadU; +using internal::V128_Shuffle; +using internal::V128_StoreU; +using internal::V128_DupChar; +#endif // We translate the information encoded in a tag through a lookup table to a // format that requires fewer instructions to decode. Effectively we store @@ -133,21 +147,37 @@ return std::array<int16_t, 256>{LengthMinusOffset(seq)...}; } -// We maximally co-locate the two tables so that only one register needs to be -// reserved for the table address. -struct { - alignas(64) const std::array<int16_t, 256> length_minus_offset; - uint32_t extract_masks4; // Used for extracting offset based on tag type. -} table = {MakeTable(make_index_sequence<256>{}), {0, 0xFF, 0xFFFF, 0}}; - -// Any hash function will produce a valid compressed bitstream, but a good -// hash function reduces the number of collisions and thus yields better -// compression for compressible input, and more speed for incompressible -// input. Of course, it doesn't hurt if the hash function is reasonably fast -// either, as it gets called a lot. -inline uint32_t HashBytes(uint32_t bytes, uint32_t mask) { +alignas(64) const std::array<int16_t, 256> kLengthMinusOffset = + MakeTable(make_index_sequence<256>{}); + +// Given a table of uint16_t whose size is mask / 2 + 1, return a pointer to the +// relevant entry, if any, for the given bytes. Any hash function will do, +// but a good hash function reduces the number of collisions and thus yields +// better compression for compressible input. 
+// +// REQUIRES: mask is 2 * (table_size - 1), and table_size is a power of two. +inline uint16_t* TableEntry(uint16_t* table, uint32_t bytes, uint32_t mask) { + // Our choice is quicker-and-dirtier than the typical hash function; + // empirically, that seems beneficial. The upper bits of kMagic * bytes are a + // higher-quality hash than the lower bits, so when using kMagic * bytes we + // also shift right to get a higher-quality end result. There's no similar + // issue with a CRC because all of the output bits of a CRC are equally good + // "hashes." So, a CPU instruction for CRC, if available, tends to be a good + // choice. +#if SNAPPY_HAVE_NEON_CRC32 + // We use mask as the second arg to the CRC function, as it's about to + // be used anyway; it'd be equally correct to use 0 or some constant. + // Mathematically, _mm_crc32_u32 (or similar) is a function of the + // xor of its arguments. + const uint32_t hash = __crc32cw(bytes, mask); +#elif SNAPPY_HAVE_X86_CRC32 + const uint32_t hash = _mm_crc32_u32(bytes, mask); +#else constexpr uint32_t kMagic = 0x1e35a7bd; - return ((kMagic * bytes) >> (32 - kMaxHashTableBits)) & mask; + const uint32_t hash = (kMagic * bytes) >> (31 - kMaxHashTableBits); +#endif + return reinterpret_cast<uint16_t*>(reinterpret_cast<uintptr_t>(table) + + (hash & mask)); } } // namespace @@ -228,7 +258,7 @@ return op_limit; } -#if SNAPPY_HAVE_SSSE3 +#if SNAPPY_HAVE_VECTOR_BYTE_SHUFFLE // Computes the bytes for shuffle control mask (please read comments on // 'pattern_generation_masks' as well) for the given index_offset and @@ -248,19 +278,19 @@ // Computes the shuffle control mask bytes array for given pattern-sizes and // returns an array. template <size_t... pattern_sizes_minus_one> -inline constexpr std::array<std::array<char, sizeof(__m128i)>, +inline constexpr std::array<std::array<char, sizeof(V128)>, sizeof...(pattern_sizes_minus_one)> MakePatternMaskBytesTable(int index_offset, index_sequence<pattern_sizes_minus_one...>) { - return {MakePatternMaskBytes( - index_offset, pattern_sizes_minus_one + 1, - make_index_sequence</*indexes=*/sizeof(__m128i)>())...}; + return { + MakePatternMaskBytes(index_offset, pattern_sizes_minus_one + 1, + make_index_sequence</*indexes=*/sizeof(V128)>())...}; } // This is an array of shuffle control masks that can be used as the source // operand for PSHUFB to permute the contents of the destination XMM register // into a repeating byte pattern. -alignas(16) constexpr std::array<std::array<char, sizeof(__m128i)>, +alignas(16) constexpr std::array<std::array<char, sizeof(V128)>, 16> pattern_generation_masks = MakePatternMaskBytesTable( /*index_offset=*/0, @@ -271,40 +301,40 @@ // Basically, pattern_reshuffle_masks is a continuation of // pattern_generation_masks. It follows that, pattern_reshuffle_masks is same as // pattern_generation_masks for offsets 1, 2, 4, 8 and 16. 
-alignas(16) constexpr std::array<std::array<char, sizeof(__m128i)>, +alignas(16) constexpr std::array<std::array<char, sizeof(V128)>, 16> pattern_reshuffle_masks = MakePatternMaskBytesTable( /*index_offset=*/16, /*pattern_sizes_minus_one=*/make_index_sequence<16>()); SNAPPY_ATTRIBUTE_ALWAYS_INLINE -static inline __m128i LoadPattern(const char* src, const size_t pattern_size) { - __m128i generation_mask = _mm_load_si128(reinterpret_cast<const __m128i*>( +static inline V128 LoadPattern(const char* src, const size_t pattern_size) { + V128 generation_mask = V128_Load(reinterpret_cast<const V128*>( pattern_generation_maskspattern_size - 1.data())); // Uninitialized bytes are masked out by the shuffle mask. // TODO: remove annotation and macro defs once MSan is fixed. SNAPPY_ANNOTATE_MEMORY_IS_INITIALIZED(src + pattern_size, 16 - pattern_size); - return _mm_shuffle_epi8( - _mm_loadu_si128(reinterpret_cast<const __m128i*>(src)), generation_mask); + return V128_Shuffle(V128_LoadU(reinterpret_cast<const V128*>(src)), + generation_mask); } SNAPPY_ATTRIBUTE_ALWAYS_INLINE -static inline std::pair<__m128i /* pattern */, __m128i /* reshuffle_mask */> +static inline std::pair<V128 /* pattern */, V128 /* reshuffle_mask */> LoadPatternAndReshuffleMask(const char* src, const size_t pattern_size) { - __m128i pattern = LoadPattern(src, pattern_size); + V128 pattern = LoadPattern(src, pattern_size); // This mask will generate the next 16 bytes in-place. Doing so enables us to - // write data by at most 4 _mm_storeu_si128. + // write data by at most 4 V128_StoreU. // // For example, suppose pattern is: abcdefabcdefabcd // Shuffling with this mask will generate: efabcdefabcdefab // Shuffling again will generate: cdefabcdefabcdef
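The V128_* helpers referenced throughout this hunk wrap a 128-bit byte shuffle so the same pattern-replication code compiles to PSHUFB on SSSE3 and to a table lookup (vqtbl1q_u8) on NEON. A minimal sketch of the underlying operation, assuming an SSSE3-capable x86 build; the helper below is illustrative and not the library's code:

// Broadcasts a 2-byte pattern across 16 output bytes with one byte shuffle,
// the building block used when expanding a short back-reference pattern.
#include <tmmintrin.h>  // _mm_shuffle_epi8 (SSSE3)
#include <cstring>

inline void Fill16WithPattern2(const char src[2], char out[16]) {
  // Shuffle mask 0,1,0,1,... repeats the first two input bytes.
  const __m128i mask = _mm_setr_epi8(0, 1, 0, 1, 0, 1, 0, 1,
                                     0, 1, 0, 1, 0, 1, 0, 1);
  __m128i input = _mm_setzero_si128();
  std::memcpy(&input, src, 2);  // load the 2-byte pattern
  const __m128i v = _mm_shuffle_epi8(input, mask);
  _mm_storeu_si128(reinterpret_cast<__m128i*>(out), v);
}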
View file
_service:tar_scm:snappy-1.1.9.tar.gz/snappy.h -> _service:tar_scm:snappy-1.1.10.tar.gz/snappy.h
Changed
@@ -71,14 +71,21 @@
   // Higher-level string based routines (should be sufficient for most users)
   // ------------------------------------------------------------------------
 
-  // Sets "*compressed" to the compressed version of "input[0,input_length-1]".
+  // Sets "*compressed" to the compressed version of "input[0..input_length-1]".
   // Original contents of *compressed are lost.
   //
   // REQUIRES: "input" is not an alias of "*compressed".
   size_t Compress(const char* input, size_t input_length,
                   std::string* compressed);
 
-  // Decompresses "compressed[0,compressed_length-1]" to "*uncompressed".
+  // Same as `Compress` above but taking an `iovec` array as input. Note that
+  // this function preprocesses the inputs to compute the sum of
+  // `iov[0..iov_cnt-1].iov_len` before reading. To avoid this, use
+  // `RawCompressFromIOVec` below.
+  size_t CompressFromIOVec(const struct iovec* iov, size_t iov_cnt,
+                           std::string* compressed);
+
+  // Decompresses "compressed[0..compressed_length-1]" to "*uncompressed".
   // Original contents of "*uncompressed" are lost.
   //
   // REQUIRES: "compressed" is not an alias of "*uncompressed".
@@ -124,6 +131,12 @@
                    char* compressed, size_t* compressed_length);
 
+  // Same as `RawCompress` above but taking an `iovec` array as input. Note that
+  // `uncompressed_length` is the total number of bytes to be read from the
+  // elements of `iov` (_not_ the number of elements in `iov`).
+  void RawCompressFromIOVec(const struct iovec* iov, size_t uncompressed_length,
+                            char* compressed, size_t* compressed_length);
+
   // Given data in "compressed[0..compressed_length-1]" generated by
   // calling the Snappy::Compress routine, this routine
   // stores the uncompressed data to
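A short usage sketch of the CompressFromIOVec entry point declared above; the buffers are made up for illustration, and the call signatures follow the declarations shown in this hunk:

#include <sys/uio.h>  // struct iovec
#include <cassert>
#include <string>

#include "snappy.h"

int main() {
  std::string part1 = "hello, ";
  std::string part2 = "scattered snappy input";

  // Describe the two non-contiguous pieces with an iovec array.
  struct iovec iov[2];
  iov[0].iov_base = const_cast<char*>(part1.data());
  iov[0].iov_len = part1.size();
  iov[1].iov_base = const_cast<char*>(part2.data());
  iov[1].iov_len = part2.size();

  // Gather-compress both pieces without concatenating them first.
  std::string compressed;
  snappy::CompressFromIOVec(iov, 2, &compressed);

  // Round-trip to check the result.
  std::string uncompressed;
  snappy::Uncompress(compressed.data(), compressed.size(), &uncompressed);
  assert(uncompressed == part1 + part2);
  return 0;
}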
View file
_service:tar_scm:snappy-1.1.9.tar.gz/snappy_benchmark.cc -> _service:tar_scm:snappy-1.1.10.tar.gz/snappy_benchmark.cc
Changed
@@ -149,7 +149,55 @@ } BENCHMARK(BM_UValidateMedley); -void BM_UIOVec(benchmark::State& state) { +void BM_UIOVecSource(benchmark::State& state) { + // Pick file to process based on state.range(0). + int file_index = state.range(0); + + CHECK_GE(file_index, 0); + CHECK_LT(file_index, ARRAYSIZE(kTestDataFiles)); + std::string contents = + ReadTestDataFile(kTestDataFilesfile_index.filename, + kTestDataFilesfile_index.size_limit); + + // Create `iovec`s of the `contents`. + const int kNumEntries = 10; + struct iovec iovkNumEntries; + size_t used_so_far = 0; + for (int i = 0; i < kNumEntries; ++i) { + iovi.iov_base = const_cast<char*>(contents.data()) + used_so_far; + if (used_so_far == contents.size()) { + iovi.iov_len = 0; + continue; + } + if (i == kNumEntries - 1) { + iovi.iov_len = contents.size() - used_so_far; + } else { + iovi.iov_len = contents.size() / kNumEntries; + } + used_so_far += iovi.iov_len; + } + + char* dst = new charsnappy::MaxCompressedLength(contents.size()); + size_t zsize = 0; + for (auto s : state) { + snappy::RawCompressFromIOVec(iov, contents.size(), dst, &zsize); + benchmark::DoNotOptimize(iov); + } + state.SetBytesProcessed(static_cast<int64_t>(state.iterations()) * + static_cast<int64_t>(contents.size())); + const double compression_ratio = + static_cast<double>(zsize) / std::max<size_t>(1, contents.size()); + state.SetLabel(StrFormat("%s (%.2f %%)", kTestDataFilesfile_index.label, + 100.0 * compression_ratio)); + VLOG(0) << StrFormat("compression for %s: %d -> %d bytes", + kTestDataFilesfile_index.label, contents.size(), + zsize); + + delete dst; +} +BENCHMARK(BM_UIOVecSource)->DenseRange(0, ARRAYSIZE(kTestDataFiles) - 1); + +void BM_UIOVecSink(benchmark::State& state) { // Pick file to process based on state.range(0). int file_index = state.range(0); @@ -193,7 +241,7 @@ delete dst; } -BENCHMARK(BM_UIOVec)->DenseRange(0, 4); +BENCHMARK(BM_UIOVecSink)->DenseRange(0, 4); void BM_UFlatSink(benchmark::State& state) { // Pick file to process based on state.range(0).
View file
_service:tar_scm:snappy-1.1.9.tar.gz/snappy_test_tool.cc -> _service:tar_scm:snappy-1.1.10.tar.gz/snappy_test_tool.cc
Changed
@@ -66,7 +66,7 @@
 
 namespace {
 
-#if defined(HAVE_FUNC_MMAP) && defined(HAVE_FUNC_SYSCONF)
+#if HAVE_FUNC_MMAP && HAVE_FUNC_SYSCONF
 
 // To test against code that reads beyond its input, this class copies a
 // string to a newly allocated group of pages, the last of which
@@ -112,7 +112,7 @@
   size_t size_;
 };
 
-#else  // defined(HAVE_FUNC_MMAP) && defined(HAVE_FUNC_SYSCONF)
+#else  // HAVE_FUNC_MMAP && HAVE_FUNC_SYSCONF
 
 // Fallback for systems without mmap.
 using DataEndingAtUnreadablePage = std::string;
View file
_service:tar_scm:snappy-1.1.9.tar.gz/snappy_unittest.cc -> _service:tar_scm:snappy-1.1.10.tar.gz/snappy_unittest.cc
Changed
@@ -50,7 +50,7 @@ namespace { -#if defined(HAVE_FUNC_MMAP) && defined(HAVE_FUNC_SYSCONF) +#if HAVE_FUNC_MMAP && HAVE_FUNC_SYSCONF // To test against code that reads beyond its input, this class copies a // string to a newly allocated group of pages, the last of which @@ -96,7 +96,7 @@ size_t size_; }; -#else // defined(HAVE_FUNC_MMAP) && defined(HAVE_FUNC_SYSCONF) +#else // HAVE_FUNC_MMAP) && HAVE_FUNC_SYSCONF // Fallback for systems without mmap. using DataEndingAtUnreadablePage = std::string; @@ -137,21 +137,10 @@ CHECK_EQ(uncompressed, input); } -void VerifyIOVec(const std::string& input) { - std::string compressed; - DataEndingAtUnreadablePage i(input); - const size_t written = snappy::Compress(i.data(), i.size(), &compressed); - CHECK_EQ(written, compressed.size()); - CHECK_LE(compressed.size(), - snappy::MaxCompressedLength(input.size())); - CHECK(snappy::IsValidCompressedBuffer(compressed.data(), compressed.size())); - - // Try uncompressing into an iovec containing a random number of entries - // ranging from 1 to 10. - char* buf = new charinput.size(); +struct iovec* GetIOVec(const std::string& input, char*& buf, size_t& num) { std::minstd_rand0 rng(input.size()); std::uniform_int_distribution<size_t> uniform_1_to_10(1, 10); - size_t num = uniform_1_to_10(rng); + num = uniform_1_to_10(rng); if (input.size() < num) { num = input.size(); } @@ -175,8 +164,40 @@ } used_so_far += iovi.iov_len; } - CHECK(snappy::RawUncompressToIOVec( - compressed.data(), compressed.size(), iov, num)); + return iov; +} + +int VerifyIOVecSource(const std::string& input) { + std::string compressed; + std::string copy = input; + char* buf = const_cast<char*>(copy.data()); + size_t num = 0; + struct iovec* iov = GetIOVec(input, buf, num); + const size_t written = snappy::CompressFromIOVec(iov, num, &compressed); + CHECK_EQ(written, compressed.size()); + CHECK_LE(compressed.size(), snappy::MaxCompressedLength(input.size())); + CHECK(snappy::IsValidCompressedBuffer(compressed.data(), compressed.size())); + + std::string uncompressed; + DataEndingAtUnreadablePage c(compressed); + CHECK(snappy::Uncompress(c.data(), c.size(), &uncompressed)); + CHECK_EQ(uncompressed, input); + delete iov; + return uncompressed.size(); +} + +void VerifyIOVecSink(const std::string& input) { + std::string compressed; + DataEndingAtUnreadablePage i(input); + const size_t written = snappy::Compress(i.data(), i.size(), &compressed); + CHECK_EQ(written, compressed.size()); + CHECK_LE(compressed.size(), snappy::MaxCompressedLength(input.size())); + CHECK(snappy::IsValidCompressedBuffer(compressed.data(), compressed.size())); + char* buf = new charinput.size(); + size_t num = 0; + struct iovec* iov = GetIOVec(input, buf, num); + CHECK(snappy::RawUncompressToIOVec(compressed.data(), compressed.size(), iov, + num)); CHECK(!memcmp(buf, input.data(), input.size())); delete iov; delete buf; @@ -252,15 +273,18 @@ // Compress using string based routines const int result = VerifyString(input); + // Compress using `iovec`-based routines. 
+ CHECK_EQ(VerifyIOVecSource(input), result); + // Verify using sink based routines VerifyStringSink(input); VerifyNonBlockedCompression(input); - VerifyIOVec(input); + VerifyIOVecSink(input); if (!input.empty()) { const std::string expanded = Expand(input); VerifyNonBlockedCompression(expanded); - VerifyIOVec(input); + VerifyIOVecSink(input); } return result; @@ -540,7 +564,27 @@ CHECK_EQ(uncompressed, src); } -TEST(Snappy, IOVecEdgeCases) { +TEST(Snappy, IOVecSourceEdgeCases) { + // Validate that empty leading, trailing, and in-between iovecs are handled: + // 'a' 'b' . + std::string data = "ab"; + char* buf = const_cast<char*>(data.data()); + size_t used_so_far = 0; + static const int kLengths = {0, 0, 1, 0, 1, 0}; + struct iovec iovARRAYSIZE(kLengths); + for (int i = 0; i < ARRAYSIZE(kLengths); ++i) { + iovi.iov_base = buf + used_so_far; + iovi.iov_len = kLengthsi; + used_so_far += kLengthsi; + } + std::string compressed; + snappy::CompressFromIOVec(iov, ARRAYSIZE(kLengths), &compressed); + std::string uncompressed; + snappy::Uncompress(compressed.data(), compressed.size(), &uncompressed); + CHECK_EQ(data, uncompressed); +} + +TEST(Snappy, IOVecSinkEdgeCases) { // Test some tricky edge cases in the iovec output that are not necessarily // exercised by random tests. @@ -905,7 +949,7 @@ // COPY_1_BYTE_OFFSET. // // The tag byte in the compressed data stores len-4 in 3 bits, and - // offset/256 in 5 bits. offset%256 is stored in the next byte. + // offset/256 in 3 bits. offset%256 is stored in the next byte. // // This format is used for length in range 4..11 and offset in // range 0..2047
View file
_service:tar_scm:snappy-1.1.9.tar.gz/.appveyor.yml
Deleted
@@ -1,37 +0,0 @@ -# Build matrix / environment variables are explained on: -# https://www.appveyor.com/docs/appveyor-yml/ -# This file can be validated on: https://ci.appveyor.com/tools/validate-yaml - -version: "{build}" - -environment: - matrix: - # AppVeyor currently has no custom job name feature. - # http://help.appveyor.com/discussions/questions/1623-can-i-provide-a-friendly-name-for-jobs - - JOB: Visual Studio 2019 - APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2019 - CMAKE_GENERATOR: Visual Studio 16 2019 - -platform: - - x86 - - x64 - -configuration: - - RelWithDebInfo - - Debug - -build_script: - - git submodule update --init --recursive - - mkdir build - - cd build - - if "%platform%"=="x86" (set CMAKE_GENERATOR_PLATFORM="Win32") - else (set CMAKE_GENERATOR_PLATFORM="%platform%") - - cmake --version - - cmake .. -G "%CMAKE_GENERATOR%" -A "%CMAKE_GENERATOR_PLATFORM%" - -DCMAKE_CONFIGURATION_TYPES="%CONFIGURATION%" -DSNAPPY_REQUIRE_AVX2=ON - - cmake --build . --config %CONFIGURATION% - - cd .. - -test_script: - - build\%CONFIGURATION%\snappy_unittest - - build\%CONFIGURATION%\snappy_benchmark
View file
_service:tar_scm:snappy-1.1.9.tar.gz/.travis.yml
Deleted
@@ -1,100 +0,0 @@ -# Build matrix / environment variables are explained on: -# http://about.travis-ci.org/docs/user/build-configuration/ -# This file can be validated on: http://lint.travis-ci.org/ - -language: cpp -dist: bionic -osx_image: xcode12.2 - -compiler: -- gcc -- clang -os: -- linux -- osx - -env: -- BUILD_TYPE=Debug CPU_LEVEL=AVX -- BUILD_TYPE=Debug CPU_LEVEL=AVX2 -- BUILD_TYPE=RelWithDebInfo CPU_LEVEL=AVX -- BUILD_TYPE=RelWithDebInfo CPU_LEVEL=AVX2 - -jobs: - exclude: - # Travis OSX servers seem to run on pre-Haswell CPUs. Attempting to use AVX2 - # results in crashes. - - env: BUILD_TYPE=Debug CPU_LEVEL=AVX2 - os: osx - - env: BUILD_TYPE=RelWithDebInfo CPU_LEVEL=AVX2 - os: osx - allow_failures: - # Homebrew's GCC is currently broken on XCode 11. - - compiler: gcc - os: osx - -addons: - apt: - sources: - - sourceline: 'deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-10 main' - key_url: 'https://apt.llvm.org/llvm-snapshot.gpg.key' - - sourceline: 'ppa:ubuntu-toolchain-r/test' - packages: - - clang-10 - - cmake - - gcc-10 - - g++-10 - - ninja-build - homebrew: - packages: - - cmake - - gcc@10 - - llvm@10 - - ninja - update: true - -install: -# The following Homebrew packages aren't linked by default, and need to be -# prepended to the path explicitly. -- if "$TRAVIS_OS_NAME" = "osx" ; then - export PATH="$(brew --prefix llvm)/bin:$PATH"; - fi -# Fuzzing is only supported on Clang. Perform fuzzing on Debug builds. -# LibFuzzer doesn't ship with CommandLineTools on osx. -- if "$CXX" = "clang++" && "$BUILD_TYPE" = "Debug" && "$TRAVIS_OS_NAME" != "osx" ; then - export FUZZING=1; - else - export FUZZING=0; - fi -# /usr/bin/gcc points to an older compiler on both Linux and macOS. -- if "$CXX" = "g++" ; then export CXX="g++-10" CC="gcc-10"; fi -# /usr/bin/clang points to an older compiler on both Linux and macOS. -# -# Homebrew's llvm package doesn't ship a versioned clang++ binary, so the values -# below don't work on macOS. Fortunately, the path change above makes the -# default values (clang and clang++) resolve to the correct compiler on macOS. -- if "$TRAVIS_OS_NAME" = "linux" ; then - if "$CXX" = "clang++" ; then export CXX="clang++-10" CC="clang-10"; fi; - fi -- echo ${CC} -- echo ${CXX} -- ${CXX} --version -- cmake --version - -before_script: -- mkdir -p build && cd build -- cmake .. -G Ninja -DCMAKE_BUILD_TYPE=$BUILD_TYPE - -DSNAPPY_REQUIRE_${CPU_LEVEL}=ON -DSNAPPY_FUZZING_BUILD=${FUZZING} - -DCMAKE_INSTALL_PREFIX=$HOME/.local -- cmake --build . -- cd .. - -script: -- build/snappy_unittest -- build/snappy_benchmark -- if -f build/snappy_compress_fuzzer ; then - build/snappy_compress_fuzzer -runs=1000 -close_fd_mask=3; - fi -- if -f build/snappy_uncompress_fuzzer ; then - build/snappy_uncompress_fuzzer -runs=1000 -close_fd_mask=3; - fi -- cd build && cmake --build . --target install