File _service:obs_scm:armnn.spec of Package armnn
# Disable Python binding for now
%bcond_with PyArmnn
%bcond_with compute_neon
%if "%{target}" == "opencl"
%bcond_without compute_cl
%else
%bcond_with compute_cl
%endif
%bcond_with armnn_tests
%bcond_with armnn_extra_tests
%bcond_with armnn_flatbuffers
%bcond_with armnn_onnx

Name:           armnn
Version:        22.11
Release:        2.1
Summary:        Machine learning inference engine for Arm Cortex-A CPUs and Arm Mali GPUs
License:        MIT
URL:            https://developer.arm.com/products/processors/machine-learning/arm-nn
Source0:        https://github.com/ARM-software/armnn/archive/v%{version}.tar.gz#/armnn-%{version}.tar.gz
Patch1:         armnn-use-static-libraries.patch
BuildRequires:  cmake gcc-c++ valgrind-devel protobuf-devel boost-devel vim-enhanced ComputeLibrary-devel >= 22.11
%if %{with armnn_onnx}
BuildRequires:  python3-onnx
%endif

%description
Arm NN outperforms generic ML libraries thanks to Arm architecture-specific
optimizations (e.g. SVE2) provided by the Arm Compute Library (ACL). To
target Arm Ethos-N NPUs, Arm NN uses the Ethos-N NPU Driver. For Arm
Cortex-M acceleration, please see CMSIS-NN.

Arm NN is written in portable C++14 and built using CMake, enabling builds
for a wide variety of target platforms from a wide variety of host
environments. Python developers can interface with Arm NN through the
Arm NN TF Lite Delegate.

%package devel
Summary:        Development headers and libraries for armnn
Group:          Development/Libraries/C and C++
Requires:       %{name} = %{version}
Requires:       libarmnn = %{version}
Requires:       libarmnnBasePipeServer = %{version}
Requires:       libtimelineDecoder = %{version}
Requires:       libtimelineDecoderJson = %{version}
# Make sure we do not install both the OpenCL and the CPU-only version.
%if "%{target}" == "opencl"
Conflicts:      armnn-devel
%else
Conflicts:      armnn-opencl-devel
%endif
%if %{with armnn_flatbuffers}
Requires:       libarmnnSerializer = %{version}
Requires:       libarmnnTfLiteParser = %{version}
%endif
%if %{with armnn_onnx}
Requires:       libarmnnOnnxParser = %{version}
%endif

%description devel
Arm NN is an inference engine for CPUs, GPUs and NPUs. It bridges the gap
between existing NN frameworks and the underlying IP. It enables efficient
translation of existing neural network frameworks, such as TensorFlow Lite,
allowing them to run efficiently, without modification, across Arm Cortex
CPUs and Arm Mali GPUs.

This package contains the development libraries and headers for armnn.

%if %{with armnn_extra_tests}
%package -n %{name}-extratests
Summary:        Additional downstream tests for Arm NN
Group:          Development/Libraries/C and C++
Requires:       %{name}
# Make sure we do not install both the OpenCL and the CPU-only version.
%if "%{target}" == "opencl"
Conflicts:      armnn-extratests
%else
Conflicts:      armnn-opencl-extratests
%endif
%description -n %{name}-extratests
Arm NN is an inference engine for CPUs, GPUs and NPUs. It bridges the gap
between existing NN frameworks and the underlying IP. It enables efficient
translation of existing neural network frameworks, such as TensorFlow Lite,
allowing them to run efficiently, without modification, across Arm Cortex
CPUs and Arm Mali GPUs.

This package contains additional downstream tests for armnn.
%endif

%package -n libarmnn
Summary:        libarmnn from armnn
Group:          Development/Libraries/C and C++
%if "%{target}" == "opencl"
Conflicts:      libarmnn
%else
Conflicts:      libarmnn-opencl
%endif
%description -n libarmnn
Arm NN is an inference engine for CPUs, GPUs and NPUs. It bridges the gap
between existing NN frameworks and the underlying IP.
It enables efficient translation of existing neural network frameworks,
such as TensorFlow Lite, allowing them to run efficiently, without
modification, across Arm Cortex CPUs and Arm Mali GPUs.

This package contains the libarmnn library from armnn.

%package -n libarmnnBasePipeServer
Summary:        libarmnnBasePipeServer from armnn
Group:          Development/Libraries/C and C++
%if "%{target}" == "opencl"
Conflicts:      libarmnnBasePipeServer
%else
Conflicts:      libarmnnBasePipeServer-opencl
%endif
%description -n libarmnnBasePipeServer
Arm NN is an inference engine for CPUs, GPUs and NPUs. It bridges the gap
between existing NN frameworks and the underlying IP. It enables efficient
translation of existing neural network frameworks, such as TensorFlow Lite,
allowing them to run efficiently, without modification, across Arm Cortex
CPUs and Arm Mali GPUs.

This package contains the libarmnnBasePipeServer library from armnn.

%package -n libarmnnTestUtils
Summary:        libarmnnTestUtils from armnn
Group:          Development/Libraries/C and C++
%if "%{target}" == "opencl"
Conflicts:      libarmnnTestUtils
%else
Conflicts:      libarmnnTestUtils-opencl
%endif
%description -n libarmnnTestUtils
Arm NN is an inference engine for CPUs, GPUs and NPUs. It bridges the gap
between existing NN frameworks and the underlying IP. It enables efficient
translation of existing neural network frameworks, such as TensorFlow Lite,
allowing them to run efficiently, without modification, across Arm Cortex
CPUs and Arm Mali GPUs.

This package contains the libarmnnTestUtils library from armnn.

%package -n libtimelineDecoder
Summary:        libtimelineDecoder from armnn
Group:          Development/Libraries/C and C++
%if "%{target}" == "opencl"
Conflicts:      libtimelineDecoder
%else
Conflicts:      libtimelineDecoder-opencl
%endif
%description -n libtimelineDecoder
Arm NN is an inference engine for CPUs, GPUs and NPUs. It bridges the gap
between existing NN frameworks and the underlying IP.
It enables efficient translation of existing neural network frameworks,
such as TensorFlow Lite, allowing them to run efficiently, without
modification, across Arm Cortex CPUs and Arm Mali GPUs.

This package contains the libtimelineDecoder library from armnn.

%package -n libtimelineDecoderJson
Summary:        libtimelineDecoderJson from armnn
Group:          Development/Libraries/C and C++
%if "%{target}" == "opencl"
Conflicts:      libtimelineDecoderJson
%else
Conflicts:      libtimelineDecoderJson-opencl
%endif
%description -n libtimelineDecoderJson
Arm NN is an inference engine for CPUs, GPUs and NPUs. It bridges the gap
between existing NN frameworks and the underlying IP. It enables efficient
translation of existing neural network frameworks, such as TensorFlow Lite,
allowing them to run efficiently, without modification, across Arm Cortex
CPUs and Arm Mali GPUs.

This package contains the libtimelineDecoderJson library from armnn.

%if %{with armnn_flatbuffers}
%package -n libarmnnSerializer
Summary:        libarmnnSerializer from armnn
Group:          Development/Libraries/C and C++
%if "%{target}" == "opencl"
Conflicts:      libarmnnSerializer
%else
Conflicts:      libarmnnSerializer-opencl
%endif
%description -n libarmnnSerializer
Arm NN is an inference engine for CPUs, GPUs and NPUs. It bridges the gap
between existing NN frameworks and the underlying IP. It enables efficient
translation of existing neural network frameworks, such as TensorFlow Lite,
allowing them to run efficiently, without modification, across Arm Cortex
CPUs and Arm Mali GPUs.

This package contains the libarmnnSerializer library from armnn.

%package -n libarmnnTfLiteParser
Summary:        libarmnnTfLiteParser from armnn
Group:          Development/Libraries/C and C++
%if "%{target}" == "opencl"
Conflicts:      libarmnnTfLiteParser
%else
Conflicts:      libarmnnTfLiteParser-opencl
%endif
%description -n libarmnnTfLiteParser
Arm NN is an inference engine for CPUs, GPUs and NPUs. It bridges the gap
between existing NN frameworks and the underlying IP.
It enables efficient translation of existing neural network frameworks,
such as TensorFlow Lite, allowing them to run efficiently, without
modification, across Arm Cortex CPUs and Arm Mali GPUs.

This package contains the libarmnnTfLiteParser library from armnn.
%endif

%if %{with armnn_onnx}
%package -n libarmnnOnnxParser
Summary:        libarmnnOnnxParser from armnn
Group:          Development/Libraries/C and C++
%if "%{target}" == "opencl"
Conflicts:      libarmnnOnnxParser
%else
Conflicts:      libarmnnOnnxParser-opencl
%endif
%description -n libarmnnOnnxParser
Arm NN is an inference engine for CPUs, GPUs and NPUs. It bridges the gap
between existing NN frameworks and the underlying IP. It enables efficient
translation of existing neural network frameworks, such as TensorFlow Lite,
allowing them to run efficiently, without modification, across Arm Cortex
CPUs and Arm Mali GPUs.

This package contains the libarmnnOnnxParser library from armnn.
%endif

%prep
%setup -q -n armnn-%{version}
%patch1 -p1

%build
%if %{with armnn_onnx}
mkdir onnx_deps
PROTO=$(find %{_libdir} -name onnx.proto)
protoc $PROTO --proto_path=. \
    --proto_path=%{_includedir} \
    --proto_path=$(dirname $(find %{_libdir} -name onnx)) \
    --cpp_out=./onnx_deps
%endif

%cmake \
    -DCMAKE_SKIP_RPATH=True \
    -DSHARED_BOOST=1 \
    -DCMAKE_CXX_FLAGS:STRING="%{optflags} -pthread" \
    -DBOOST_LIBRARYDIR=%{_libdir} \
%if %{with armnn_onnx}
    -DBUILD_ONNX_PARSER=ON \
    -DONNX_GENERATED_SOURCES=../onnx_deps/ \
%else
    -DBUILD_ONNX_PARSER=OFF \
%endif
%if %{with armnn_flatbuffers}
    -DBUILD_ARMNN_SERIALIZER=ON \
    -DFLATC_DIR=%{_bindir} \
    -DFLATBUFFERS_INCLUDE_PATH=%{_includedir} \
    -DBUILD_TF_LITE_PARSER=ON \
    -DTfLite_Schema_INCLUDE_PATH=%{_includedir}/tensorflow/lite/schema/ \
    -DTF_LITE_SCHEMA_INCLUDE_PATH=%{_includedir}/tensorflow/lite/schema/ \
%else
    -DBUILD_ARMNN_SERIALIZER=OFF \
    -DBUILD_TF_LITE_PARSER=OFF \
%endif
%if %{with compute_neon} || %{with compute_cl}
    -DARMCOMPUTE_INCLUDE=%{_includedir} \
    -DHALF_INCLUDE=%{_includedir}/half \
    -DARMCOMPUTE_BUILD_DIR=%{_libdir} \
    -DARMCOMPUTE_ROOT=%{_prefix} \
%endif
%if %{with compute_neon}
    -DARMCOMPUTENEON=ON \
%else
    -DARMCOMPUTENEON=OFF \
%endif
%if %{with compute_cl}
    -DARMCOMPUTECL=ON \
    -DOPENCL_INCLUDE=%{_includedir} \
%else
    -DARMCOMPUTECL=OFF \
%endif
    -DTHIRD_PARTY_INCLUDE_DIRS=%{_includedir} \
%if %{with armnn_flatbuffers}
    -DBUILD_SAMPLE_APP=ON \
%else
    -DBUILD_SAMPLE_APP=OFF \
%endif
%if %{with armnn_tests}
    -DBUILD_UNIT_TESTS=ON \
    -DBUILD_TESTS=ON \
%else
    -DBUILD_UNIT_TESTS=OFF \
    -DBUILD_TESTS=OFF \
%endif
%if %{with PyArmnn}
    -DBUILD_PYTHON_WHL=ON \
    -DBUILD_PYTHON_SRC=ON \
%else
    -DBUILD_PYTHON_WHL=OFF \
    -DBUILD_PYTHON_SRC=OFF \
%endif
%if %{with armnn_extra_tests}
    -DBUILD_ARMNN_EXAMPLES=ON
%else
    -DBUILD_ARMNN_EXAMPLES=OFF
%endif

%cmake
%make_build

%if %{with armnn_tests}
pushd tests/
%cmake
%make_build
popd
%endif

%check
# openCL UnitTests are failing in OBS due to the lack of an openCL device
%if %{without compute_cl} && %{with armnn_tests}
# Run tests
LD_LIBRARY_PATH="$(pwd)/build/" \
./build/UnitTests $UnitTestFlags
%endif

%install
%make_install

%if %{with armnn_tests}
# Install tests manually
install -d %{buildroot}%{_bindir}
CP_ARGS="-Prf --preserve=mode,timestamps --no-preserve=ownership"
find ./build/tests -maxdepth 1 -type f -executable \
    -exec cp $CP_ARGS {} %{buildroot}%{_bindir} \;
%endif
%if %{with armnn_flatbuffers}
# Install Sample app
cp -Prf --preserve=mode,timestamps --no-preserve=ownership \
    ./build/samples/SimpleSample %{buildroot}%{_bindir}
%endif

# Drop static libs - https://github.com/ARM-software/armnn/issues/514
rm -f %{buildroot}%{_libdir}/*.a

%post
/sbin/ldconfig

%postun
/sbin/ldconfig

%files
%defattr(-,root,root)
%doc README.md
%license LICENSE
%if %{with armnn_tests}
%{_bindir}/ExecuteNetwork
%if %{with armnn_flatbuffers}
%{_bindir}/ArmnnConverter
%{_bindir}/TfLite*-Armnn
%endif
%if %{with armnn_onnx}
%{_bindir}/Onnx*-Armnn
%endif
%if %{with armnn_flatbuffers}
%{_bindir}/SimpleSample
%endif
%endif

%if %{with armnn_extra_tests}
%files -n %{name}-extratests
%{_bindir}/ArmnnExamples
%endif

%files -n libarmnn
%{_libdir}/libarmnn.so.*

%files -n libarmnnBasePipeServer
%{_libdir}/libarmnnBasePipeServer.so.*

%files -n libarmnnTestUtils
%{_libdir}/libarmnnTestUtils.so.*

%files -n libtimelineDecoder
%{_libdir}/libtimelineDecoder.so.*

%files -n libtimelineDecoderJson
%{_libdir}/libtimelineDecoderJson.so.*

%if %{with armnn_flatbuffers}
%files -n libarmnnSerializer
%{_libdir}/libarmnnSerializer.so.*

%files -n libarmnnTfLiteParser
%{_libdir}/libarmnnTfLiteParser.so.*
%endif

%if %{with armnn_onnx}
%files -n libarmnnOnnxParser
%{_libdir}/libarmnnOnnxParser.so.*
%endif

%files devel
%defattr(-,root,root)
%dir %{_includedir}/armnn/
%{_includedir}/armnn/*.hpp
%dir %{_includedir}/armnn/backends
%{_includedir}/armnn/backends/CMakeLists.txt
%{_includedir}/armnn/backends/*.hpp
%dir %{_includedir}/armnn/profiling
%{_includedir}/armnn/profiling/*.hpp
%dir %{_includedir}/armnn/utility
%{_includedir}/armnn/utility/*.hpp
%dir %{_includedir}/armnnUtils
%{_includedir}/armnnUtils/*.hpp
%dir %{_includedir}/armnnOnnxParser/
%{_includedir}/armnnOnnxParser/*.hpp
%dir %{_includedir}/armnnTfLiteParser/
%{_includedir}/armnnTfLiteParser/*.hpp
%dir %{_includedir}/armnnDeserializer/
%{_includedir}/armnnDeserializer/IDeserializer.hpp
%dir %{_includedir}/armnnSerializer/
%{_includedir}/armnnSerializer/ISerializer.hpp
%dir %{_includedir}/armnnTestUtils/
%{_includedir}/armnnTestUtils/*.hpp
%dir %{_libdir}/cmake/armnn
%{_libdir}/cmake/armnn/*
%{_libdir}/libarmnn.so
%{_libdir}/libarmnnBasePipeServer.so
%{_libdir}/libtimelineDecoder.so
%{_libdir}/libtimelineDecoderJson.so
%if %{with armnn_flatbuffers}
%{_libdir}/libarmnnSerializer.so
%{_libdir}/libarmnnTfLiteParser.so
%endif
%{_libdir}/libarmnnTestUtils.so
%if %{with armnn_onnx}
%{_libdir}/libarmnnOnnxParser.so
%endif

%changelog
* Wed Jan 4 2023 Dandan Xu <dandan@nj.iscas.ac.cn> - 22.11-2.1
- update version to 22.11

* Tue Aug 9 2022 kkz <zhaoshuang@uniontech.com> - 22.05-1
- Package init
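A note on the manual test-install step in %install: `find … -type f -executable -exec cp $CP_ARGS {} … \;` only copies the executable test binaries, and `$CP_ARGS` only expands if the variable is assigned in a separate statement (a `VAR=… cmd` prefix on the same command leaves `$CP_ARGS` empty when the arguments are expanded). The step can be exercised in isolation; this is a sketch with throwaway paths, and the file names (`UnitTests`, `notes.txt`) are illustrative only:

```shell
# Recreate the spec's test-install logic against a scratch build tree.
set -e
src=$(mktemp -d)/build/tests
dest=$(mktemp -d)/usr/bin
mkdir -p "$src" "$dest"

printf '#!/bin/sh\n' > "$src/UnitTests"   # executable: should be copied
chmod +x "$src/UnitTests"
touch "$src/notes.txt"                    # non-executable: should be skipped

# Same cp flags the spec uses; the assignment is its own statement so that
# $CP_ARGS expands to the flags in the find command below.
CP_ARGS="-Prf --preserve=mode,timestamps --no-preserve=ownership"
find "$src" -maxdepth 1 -type f -executable -exec cp $CP_ARGS {} "$dest" \;

ls "$dest"   # only UnitTests lands in the staging bindir
```

Leaving `$CP_ARGS` unquoted in the `find` line is deliberate: it must word-split into separate `cp` options.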