
libtorch-openmpi4-2.5.0-1.4 RPM for aarch64

From OpenSuSE Ports Tumbleweed for aarch64

Name: libtorch-openmpi4
Version: 2.5.0
Release: 1.4
Group: Development/Libraries/Python
Size: 127699360
Distribution: openSUSE Tumbleweed
Vendor: openSUSE
Build date: Fri Oct 18 08:24:55 2024
Build host: reproducible
Source RPM: python-torch-openmpi4-2.5.0-1.4.src.rpm
Packager: http://bugs.opensuse.org
Url: https://pytorch.org
Summary: Library which is used by python-torch-openmpi4
Library which is used by python-torch-openmpi4

Provides

Requires

License

Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND MIT AND Zlib AND BSL-1.0

Changelog

* Fri Oct 18 2024 Guillaume GARDET <[email protected]>
  - Update to 2.5.0:
    * https://github.com/pytorch/pytorch/releases/tag/v2.5.0
* Fri Oct 04 2024 Guillaume GARDET <[email protected]>
  - Add patch to fix build with oneDNN:
    * pytorch-patch-onednn.patch
* Tue Oct 01 2024 Guillaume GARDET <[email protected]>
  - Update to 2.4.1:
    * https://github.com/pytorch/pytorch/releases/tag/v2.4.1
  - Skip update to 2.4.0:
    * https://github.com/pytorch/pytorch/releases/tag/v2.4.0
  - Remove _service since 'osc mr download_files' is easier to use
    and maintain
  - Drop config vars not used anymore: BUILD_CAFFE2, USE_LEVELDB, USE_LMDB,
    USE_OPENCV, USE_TBB
  - Remove examples package since code has been removed upstream
  - Refresh patch:
    * skip-third-party-check.patch
* Thu Aug 29 2024 Guang Yee <[email protected]>
  - Enable sle15_python_module_pythons.
  - GCC 9.3 or newer is required, regardless of whether CUDA is enabled.
    See https://github.com/pytorch/pytorch/blob/v2.3.1/CMakeLists.txt#L48
    Therefore, for SLE15 we went with GCC 11 as it seems to be the most
    common one.
  - Use %gcc_version macro for Tumbleweed.
* Thu Jul 11 2024 Christian Goll <[email protected]>
  - update to 2.3.1 with the following summarized highlights (a short usage
    sketch of the new APIs follows this changelog):
    * from 2.0.x:
    - torch.compile is the main API for PyTorch 2.0, which wraps your model and
      returns a compiled model. It is a fully additive (and optional) feature
      and hence 2.0 is 100% backward compatible by definition
    - Accelerated Transformers introduce high-performance support for training
      and inference using a custom kernel architecture for scaled dot product
      attention (SDPA). The API is integrated with torch.compile() and model
      developers may also use the scaled dot product attention kernels directly
      by calling the new scaled_dot_product_attention() operator.
    * from 2.1.x:
    - automatic dynamic shape support in torch.compile,
      torch.distributed.checkpoint for saving/loading distributed training jobs
      on multiple ranks in parallel, and torch.compile support for the NumPy
      API.
    - In addition, this release offers numerous performance improvements (e.g.
      CPU inductor improvements, AVX512 support, scaled-dot-product-attention
      support) as well as a prototype release of torch.export, a sound
      full-graph capture mechanism, and torch.export-based quantization.
    * from 2.2.x:
    - 2x performance improvements to scaled_dot_product_attention via
      FlashAttention-v2 integration, as well as AOTInductor, a new
      ahead-of-time compilation and deployment tool built for non-python
      server-side deployments.
    * from 2.3.x:
    - support for user-defined Triton kernels in torch.compile, allowing
      users to migrate their own Triton kernels from eager mode without
      experiencing performance complications or graph breaks. As well, Tensor
      Parallelism improves the experience for training Large Language Models
      using native PyTorch functions, which has been validated on training
      runs for 100B parameter models.
  - added separate openmpi4 build
  - added separate Vulkan build, although this functionality isn't exposed to
    the Python ABI
  - For the OBS build, all the vendored sources follow the pattern
    NAME-<7-digit-commit>.tar.gz rather than NAME-<full-commit>.tar.gz
  - added following patches:
    * skip-third-party-check.patch
    * fix-setup.patch
  - removed patches:
    * pytorch-rm-some-gitmodules.patch
    * fix-call-of-onnxInitGraph.patch
* Thu Jul 22 2021 Guillaume GARDET <[email protected]>
  - Fix build on x86_64 by using GCC10 instead of GCC11
    https://github.com/google/XNNPACK/issues/1550
* Thu Jul 22 2021 Guillaume GARDET <[email protected]>
  - Update to 1.9.0
  - Release notes: https://github.com/pytorch/pytorch/releases/tag/v1.9.0
  - Drop upstreamed patch:
    * fix-mov-operand-for-gcc.patch
  - Drop unneeded patches:
    * removed-peachpy-depedency.patch
  - Refresh patches:
    * skip-third-party-check.patch
    * fix-call-of-onnxInitGraph.patch
  - Add new patch:
    * pytorch-rm-some-gitmodules.patch
* Thu Jul 22 2021 Guillaume GARDET <[email protected]>
  - Add _service file to ease future update of deps
* Thu Jul 22 2021 Guillaume GARDET <[email protected]>
  - Update sleef to fix build on aarch64
* Fri Apr 23 2021 Matej Cepl <[email protected]>
  - Don't build python36-* package (missing pandas)
* Thu Jan 21 2021 Benjamin Greiner <[email protected]>
  - Fix python-rpm-macros usage
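
The torch.compile and scaled_dot_product_attention APIs summarized in the
2.3.1 entry above can be exercised from Python roughly as follows. This is a
minimal usage sketch (the module and tensor shapes are illustrative only),
not code shipped in this package:

    # Minimal sketch: any nn.Module and any attention-shaped tensors will do;
    # nothing below is specific to the openSUSE packaging.
    import torch
    import torch.nn.functional as F

    model = torch.nn.Linear(64, 64)
    compiled = torch.compile(model)        # wraps the model, returns a compiled drop-in
    out = compiled(torch.randn(8, 64))     # first call triggers compilation

    q = k = v = torch.randn(2, 4, 16, 64)  # (batch, heads, seq_len, head_dim)
    attn = F.scaled_dot_product_attention(q, k, v)  # fused SDPA kernel, called directly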

Files

/usr/lib64/libc10.so
/usr/lib64/libshm.so
/usr/lib64/libtorch.so
/usr/lib64/libtorch_cpu.so
/usr/lib64/libtorch_global_deps.so
/usr/lib64/libtorch_python.so
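
Since this is the openmpi4 flavour of the libtorch shared libraries listed
above, a quick sanity check from Python (a minimal sketch, assuming the
companion python-torch-openmpi4 package is installed on top of these
libraries) is to confirm that the MPI distributed backend was compiled in:

    # Minimal sketch: requires the python-torch-openmpi4 bindings; the values
    # noted in the comments are expectations for this package, not guarantees.
    import torch
    import torch.distributed as dist

    print(torch.__version__)        # expected: 2.5.0
    print(dist.is_available())      # True if distributed support was built
    print(dist.is_mpi_available())  # True if the OpenMPI backend is usable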

