2024年10月17日 星期四

tensorflow (Build from source)

To build tensorflow from source, two prerequisites must be installed first:
  • Java
  • Bazel

ENV: python3.10 + tensorflow r2.13 (CPU only) + Bazel 5.3.0 + gcc 11.4.0


Following the tested-configuration table saves time and spares you some searching. ref: tensorflow
CPU only
Version             Python version   Compiler                         Build tools
tensorflow-2.13.0   3.8-3.11         Clang 16.0.0 (gcc used here)     Bazel 5.3.0
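
A quick sanity check of the local toolchain against that row (a minimal sketch; the versions in the comments are the ones from the ENV line above):

$ python3 --version   # 3.10 here, within the supported 3.8-3.11 range
$ gcc --version       # gcc 11.4.0 here, used in place of Clang 16.0.0
$ bazel --version     # should report 5.3.0 once the Bazel build below is done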


Java (upstream)
Java (binaries) (not sure how this differs from upstream)
$ java --version (the download link for this older build is lost...)
openjdk 11.0.6 2020-01-14
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.6+10)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.6+10, mixed mode)

Haven't tried the upstream build yet:
openjdk 11.0.16 2022-07-19
OpenJDK Runtime Environment 18.9 (build 11.0.16+8)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.16+8, mixed mode)
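
Since the JDK is only needed so that Bazel's compile.sh can run, a distribution package may be enough instead of a manual download; a minimal sketch, assuming Ubuntu's openjdk-11-jdk package:

$ sudo apt install openjdk-11-jdk
$ java --version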

Bazel 5.3.0

$ unzip bazel-5.3.0-dist.zip -d ./bazel_5.3
$ cd bazel_5.3
$ ./compile.sh

If everything goes well, the compile should finish successfully...

$ ./output/bazel --version
bazel 5.3.0- (@non-git)

Then add it to PATH.
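
A minimal sketch of the PATH setup, assuming the dist zip was unpacked under ~/Tools/bazel_5.3 (adjust to your actual location):

$ export PATH="$HOME/Tools/bazel_5.3/output:$PATH"
$ bazel --version
bazel 5.3.0- (@non-git)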

tensorflow, step1/3
$ git clone https://github.com/tensorflow/tensorflow.git
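
git clone pulls the default branch; to match the r2.13 environment above, check out the corresponding release branch first (assuming the usual rX.Y branch naming in the TensorFlow repo):

$ cd tensorflow
$ git checkout r2.13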

$ ./configure
You have bazel 5.3.0- (@non-git) installed.
Please specify the location of python. [Default is /usr/bin/python3]:


Found possible Python library paths:
  /usr/lib/python3/dist-packages
  /usr/local/lib/python3.10/dist-packages
Please input the desired Python library path to use.  Default is [/usr/lib/python3/dist-packages]

Do you wish to build TensorFlow with ROCm support? [y/N]: N
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: N
No CUDA support will be enabled for TensorFlow.

Do you wish to download a fresh release of clang? (Experimental) [y/N]: N
Clang will not be downloaded.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]:


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
        --config=mkl            # Build with MKL support.
        --config=mkl_aarch64    # Build with oneDNN and Compute Library for the Arm Architecture (ACL).
        --config=monolithic     # Config for mostly static monolithic build.
        --config=numa           # Build with NUMA support.
        --config=dynamic_kernels        # (Experimental) Build kernels into separate shared objects.
        --config=v1             # Build with TensorFlow 1 API instead of TF 2 API.
Preconfigured Bazel build configs to DISABLE default on features:
        --config=nogcp          # Disable GCP support.
        --config=nonccl         # Disable NVIDIA NCCL support.
Configuration finished
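
For repeat builds, the same answers can usually be supplied non-interactively; a rough sketch, assuming the environment variables that configure.py reads (variable names can change between releases):

$ PYTHON_BIN_PATH=/usr/bin/python3 \
  PYTHON_LIB_PATH=/usr/lib/python3/dist-packages \
  TF_NEED_ROCM=0 TF_NEED_CUDA=0 TF_DOWNLOAD_CLANG=0 \
  TF_SET_ANDROID_WORKSPACE=0 ./configure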



tensorflow, step2/3, according to the website
It looks like some of the mirrored archives can no longer be found (see the 404 warnings below), but the build still completes.
$ bazel build --config=opt --copt=-mssse3 --copt=-mcx16 --copt=-msse4.1 --copt=-msse4.2 --copt=-mpopcnt --copt=-mno-fma4 --copt=-mno-avx --copt=-mno-avx2 //tensorflow/tools/pip_package:build_pip_package
Starting local Bazel server and connecting to it...
INFO: Options provided by the client:
  Inherited 'common' options: --isatty=1 --terminal_columns=190
INFO: Reading rc options for 'build' from /home/ubuntu/Tools/tensorflow/.bazelrc:
  Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/ubuntu/Tools/tensorflow/.bazelrc:
  'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
INFO: Reading rc options for 'build' from /home/ubuntu/Tools/tensorflow/.tf_configure.bazelrc:
  'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3 --action_env PYTHON_LIB_PATH=/usr/lib/python3/dist-packages --python_path=/usr/bin/python3
INFO: Reading rc options for 'build' from /home/ubuntu/Tools/tensorflow/.bazelrc:
  'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/ir,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_jitrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils,tensorflow/core/tfrt/utils/debug
INFO: Found applicable config definition build:short_logs in file /home/ubuntu/Tools/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/ubuntu/Tools/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:opt in file /home/ubuntu/Tools/tensorflow/.tf_configure.bazelrc: --copt=-Wno-sign-compare --host_copt=-Wno-sign-compare
INFO: Found applicable config definition build:linux in file /home/ubuntu/Tools/tensorflow/.bazelrc: --define=build_with_onednn_v2=true --host_copt=-w --copt=-Wno-all --copt=-Wno-extra --copt=-Wno-deprecated --copt=-Wno-deprecated-declarations --copt=-Wno-ignored-attributes --copt=-Wno-array-bounds --copt=-Wunused-result --copt=-Werror=unused-result --copt=-Wswitch --copt=-Werror=switch --copt=-Wno-error=unused-but-set-variable --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --config=dynamic_kernels --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /home/ubuntu/Tools/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/tensorflow/runtime/archive/7d879c8b161085a4374ea481b93a52adb19c0529.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/llvm/llvm-project/archive/dc275fd03254d67d29cc70a5a0569acf24d2280d.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/XNNPACK/archive/b9d4073a6913891ce9cbd8965c8d506075d2a45a.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/openxla/stablehlo/archive/43d81c6883ade82052920bd367c61f9e52f09954.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found
INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (612 packages loaded, 38194 targets configured).
INFO: Found 1 target...
Target //tensorflow/tools/pip_package:build_pip_package up-to-date:
  bazel-bin/tensorflow/tools/pip_package/build_pip_package
INFO: Elapsed time: 4896.720s, Critical Path: 297.22s
INFO: 19120 processes: 2059 internal, 17061 local.
INFO: Build completed successfully, 19120 total actions
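
The Elapsed time above is roughly 82 minutes on this machine. On boxes with less RAM or fewer cores, Bazel's standard resource flags can be added to keep the build from exhausting memory; for example (the values here are illustrative, not the ones used above):

$ bazel build --config=opt --jobs=4 --local_ram_resources=HOST_RAM*.5 \
    //tensorflow/tools/pip_package:build_pip_package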

tensorflow, step3/3 (wrap the binaries into a Python wheel file)
according to the website

$ sudo apt install patchelf
$ ./bazel-bin/tensorflow/tools/pip_package/build_pip_package ./share

Fri Oct 18 10:47:42 AM CST 2024 : === Preparing sources in dir: /tmp/tmp.qhKidt2m2u
~/Tools/tensorflow ~/Tools/tensorflow
~/Tools/tensorflow
~/Tools/tensorflow/bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/org_tensorflow ~/Tools/tensorflow
~/Tools/tensorflow
/tmp/tmp.qhKidt2m2u/tensorflow/include ~/Tools/tensorflow
~/Tools/tensorflow
Fri Oct 18 10:47:54 AM CST 2024 : === Building wheel
warning: no files found matching 'README'
warning: no files found matching '*.pyd' under directory '*'
warning: no files found matching '*.pyi' under directory '*'
warning: no files found matching '*.pd' under directory '*'
warning: no files found matching '*.dylib' under directory '*'
warning: no files found matching '*.dll' under directory '*'
warning: no files found matching '*.lib' under directory '*'
warning: no files found matching '*.csv' under directory '*'
warning: no files found matching '*.h' under directory 'tensorflow/include/tensorflow'
warning: no files found matching '*.proto' under directory 'tensorflow/include/tensorflow'
warning: no files found matching '*' under directory 'tensorflow/include/third_party'
/usr/lib/python3/dist-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
Fri Oct 18 10:48:17 AM CST 2024 : === Output wheel file is in: /home/ubuntu/Tools/tensorflow/share

Install and test, according to the website
$ cd share
$ python3 -m pip install ./tensorflow-2.13.1-cp310-cp310-linux_x86_64.whl

//Test
$ python3 -c "import tensorflow as tf; msg = tf.constant('TensorFlow 2.0 Hello World'); tf.print(msg)"
2024-10-18 10:51:54.047120: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-10-18 10:51:54.067290: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
TensorFlow 2.0 Hello World
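
As a further check that the locally built wheel is the one actually installed (the wheel produced above is versioned 2.13.1):

$ python3 -c "import tensorflow as tf; print(tf.__version__)"
2.13.1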


ref:
1. Building Tensorflow from source. Step by step guide.
2. Installing Bazel on Ubuntu
3. Build from source

2024年9月26日 星期四

Convert xlsx to csv

xlsx2csv
$ sudo apt install xlsx2csv
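
For a one-off conversion, xlsx2csv can be used directly; a quick example (filenames are placeholders):

$ xlsx2csv report.xlsx report.csv        # converts the first sheet by default
$ xlsx2csv -s 2 report.xlsx sheet2.csv   # -s selects a specific sheet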

Explanation:
  • Search terms: the keywords are read (one per line) from search_terms.txt into an array called search_terms, so you can add as many keywords as needed.
  • Loop through search terms: for each converted CSV file, the script loops through every search term and checks whether it appears in that file using grep.
  • Output: the script prints a message indicating whether each keyword was found in each CSV file.
#!/bin/bash

# Directory containing the .xlsx files
input_directory="/path/to/xlsx/files"
# Output directory for CSV files
output_directory="/path/to/output/csv/files"
# File containing search terms (one per line)
search_terms_file="/path/to/search_terms.txt"

# Create output directory if it doesn't exist
mkdir -p "$output_directory"

# Read search terms from the specified file into an array
mapfile -t search_terms < "$search_terms_file"

# Loop through all .xlsx files in the input directory
for file in "$input_directory"/*.xlsx; do
    # Check if the file exists
    if [[ -f "$file" ]]; then
        # Generate the output CSV filename
        csv_filename="$output_directory/$(basename "${file%.xlsx}.csv")"
        
        # Convert .xlsx to .csv using xlsx2csv
        xlsx2csv "$file" "$csv_filename"
        
        # Check if the conversion was successful
        if [[ $? -eq 0 ]]; then
            echo "Converted: $file to $csv_filename"
            
            # Search for each term in the converted CSV file
            for search_term in "${search_terms[@]}"; do
                if grep -q "$search_term" "$csv_filename"; then
                    echo "Found '$search_term' in $csv_filename"
                else
                    echo "'$search_term' not found in $csv_filename"
                fi
            done
            
        else
            echo "Failed to convert: $file"
        fi
    fi
done

Create a text file named search_terms.txt and add your keywords, one per line. For example:
apple
banana
cherry
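
One possible way to run it, assuming the script above is saved as xlsx_search.sh (a hypothetical name) with the three paths at the top adjusted:

$ chmod +x xlsx_search.sh
$ ./xlsx_search.sh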



ref: Perplexity

2024年9月23日 星期一

CUDA & cuDNN version

CUDA
$ nvcc --version    (or nvcc -V)

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jun_13_19:16:58_PDT_2023
Cuda compilation tools, release 12.2, V12.2.91
Build cuda_12.2.r12.2/compiler.32965470_0
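
To pull out just the release number, a convenience one-liner:

$ nvcc --version | grep -o "release [0-9.]*"
release 12.2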

cuDNN
$ whereis cudnn.h
cudnn.h: /usr/include/cudnn.h


$ cat /usr/include/cudnn.h
...
...
#include "cudnn_version.h"
#include "cudnn_graph.h"
#include "cudnn_ops.h"
#include "cudnn_adv.h"
#include "cudnn_cnn.h"
...
...


$ whereis cudnn_version.h
cudnn_version.h: /usr/include/cudnn_version.h

$ cat /usr/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 9
#define CUDNN_MINOR 2
#define CUDNN_PATCHLEVEL 1
--
#define CUDNN_VERSION (CUDNN_MAJOR * 10000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)

/* cannot use constexpr here since this is a C-only file */
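
The three defines can be combined into a single version string; a small sketch:

$ awk '/define CUDNN_MAJOR/ {maj=$3} /define CUDNN_MINOR/ {min=$3} /define CUDNN_PATCHLEVEL/ {pat=$3} END {print maj "." min "." pat}' /usr/include/cudnn_version.h
9.2.1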

nvidia-smi
$ nvidia-smi
Tue Sep 24 13:17:46 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01             Driver Version: 535.183.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3070 ...    Off | 00000000:01:00.0 Off |                  N/A |
| N/A   34C    P0              N/A /  80W |      8MiB /  8192MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      2010      G   /usr/lib/xorg/Xorg                            4MiB |
+---------------------------------------------------------------------------------------+
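
The same fields can also be printed in a script-friendly form with nvidia-smi's query options, e.g.:

$ nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv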