How do I activate TensorFlow's XLA for the C API?

wdebmtf2  posted on 2023-08-06 in Other
Follow (0) | Answers (3) | Views (105)

I built TensorFlow from source and am using its C API. Everything has worked well so far, and I am also using AVX/AVX2. My build from source also includes XLA support. Now I would like to actually activate XLA (Accelerated Linear Algebra), since I hope it will once again improve the performance/speed during inference.
When I run my program now, I get the following message:

2019-06-17 16:09:06.753737: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1541] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.

On XLA's official page (https://www.tensorflow.org/xla/jit) I found information on how to turn on JIT at the session level:

# Config to turn on JIT compilation
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

sess = tf.Session(config=config)


Here (https://github.com/tensorflow/tensorflow/issues/13853) it is explained how to set TF_SetConfig in the C API. Using the output of the following Python code, I was previously able to limit execution to one core:

config1 = tf.ConfigProto(device_count={'CPU':1})
serialized1 = config1.SerializeToString()
print(list(map(hex, serialized1)))


I implemented this as follows:

uint8_t intra_op_parallelism_threads = maxCores; // for ops that can be parallelized internally, e.g. matrix multiplication
uint8_t inter_op_parallelism_threads = maxCores; // for independent ops in the TensorFlow dataflow graph (no directed path between them)
uint8_t config[] = {0x10, intra_op_parallelism_threads, 0x28, inter_op_parallelism_threads};
TF_SetConfig(sess_opts, config, sizeof(config), status);
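
For reference, here is a minimal sketch of where this call sits in a complete C API session setup (the setup/cleanup calls are standard c_api.h usage; max_cores is a placeholder value, and error handling is kept minimal):

#include <stdint.h>
#include <stdio.h>
#include <tensorflow/c/c_api.h>

int main(void) {
    TF_Status* status = TF_NewStatus();
    TF_SessionOptions* sess_opts = TF_NewSessionOptions();

    // Serialized ConfigProto: field 2 (intra_op_parallelism_threads) and
    // field 5 (inter_op_parallelism_threads), each as a single-byte varint.
    uint8_t max_cores = 1; // placeholder value
    uint8_t config[] = {0x10, max_cores, 0x28, max_cores};
    TF_SetConfig(sess_opts, config, sizeof(config), status);
    if (TF_GetCode(status) != TF_OK)
        fprintf(stderr, "TF_SetConfig failed: %s\n", TF_Message(status));

    // Create a graph and a session that uses these options.
    TF_Graph* graph = TF_NewGraph();
    TF_Session* session = TF_NewSession(graph, sess_opts, status);

    // ... load and run the model here ...

    TF_CloseSession(session, status);
    TF_DeleteSession(session, status);
    TF_DeleteGraph(graph);
    TF_DeleteSessionOptions(sess_opts);
    TF_DeleteStatus(status);
    return 0;
}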


So I assumed the following would help to activate XLA:

config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
output = config.SerializeToString()
print(list(map(hex, output)))


And implemented it like this:

// Serialized ConfigProto: graph_options { optimizer_options { global_jit_level: ON_1 } }
// 0x52 = field 10 (graph_options), 0x1a = field 3 (optimizer_options), 0x28 = field 5 (global_jit_level), value 1
uint8_t config[] = {0x52, 0x4, 0x1a, 0x2, 0x28, 0x1};
TF_SetConfig(sess_opts, config, sizeof(config), status);
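
If the thread limits and the JIT level are both needed, the serialized fields can be concatenated into one buffer, since protobuf fields may appear in any order. A sketch (the helper name is mine, not from the thread), assuming max_cores is at most 127 so that it encodes as a single varint byte:

#include <stdint.h>
#include <tensorflow/c/c_api.h>

// Applies the thread limits and global_jit_level = ON_1 in one ConfigProto.
static void set_threads_and_xla(TF_SessionOptions* sess_opts,
                                uint8_t max_cores, TF_Status* status) {
    uint8_t config[] = {
        0x10, max_cores,                  // field 2: intra_op_parallelism_threads
        0x28, max_cores,                  // field 5: inter_op_parallelism_threads
        0x52, 0x4, 0x1a, 0x2, 0x28, 0x1   // graph_options { optimizer_options { global_jit_level: ON_1 } }
    };
    TF_SetConfig(sess_opts, config, sizeof(config), status);
}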


However, XLA still seems to be deactivated. Can someone help me with this problem? Or, if you take another look at the warning:

2019-06-17 16:09:06.753737: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1541] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.


Does this mean I have to set XLA_FLAGS during the build process?
Thanks in advance!


sshcrbum1#

OK, I figured out how to use the XLA JIT. It is only available via the c_api_experimental.h header. Just include that header and then use:

TF_EnableXLACompilation(sess_opts,true);
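
A minimal sketch of how this could be wired in (the helper function is illustrative, not from the answer; it assumes libtensorflow was built with XLA support). According to the header's documentation, enabling it sets global_jit_level to ON_1 in the ConfigProto and also prepares the XLA flag values:

#include <tensorflow/c/c_api.h>
#include <tensorflow/c/c_api_experimental.h> // declares TF_EnableXLACompilation

// Returns session options with XLA JIT compilation turned on.
TF_SessionOptions* make_xla_session_options(void) {
    TF_SessionOptions* sess_opts = TF_NewSessionOptions();
    TF_EnableXLACompilation(sess_opts, 1); // second argument is an unsigned char flag
    return sess_opts;
}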



iih3973s2#

@tre95 I tried
TF_EnableXLACompilation(sess_opts, true);
but compilation then failed with the error collect2: error: ld returned 1 exit status. Without that call, the program compiles and runs fine.


l7wslrjt3#

For the TensorFlow 1.14 CPU version, I set the environment variable TF_XLA_FLAGS to the value --tf_xla_cpu_global_jit, i.e.

export TF_XLA_FLAGS=--tf_xla_cpu_global_jit

and that worked for me. Alternatively, setting the environment variable XLA_FLAGS to the value --xla_hlo_profile also works, i.e.

export XLA_FLAGS=--xla_hlo_profile


I hope this also helps you speed up your machine learning model with Accelerated Linear Algebra (XLA). Thank you!
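
Since this thread is about the C API: the same environment variable can presumably also be set from inside the program itself, as long as that happens before the first session runs (TensorFlow parses the JIT flags once, per the one-time warning above). A sketch using POSIX setenv (Windows would need a different call such as _putenv_s):

#include <stdlib.h>

int main(void) {
    // Must run before TensorFlow reads its JIT flags, i.e. before the
    // first session is created and executed.
    setenv("TF_XLA_FLAGS", "--tf_xla_cpu_global_jit", /*overwrite=*/1);

    // ... usual C API session setup and inference ...
    return 0;
}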
