- Version and environment information:
1) PaddlePaddle version: 1.8.2
2) GPU: AI Studio script-task environment (a 4-GPU configuration was used for the test runs)
Project link: https://aistudio.baidu.com/aistudio/clusterprojectdetail/773316 (public)
Symptom: every time the run reaches the second epoch, the per-iteration time doubles. This time includes the reader's data-loading time; further testing shows that the GPU compute time is unchanged and it is mainly the reader's loading time that has doubled. I don't know what causes this. The log output below shows the behavior:
Loss at epoch 0 step 5: [4.3775268], acc: [0.125],period: 16.51509690284729
Loss at epoch 0 step 10: [3.8490183], acc: [0.40625],period: 16.0886127948761
Loss at epoch 0 step 15: [2.9390435], acc: [0.5625],period: 16.38063144683838
Loss at epoch 0 step 20: [2.3567476], acc: [0.65625],period: 16.37914490699768
Loss at epoch 0 step 25: [1.5900159], acc: [0.734375],period: 16.665042877197266
Loss at epoch 0 step 30: [1.2181798], acc: [0.734375],period: 15.94478726387024
Loss at epoch 0 step 35: [0.6449917], acc: [0.90625],period: 16.192134141921997
valid Loss at step 0: [0.25852293], acc: [1.]
valid Loss at step 20: [0.46329147], acc: [0.875]
valid Loss at step 40: [1.2772802], acc: [0.71875]
valid Loss at step 60: [1.3179752], acc: [0.6875]
valid Loss at step 80: [1.1119821], acc: [0.75]
start data reader (trainers_num: 4, trainer_id: 1)
valid Loss at step 100: [2.2030392], acc: [0.1875]
start data reader (trainers_num: 4, trainer_id: 3)
start data reader (trainers_num: 4, trainer_id: 2)
Validation accuracy: 0.8328919410705566
Validation loss: 0.7212506532669067
Epoch1 lr=0.009999999776482582
start data reader (trainers_num: 4, trainer_id: 0)
Loss at epoch 1 step 5: [0.55662304], acc: [0.890625],period: 36.16345572471619
Loss at epoch 1 step 10: [0.42250103], acc: [0.875],period: 37.54211711883545
Loss at epoch 1 step 15: [0.27849782], acc: [0.96875],period: 37.35439491271973
Loss at epoch 1 step 20: [0.33116293], acc: [0.90625],period: 35.67357039451599
Loss at epoch 1 step 25: [0.37910157], acc: [0.890625],period: 37.441500663757324
Loss at epoch 1 step 30: [0.36237404], acc: [0.953125],period: 37.42715764045715
Loss at epoch 1 step 35: [0.36434418], acc: [0.921875],period: 35.077528953552246
valid Loss at step 0: [0.08099228], acc: [1.]
valid Loss at step 20: [0.43469694], acc: [0.84375]
valid Loss at step 40: [1.1833289], acc: [0.71875]
valid Loss at step 60: [0.94777864], acc: [0.71875]
valid Loss at step 80: [0.42069796], acc: [0.9375]
valid Loss at step 100: [1.0385896], acc: [0.71875]
start data reader (trainers_num: 4, trainer_id: 2)
start data reader (trainers_num: 4, trainer_id: 3)
start data reader (trainers_num: 4, trainer_id: 1)
Validation accuracy: 0.8781779408454895
Validation loss: 0.44851189851760864
Epoch2 lr=0.0010000000474974513
start data reader (trainers_num: 4, trainer_id: 0)
Loss at epoch 2 step 5: [0.38328272], acc: [0.921875],period: 37.521843671798706
Loss at epoch 2 step 10: [0.3836357], acc: [0.90625],period: 36.96732306480408
Loss at epoch 2 step 15: [0.2954501], acc: [0.9375],period: 37.78758931159973
Loss at epoch 2 step 20: [0.28290433], acc: [0.9375],period: 36.79577970504761
Loss at epoch 2 step 25: [0.23077166], acc: [0.984375],period: 35.68168759346008
Loss at epoch 2 step 30: [0.303536], acc: [0.9375],period: 37.21159076690674
Loss at epoch 2 step 35: [0.25395435], acc: [0.9375],period: 36.25998377799988
valid Loss at step 0: [0.07491596], acc: [1.]
valid Loss at step 20: [0.39159745], acc: [0.84375]
valid Loss at step 40: [1.0963948], acc: [0.71875]
valid Loss at step 60: [0.9259555], acc: [0.71875]
valid Loss at step 80: [0.5368143], acc: [0.90625]
valid Loss at step 100: [0.93665516], acc: [0.78125]
Validation accuracy: 0.8916842937469482
Validation loss: 0.41894418001174927
INFO 2020-09-01 00:58:53,138 launch.py:223] Local procs complete, POD info:rank:0 id:None addr:127.0.0.1 port:None visible_gpu:[] trainers:["gpu:['0'] endpoint:127.0.0.1:52136 rank:0", "gpu:['1'] endpoint:127.0.0.1:35241 rank:1", "gpu:['2'] endpoint:127.0.0.1:47208 rank:2", "gpu:['3'] endpoint:127.0.0.1:49438 rank:3"]
period is the measured running time of each iteration (data-reading time included). The test code is as follows:
import time
import numpy as np
import paddle.fluid as fluid

# inside the epoch loop (i is the epoch index); train_reader, train_model,
# opt and dev_id are defined earlier in the script
cur_time = time.time()
for batch_id, data in enumerate(train_reader()):
    dy_x_data = np.array([x[0] for x in data]).astype('float32')
    y_data = np.array([[x[1]] for x in data]).astype('int64')
    period = time.time() - cur_time  # full iteration time, reader included
    cur_time = time.time()
    img = fluid.dygraph.to_variable(dy_x_data)
    label = fluid.dygraph.to_variable(y_data)
    label.stop_gradient = True
    out = train_model(img)
    acc = fluid.layers.accuracy(out, label)
    loss = fluid.layers.cross_entropy(out, label)
    avg_loss = fluid.layers.mean(loss)
    avg_loss = train_model.scale_loss(avg_loss)  # scale for multi-GPU gradient averaging
    avg_loss.backward()
    train_model.apply_collective_grads()
    opt.minimize(avg_loss)
    train_model.clear_gradients()
    if dev_id == 0 and batch_id % 5 == 0 and batch_id > 0:
        print("Loss at epoch {} step {}: {}, acc: {},period: {}".format(
            i, batch_id, avg_loss.numpy() * 4, acc.numpy(), period))
29 answers
nbysray5 16#
Hi,
paddle.reader.xmap_readers is a method that uses multiple threads to speed up data loading. In dygraph mode, because of Python's GIL, multi-threading brings little benefit and the data is read almost serially; in dygraph you need multi-process data loading to get a noticeable speedup. As for the phenomenon itself, my guess is that the first epoch is fast because the threads had already read some data from disk during startup, before training began, whereas by the second epoch there is no idle time left for the threads to prefetch.
paddle.reader.xmap_readers will be phased out over time. If you are interested, we suggest trying Paddle's multi-process DataLoader: http://180.76.141.178/documentation/docs/en/api/io/DataLoader.html
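For reference, a minimal sketch of the multi-process route recommended here, assuming the Dataset and DataLoader classes exposed under paddle.io in the 1.8 line (documented only in English at the time, see below in the thread). VideoDataset, decode_sample and train_list are placeholder names for the project's own pieces, not part of Paddle:

import numpy as np
import paddle.fluid as fluid
from paddle.io import Dataset, DataLoader

class VideoDataset(Dataset):
    # hypothetical wrapper; decode_sample stands in for the project's video decoding
    def __init__(self, sample_list):
        self.sample_list = sample_list

    def __getitem__(self, idx):
        img, label = decode_sample(self.sample_list[idx])
        return img.astype('float32'), np.array([label]).astype('int64')

    def __len__(self):
        return len(self.sample_list)

loader = DataLoader(VideoDataset(train_list),
                    places=fluid.CUDAPlace(0),
                    batch_size=16,
                    shuffle=True,
                    num_workers=4,     # worker processes, so the GIL cannot serialize the reads
                    return_list=True)  # yield [img, label] lists, convenient in dygraph

for img, label in loader:
    ...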
cedebl8k 17#
You are welcome to modify my code to verify this. But if your explanation is that the later epochs leave xmap_readers no time to prefetch, it still should not end up slower than single-threaded reading, should it? Doesn't that count as a bug in xmap_readers?
mpgws1up 18#
As I said before, from debugging this kind of issue in the past, the cause of this phenomenon is: the first epoch is fast because the threads read some data from disk during startup, before training began, but by the second epoch there is no time left for the threads to prefetch.
This cause is easy to confirm with logging, as mentioned earlier. Add some logs to pin down the real reason: in the reader implementation, time how long loading one piece of data from disk takes and print it, then check the logs to see whether some data was loaded before the first epoch started and whether nothing was preloaded before the second. Also compare the second and third epochs: if their times match, that is the steady-state speed xmap_readers can reach.
If you really don't want to add logging to verify this, how should I modify your code to run the verification? Is forking your public project enough to experiment with it?
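A sketch of the kind of logging suggested above, assuming the reader is a generator yielding one sample per iteration; timed_reader is my own wrapper name:

import time

def timed_reader(reader, tag=''):
    # wrap a sample-level reader and print how long producing each sample takes
    def _reader():
        start = time.time()
        for n, sample in enumerate(reader()):
            print('{} sample {} loaded in {:.4f}s'.format(tag, n, time.time() - start))
            yield sample
            start = time.time()  # restart once the consumer has taken the sample
    return _reader

Wrapping the raw reader before xmap_readers shows when the worker threads actually hit the disk; wrapping the output of xmap_readers shows what the training loop really waits on.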
jtoj6r0c 19#
1. I also tried this under the static-graph framework; the phenomenon is the same as in dygraph: from the second epoch on, the iteration time suddenly becomes longer.
2. I replaced xmap_readers with map_readers, i.e. the single-threaded version (see the snippet below). The iteration time is indeed longer than the multi-threaded one, but it is stable across epochs, with no second-epoch slowdown.
3. Overall, xmap_readers seems to lose its multi-threaded speedup from the second epoch on and even becomes slower than single-threaded reading, so there must be a hidden problem somewhere.
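For anyone reproducing the comparison in point 2, the swap is roughly the following, with the signatures as in paddle.reader of the 1.8 line; decode_fn and raw_reader stand in for the project's own mapper and raw file reader:

import paddle

# multi-threaded version: 8 worker threads, a buffer of 16 decoded samples
train_reader = paddle.reader.xmap_readers(decode_fn, raw_reader, 8, 16)

# single-threaded baseline: the same mapper applied serially
train_reader = paddle.reader.map_readers(decode_fn, raw_reader)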
z2acfund 20#
You're right; it looks like the Dataset documentation has not been published, it isn't among those pages.
I will report this internally.
For now, I'm afraid you will have to read the code directly: https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/dataloader/dataset.py
Sorry about that.
As for why xmap_readers gets slower, if it's convenient for you, please try the experiments above.
k7fdbhmy 21#
I read the English documentation. To use this API you first have to build a Dataset class, but your documentation on the Dataset class is very thin and does not explain how to construct your own.
3duebb1j 22#
You could also try this: remove xmap_readers and measure how long the data reading takes when it runs fully serially.
eqzww0vc 23#
I have tried that function. I used the from_generator method, which just wraps another layer around the existing xmap_readers; the example on the Paddle website does the same thing, and I measured no difference in speed.
The Chinese documentation for this API has not been written yet by the responsible developers; only the English version exists for now.
lokaqttq 24#
No, not from_generator; note that from_generator is not involved here.
I take it the underlying need is really to reduce the data-loading overhead; the reason we recommend the new DataLoader is that xmap_readers is simply not fast by design. If you really want to get to the bottom of this, you can run the experiments described above (timing the reader, comparing the second and third epochs, and testing the serial case).
wribegjk 26#
Oh, sorry, it's this one: https://www.paddlepaddle.org.cn/documentation/docs/en/api/io/DataLoader.html
vbopmzt1 27#
After splitting the videos into frames, you only need to read the frames you want, whereas with a video file every frame has to be decoded first, so the overall run is naturally faster with frames than with direct video reading. And the phenomenon of later epochs taking twice as long does not appear; the time stays basically constant. But splitting into frames takes a great deal of disk space, which I do not have at the moment, so I still have to read the videos directly, and I still need to solve the problem of later epochs being slower (a frame-seeking sketch follows the log below). Also, the DataLoader link you gave does not open.
Loss at epoch 0 step 5: [4.282148], acc: [0.171875],period: 4.781822443008423
Loss at epoch 0 step 10: [3.6861625], acc: [0.453125],period: 4.744587182998657
Loss at epoch 0 step 15: [2.8119168], acc: [0.546875],period: 4.819806337356567
Loss at epoch 0 step 20: [2.2978542], acc: [0.625],period: 4.864906072616577
Loss at epoch 0 step 25: [1.502399], acc: [0.765625],period: 4.836191177368164
Loss at epoch 0 step 30: [1.064183], acc: [0.8125],period: 4.834831953048706
Loss at epoch 0 step 35: [0.6212496], acc: [0.859375],period: 4.700877666473389
valid Loss at step 0: [0.3081354], acc: [1.]
valid Loss at step 20: [0.45718595], acc: [0.875]
valid Loss at step 40: [1.1972005], acc: [0.8125]
valid Loss at step 60: [1.3843231], acc: [0.75]
valid Loss at step 80: [0.9100992], acc: [0.8125]
valid Loss at step 100: [1.7595642], acc: [0.375]
start data reader (trainers_num: 4, trainer_id: 2)
start data reader (trainers_num: 4, trainer_id: 3)
start data reader (trainers_num: 4, trainer_id: 1)
Validation accuracy: 0.8474576473236084
Validation loss: 0.7082627415657043
Epoch1 lr=0.009999999776482582
start data reader (trainers_num: 4, trainer_id: 0)
Loss at epoch 1 step 5: [0.5127614], acc: [0.90625],period: 4.738304138183594
Loss at epoch 1 step 10: [0.41135028], acc: [0.875],period: 4.8494274616241455
Loss at epoch 1 step 15: [0.2587752], acc: [0.9375],period: 4.924009799957275
Loss at epoch 1 step 20: [0.28454575], acc: [0.96875],period: 4.774179220199585
Loss at epoch 1 step 25: [0.3116944], acc: [0.9375],period: 4.83136248588562
Loss at epoch 1 step 30: [0.29290053], acc: [0.9375],period: 4.864048004150391
Loss at epoch 1 step 35: [0.31969106], acc: [0.921875],period: 4.670931577682495
start data reader (trainers_num: 4, trainer_id: 3)
start data reader (trainers_num: 4, trainer_id: 2)
start data reader (trainers_num: 4, trainer_id: 1)
Epoch2 lr=0.009999999776482582
start data reader (trainers_num: 4, trainer_id: 0)
Loss at epoch 2 step 5: [0.16409254], acc: [0.96875],period: 4.869810581207275
Loss at epoch 2 step 10: [0.26757717], acc: [0.921875],period: 4.827917575836182
Loss at epoch 2 step 15: [0.1798913], acc: [0.9375],period: 4.828458547592163
Loss at epoch 2 step 20: [0.171868], acc: [0.9375],period: 4.774251222610474
Loss at epoch 2 step 25: [0.1257487], acc: [0.984375],period: 4.686167478561401
Loss at epoch 2 step 30: [0.24517612], acc: [0.90625],period: 4.849564790725708
Loss at epoch 2 step 35: [0.13770753], acc: [0.984375],period: 4.822800636291504
valid Loss at step 0: [0.07536523], acc: [1.]
valid Loss at step 20: [0.3767517], acc: [0.875]
valid Loss at step 40: [0.81333774], acc: [0.75]
valid Loss at step 60: [0.7042489], acc: [0.78125]
valid Loss at step 80: [0.35271907], acc: [0.84375]
valid Loss at step 100: [0.36941373], acc: [0.90625]
start data reader (trainers_num: 4, trainer_id: 3)
start data reader (trainers_num: 4, trainer_id: 2)
start data reader (trainers_num: 4, trainer_id: 1)
Validation accuracy: 0.9025423526763916
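On the decoding point above: if caching frames as jpg is ruled out by disk space, one way to avoid decoding a whole video is to seek before reading. A sketch using OpenCV, which is an assumption here since the project may decode differently, and seek accuracy depends on the codec:

import cv2

def read_frames(path, frame_ids):
    # decode only the requested frame indices instead of the whole file
    cap = cv2.VideoCapture(path)
    frames = []
    for idx in frame_ids:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # jump to frame idx
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames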
tzxcd3kk 28#
62o28rlo 29#
1. Can xmap_readers still be used in static-graph mode? How is its performance, and is there a better method you would recommend?
2. This task mainly processes video samples (avi, mp4 formats). Can you recommend a fast way to read video data?
3. After I extract the videos into frames and read them in as jpg, the growing training time no longer appears. Is the problem tied to how the videos are read rather than to xmap_readers?