While migrating CouchDB from 1.6.1 to 2.3.1, the couchup utility is taking a very long time to rebuild views, and it seems to have memory problems. The database is in the 500 GB range. The rebuild has been running for about 5 to 6 days and still hasn't finished. Is there any way to speed it up?
When I try replication instead, couchup runs for 2-3 minutes, then CouchDB dies from a memory leak and restarts; at that rate replication would take about 10 days. Replication shows a progress bar, but the view rebuild does not, so I have no idea how much has been done.
CouchDB is installed on an RHEL Linux server.
1 Answer
Reducing backlog growth
When couchup encounters views that take longer than 5 seconds to rebuild, it carries on calling additional view URLs, triggering their rebuilds. Once a number of long-running rebuilds are in flight, even rebuilds that would otherwise have been short take at least 5 seconds, so a large backlog builds up. If individual databases are large, or the map/reduce functions are very inefficient, it is probably best to raise the timeout to something like 5 minutes. If you see more than a couple of timeout messages, it is probably time to kill couchup and double the timeout.
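As a sketch of the advice above, assuming the couchup shipped with CouchDB 2.x, whose rebuild subcommand takes a per-view timeout in seconds via -t (check `couchup rebuild --help` on your install to confirm the flag):

```shell
# Raise the per-view rebuild timeout from the 5-second default to
# 5 minutes (300 s) so slow views don't trigger a pile-up of
# concurrent rebuilds. If timeouts still appear, kill couchup and
# rerun with the value doubled (600, 1200, ...).
couchup rebuild -t 300
```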
Observing index growth
By default, view_index_dir is the same as the database directory: if the data is in /var/lib/couchdb/shards, then /var/lib/couchdb is the configured directory and the indexes are stored in /var/lib/couchdb/.shards. You can watch which index shard files are being created and growing, or point view_index_dir somewhere separate for easier observation.
What resources are running out?
You can tune CouchDB in general, but it is hard to say whether tuning is needed until the system is no longer rebuilding all indexes at once.
In particular, you should look for and disable any auto-compaction. Look at the files under /proc/[couchdb proc] to figure out the effective fd limits, how many files are currently open, and whether the crash happens around a specific number of open files; due to sharding, the number of open files is usually a multiple of what it was in earlier versions. Also watch memory growth and figure out whether it stabilizes enough that adding swap would prevent the problem.
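A minimal sketch of the /proc inspection described above; it uses the shell's own PID ($$) as a runnable placeholder, which you would replace with the actual CouchDB (beam.smp) process ID:

```shell
# Placeholder PID so the sketch runs as-is; substitute the real
# CouchDB PID, e.g. from `pgrep -f beam.smp`.
pid=$$

# Effective open-file limit for the process:
grep 'Max open files' "/proc/$pid/limits"

# Number of file descriptors currently open:
ls "/proc/$pid/fd" | wc -l
```

Sampling the fd count periodically (e.g. in a `watch` loop) and noting the value just before a crash tells you whether the limit shown in /proc/$pid/limits is what is being hit.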