Cannot import migration data into a MariaDB database

pb3skfrl posted on 2021-06-21 in Mysql

I am trying to import some exported migration data into a MariaDB database.
I can import it successfully into an H2 database.
But when trying it against MariaDB, it first creates only 87 of the 91 tables in the database, and then ends in error:

2018-04-22 14:13:33,275 INFO  [org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider] (ServerService Thread Pool -- 58) Initializing database schema. Using changelog META-INF/jpa-changelog-master.xml
2018-04-22 14:18:22,393 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0348: Timeout after [300] seconds waiting for service container stability. Operation will roll back. Step that first updated the service container was 'add' at address '[
    ("core-service" => "management"),
    ("management-interface" => "http-interface")
]'

This log block shows that it takes almost 5 minutes, which is far too long.
More of the stack trace:

16:16:55,690 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 58) MSC000001: Failed to start service jboss.undertow.deployment.default-server.default-host./auth: org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./auth: java.lang.RuntimeException: RESTEASY003325: Failed to construct public org.keycloak.services.resources.KeycloakApplication(javax.servlet.ServletContext,org.jboss.resteasy.core.Dispatcher)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:84)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
    at org.jboss.threads.JBossThread.run(JBossThread.java:320)
Caused by: java.lang.RuntimeException: RESTEASY003325: Failed to construct public org.keycloak.services.resources.KeycloakApplication(javax.servlet.ServletContext,org.jboss.resteasy.core.Dispatcher)
    at org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:162)
    at org.jboss.resteasy.spi.ResteasyProviderFactory.createProviderInstance(ResteasyProviderFactory.java:2298)
    at org.jboss.resteasy.spi.ResteasyDeployment.createApplication(ResteasyDeployment.java:340)
    at org.jboss.resteasy.spi.ResteasyDeployment.start(ResteasyDeployment.java:253)
    at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.init(ServletContainerDispatcher.java:120)
    at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.init(HttpServletDispatcher.java:36)
    at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:117)
    at org.wildfly.extension.undertow.security.RunAsLifecycleInterceptor.init(RunAsLifecycleInterceptor.java:78)
    at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:103)
    at io.undertow.servlet.core.ManagedServlet$DefaultInstanceStrategy.start(ManagedServlet.java:250)
    at io.undertow.servlet.core.ManagedServlet.createServlet(ManagedServlet.java:133)
    at io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:565)
    at io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:536)
    at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:42)
    at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
    at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
    at io.undertow.servlet.core.DeploymentManagerImpl.start(DeploymentManagerImpl.java:578)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:100)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:81)
    ... 6 more
Caused by: java.lang.RuntimeException: Failed to update database
    at org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.update(LiquibaseJpaUpdaterProvider.java:102)
    at org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.update(LiquibaseJpaUpdaterProvider.java:67)
    at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.update(DefaultJpaConnectionProviderFactory.java:322)
    at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.migration(DefaultJpaConnectionProviderFactory.java:292)
    at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.lambda$lazyInit$0(DefaultJpaConnectionProviderFactory.java:179)
    at org.keycloak.models.utils.KeycloakModelUtils.suspendJtaTransaction(KeycloakModelUtils.java:544)
    at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.lazyInit(DefaultJpaConnectionProviderFactory.java:130)
    at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.create(DefaultJpaConnectionProviderFactory.java:78)
    at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.create(DefaultJpaConnectionProviderFactory.java:56)
    at org.keycloak.services.DefaultKeycloakSession.getProvider(DefaultKeycloakSession.java:163)
    at org.keycloak.models.jpa.JpaRealmProviderFactory.create(JpaRealmProviderFactory.java:51)
    at org.keycloak.models.jpa.JpaRealmProviderFactory.create(JpaRealmProviderFactory.java:33)
    at org.keycloak.services.DefaultKeycloakSession.getProvider(DefaultKeycloakSession.java:163)
    at org.keycloak.models.cache.infinispan.RealmCacheSession.getDelegate(RealmCacheSession.java:144)
    at org.keycloak.models.cache.infinispan.RealmCacheSession.getMigrationModel(RealmCacheSession.java:137)
    at org.keycloak.migration.MigrationModelManager.migrate(MigrationModelManager.java:76)
    at org.keycloak.services.resources.KeycloakApplication.migrateModel(KeycloakApplication.java:246)
    at org.keycloak.services.resources.KeycloakApplication.migrateAndBootstrap(KeycloakApplication.java:187)
    at org.keycloak.services.resources.KeycloakApplication$1.run(KeycloakApplication.java:146)
    at org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:227)
    at org.keycloak.services.resources.KeycloakApplication.<init>(KeycloakApplication.java:137)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:150)
    ... 28 more
Caused by: liquibase.exception.MigrationFailedException: Migration failed for change set META-INF/jpa-changelog-2.1.0.xml::2.1.0::bburke@redhat.com:
     Reason: liquibase.exception.UnexpectedLiquibaseException: java.sql.SQLException: IJ031040: Connection is not associated with a managed connection: org.jboss.jca.adapters.jdbc.jdk8.WrappedConnectionJDK8@55194ba1
    at liquibase.changelog.ChangeSet.execute(ChangeSet.java:573)
    at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:51)
    at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:73)
    at liquibase.Liquibase.update(Liquibase.java:210)
    at liquibase.Liquibase.update(Liquibase.java:190)
    at liquibase.Liquibase.update(Liquibase.java:186)
    at org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.updateChangeSet(LiquibaseJpaUpdaterProvider.java:135)
    at org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.update(LiquibaseJpaUpdaterProvider.java:88)
    ... 53 more
Caused by: liquibase.exception.UnexpectedLiquibaseException: java.sql.SQLException: IJ031040: Connection is not associated with a managed connection: org.jboss.jca.adapters.jdbc.jdk8.WrappedConnectionJDK8@55194ba1
    at liquibase.database.jvm.JdbcConnection.getURL(JdbcConnection.java:79)
    at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:62)
    at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:122)
    at liquibase.database.AbstractJdbcDatabase.execute(AbstractJdbcDatabase.java:1247)
    at liquibase.database.AbstractJdbcDatabase.executeStatements(AbstractJdbcDatabase.java:1230)
    at liquibase.changelog.ChangeSet.execute(ChangeSet.java:548)
    ... 60 more
Caused by: java.sql.SQLException: IJ031040: Connection is not associated with a managed connection: org.jboss.jca.adapters.jdbc.jdk8.WrappedConnectionJDK8@55194ba1
    at org.jboss.jca.adapters.jdbc.WrappedConnection.lock(WrappedConnection.java:164)
    at org.jboss.jca.adapters.jdbc.WrappedConnection.getMetaData(WrappedConnection.java:913)
    at liquibase.database.jvm.JdbcConnection.getURL(JdbcConnection.java:77)
    ... 65 more

The export command is:

$KEYCLOAK_HOME/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=exported_realms -Dkeycloak.migration.strategy=OVERWRITE_EXISTING

The failed import command is:

$KEYCLOAK_HOME/bin/standalone.sh -Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=exported_realms -Dkeycloak.migration.strategy=OVERWRITE_EXISTING

Here is the datasource configuration in the standalone/configuration/standalone.xml file:

<datasource jndi-name="java:/jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true">
  <connection-url>jdbc:mysql://localhost:3306/keycloak?useSSL=false&amp;characterEncoding=UTF-8</connection-url>
  <driver>mysql</driver>
  <pool>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>15</max-pool-size>
  </pool>
  <security>
    <user-name>keycloak</user-name>
    <password>xxxxxx</password>
  </security>
  <validation>
    <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker"/>
    <validate-on-match>true</validate-on-match>
    <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter"/>
  </validation>
</datasource>
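
A quick way to confirm that this datasource can actually reach MariaDB is the WildFly CLI bundled with Keycloak; a minimal sketch, assuming the server is running with the management interface on its default port and that the data-source resource is named after the pool-name above:

# Ask the datasources subsystem to validate a pooled connection against MariaDB
$KEYCLOAK_HOME/bin/jboss-cli.sh --connect --command="/subsystem=datasources/data-source=KeycloakDS:test-connection-in-pool"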

I am using keycloak-3.4.1.Final and mariadb-10.1.24 on Java version 1.8.0_60.
Running the ./mysqltuner.pl utility shows:

-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 2.0G/222.6M
[OK] Ratio InnoDB log file size / InnoDB Buffer pool size: 256.0M * 2/2.0G should be equal 25%
[OK] InnoDB buffer pool instances: 2
[--] InnoDB Buffer Pool Chunk Size not used or defined in your version
[!!] InnoDB Read buffer efficiency: 63.85% (802 hits/ 1256 total)
[!!] InnoDB Write Log efficiency: 0% (1 hits/ 0 total)
[OK] InnoDB log waits: 0.00% (0 waits / 1 writes)

General recommendations:
    Control warning line(s) into /home/stephane/programs/install/mariadb/mariadb.error.log file
    1 CVE(s) found for your MySQL release. Consider upgrading your version !
    MySQL started within last 24 hours - recommendations may be inaccurate
    Dedicate this server to your database for highest performance.
    Reduce or eliminate unclosed connections and network issues
    Consider installing Sys schema from https://github.com/mysql/mysql-sys
Variables to adjust:
    query_cache_size (=0)
    query_cache_type (=0)
    query_cache_limit (> 1M, or use smaller result sets)
zbq4xfa01#

Your ulimit -a report shows 'open files' at 1024; raising it with ulimit -n would give MySQL more headroom.
In the 1318 seconds of uptime reported by SHOW GLOBAL STATUS we count 33 Com_rollback items and 1 Handler_rollback, probably the result of all the Java failures logged above (see the check below).
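
A minimal sketch for reading those counters yourself, assuming the mysql client connects with your usual credentials and socket:

# Uptime plus the two rollback counters mentioned above
mysql -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('Uptime','Com_rollback','Handler_rollback');"
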
Suggestions to consider for your my.cnf / my.ini [mysqld] section follow; they may speed up processing.

Suggestions by Back to Basics, Inc. for this import processing, 2018-05-24:

max_connect_errors=10  # why tolerate 100 hacker/cracker attempts?
thread_cache_size=30  # from 4  to ensure threads ready to go
innodb_io_capacity_max=10000  # from 2000 default, for SSD vs HDD
innodb_io_capacity=5000  # from 200 default, for SSD vs HDD
have_symlink=NO  # to protect server from RANSOMWARE crowd
innodb_flush_neighbors=0  # from 1, no need when SSD - no rotational delay
innodb_lru_scan_depth=512  # from 1024 to conserve CPU see v8 refman
innodb_print_all_deadlocks=ON  # from OFF in error log for proactive correction
innodb_purge_threads=4  # from 1 to speed purge processing
log_bin=OFF  # from ON unless you need to invest the resources during import
log_warnings=2  # from 1 for addl info on aborted_connection in error log
max_join_size=1000000000  # from upper limit of 4 Billion rows
max_seeks_for_key=32  # rather than allowing optimizer to search 4 Billion ndx's.
max_write_lock_count=16  # to allow RD after nn lcks rather than 4 Billion
performance_schema=OFF  # from ON for this IMPORT processing speed
log_queries_not_using_indexes=0  # not likely to look at these, for import

Copy and paste these at the end of your [mysqld] section for a quick test; when time permits, remove the duplicate variable names from the top of [mysqld]. They will not fix the logged error, but they should speed up processing.
Please provide feedback when time permits.
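
To spot-check that the running server picked up some of these values after a restart, one option is the following (a sketch, assuming the mysql client connects with your usual credentials):

# Verify a few of the suggested [mysqld] settings on the live server
mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('innodb_io_capacity','innodb_io_capacity_max','innodb_flush_neighbors','innodb_purge_threads','thread_cache_size');"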

busg9geu2#

After installing the mysqltuner utility:

wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/basic_passwords.txt -O basic_passwords.txt
wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/vulnerabilities.csv -O vulnerabilities.csv
chmod +x mysqltuner.pl
./mysqltuner.pl

I learned that my database server was badly configured and much too slow on writes, which caused the import operation to time out.
I then configured the my.cnf file with the following directives:

skip-name-resolve = 1
performance_schema = 1
innodb_log_file_size = 256M
innodb_buffer_pool_size = 2G
innodb_buffer_pool_instances = 2
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
thread_cache_size = 4

The one directive that allowed the import to complete successfully was:

innodb_flush_log_at_trx_commit = 2
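
Since innodb_flush_log_at_trx_commit is a dynamic variable, it can also be changed at runtime for a quick test before committing it to my.cnf; a sketch, assuming a client account with the SUPER privilege:

# Relax the redo-log flush policy for the duration of the import, then verify it
mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"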

Update: I commented out the innodb_flush_log_at_trx_commit = 2 directive in order to trigger the error again. I could then collect the additional information asked for in the comments below.

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14761
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14761
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3743        2751         116         115         875         700
Swap:          4450          74        4376

The complete my.cnf file:

[mysqld]
sql_mode        = NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION # This is strict mode: NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
socket          = /home/stephane/programs/install/mariadb/tmp/mariadb.sock
user            = stephane
basedir         = /home/stephane/programs/install/mariadb
datadir         = /home/stephane/programs/install/mariadb/data
log-bin         = /home/stephane/programs/install/mariadb/mariadb.bin.log
log-error       = /home/stephane/programs/install/mariadb/mariadb.error.log
general-log-file     = /home/stephane/programs/install/mariadb/mariadb.log
slow-query-log-file  = /home/stephane/programs/install/mariadb/mariadb.slow.queries.log
long_query_time = 1
log-queries-not-using-indexes = 1
innodb_file_per_table = 1
sync_binlog = 1
character-set-client-handshake = FALSE
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
wait_timeout            = 28800 # amount of seconds during inactivity that MySQL will wait before it will close a connection on a non-interactive connection
interactive_timeout     = 28800 # same, but for interactive sessions
max_allowed_packet = 128M
net_write_timeout = 180
skip-name-resolve = 1
thread_cache_size = 4

# skip-networking

# skip-host-cache

# bulk_insert_buffer_size = 1G

performance_schema = 1
innodb_log_file_size = 128M
innodb_buffer_pool_size = 1G
innodb_buffer_pool_instances = 2

# innodb_flush_log_at_trx_commit = 2

innodb_flush_method = O_DIRECT
[client]
socket          = /home/stephane/programs/install/mariadb/tmp/mariadb.sock
default-character-set = utf8mb4
[mysql]
default-character-set = utf8mb4
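
To double-check which option files and values the server will actually read at startup, mysqld can print its effective defaults; a sketch, assuming the mysqld binary of this local install is on the PATH:

# Print the options that would be picked up from the configuration files
mysqld --print-defaults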

Environment state and variables
Other command output:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1,9G     0  1,9G   0% /dev
tmpfs           375M  6,1M  369M   2% /run
/dev/sda1        17G  7,6G  8,4G  48% /
tmpfs           1,9G   21M  1,9G   2% /dev/shm
tmpfs           5,0M  4,0K  5,0M   1% /run/lock
tmpfs           1,9G     0  1,9G   0% /sys/fs/cgroup
/dev/sda5       438G   51G  365G  13% /home
tmpfs           375M   16K  375M   1% /run/user/1000

$ top - 19:22:22 up  1:13,  1 user,  load average: 2,04, 1,27, 1,17
Tasks: 223 total,   1 running, 222 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2,9 us,  0,6 sy,  0,0 ni, 73,3 id, 23,1 wa,  0,0 hi,  0,1 si,  0,0 st
KiB Mem :  3833232 total,   196116 free,  2611360 used,  1025756 buff/cache
KiB Swap:  4557820 total,  4488188 free,    69632 used.   985000 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                   
10399 stephane  20   0 4171240 448816  25724 S   3,6 11,7   0:39.78 java                                                                      
 8110 stephane  20   0 1217152 111392  40800 S   2,3  2,9   0:54.32 chrome                                                                    
 8290 stephane  20   0 1276140 148024  41360 S   2,0  3,9   0:43.52 chrome                                                                    
 1272 root      20   0  373844  45632  28108 S   1,0  1,2   1:37.31 Xorg                                                                      
 3172 stephane  20   0  729100  37060  22116 S   1,0  1,0   0:14.50 gnome-terminal-                                                           
11433 stephane  20   0 3163040 324680   9288 S   1,0  8,5   0:05.09 mysqld                                                                    
 8260 stephane  20   0 1242104 142028  42292 S   0,7  3,7   0:31.93 chrome                                                                    
 8358 stephane  20   0 1252060  99884  40876 S   0,7  2,6   0:34.06 chrome                                                                    
12580 root      20   0 1095296  78872  36456 S   0,7  2,1   0:10.29 dockerd                                                                   
   14 root      rt   0       0      0      0 S   0,3  0,0   0:00.01 watchdog/1                                                                
 2461 stephane  20   0 1232332 203156  74752 S   0,3  5,3   4:29.75 chrome                                                                    
 7437 stephane  20   0 3509576 199780  46004 S   0,3  5,2   0:20.66 skypeforlinux                                                             
 8079 stephane  20   0 1243784 130948  38848 S   0,3  3,4   0:23.82 chrome                                                                    
 8191 stephane  20   0 1146672  72848  37536 S   0,3  1,9   0:12.41 chrome                                                                    
 8501 root      20   0       0      0      0 S   0,3  0,0   0:00.80 kworker/0:1                                                               
 9331 stephane  20   0   46468   4164   3380 R   0,3  0,1   0:01.38 top                                                                       
    1 root      20   0  220368   8492   6404 S   0,0  0,2   0:02.26 systemd                                                                   
    2 root      20   0       0      0      0 S   0,0  0,0   0:00.00 kthreadd                                                                  
    4 root       0 -20       0      0      0 S   0,0  0,0   0:00.00 kworker/0:0H                                                              
    6 root       0 -20       0      0      0 S   0,0  0,0   0:00.00 mm_percpu_wq    

$ iostat -x 
Linux 4.13.0-39-generic (stephane-ThinkPad-X201)    22/05/2018  _x86_64_    (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           8,65    0,92    2,17    8,73    0,00   79,53

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda             42,68   23,49    816,72    905,13     8,40    36,83  16,45  61,06   17,18   52,96   1,98    19,14    38,53   4,31  28,53

$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3743        2571         137         107        1034         924
Swap:          4450          68        4382

The top, iostat and free commands were run while the import migration script was executing.
Complete mysqltuner output

xdyibdwo3#

The root cause of this error is that the Linux server needs its maximum number of open files increased. You actually need to tune your database first, because when it is very slow, that is what causes the timeout. Check the 'open files' setting with the following command:

ulimit -n

In my case, I used 200000.
Please use this example:


# maximum capability of system

user@ubuntu:~$ cat /proc/sys/fs/file-max
708444

# available limit

user@ubuntu:~$ ulimit -n
1024

# To increase the available limit to say 200000

user@ubuntu:~$ sudo vim /etc/sysctl.conf

# add the following line to it

fs.file-max = 200000

# run this to refresh with new config

user@ubuntu:~$ sudo sysctl -p

# edit the following file

user@ubuntu:~$ sudo vim /etc/security/limits.conf

# add following lines to it

* soft  nofile  200000
* hard  nofile  200000

www-data  soft  nofile  200000
www-data  hard  nofile  200000
root soft nofile 200000   
root hard nofile 200000

# edit the following file

user@ubuntu:~$ sudo vim /etc/pam.d/common-session

# add this line to it

session required pam_limits.so

# logout and login and try the following command

user@ubuntu:~$ ulimit -n
200000

# now you can increase no.of.connections per Nginx worker

# in Nginx main config /etc/nginx/nginx.conf

worker_connections 200000;
worker_rlimit_nofile 200000;
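
After raising the limit and restarting MariaDB, the value the database itself ended up with can be confirmed as follows (a sketch; if the inherited limit is still too low, open_files_limit can also be set explicitly in the [mysqld] section):

# Open-files limit actually used by the running server
mysql -e "SHOW GLOBAL VARIABLES LIKE 'open_files_limit';"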
