postgresql How to update a single column of 55 million records in batches in Postgres

rvpgvaaj  asked on 2023-02-12  in  PostgreSQL
Follow(0)|Answers(4)|Views(290)

I want to update one column of a Postgres table. There are about 55 million records, so we need to update them in batches of 10,000 records. Note: we want to update all rows, but we do not want to lock the table.
I am trying the following query:

UPDATE account SET name = 'Some name' WHERE id BETWEEN 1 AND 10000;

How can we loop so that 10,000 records are updated at a time?
Any suggestions and help would be much appreciated.

PostgreSQL version 10.5
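Since the question is about Postgres 10 (where COMMIT inside a DO block is not yet allowed, see the answers below), one simple workaround is to drive the batches from psql: generate one UPDATE statement per id window and let \gexec execute each one as its own implicitly committed statement. A minimal sketch, assuming roughly sequential ids and a placeholder value:

    -- In psql: build one UPDATE per 10,000-id window. \gexec then runs each
    -- generated statement; under autocommit every statement is its own
    -- transaction, so row locks are released after each batch.
    SELECT format(
             'UPDATE account SET name = %L WHERE id >= %s AND id < %s',
             'Some name', start_id, start_id + 10000)
    FROM   generate_series(1, 55000000, 10000) AS start_id
    \gexec

This needs no server-side procedural code at all; the trade-off is that the client has to stay connected for the whole run.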

kmbjn2e3 1#

Rather than committing all changes at once (or committing 55 million times, as suggested in another answer), I would try to split the updated rows into small batches, e.g. the 10k records you suggested. In PL/pgSQL you can iterate over a range with a given step using the keyword BY. So you can do the batch update in an anonymous code block like this:

PostgreSQL 11+

DO $$ 
DECLARE 
  page int := 10000;
  min_id bigint; max_id bigint;
BEGIN
  SELECT max(id),min(id) INTO max_id,min_id FROM account;
  FOR j IN min_id..max_id BY page LOOP 
    UPDATE account SET name = 'your magic goes here'
    WHERE id >= j AND id < j+page;
    COMMIT;            
  END LOOP;
END; $$;
  • You might need to adjust the WHERE clause to avoid unnecessary overlaps.

Test

A data sample with 1051 rows that have consecutive ids:

CREATE TABLE account (id int, name text);
INSERT INTO account VALUES(generate_series(0,1050),'untouched record..');

Executing the anonymous code block ...

DO $$ 
DECLARE 
  page int := 100;
  min_id bigint; max_id bigint;
BEGIN
  SELECT max(id),min(id) INTO max_id,min_id FROM account;
  FOR j IN min_id..max_id BY page LOOP 
    UPDATE account SET name = now() ||' -> UPDATED ' || j  || ' to ' || j+page
    WHERE id >= j AND id < j+page;
    RAISE INFO 'committing data from % to % at %', j,j+page,now();
    COMMIT;            
  END LOOP;
END; $$;
    
INFO:  committing data from 0 to 100 at 2021-04-14 17:35:42.059025+02
INFO:  committing data from 100 to 200 at 2021-04-14 17:35:42.070274+02
INFO:  committing data from 200 to 300 at 2021-04-14 17:35:42.07806+02
INFO:  committing data from 300 to 400 at 2021-04-14 17:35:42.087201+02
INFO:  committing data from 400 to 500 at 2021-04-14 17:35:42.096548+02
INFO:  committing data from 500 to 600 at 2021-04-14 17:35:42.105876+02
INFO:  committing data from 600 to 700 at 2021-04-14 17:35:42.114514+02
INFO:  committing data from 700 to 800 at 2021-04-14 17:35:42.121946+02
INFO:  committing data from 800 to 900 at 2021-04-14 17:35:42.12897+02
INFO:  committing data from 900 to 1000 at 2021-04-14 17:35:42.134388+02
INFO:  committing data from 1000 to 1100 at 2021-04-14 17:35:42.13951+02

.. and the rows get updated in batches. To prove the point, the following query counts the records grouped by the time they were updated:

SELECT DISTINCT ON (name) name, count(id)
FROM account 
GROUP BY name ORDER BY name;

                         name                         | count 
------------------------------------------------------+-------
 2021-04-14 17:35:42.059025+02 -> UPDATED 0 to 100    |   100
 2021-04-14 17:35:42.070274+02 -> UPDATED 100 to 200  |   100
 2021-04-14 17:35:42.07806+02 -> UPDATED 200 to 300   |   100
 2021-04-14 17:35:42.087201+02 -> UPDATED 300 to 400  |   100
 2021-04-14 17:35:42.096548+02 -> UPDATED 400 to 500  |   100
 2021-04-14 17:35:42.105876+02 -> UPDATED 500 to 600  |   100
 2021-04-14 17:35:42.114514+02 -> UPDATED 600 to 700  |   100
 2021-04-14 17:35:42.121946+02 -> UPDATED 700 to 800  |   100
 2021-04-14 17:35:42.12897+02 -> UPDATED 800 to 900   |   100
 2021-04-14 17:35:42.134388+02 -> UPDATED 900 to 1000 |   100
 2021-04-14 17:35:42.13951+02 -> UPDATED 1000 to 1100 |    51

Demo: db<>fiddle

mrzz3bfm 2#

You can use a procedure (available since version 11) and update the records one by one, like this:

CREATE or replace PROCEDURE do_update()
LANGUAGE plpgsql
AS $$
BEGIN
    FOR i IN 1..55000000 -- 55 million, or whatever number you need
    LOOP 

        UPDATE account SET name = 'Some name' WHERE id = i;
        COMMIT;
        
        RAISE INFO 'id: %', i;
    END LOOP;
END;
$$;

CALL do_update();
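Committing once per row means 55 million transactions, which will be very slow. A batched variant of the same procedure idea (a sketch; `do_update_batched` is a name invented here, and it assumes ids are reasonably dense starting from 1) commits every 10,000 rows instead:

    CREATE OR REPLACE PROCEDURE do_update_batched(batch_size int DEFAULT 10000)
    LANGUAGE plpgsql
    AS $$
    DECLARE
        max_id bigint;
    BEGIN
        SELECT max(id) INTO max_id FROM account;

        FOR i IN 1..max_id BY batch_size LOOP
            UPDATE account SET name = 'Some name'
            WHERE  id >= i AND id < i + batch_size;

            COMMIT;  -- releases row locks and lets VACUUM reuse dead tuples

            RAISE INFO 'updated ids % to %', i, i + batch_size - 1;
        END LOOP;
    END;
    $$;

    CALL do_update_batched();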

vh0rcniy 3#

Setting up a test environment:

DROP TABLE IF EXISTS account;
CREATE TABLE account(id integer, name text);

INSERT INTO account
VALUES (1, 'jonas'),(10002, 'petras');

The update script:

DO $$
DECLARE
  _id integer;
  _min_id integer;
  _max_id integer; 
  _batch_size integer = 10000;
BEGIN
  SELECT 
    MIN(id),
    MAX(id)
  INTO
    _min_id,
    _max_id
  FROM
    account;

  _id := _min_id;

  LOOP
    UPDATE account SET
      name = 'Some name' 
    WHERE id >=_id 
      AND id < _id + _batch_size;

    COMMIT;

    _id := _id + _batch_size;
    IF _id > _max_id THEN
      EXIT;
    END IF;
  END LOOP;
END;
$$;
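The fixed-window loops above assume that ids are reasonably dense; with large gaps, many batches update nothing. A keyset-style variant (a sketch, assuming an index on id) sizes each batch by actual rows instead of by id range:

    DO $$
    DECLARE
      _last_id    integer := -2147483648;  -- below any real id
      _next_id    integer;
      _batch_size integer := 10000;
    BEGIN
      LOOP
        -- Find the upper bound of the next batch of *existing* ids.
        -- This is only fast with an index on id.
        SELECT max(id) INTO _next_id
        FROM  (SELECT id
               FROM   account
               WHERE  id > _last_id
               ORDER  BY id
               LIMIT  _batch_size) sub;

        EXIT WHEN _next_id IS NULL;  -- no rows left

        UPDATE account SET name = 'Some name'
        WHERE  id > _last_id AND id <= _next_id;

        COMMIT;
        _last_id := _next_id;
      END LOOP;
    END;
    $$;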

tjjdgumg 4#

But we do not want to lock the table.
That makes sense in many situations, but you have not disclosed your actual setup. Do you need locks at all? Is there concurrent write activity? If not, is there enough storage to write another copy of the table? Then it may be best to build a new, pristine table in the background, then switch over and drop the old one.
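If there really are no concurrent writes and storage permits, the copy-and-swap approach can be sketched like this (note: any changes made to the old table during the copy are lost, and views, foreign keys, grants etc. that reference the old table need to be recreated):

    BEGIN;
    -- Build a fresh copy with the new value already in place.
    CREATE TABLE account_new (LIKE account INCLUDING ALL);
    INSERT INTO account_new
    SELECT id, 'Some name' FROM account;

    -- Swap: drop the old table and rename the new one into place.
    DROP TABLE account;
    ALTER TABLE account_new RENAME TO account;
    COMMIT;

    ANALYZE account;  -- refresh planner statistics for the new table

Writing a new table sequentially is typically much faster than updating every row in place, and it leaves no bloat behind.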

Assuming concurrent writes to the table, that you don't want to block too many writes for too long, and that you want dead tuples to be reused to prevent table and index bloat, updating in batches makes sense. You have to COMMIT (and VACUUM) so that the space occupied by dead tuples can be reused, and spread the writes across the table so that consecutive transactions produce and consume dead tuples in the same blocks.
Postgres 11 or later allows transaction control statements (like COMMIT) in procedures and in anonymous code blocks of a DO statement. Other answers provide solutions using that.
autovacuum should run with aggressive settings so that dead tuples are freed for reuse in a timely fashion. Alternatively, run VACUUM manually at certain intervals - but it (currently) cannot run in a transaction context at all (only as a single command), so it cannot run inside a PL/pgSQL loop.
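More aggressive autovacuum can be configured for just this one table via storage parameters; a sketch with illustrative values (not a recommendation - tune for your own workload):

    ALTER TABLE account SET (
      autovacuum_vacuum_scale_factor = 0.01,  -- vacuum after ~1 % dead rows
      autovacuum_vacuum_cost_delay   = 0      -- don't throttle vacuum on this table
    );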

Postgres 10 or older

Transaction control is not allowed in code blocks, yet. But we can simulate autonomous transactions with dblink. See:

  • Does Postgres support nested or autonomous transactions?
  • How do I do large non-blocking updates in PostgreSQL?
  • Simulate CREATE DATABASE IF NOT EXISTS for PostgreSQL?

Possibly something like this:

DO
$do$
DECLARE
   _cur  int := 0;  -- just start with 0 unless min is far off
   _step int := 10000;  -- batch size
   _max  CONSTANT int := (SELECT max(id) FROM account);  -- max id
   _val  CONSTANT text := 'SOME name';
BEGIN
   -- as superuser, or you must also provide the password for the current role;
   PERFORM dblink_connect('dbname=' || current_database());  -- current db

   LOOP
      RAISE NOTICE '%', _cur;
      -- PL/pgSQL variables are not substituted inside a string literal,
      -- so the query text is built with format() instead:
      PERFORM dblink_exec(  -- committed implicitly!
         format(
            $$
            UPDATE account
            SET    name = %2$L
            WHERE  id BETWEEN %1$s AND %1$s + %3$s  -- gaps don't matter unless huge
            AND    name IS DISTINCT FROM %2$L       -- avoid empty updates
            $$, _cur, _val, _step));

      
      _cur := _cur + _step;
      EXIT WHEN _cur > _max;          -- stop when done
   END LOOP;

   PERFORM dblink_disconnect();
END
$do$;

I also added another predicate:

AND    name IS DISTINCT FROM 'SOME name'  -- avoid empty updates

It skips the cost of empty updates for rows that already have the new name. Only useful if that can happen in your case. See:

  • How do I (or can I) SELECT DISTINCT on multiple columns?

You may want to split this up further and run VACUUM in between, and you may want to use some other column than id (one that is not clustered) for the selection, to get a good spread of writes across the whole table.
