We have a log table in our production database which is hosted on Azure. The table has grown to about 4.5 million records and now we just want to delete all the records from that log table.
I tried running
DELETE FROM log_table
And I also tried
DELETE TOP (100) FROM log_table
DELETE TOP (20) FROM log_table
When I run the queries, database usage jumps to 100% and the query just hangs. I believe this is because of the large number of records in the table. Is there a way we can overcome the issue?
2 Answers

Answer 1:
To delete all rows from a big table, you can use the TRUNCATE TABLE command. It removes every row from the table and is much faster than DELETE, because it deallocates whole data pages and logs only those deallocations instead of logging each deleted row individually.
Ex:
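A minimal example for the table from the question:

```sql
-- Removes all ~4.5 million rows in one minimally logged operation.
-- Note: TRUNCATE TABLE cannot be used if the table is referenced by a
-- foreign key, and it cannot delete only a subset of rows.
TRUNCATE TABLE log_table;
```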
Answer 2:
In an Azure SQL database, you have several options for controlling the size of your database and its log file. First of all, let's start with some definitions:
In Azure SQL Database, to shrink files you can use either DBCC SHRINKDATABASE or DBCC SHRINKFILE commands:
DBCC SHRINKDATABASE shrinks all data and log files in a database using a single command. The command shrinks one data file at a time, which can take a long time for larger databases. It also shrinks the log file, which is usually unnecessary because Azure SQL Database shrinks log files automatically as needed.
DBCC SHRINKFILE command supports more advanced scenarios:
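As a sketch of how both commands are invoked (the database name, file id, and target sizes below are illustrative assumptions, not values from the original answer):

```sql
-- Shrink all data and log files of the database, aiming to leave
-- roughly 10 percent free space in each file.
DBCC SHRINKDATABASE (N'my_database', 10);

-- Shrink a single file instead. File id 2 is conventionally the
-- transaction log; 100 is an illustrative target size in MB.
DBCC SHRINKFILE (2, 100);
```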
Now going on to some useful SQL queries:
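For example, a query along these lines (a sketch, not necessarily the one the answer originally showed) reports the allocated size of each file in the current database:

```sql
-- sys.database_files reports size in 8 KB pages; convert to MB.
SELECT file_id,
       name,
       type_desc,  -- ROWS for data files, LOG for the log file
       CAST(size * 8.0 / 1024 AS DECIMAL(10, 2)) AS size_mb
FROM sys.database_files;
```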
When it comes to the log file of a database, you can use the following queries:
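One such query (again a sketch, since the original snippet did not survive) uses the log-space DMV to show how full the log actually is:

```sql
-- Current log size and how much of it is actually in use.
SELECT total_log_size_in_bytes / 1048576.0 AS total_log_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```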
... and to set the database log file to automatically shrink itself and keep a certain amount of space, you may use:
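The statement the answer is referring to is most likely the AUTO_SHRINK database option; a sketch:

```sql
-- Enable automatic shrinking for the current database. Azure SQL
-- Database will then periodically shrink files with significant
-- free space. AUTO_SHRINK is generally discouraged as a permanent
-- setting, because repeated shrink/grow cycles fragment the files.
ALTER DATABASE CURRENT SET AUTO_SHRINK ON;
```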
For reference purposes: I did not come up with this information myself, but extracted it from an official article in Microsoft Docs.