SQL Server A strategy to join a large number of tables on multiple columns?

Asked by gcxthw6b on 2023-03-28

I know that I can join 2-3 small tables easily by writing simple joins. However, these joins can become very slow when you have 7-8 tables with 20 million+ rows, joining on 1-3 columns, even when you have the right indices. Moreover, the query becomes long and ugly too.

Is there an alternative strategy for doing such big joins, preferably database agnostic?

EDIT

Here is pseudocode for the join. Note that some tables may have to be unpivoted before they are used in the join -

select * from
    (select c1,c2,c3... From t1 where) as s1
inner join 
    (select c1,... From t2 where) as s2
inner join
    (unpivot table to get c1,c2... From t3 where) as s3
inner join 
    (select c1,c2,c3... From t2 where) as s4
on
    (s1.c1 = s2.c1)
and
    (s1.c1 = s3.c1 and s1.c2 = s3.c2)
and
    (s1.c1 = s4.c1 and s2.c2 = s4.c2 and s1.c3 = s4.c3)

Clearly, this is complicated and ugly. Is there a way to get the same result set in a much neater way without using such a complex join?

tpgth1q71#

"7-8 tables" doesn't sound worrying at all. Modern RDBMS can handle a lot more. Your pseudo-code query can be radically simplified to this form:

SELECT a.c1 AS a_c1, a.c2 AS a_c2, ...  -- use column aliases to ...
     , b.c1, b.c2, ...                  -- disambiguate conflicting column names
     , c.c1, c.c2, ...
     , d.c1, d.c2, ...
FROM   t1 a
JOIN   t2 b  USING (c1)
JOIN  (unpivot table to get c1,c2... From t3 where) c USING (c1,c2)
JOIN   t2 d ON d.c1 = a.c1 AND d.c2 = b.c2 AND d.c3 = a.c3
WHERE <some condition on a>
AND   <more conditions> ...

As long as matching column names are unambiguous across the tables to the left of a JOIN, the USING syntax shortens the code. If anything could be ambiguous, use the explicit ON form demonstrated in my last join condition. That's all standard SQL, but according to this Wikipedia page:

The USING clause is not supported by MS SQL Server and Sybase.

It wouldn't make sense to use all those subqueries in your pseudo-code in most RDBMS. The query planner finds the best way to apply conditions and fetch columns itself. Smart query planners also rearrange tables in any order they see fit to arrive at a fast query plan.

Also, that thing called "database agnostic" only exists in theory. None of the major RDBMS completely implements the SQL standard and all of them have different weaknesses and strengths. You have to optimize for your RDBMS or get mediocre performance at best.

Indexing strategies are very important. 20 million rows doesn't matter much in a SELECT, as long as we can pluck a handful of row pointers from an index. Indexing strategies depend heavily on your brand of RDBMS. Basically, columns can benefit from an index when used in ...

  • ... the JOIN clause
  • ... the WHERE clause
  • ... the ORDER BY clause

Much depends on the details.
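To make the bullet points above concrete, here is a sketch of indexes matched to the question's pseudocode. The table and column names come from the question; the index names and the extra filter column are my own illustration:

```sql
-- Multicolumn B-tree indexes covering the join columns, in join order,
-- so the planner can resolve each join with index lookups instead of scans.
CREATE INDEX t1_join_idx ON t1 (c1, c2, c3);
CREATE INDEX t2_join_idx ON t2 (c1, c2, c3);

-- If a WHERE clause filters on some other column (hypothetical name here),
-- lead the index with the filter column so the scan starts narrow:
CREATE INDEX t1_filter_idx ON t1 (filter_col, c1);
```

Whether the planner actually uses a given index depends on selectivity and statistics, so check the query plan after adding them.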

There are also various types of indexes, designed for various requirements: B-tree, GIN, GiST, and more; partial, multicolumn, functional, covering; various operator classes. To optimize performance you need to know the basics and the capabilities of your RDBMS.
More in the excellent PostgreSQL manual chapter on indexes.

0tdrvxhp2#

I have seen three ways of handling this when indexing fails to give a big enough performance boost. The first is to use temp tables. The more joins the database performs, the worse the row estimates get, which can really slow down your query. If you first run the joins and WHERE clauses that return the smallest number of rows, and store the intermediate results in a temp table, the cardinality estimator gets an accurate count and performance can improve significantly. This solution is the only one of the three that doesn't create new database objects.
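A minimal T-SQL sketch of the temp-table approach, reusing the question's table and column names; the temp table, index name, and filter column are my own illustration:

```sql
-- Materialize the most selective join first; the optimizer then sees an
-- exact row count for #small instead of a stacked, degrading estimate.
SELECT t1.c1, t1.c2, t1.c3
INTO   #small
FROM   t1
JOIN   t2 ON t2.c1 = t1.c1
WHERE  t1.filter_col = 1;   -- hypothetical selective filter

-- Optionally index the temp table before the remaining joins.
CREATE CLUSTERED INDEX ix_small ON #small (c1, c2);

SELECT s.*
FROM   #small s
JOIN   t3 ON t3.c1 = s.c1 AND t3.c2 = s.c2;
```

The temp table is dropped automatically at the end of the session, which is why this variant creates no permanent database objects.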

The second solution is a data warehouse, or at least one or more additional denormalized tables. In this case you would create an additional table to hold the final results of the query, or several tables that perform the major joins and hold intermediate results. As an example, if you had a customers table and three other tables that hold information about a customer, you could create a new table that holds the result of joining those four tables. This solution generally works when you are using the query for reports: you can load the report table(s) each night with the new data generated during the day. It will be faster than the first solution, but is harder to implement and to keep current.
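A sketch of such a nightly load job for the customers example; all table and column names here are illustrative, not from the question:

```sql
-- Rebuild the denormalized report table each night.
TRUNCATE TABLE customer_report;

INSERT INTO customer_report (customer_id, customer_name, order_total, last_contact)
SELECT c.id, c.name, SUM(o.amount), MAX(ct.contacted_at)
FROM   customers c
JOIN   orders   o  ON o.customer_id  = c.id   -- pre-join once per night...
JOIN   contacts ct ON ct.customer_id = c.id   -- ...instead of per report run
GROUP  BY c.id, c.name;
```

Daytime reports then read `customer_report` with no joins at all, at the cost of data being up to a day stale.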

The third solution is a materialized view / indexed view. This depends heavily on the database platform you use. Oracle and SQL Server both let you create a view and then index it, giving you greater performance on the view. This can come at the cost of stale records or extra storage for the view's results, but it can help.
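A SQL Server-flavored sketch of an indexed view (object names are illustrative); SCHEMABINDING and a unique clustered index are what turn an ordinary view into an indexed one:

```sql
-- The view must be schema-bound and reference two-part table names.
CREATE VIEW dbo.v_customer_orders
WITH SCHEMABINDING
AS
SELECT o.customer_id,
       COUNT_BIG(*)  AS order_count,   -- COUNT_BIG is required in indexed views
       SUM(o.amount) AS order_total
FROM   dbo.orders o
GROUP  BY o.customer_id;
GO
-- Materializes the view: results are stored and maintained automatically.
CREATE UNIQUE CLUSTERED INDEX ix_vco ON dbo.v_customer_orders (customer_id);
```

Unlike the nightly report table, SQL Server keeps an indexed view current on every write to the base tables, which shifts the cost from reads to writes.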

oxiaedzo3#

Create materialized views and refresh them overnight, or refresh them only when you consider it necessary. For example, you can have two views: a materialized one over old data that will never change, and a normal view over current data, with a union between them. You can build more views like these for any output you need.

If your database engine doesn't support materialized views, just denormalize the old data into another table overnight.
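A PostgreSQL-flavored sketch of the split described above (all object names are illustrative; `sales_joined` stands in for the expensive multi-table join):

```sql
-- Frozen history: the expensive join is computed once and stored.
CREATE MATERIALIZED VIEW sales_history AS
SELECT * FROM sales_joined WHERE sale_date < CURRENT_DATE;

-- Union of frozen history and live current rows.
CREATE VIEW sales_all AS
SELECT * FROM sales_history
UNION ALL
SELECT * FROM sales_joined WHERE sale_date >= CURRENT_DATE;

-- Run overnight, or only when considered necessary:
REFRESH MATERIALIZED VIEW sales_history;
```

Queries hit `sales_all`; only the small "today" slice pays the join cost at read time.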

Check this also: Refresh a Complex Materialized View

3zwtqj6y4#

I've been in the same situation before, and my strategy was to use a WITH clause (common table expression).

See your database's documentation on common table expressions for more.

WITH 

-- group some tables into a "temporary" view called MY_TABLE_A
MY_TABLE_A AS 
(
SELECT T1.FIELD1, T2.FIELD2, T3.FIELD3
  FROM T1
  JOIN T2 ON T2.PKEY = T1.FKEY
  JOIN T3 ON T3.PKEY = T2.FKEY
),

-- group some tables into another "temporary" view called MY_TABLE_B
MY_TABLE_B AS 
(
SELECT T4.FIELD1, T5.FIELD2, T6.FIELD3
  FROM T4
  JOIN T5 ON T5.PKEY = T4.FKEY
  JOIN T6 ON T6.PKEY = T5.FKEY
)

-- use those views
SELECT A.FIELD2, B.FIELD3
  FROM MY_TABLE_A A
  JOIN MY_TABLE_B B ON B.FIELD1 = A.FIELD1
 WHERE A.FIELD3 = 'X'
   AND B.FIELD2 = 'Y'
;
xnifntxz5#

If you want another way to access the data, one approach is to take an interest in the object-relational concept. At any rate on Oracle it works very well and simplifies development, but it requires a business-object approach.

From your example we can use two concepts:

  • References
  • Inheritance

Both can improve the readability of a query, and sometimes its speed.

1 : References

A reference is a pointer to an object. It removes joins between tables because related rows are pointed to directly.

Here is a simple example:

CREATE TYPE S7 AS OBJECT (
    id          NUMBER(11)
    , code      NUMBER(11)
    , label2    VARCHAR2(1024 CHAR) 
);

CREATE TABLE S7_tbl OF S7 (
CONSTRAINT s7_k PRIMARY KEY(id)
);

CREATE TABLE S8 (
    info    VARCHAR2(500 CHAR)
    , info2 NUMBER(5)
    , ref_s7 REF S7 -- creation of the reference
);

We insert some data into both tables:

INSERT INTO S7_tbl VALUES ( S7 (1,1111, 'test'));
INSERT INTO S7_tbl VALUES ( S7 (2,2222, 'test2'));
INSERT INTO S7_tbl VALUES ( S7 (3,3333, 'test3'));
--
INSERT INTO S8 VALUES ('text', 22, (SELECT REF(s) FROM S7_TBL s WHERE s.code = 1111));
INSERT INTO S8 VALUES ('text2', 44, (SELECT REF(s) FROM S7_TBL s WHERE s.code = 1111));
INSERT INTO S8 VALUES ('text3', 11, (SELECT REF(s) FROM S7_TBL s WHERE s.code = 2222));

And the SELECT :

SELECT s8.info, s8.info2  FROM S8 s8 WHERE s8.ref_s7.code = 1111;

This returns:

  • text2 | 44
  • text | 22

This is a kind of implicit join.

2 : Inheritance

CREATE TYPE S6 AS OBJECT (
    name            VARCHAR2(255 CHAR)
    , date_start    DATE
)
/
DROP TYPE S1;

CREATE TYPE S1 AS OBJECT(
    data1       NUMBER(11)
    , data2     VARCHAR(255 CHAR)
    , data3     VARCHAR(255 CHAR)
) INSTANTIABLE NOT FINAL
/
CREATE TYPE S2  UNDER S1 (
    dummy1      VARCHAR2(1024 CHAR)
    , dummy2    NUMBER(11)
    , dummy3    NUMBER(11)
    , info_s6      S6
) INSTANTIABLE  FINAL
/
CREATE TABLE S5 
(
    info1           VARCHAR2(128 CHAR)
    , info2         NUMBER(6)
    , object_s2   S2
)

We just insert a row in the table

INSERT INTO S5 
VALUES (
    'info'
    , 2
    ,  S2(
        1       -- fill data1
        , 'xxx' -- fill data2 
        , 'yyy' -- fill data3
        , 'zzz' -- fill dummy1
        , 2     -- fill dummy2  
        , 4     -- fill dummy3        
         , S6(
             'example1'
             ,SYSDATE
          )
     )
);

And the SELECT :

SELECT 
  s.info1
  , s.object_s2.data1
  , s.object_s2.dummy1
  , s.object_s2.info_s6.name
FROM S5 s;

We can see that with this method we can easily access related data without writing any joins.

Hope this helps.

bqf10yzr6#

If it's all subqueries, you can push each table's filter down into its subquery and join on the shared columns. As long as all the tables expose c1, c2, c3, it should be as simple as:

select * from
    (select c1,c2,c3... from t1) as s1
inner join 
    (select c1,... from t2) as s2 on s2.c1 = s1.c1
inner join
    (unpivot table to get c1,c2... from t3) as s3 on s3.c2 = s2.c2
inner join 
    (select c1,c2,c3... from t2) as s4 on s4.c3 = s3.c3

uujelgoq7#

You can make use of views and functions. Views make SQL code elegant, easy to read, and composable. Functions can return single values or rowsets, permitting fine-tuning of the underlying code for efficiency. Finally, filtering at the subquery level instead of joining and then filtering at the query level lets the engine produce smaller sets of data to join later, where indexes are not that significant since the amount of data to join is small and can be efficiently computed on the fly. A query like the one below can involve dozens of tables and complex business logic hidden in views and functions, and still be very efficient.

SELECT a.*, b.*
FROM (SELECT * FROM ComplexView
      WHERE <filter that limits output to a few rows>) a
JOIN (SELECT x, y, z FROM AlreadySignificantlyFilteredView
      WHERE x IN (SELECT f_XValuesForDate(CURRENT_DATE))) b
  ON (a.x = b.x AND a.y = b.y AND a.z <= b.z)
WHERE <condition for filtering even further>
