Select the first row if rank()+1 overflows the JOIN



I have a table named loop_msg with the columns msg_id and content. I have a second table named loop_msg_status with the columns channel and msg_id. This is used to post messages to different channels in a round-robin fashion, so I need to keep track of the last posted msg_id for each channel.

SELECT
  a.msg_id,
  b.content,
  b.rank,
  b.rank + 1,
  c.rank,
  c.content as next_content
FROM loop_msg_status as a
LEFT JOIN (
  SELECT *, RANK() OVER (ORDER BY msg_id ASC) as rank
  FROM loop_msg
) b ON a.msg_id = b.msg_id
LEFT JOIN (
  SELECT *, RANK() OVER (ORDER BY msg_id ASC) as rank
  FROM loop_msg
) c ON b.rank + 1 = c.rank

With this query I get the current msg_id and its content for each channel. I also get that msg_id's rank from loop_msg, plus rank + 1 and the content for rank + 1, where it exists. It works. However, when the rank is the highest one, rank + 1 does not exist and next_content comes back as NULL. In that case I want to SELECT the lowest rank in loop_msg, i.e. 1, and use its content as next_content instead. Should I add an IF()? If so, where? Or is there a better way?
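
One placement I could imagine (only a sketch: wrap c.content in COALESCE and fall back, via a scalar subquery, to the content of the lowest msg_id):

SELECT
  a.msg_id,
  b.content,
  b.rank,
  b.rank + 1,
  c.rank,
  -- sketch: fall back to the lowest-ranked row when b.rank + 1 has no match
  COALESCE(
    c.content,
    (SELECT content FROM loop_msg ORDER BY msg_id ASC LIMIT 1)
  ) as next_content
FROM loop_msg_status as a
LEFT JOIN (
  SELECT *, RANK() OVER (ORDER BY msg_id ASC) as rank
  FROM loop_msg
) b ON a.msg_id = b.msg_id
LEFT JOIN (
  SELECT *, RANK() OVER (ORDER BY msg_id ASC) as rank
  FROM loop_msg
) c ON b.rank + 1 = c.rank;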

SELECT version();
> 10.5.13-MariaDB

Full SQL example:

CREATE TABLE `loop_msg` (
`msg_id` int(11) NOT NULL,
`content` varchar(2000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `loop_msg` (`msg_id`, `content`) VALUES
(2, 'Content 2'),
(3, 'Content 3'),
(4, 'Content 4'),
(6, 'Content 6'),
(7, 'Content 7'),
(8, 'Content 8');
CREATE TABLE `loop_msg_status` (
`channel` bigint(20) NOT NULL,
`msg_id` int(11) NOT NULL DEFAULT 0
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `loop_msg_status` (`channel`, `msg_id`) VALUES
(316757642527768577, 4),
(384071823261696010, 6),
(939746456632438804, 8);

ALTER TABLE `loop_msg`
ADD PRIMARY KEY (`msg_id`);
ALTER TABLE `loop_msg_status`
ADD PRIMARY KEY (`channel`);

ALTER TABLE `loop_msg`
MODIFY `msg_id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=9;
COMMIT;

Expected result

channel            | nextContent
--------------------------------
316757642527768577 | Content 6
384071823261696010 | Content 7
939746456632438804 | Content 2

Final SQL for the schema (and expected result) given in the question:

Final fiddle

WITH cte AS (
  SELECT loop_msg.msg_id
       , channel
       , COALESCE(
           LEAD(content)        OVER w1
         , FIRST_VALUE(content) OVER w1
         ) AS content_next
  FROM      loop_msg
  LEFT JOIN loop_msg_status
         ON loop_msg.msg_id = loop_msg_status.msg_id
  WINDOW w1 AS (ORDER BY loop_msg.msg_id)
)
SELECT channel, content_next
FROM cte
WHERE channel IS NOT NULL
;

Result:

+--------------------+--------------+
| channel            | content_next |
+--------------------+--------------+
| 316757642527768577 | Content 6    |
| 384071823261696010 | Content 7    |
| 939746456632438804 | Content 2    |
+--------------------+--------------+

To also see the current and the next msg_id, here is the adjusted SQL:

Adjusted fiddle

WITH cte AS (
  SELECT loop_msg.msg_id
       , channel
       , ROW_NUMBER() OVER w1 as rankx
       , COALESCE(
           LEAD(loop_msg.msg_id)        OVER w1
         , FIRST_VALUE(loop_msg.msg_id) OVER w1
         ) AS msgid_next
       , COALESCE(
           LEAD(content)        OVER w1
         , FIRST_VALUE(content) OVER w1
         ) AS content_next
  FROM      loop_msg
  LEFT JOIN loop_msg_status
         ON loop_msg.msg_id = loop_msg_status.msg_id
  WINDOW w1 AS (ORDER BY loop_msg.msg_id)
)
SELECT channel, content_next, msg_id, msgid_next
FROM cte
WHERE channel IS NOT NULL
;

Result:

+--------------------+--------------+--------+------------+
| channel            | content_next | msg_id | msgid_next |
+--------------------+--------------+--------+------------+
| 316757642527768577 | Content 6    |      4 |          6 |
| 384071823261696010 | Content 7    |      6 |          7 |
| 939746456632438804 | Content 2    |      8 |          2 |
+--------------------+--------------+--------+------------+
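
If the goal is to then advance loop_msg_status itself, one possible follow-up (a sketch only, not something asked for in the question) is a multi-table UPDATE joined to the same derived query:

UPDATE loop_msg_status AS s
JOIN (
  -- same windowed lookup as above, reduced to channel + next msg_id
  SELECT channel
       , COALESCE(
           LEAD(loop_msg.msg_id)        OVER w1
         , FIRST_VALUE(loop_msg.msg_id) OVER w1
         ) AS msgid_next
  FROM      loop_msg
  LEFT JOIN loop_msg_status
         ON loop_msg.msg_id = loop_msg_status.msg_id
  WINDOW w1 AS (ORDER BY loop_msg.msg_id)
) AS nxt ON nxt.channel = s.channel
SET s.msg_id = nxt.msgid_next;

This relies on the derived table being materialized (the window function forces that here), so the target table is not read directly while being updated; verify the behaviour on your MariaDB version.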

More details:

You can try something like this. In this case we can use ROW_NUMBER instead of RANK, since msg_id is unique and no two messages share the same msg_id (which is what would give them the same RANK). Feel free to swap in RANK if you prefer.

Fiddle

Further down, we also partition by channel so that each channel is handled separately. Your question isn't entirely clear on how you intend to use the channel.

SELECT *
     , ROW_NUMBER() OVER (ORDER BY msg_id ASC) as rankx
     , COALESCE(
         LEAD(msg_id)        OVER (ORDER BY msg_id ASC)
       , FIRST_VALUE(msg_id) OVER (ORDER BY msg_id ASC)
       ) AS msgid_next
     , COALESCE(
         LEAD(content)        OVER (ORDER BY msg_id ASC)
       , FIRST_VALUE(content) OVER (ORDER BY msg_id ASC)
       ) AS content_next
FROM loop_msg
;

Result:

+--------+---------+-------+------------+--------------+
| msg_id | content | rankx | msgid_next | content_next |
+--------+---------+-------+------------+--------------+
|      1 | c1      |     1 |          2 | c2           |
|      2 | c2      |     2 |          3 | c3           |
|      3 | c3      |     3 |          4 | c4           |
|      4 | c4      |     4 |          5 | c5           |
|      5 | c5      |     5 |          6 | c6           |
|      6 | c6      |     6 |          7 | c7           |
|      7 | c7      |     7 |          1 | c1           |
+--------+---------+-------+------------+--------------+

Setup:

CREATE TABLE loop_msg (
msg_id  int  auto_increment primary key
, content varchar(20)
);
INSERT INTO loop_msg (content) VALUES
('c1'), ('c2'), ('c3'), ('c4'), ('c5'), ('c6'), ('c7')
;

Test case #2, handling per channel:

CREATE TABLE loop_msg (
msg_id  int  auto_increment primary key
, chan    varchar(20)
, content varchar(20)
);
INSERT INTO loop_msg (content, chan) VALUES
('c1', 'chan1')
, ('c2', 'chan1')
, ('c3', 'chan1')
, ('c4', 'chan1')
, ('c5', 'chan1')
, ('c6', 'chan1')
, ('c7', 'chan1')
, ('d2', 'chan2')
, ('d3', 'chan2')
, ('d4', 'chan2')
, ('d5', 'chan2')
, ('d6', 'chan2')
, ('d7', 'chan2')
, ('d8', 'chan2')
;
SELECT *
     , ROW_NUMBER() OVER (PARTITION BY chan ORDER BY msg_id ASC) as rankx
     , COALESCE(
         LEAD(msg_id)        OVER (PARTITION BY chan ORDER BY msg_id)
       , FIRST_VALUE(msg_id) OVER (PARTITION BY chan ORDER BY msg_id)
       ) AS msgid_next
     , COALESCE(
         LEAD(content)        OVER (PARTITION BY chan ORDER BY msg_id)
       , FIRST_VALUE(content) OVER (PARTITION BY chan ORDER BY msg_id)
       ) AS content_next
FROM loop_msg
;

Result:

+--------+-------+---------+-------+------------+--------------+
| msg_id | chan  | content | rankx | msgid_next | content_next |
+--------+-------+---------+-------+------------+--------------+
|      1 | chan1 | c1      |     1 |          2 | c2           |
|      2 | chan1 | c2      |     2 |          3 | c3           |
|      3 | chan1 | c3      |     3 |          4 | c4           |
|      4 | chan1 | c4      |     4 |          5 | c5           |
|      5 | chan1 | c5      |     5 |          6 | c6           |
|      6 | chan1 | c6      |     6 |          7 | c7           |
|      7 | chan1 | c7      |     7 |          1 | c1           |
|      8 | chan2 | d2      |     1 |          9 | d3           |
|      9 | chan2 | d3      |     2 |         10 | d4           |
|     10 | chan2 | d4      |     3 |         11 | d5           |
|     11 | chan2 | d5      |     4 |         12 | d6           |
|     12 | chan2 | d6      |     5 |         13 | d7           |
|     13 | chan2 | d7      |     6 |         14 | d8           |
|     14 | chan2 | d8      |     7 |          8 | d2           |
+--------+-------+---------+-------+------------+--------------+

Finally:

We can also define a WINDOW clause to avoid repeating the window specification each time:

SELECT *
     , ROW_NUMBER() OVER w1 as rankx
     , COALESCE(
         LEAD(msg_id)        OVER w1
       , FIRST_VALUE(msg_id) OVER w1
       ) AS msgid_next
     , COALESCE(
         LEAD(content)        OVER w1
       , FIRST_VALUE(content) OVER w1
       ) AS content_next
FROM loop_msg
WINDOW w1 AS (PARTITION BY chan ORDER BY msg_id)
;

Result:

+--------+-------+---------+-------+------------+--------------+
| msg_id | chan  | content | rankx | msgid_next | content_next |
+--------+-------+---------+-------+------------+--------------+
|      1 | chan1 | c1      |     1 |          2 | c2           |
|      2 | chan1 | c2      |     2 |          3 | c3           |
|      3 | chan1 | c3      |     3 |          4 | c4           |
|      4 | chan1 | c4      |     4 |          5 | c5           |
|      5 | chan1 | c5      |     5 |          6 | c6           |
|      6 | chan1 | c6      |     6 |          7 | c7           |
|      7 | chan1 | c7      |     7 |          1 | c1           |
|      8 | chan2 | d2      |     1 |          9 | d3           |
|      9 | chan2 | d3      |     2 |         10 | d4           |
|     10 | chan2 | d4      |     3 |         11 | d5           |
|     11 | chan2 | d5      |     4 |         12 | d6           |
|     12 | chan2 | d6      |     5 |         13 | d7           |
|     13 | chan2 | d7      |     6 |         14 | d8           |
|     14 | chan2 | d8      |     7 |          8 | d2           |
+--------+-------+---------+-------+------------+--------------+
