Hive Study Notes (7): Common Hive Built-in Functions, Part 2


1. Introduction

2. Table-generating functions

  2.1 Exploding a collection into rows (explode)

  2.2 Table-generating queries (lateral view explode)

3. Collection functions

  3.1 Checking whether a value exists in a collection (array_contains)

  3.2 Sorting a collection (sort_array)

  3.3 Collection length (size)

  3.4 Returning the keys of a map (map_keys)

  3.5 Returning the values of a map (map_values)

4. JSON parsing functions

  4.1 get_json_object

  4.2 json_tuple

5. Window (analytic) functions

  5.1 Aggregate functions + over

    5.1.1 The partition by clause

    5.1.2 The order by clause

    5.1.3 The window clause

  5.2 Analytic functions

    5.2.1 ntile

    5.2.2 row_number, rank, dense_rank

    5.2.3 lag and lead

    5.2.4 first_value and last_value

  5.3 Further notes

Main text

1. Introduction

  Unlike ordinary SQL functions, Hive supports a number of functions that most other SQL dialects do not, such as table-generating functions, window functions, and collection functions. This post walks through them one by one.

2. Table-generating functions

  2.1 Exploding a collection into rows (explode)

  explode iterates over the elements of a collection and emits each element as a new row. Note that explode effectively generates a new table:

  For example:

select * from t_stu_subject;

+-------------------+---------------------+-----------------------------+--+
| t_stu_subject.id  | t_stu_subject.name  |   t_stu_subject.subjects    |
+-------------------+---------------------+-----------------------------+--+
| 1                 | zhangsan            | ["化学","物理","数学","语文"]       |
| 2                 | lisi                | ["化学","数学","生物","生理","卫生"]  |
| 3                 | wangwu              | ["化学","语文","英语","体育","生物"]  |
+-------------------+---------------------+-----------------------------+--+
3 rows selected (0.176 seconds)

-->explode the subjects field into rows

select explode(subjects) from t_stu_subject;

+------+--+
| col  |
+------+--+
| 化学   |
| 物理   |
| 数学   |
| 语文   |
| 化学   |
| 数学   |
| 生物   |
| 生理   |
| 卫生   |
| 化学   |
| 语文   |
| 英语   |
| 体育   |
| 生物   |
+------+--+
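
  As an aside, explode also accepts a map column, emitting one row per map entry with two columns (key and value). A minimal sketch using an inline map literal rather than one of this article's tables:

select explode(map('father','xiaoming','mother','xiaohuang'));

-- expected shape of the result (the columns are typically named key and value):
-- father    xiaoming
-- mother    xiaohuang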

-->Invalid statement: explode produces a table, so the following query fails
select id,name,explode(subjects) from t_stu_subject;

Error: Error while compiling statement: FAILED: SemanticException [Error 10081]: UDTF's are not supported outside the SELECT clause, nor nested in expressions (state=42000,code=10081)

  2.2 Table-generating queries (lateral view explode)

  lateral view is used together with UDTFs such as split and explode to turn one row into multiple rows, and the expanded rows can then be aggregated. lateral view first calls the UDTF for every row of the original table; the UDTF splits that row into one or more rows; lateral view then joins the results back together, producing a virtual table that can be given an alias.

  Used on its own as above, explode looks rather pointless, but combined with lateral view it becomes genuinely useful:

select id,name,sub from t_stu_subject lateral view explode(subjects) tmp as sub;

+-----+-----------+------+--+
| id  |   name    | sub  |
+-----+-----------+------+--+
| 1   | zhangsan  | 化学   |
| 1   | zhangsan  | 物理   |
| 1   | zhangsan  | 数学   |
| 1   | zhangsan  | 语文   |
| 2   | lisi      | 化学   |
| 2   | lisi      | 数学   |
| 2   | lisi      | 生物   |
| 2   | lisi      | 生理   |
| 2   | lisi      | 卫生   |
| 3   | wangwu    | 化学   |
| 3   | wangwu    | 语文   |
| 3   | wangwu    | 英语   |
| 3   | wangwu    | 体育   |
| 3   | wangwu    | 生物   |
+-----+-----------+------+--+

  How to read this SQL:

Think of lateral view as a join between two tables:
the left table is the original table;
the right table is the table produced by explode(collection column);
and the join only matches data coming from the same original row.

  Example: implement a Hive version of word count with explode and lateral view

Given the following data:
a b c d e f g
a b c
e f g a
b c d b

Create a table for the data:
create table t_juzi(line string) row format delimited;
Load the data:
load data local inpath '/root/words.txt' into table t_juzi;

-- word count query:
select a.word,count(1) cnt
from 
(select tmp.* from t_juzi lateral view explode(split(line,' ')) tmp as word) a
group by a.word
order by cnt desc;
+---------+------+--+
| a.word  | cnt  |
+---------+------+--+
| b       | 4    |
| c       | 3    |
| a       | 3    |
| g       | 2    |
| f       | 2    |
| e       | 2    |
| d       | 2    |
+---------+------+--+

3. Collection functions

  3.1 Checking whether a value exists in a collection (array_contains)

  Syntax of array_contains:

array_contains(Array<T>, value)  -- returns a boolean

  Example:

-->view the source data
select * from t_stu_subject;

+-------------------+---------------------+-----------------------------+--+
| t_stu_subject.id  | t_stu_subject.name  |   t_stu_subject.subjects    |
+-------------------+---------------------+-----------------------------+--+
| 1                 | zhangsan            | ["化学","物理","数学","语文"]       |
| 2                 | lisi                | ["化学","数学","生物","生理","卫生"]  |
| 3                 | wangwu              | ["化学","语文","英语","体育","生物"]  |
+-------------------+---------------------+-----------------------------+--+
3 rows selected (0.066 seconds)

-->using array_contains
select id, name, array_contains(subjects, '语文') from t_stu_subject;

+-----+-----------+--------+--+
| id  |   name    |  _c2   |
+-----+-----------+--------+--+
| 1   | zhangsan  | true   |
| 2   | lisi      | false  |
| 3   | wangwu    | true   |
+-----+-----------+--------+--+
3 rows selected (13.573 seconds)
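
  array_contains is also handy as a filter condition; a small sketch against the same table (output omitted):

-- keep only the students whose subject list contains 生物
select id, name from t_stu_subject where array_contains(subjects, '生物');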

  3.2 Sorting a collection (sort_array)

  Syntax of sort_array:

sort_array(Array<T>)  -- returns the sorted array

  Example:

select sort_array(array(3,2,6));

+----------+--+
|   _c0    |
+----------+--+
| [2,3,6]  |
+----------+--+
1 row selected (12.599 seconds)
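
  The same function can of course be applied to an array column, for example the subjects field from the table above (a sketch, output omitted):

select id, name, sort_array(subjects) as sorted_subjects from t_stu_subject;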

  3.3 Collection length (size)

-->view the data

select * from t_stu_subject;

+-------------------+---------------------+-----------------------------+--+
| t_stu_subject.id  | t_stu_subject.name  |   t_stu_subject.subjects    |
+-------------------+---------------------+-----------------------------+--+
| 1                 | zhangsan            | ["化学","物理","数学","语文"]       |
| 2                 | lisi                | ["化学","数学","生物","生理","卫生"]  |
| 3                 | wangwu              | ["化学","语文","英语","体育","生物"]  |
+-------------------+---------------------+-----------------------------+--+
3 rows selected (0.069 seconds)

-->size test

select id, name, size(subjects) as sub_num from t_stu_subject;

+-----+-----------+----------+--+
| id  |   name    | sub_num  |
+-----+-----------+----------+--+
| 1   | zhangsan  | 4        |
| 2   | lisi      | 5        |
| 3   | wangwu    | 5        |
+-----+-----------+----------+--+
3 rows selected (13.578 seconds)

  3.4 Returning the keys of a map (map_keys)

  Syntax:

map_keys(Map<K,V>)

  Example:

select * from t_family;
+--------------+----------------+----------------------------------------------------------------+---------------+--+
| t_family.id  | t_family.name  |                    t_family.family_members                     | t_family.age  |
+--------------+----------------+----------------------------------------------------------------+---------------+--+
| 1            | zhangsan       | {"father":"xiaoming","mother":"xiaohuang","brother":"xiaoxu"}  | 28            |
| 2            | lisi           | {"father":"mayun","mother":"huangyi","brother":"guanyu"}       | 22            |
| 3            | wangwu         | {"father":"wangjianlin","mother":"ruhua","sister":"jingtian"}  | 29            |
| 4            | mayun          | {"father":"mayongzhen","mother":"angelababy"}                  | 26            |
+--------------+----------------+----------------------------------------------------------------+---------------+--+


-- list the kinship relations each person has
select id,name,map_keys(family_members) as relations,age
from  t_family;

+-----+-----------+--------------------------------+------+--+
| id  |   name    |           relations            | age  |
+-----+-----------+--------------------------------+------+--+
| 1   | zhangsan  | ["father","mother","brother"]  | 28   |
| 2   | lisi      | ["father","mother","brother"]  | 22   |
| 3   | wangwu    | ["father","mother","sister"]   | 29   |
| 4   | mayun     | ["father","mother"]            | 26   |
+-----+-----------+--------------------------------+------+--+
4 rows selected (0.129 seconds)

  3.5 Returning the values of a map (map_values)

  Syntax: map_values(Map<K,V>)

  Example:

-- list the names of each person's family members
select id,name,map_values(family_members) as relations,age
from  t_family;

+-----+-----------+-------------------------------------+------+--+
| id  |   name    |              relations              | age  |
+-----+-----------+-------------------------------------+------+--+
| 1   | zhangsan  | ["xiaoming","xiaohuang","xiaoxu"]   | 28   |
| 2   | lisi      | ["mayun","huangyi","guanyu"]        | 22   |
| 3   | wangwu    | ["wangjianlin","ruhua","jingtian"]  | 29   |
| 4   | mayun     | ["mayongzhen","angelababy"]         | 26   |
+-----+-----------+-------------------------------------+------+--+
4 rows selected (0.132 seconds)

4. JSON parsing functions

  4.1 get_json_object

  Purpose: parse a JSON string and fetch the value for a key via a '$.key' path expression

  Syntax:

get_json_object(json_string, '$.key')

  Example:

select get_json_object('{"key1":3333, "key2": 4444}', '$.key1');

+-------+--+
|  _c0  |
+-------+--+
| 3333  |
+-------+--+
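
  get_json_object also understands nested keys and array subscripts in its path expression; a sketch with a nested JSON string:

select get_json_object('{"store":{"fruit":[{"weight":8,"type":"apple"},{"weight":9,"type":"pear"}]}}', '$.store.fruit[0].type');

-- expected result: apple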

  4.2 json_tuple

  Purpose: extract the values for several keys from a JSON string in one call

  Syntax:

json_tuple(json_string, key1, key2, ...) as (alias1, alias2, ...)

  Example:

select json_tuple('{"key1":3333, "key2": 4444}', 'key1', 'key2') as (key1, key2);

+-------+-------+--+
| key1  | key2  |
+-------+-------+--+
| 3333  | 4444  |
+-------+-------+--+
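
  Since json_tuple is a UDTF, it is most often combined with lateral view to parse a JSON column into ordinary columns. A sketch, assuming a hypothetical table t_json with a string column json_str:

select t.k1, t.k2
from t_json
lateral view json_tuple(json_str, 'key1', 'key2') t as k1, k2;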

5. Window (analytic) functions

  5.1 Aggregate functions + over

  Window functions in Hive are much like window functions in other SQL dialects: they are used for data-analysis work, typically in OLAP (online analytical processing) scenarios.

  SQL has a class of aggregate functions such as sum(), avg() and max() that collapse multiple rows into one according to some rule, so the aggregated output usually has fewer rows than the input. Sometimes, though, we want to show the detail rows and the aggregated value at the same time, and that is exactly what window functions are for.

  Before digging into the over clause, keep one thing in mind: in SQL processing, window functions are evaluated last, just before the final ORDER BY of the statement.

  Preparing the data:

-->create the table
create table t_order(name string, orderdate string, cost int)
row format delimited
fields terminated by ',';

-->load the data
load data local inpath '/root/hiveData/t_order.txt' into table t_order;
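
-->for reference, given the comma delimiter declared above, /root/hiveData/t_order.txt would contain one comma-separated line per row shown below; the first few lines would look like:
jack,2015-01-01,10
tony,2015-01-02,15
jack,2015-02-03,23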

-->show the data
1: jdbc:hive2://localhost:10000> select * from t_order;
+---------------+--------------------+---------------+--+
| t_order.name  | t_order.orderdate  | t_order.cost  |
+---------------+--------------------+---------------+--+
| jack          | 2015-01-01         | 10            |
| tony          | 2015-01-02         | 15            |
| jack          | 2015-02-03         | 23            |
| tony          | 2015-01-04         | 29            |
| jack          | 2015-01-05         | 46            |
| jack          | 2015-04-06         | 42            |
| tony          | 2015-01-07         | 50            |
| jack          | 2015-01-08         | 55            |
| mart          | 2015-04-08         | 62            |
| mart          | 2015-04-09         | 68            |
| neil          | 2015-05-10         | 12            |
| mart          | 2015-04-11         | 75            |
| neil          | 2015-06-12         | 80            |
| mart          | 2015-04-13         | 94            |
+---------------+--------------------+---------------+--+
14 rows selected (0.159 seconds)

  Suppose we want to list the customers who purchased in April 2015 together with the total number of purchases; a window function handles this directly.

  Example:

1: jdbc:hive2://localhost:10000> select name,count(*) over() from t_order where substring(orderdate, 1, 7)='2015-04';
+-------+-----------------+--+
| name  | count_window_0  |
+-------+-----------------+--+
| mart  | 5               |
| mart  | 5               |
| mart  | 5               |
| mart  | 5               |
| jack  | 5               |
+-------+-----------------+--+
5 rows selected (1.857 seconds)

-->Note: normally count() has to be combined with a GROUP BY, but paired with over() it can attach the aggregated value to every detail row

  So April 2015 had five purchase records in total: mart bought four times and jack once. In most cases, though, we only want the deduplicated result, and there are two ways to get it:

--->using distinct

1: jdbc:hive2://localhost:10000> select distinct name,count(*) over ()
1: jdbc:hive2://localhost:10000> from t_order
1: jdbc:hive2://localhost:10000> where substring(orderdate,1,7) = '2015-04';


--->using group by

1: jdbc:hive2://localhost:10000> select name,count(*) over ()
1: jdbc:hive2://localhost:10000> from t_order
1: jdbc:hive2://localhost:10000> where substring(orderdate,1,7) = '2015-04'
1: jdbc:hive2://localhost:10000> group by name;


-->both queries return the result below, which also shows that over() is evaluated after the deduplication/grouping

+-------+-----------------+--+
| name  | count_window_0  |
+-------+-----------------+--+
| mart  | 2               |
| jack  | 2               |
+-------+-----------------+--+
2 rows selected (2.889 seconds)

  5.1.1 The partition by clause

  The first thing that usually appears after over is partition by. The partition by clause, also called the query-partition clause, is very similar to GROUP BY: both split the data into groups by a key. The function in front of over is then evaluated within each partition, and it starts over when it crosses into a different partition.

  Requirement: show each customer's purchase details together with the total purchases of that month.

1: jdbc:hive2://localhost:10000> select name,orderdate,cost,sum(cost) over(partition by month(orderdate))  -- month(orderdate) extracts the month
1: jdbc:hive2://localhost:10000> from t_order;

+-------+-------------+-------+---------------+--+
| name  |  orderdate  | cost  | sum_window_0  |
+-------+-------------+-------+---------------+--+
| jack  | 2015-01-01  | 10    | 205           |
| jack  | 2015-01-08  | 55    | 205           |
| tony  | 2015-01-07  | 50    | 205           |
| jack  | 2015-01-05  | 46    | 205           |
| tony  | 2015-01-04  | 29    | 205           |
| tony  | 2015-01-02  | 15    | 205           |
| jack  | 2015-02-03  | 23    | 23            |
| mart  | 2015-04-13  | 94    | 341           |
| jack  | 2015-04-06  | 42    | 341           |
| mart  | 2015-04-11  | 75    | 341           |
| mart  | 2015-04-09  | 68    | 341           |
| mart  | 2015-04-08  | 62    | 341           |
| neil  | 2015-05-10  | 12    | 12            |
| neil  | 2015-06-12  | 80    | 80            |
+-------+-------------+-------+---------------+--+
14 rows selected (1.56 seconds)

  You can see that the costs are now totalled per month.
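
  One caveat worth noting: month(orderdate) yields only the month number, so if the data spanned several years the same month of different years would fall into one partition. For multi-year data a year-month key is safer; a sketch:

select name, orderdate, cost,
       sum(cost) over(partition by substring(orderdate, 1, 7)) as month_total
from t_order;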

  5.1.2 The order by clause

  The order by clause sorts the input rows (as mentioned earlier, window functions run last, so you can think of the statement's result set as their input). An order by is required for functions such as row_number(), lead() and lag(), whose results are meaningless on unordered data. It also changes the behaviour of aggregates: once an order by is present, count(), min(), sum() and the like return a running value computed from the start of the partition up to the current row rather than over the whole partition.

  Example:

-->Suppose we want a running total of cost within each month; for that we add an order by clause.

1: jdbc:hive2://localhost:10000> select name,orderdate,cost,sum(cost) over(partition by month(orderdate) order by orderdate )
1: jdbc:hive2://localhost:10000> from t_order;

+-------+-------------+-------+---------------+--+
| name  |  orderdate  | cost  | sum_window_0  |
+-------+-------------+-------+---------------+--+
| jack  | 2015-01-01  | 10    | 10            |
| tony  | 2015-01-02  | 15    | 25            |
| tony  | 2015-01-04  | 29    | 54            |
| jack  | 2015-01-05  | 46    | 100           |
| tony  | 2015-01-07  | 50    | 150           |
| jack  | 2015-01-08  | 55    | 205           |
| jack  | 2015-02-03  | 23    | 23            |
| jack  | 2015-04-06  | 42    | 42            |
| mart  | 2015-04-08  | 62    | 104           |
| mart  | 2015-04-09  | 68    | 172           |
| mart  | 2015-04-11  | 75    | 247           |
| mart  | 2015-04-13  | 94    | 341           |
| neil  | 2015-05-10  | 12    | 12            |
| neil  | 2015-06-12  | 80    | 80            |
+-------+-------------+-------+---------------+--+
14 rows selected (1.7 seconds)

-->As shown above, the rows are grouped by month and ordered within each group, and the sum accumulates row by row

  5.1.3 The window clause

  We have already grouped the data with partition by. If we want even finer-grained control over which rows are aggregated, we bring in the window clause.

  Two points to understand first:
  with only a partition by clause and no order by, the aggregation covers the whole partition;
  with an order by clause but no window clause, the default window runs from the start of the partition to the current row. When the same select contains several window functions, they do not affect one another; each one applies its own rules.

Window clause keywords:
- PRECEDING: rows before the current row
- FOLLOWING: rows after the current row
- CURRENT ROW: the current row
- UNBOUNDED: the boundary; UNBOUNDED PRECEDING means from the start of the partition, UNBOUNDED FOLLOWING means to the end of the partition

  Example:

-->Partition by name, order by purchase date, and accumulate cost. The query below combines several window clauses:

select name,orderdate,cost,
sum(cost) over() as sample1,-- sum over all rows
sum(cost) over(partition by name) as sample2,-- per-name group, sum within the group
sum(cost) over(partition by name order by orderdate) as sample3,-- per-name group, running total within the group
sum(cost) over(partition by name order by orderdate rows between UNBOUNDED PRECEDING and current row )  as sample4 ,-- same as sample3: aggregate from the partition start to the current row
sum(cost) over(partition by name order by orderdate rows between 1 PRECEDING   and current row) as sample5, -- the previous row and the current row
sum(cost) over(partition by name order by orderdate rows between 1 PRECEDING   AND 1 FOLLOWING  ) as sample6,-- the previous row, the current row and the next row
sum(cost) over(partition by name order by orderdate rows between current row and UNBOUNDED FOLLOWING ) as sample7 -- the current row and all following rows
from t_order;


+-------+-------------+-------+----------+----------+----------+----------+----------+----------+----------+--+
| name  |  orderdate  | cost  | sample1  | sample2  | sample3  | sample4  | sample5  | sample6  | sample7  |
+-------+-------------+-------+----------+----------+----------+----------+----------+----------+----------+--+
| jack  | 2015-01-01  | 10    | 661      | 176      | 10       | 10       | 10       | 56       | 176      |
| jack  | 2015-01-05  | 46    | 661      | 176      | 56       | 56       | 56       | 111      | 166      |
| jack  | 2015-01-08  | 55    | 661      | 176      | 111      | 111      | 101      | 124      | 120      |
| jack  | 2015-02-03  | 23    | 661      | 176      | 134      | 134      | 78       | 120      | 65       |
| jack  | 2015-04-06  | 42    | 661      | 176      | 176      | 176      | 65       | 65       | 42       |
| mart  | 2015-04-08  | 62    | 661      | 299      | 62       | 62       | 62       | 130      | 299      |
| mart  | 2015-04-09  | 68    | 661      | 299      | 130      | 130      | 130      | 205      | 237      |
| mart  | 2015-04-11  | 75    | 661      | 299      | 205      | 205      | 143      | 237      | 169      |
| mart  | 2015-04-13  | 94    | 661      | 299      | 299      | 299      | 169      | 169      | 94       |
| neil  | 2015-05-10  | 12    | 661      | 92       | 12       | 12       | 12       | 92       | 92       |
| neil  | 2015-06-12  | 80    | 661      | 92       | 92       | 92       | 92       | 92       | 80       |
| tony  | 2015-01-02  | 15    | 661      | 94       | 15       | 15       | 15       | 44       | 94       |
| tony  | 2015-01-04  | 29    | 661      | 94       | 44       | 44       | 44       | 94       | 79       |
| tony  | 2015-01-07  | 50    | 661      | 94       | 94       | 94       | 79       | 79       | 50       |
+-------+-------------+-------+----------+----------+----------+----------+----------+----------+----------+--+
14 rows selected (4.959 seconds)

  5.2 Analytic functions

  5.2.1 ntile

  Purpose: splits the ordered rows of a group into n slices and returns the slice number of the current row

  Note:

ntile does not support rows between

  Example:

-->Suppose we want, for each customer, the transactions in the top third by purchase amount; ntile is made for this.

select name,orderdate,cost,
    ntile(3) over() as sample1 , -- split all rows into 3 slices
    ntile(3) over(partition by name), -- group by name, split each group into 3 slices
    ntile(3) over(order by cost),-- order all rows by cost ascending and split into 3 slices
    ntile(3) over(partition by name order by cost ) -- group by name, order by cost ascending within the group, split into 3 slices
from t_order;

+-------+-------------+-------+----------+------+------+-----------------+--+
| name  |  orderdate  | cost  | sample1  | _c4  | _c5  | ntile_window_3  |
+-------+-------------+-------+----------+------+------+-----------------+--+
| jack  | 2015-01-01  | 10    | 3        | 1    | 1    | 1               |
| jack  | 2015-02-03  | 23    | 3        | 1    | 1    | 1               |
| jack  | 2015-04-06  | 42    | 2        | 2    | 2    | 2               |
| jack  | 2015-01-05  | 46    | 2        | 2    | 2    | 2               |
| jack  | 2015-01-08  | 55    | 2        | 3    | 2    | 3               |
| mart  | 2015-04-08  | 62    | 2        | 1    | 2    | 1               |
| mart  | 2015-04-09  | 68    | 1        | 2    | 3    | 1               |
| mart  | 2015-04-11  | 75    | 1        | 3    | 3    | 2               |
| mart  | 2015-04-13  | 94    | 1        | 1    | 3    | 3               |
| neil  | 2015-05-10  | 12    | 1        | 2    | 1    | 1               |
| neil  | 2015-06-12  | 80    | 1        | 1    | 3    | 2               |
| tony  | 2015-01-02  | 15    | 3        | 2    | 1    | 1               |
| tony  | 2015-01-04  | 29    | 3        | 3    | 1    | 2               |
| tony  | 2015-01-07  | 50    | 2        | 1    | 2    | 3               |
+-------+-------------+-------+----------+------+------+-----------------+--+
14 rows selected (5.981 seconds)

  In the output above, the rows where the last column, i.e. ntile(3) over(partition by name order by cost), equals 1 form the slice we are after. Note, though, that this query orders by cost ascending, so slice 1 actually holds each customer's lowest-cost third; order by cost desc to get the top third.
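
  To actually pull out those rows, give the ntile value an alias in a subquery and filter on it in the outer query. A sketch that orders by cost descending so that slice 1 holds each customer's highest-cost third:

select name, orderdate, cost
from (
    select name, orderdate, cost,
           ntile(3) over(partition by name order by cost desc) as bucket
    from t_order
) t
where t.bucket = 1;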

  5.2.2 row_number, rank, dense_rank

  These three window functions show up in a huge number of scenarios:
  row_number() numbers the rows of each group sequentially starting from 1; the numbers never repeat, and tied sort values are numbered in the order the rows appear in the table.
  rank() ranks the rows within each group; tied values share a rank and leave gaps after them.
  dense_rank() ranks the rows within each group; tied values share a rank and leave no gaps.

  Note:
  the difference between rank and dense_rank is whether tied values leave gaps in the ranking.

  Example:
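  As a quick illustration on the t_order table from section 5.1 (output omitted; the student table in section 5.3 shows how the three functions differ on tied values):

select name, orderdate, cost,
       row_number() over(partition by name order by cost desc) as rn,
       rank()       over(partition by name order by cost desc) as rk,
       dense_rank() over(partition by name order by cost desc) as dr
from t_order;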

  5.2.3 lag and lead

  These two commonly used window functions return values from preceding (lag) or following (lead) rows.
  Example:

-->Using the order table, here is how to look up each customer's previous purchase date

select name,orderdate,cost,
lag(orderdate,1,'1900-01-01') over(partition by name order by orderdate ) as time1,
lag(orderdate,2) over (partition by name order by orderdate) as time2
from t_order;

+-------+-------------+-------+-------------+-------------+--+
| name  |  orderdate  | cost  |    time1    |    time2    |
+-------+-------------+-------+-------------+-------------+--+
| jack  | 2015-01-01  | 10    | 1900-01-01  | NULL        |
| jack  | 2015-01-05  | 46    | 2015-01-01  | NULL        |
| jack  | 2015-01-08  | 55    | 2015-01-05  | 2015-01-01  |
| jack  | 2015-02-03  | 23    | 2015-01-08  | 2015-01-05  |
| jack  | 2015-04-06  | 42    | 2015-02-03  | 2015-01-08  |
| mart  | 2015-04-08  | 62    | 1900-01-01  | NULL        |
| mart  | 2015-04-09  | 68    | 2015-04-08  | NULL        |
| mart  | 2015-04-11  | 75    | 2015-04-09  | 2015-04-08  |
| mart  | 2015-04-13  | 94    | 2015-04-11  | 2015-04-09  |
| neil  | 2015-05-10  | 12    | 1900-01-01  | NULL        |
| neil  | 2015-06-12  | 80    | 2015-05-10  | NULL        |
| tony  | 2015-01-02  | 15    | 1900-01-01  | NULL        |
| tony  | 2015-01-04  | 29    | 2015-01-02  | NULL        |
| tony  | 2015-01-07  | 50    | 2015-01-04  | 2015-01-02  |
+-------+-------------+-------+-------------+-------------+--+
14 rows selected (1.6 seconds)
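
  lead works symmetrically, looking at following rows instead of preceding ones. For example, to show each customer's next purchase date (a sketch, output omitted; '9999-12-31' is just an arbitrary default for the last row):

select name, orderdate, cost,
       lead(orderdate, 1, '9999-12-31') over(partition by name order by orderdate) as next_time
from t_order;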

  5.2.4 first_value and last_value

  first_value returns, after sorting within the group, the first value up to the current row
  last_value returns, after sorting within the group, the last value up to the current row (which, under the default window, is simply the current row's value)

select name,orderdate,cost,
first_value(orderdate) over(partition by name order by orderdate) as time1,
last_value(orderdate) over(partition by name order by orderdate) as time2
from t_order;

+-------+-------------+-------+-------------+-------------+--+
| name  |  orderdate  | cost  |    time1    |    time2    |
+-------+-------------+-------+-------------+-------------+--+
| jack  | 2015-01-01  | 10    | 2015-01-01  | 2015-01-01  |
| jack  | 2015-01-05  | 46    | 2015-01-01  | 2015-01-05  |
| jack  | 2015-01-08  | 55    | 2015-01-01  | 2015-01-08  |
| jack  | 2015-02-03  | 23    | 2015-01-01  | 2015-02-03  |
| jack  | 2015-04-06  | 42    | 2015-01-01  | 2015-04-06  |
| mart  | 2015-04-08  | 62    | 2015-04-08  | 2015-04-08  |
| mart  | 2015-04-09  | 68    | 2015-04-08  | 2015-04-09  |
| mart  | 2015-04-11  | 75    | 2015-04-08  | 2015-04-11  |
| mart  | 2015-04-13  | 94    | 2015-04-08  | 2015-04-13  |
| neil  | 2015-05-10  | 12    | 2015-05-10  | 2015-05-10  |
| neil  | 2015-06-12  | 80    | 2015-05-10  | 2015-06-12  |
| tony  | 2015-01-02  | 15    | 2015-01-02  | 2015-01-02  |
| tony  | 2015-01-04  | 29    | 2015-01-02  | 2015-01-04  |
| tony  | 2015-01-07  | 50    | 2015-01-02  | 2015-01-07  |
+-------+-------------+-------+-------------+-------------+--+
14 rows selected (1.588 seconds)
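
  Because the default window with an order by only reaches the current row, last_value above simply echoes the current row's orderdate. To get the true last value of each group, spell out the window explicitly; a sketch:

select name, orderdate, cost,
       last_value(orderdate) over(partition by name order by orderdate
                                  rows between unbounded preceding and unbounded following) as last_time
from t_order;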

  5.3 Further notes

  row_number is extremely widely used and is usually the best choice for generating a sort order: it assigns every returned row a sequence number, strictly increasing and never repeated. Note that row_number must be used with an over clause that orders by some column, otherwise the numbers carry no meaning.

  rank returns the ranking of each row within its partition of the result set, where a row's rank is the number of ranked rows before it plus one. In short, rank ranks the returned records; unlike row_number, it takes ties on the over clause's sort key into account: rows with equal sort values get the same rank, and the next distinct value skips over the tied positions, i.e. its rank is again the count of rows ranked ahead of it plus one.

  dense_rank is similar to rank, except that the numbers it produces are consecutive, whereas rank can leave gaps. When ties occur, dense_rank does not skip any numbers; the next rank follows immediately after the previous one. Within a partition, rank() is a jumping sort: two rows tied for first place are followed by third place; dense_rank() is a consecutive sort: with two rows tied for first, the next row is still second.

  An example makes this easier to see:

  Suppose we have a student table with name, score, and course id:

1: jdbc:hive2://localhost:10000> select * from student;
+-------------+---------------+----------------+-----------------+--+
| student.id  | student.name  | student.score  | student.course  |
+-------------+---------------+----------------+-----------------+--+
| 5           | elic          | 70             | 1               |
| 4           | dock          | 100            | 1               |
| 3           | clark         | 80             | 1               |
| 2           | bob           | 90             | 1               |
| 1           | alce          | 60             | 1               |
| 10          | jacky         | 80             | 2               |
| 9           | iris          | 60             | 2               |
| 8           | hill          | 70             | 2               |
| 7           | grace         | 50             | 2               |
| 6           | frank         | 70             | 2               |
+-------------+---------------+----------------+-----------------+--+
10 rows selected (0.115 seconds)

  Now rank the students' scores within each course:

--row_number(): sequential numbering, no ties and no gaps

select name,course,row_number() over(partition by course order by score desc) rank from student;

+--------+---------+-------+--+
|  name  | course  | rank  |
+--------+---------+-------+--+
| dock   | 1       | 1     |
| bob    | 1       | 2     |
| clark  | 1       | 3     |
| elic   | 1       | 4     |
| alce   | 1       | 5     |
| jacky  | 2       | 1     |
| frank  | 2       | 2     |
| hill   | 2       | 3     |
| iris   | 2       | 4     |
| grace  | 2       | 5     |
+--------+---------+-------+--+
--rank(): jumping ranks; if two rows tie for first place, the next row is ranked third
select name,course,rank() over(partition by course order by score desc) rank from student;

+--------+---------+-------+--+
|  name  | course  | rank  |
+--------+---------+-------+--+
| dock   | 1       | 1     |
| bob    | 1       | 2     |
| clark  | 1       | 3     |
| elic   | 1       | 4     |
| alce   | 1       | 5     |
| jacky  | 2       | 1     |
| frank  | 2       | 2     |
| hill   | 2       | 2     |
| iris   | 2       | 4     |
| grace  | 2       | 5     |
+--------+---------+-------+--+
--dense_rank(): consecutive ranks; if two rows tie for first place, the next row is ranked second

select name,course,dense_rank() over(partition by course order by score desc) rank from student;

+--------+---------+-------+--+
|  name  | course  | rank  |
+--------+---------+-------+--+
| dock   | 1       | 1     |
| bob    | 1       | 2     |
| clark  | 1       | 3     |
| elic   | 1       | 4     |
| alce   | 1       | 5     |
| jacky  | 2       | 1     |
| frank  | 2       | 2     |
| hill   | 2       | 2     |
| iris   | 2       | 3     |
| grace  | 2       | 4     |
+--------+---------+-------+--+
10 rows selected (1.635 seconds)

About partition by:

  The partition by keyword is part of analytic (window) functions, originally from Oracle's analytic functions, and is used to partition the result set. It differs from the aggregate GROUP BY in that it only numbers or ranks the original rows within each partition and can return many rows per group (the row count does not change), whereas GROUP BY aggregates the original data and normally returns just one summary row per group.

  Tips:

  With rank() over(...), NULL values may be treated as the largest value, so if the sort column contains NULLs those rows can end up at the top and distort the ranking.

  You can write: rank() over(partition by course order by score desc nulls last)

Summary:

  Three things to keep in mind when using ranking functions:

  1. A ranking function must have an OVER clause.

  2. The OVER clause of a ranking function must include an ORDER BY.

  3. Numbering starts from 1 within each partition.