Compare commits


195 Commits
master ... dev

Author SHA1 Message Date
opengauss-bot 14ed467a73
!2890 Revert PR1998's cost-estimation changes
Merge pull request !2890 from cc_db_dev/dev_fix
2023-02-13 02:42:45 +00:00
opengauss-bot ec27718103
!2880 Improve SRF logic and its test cases
Merge pull request !2880 from ljy/fix_srf
2023-02-13 02:08:43 +00:00
opengauss-bot f36f72c88a
!2848 Aggregate Limit optimization (planner part)
Merge pull request !2848 from ljy/plan_nodeSortGroup
2023-02-13 02:06:59 +00:00
opengauss-bot a26f670fa5
!2839 Aggregate Limit optimization (executor operator part)
Merge pull request !2839 from ljy/exec_nodeSortGroup
2023-02-13 02:05:10 +00:00
opengauss-bot 3299128dcd
!2652 Simplify STANDARD CONTEXT logic
Merge pull request !2652 from april01xxx/cherry-pick-1671446912
2023-02-13 02:04:00 +00:00
opengauss-bot 2262a009cf
!2555 Optimize the lookup of built-in functions (Internal Language implementation)
Merge pull request !2555 from april01xxx/dev_005_fmgr_lookup
2023-02-13 02:02:57 +00:00
cc_db_dev 8782ab8f1d Revert the cost changes from fba0ec6143141f457763ab0847781ddeb848f5c4
Keep only the executor changes
2023-02-09 17:37:52 +08:00
opengauss-bot c17b1f2b61
!2866 Bypass the function call for printtup's data output
Merge pull request !2866 from yyl/dev
2023-02-09 03:42:17 +00:00
ljy eb492fb3df Improve SRF logic and its test cases 2023-02-08 15:41:37 +08:00
y30036740 a452732a6d Bypass the function call for printtup's data output 2023-02-05 18:42:45 -08:00
ljy fa6df03368 Generate SortGroup plans 2023-01-31 11:56:36 +08:00
ljy d10d4cc3d2 Add the nodeSortGroup operator 2023-01-30 17:38:53 +08:00
april01xxx f0d02cd292 Merge branch 'dev' of gitee.com:opengauss/openGauss-server into cherry-pick-1671446912
Signed-off-by: april01xxx <lancer_cool@163.com>
2023-01-20 02:17:27 +00:00
opengauss-bot 8deef3fa1e
!2552 Prune the check for cursors (refcursor) during function expression evaluation
Merge pull request !2552 from april01xxx/dev_002_funcexpr
2023-01-19 10:32:10 +00:00
opengauss-bot 0b7992f889
!2825 Optimize RowToVec operator performance when projected and filter columns do not overlap
Merge pull request !2825 from wanghao19920907/repeat_cp
2023-01-19 09:01:32 +00:00
opengauss-bot 84b92e54c9
!2824 Optimize the numeric-to-big-integer conversion logic in row-to-vector conversion
Merge pull request !2824 from wanghao19920907/vec_agg_func
2023-01-19 09:01:11 +00:00
opengauss-bot 7d9c5df17e
!2805 Merge PG PR: speed up the timestamp/time/date output functions
Merge pull request !2805 from ab2020c/dev_datetime
2023-01-19 09:00:59 +00:00
opengauss-bot 538bf517cf
!2795 Prune the targetlist of vectorized plans based on whether vectorization is enabled, keeping only the columns actually used, to reduce row-to-vector conversion overhead
Merge pull request !2795 from wanghao19920907/cut_targetlist
2023-01-19 09:00:37 +00:00
opengauss-bot 25602f4079
!2554 Fix an infinite loop caused by integer offset overflow in some cases
Merge pull request !2554 from april01xxx/dev_004_overflow
2023-01-19 09:00:20 +00:00
opengauss-bot 788274ba24
!2553 Optimize hint warning handling
Merge pull request !2553 from april01xxx/dev_003_hint_warning
2023-01-19 09:00:04 +00:00
wanghao19920907 45352f8e26 Optimize RowToVec operator performance when projected and filter columns do not overlap 2023-01-17 00:36:19 -08:00
wanghao19920907 3a6e24dd42 Optimize the numeric-to-big-integer conversion logic 2023-01-15 19:06:53 -08:00
ab2020c 7c40d789b1 Merge PG PR: speed up the timestamp/time/date output functions 2023-01-12 18:57:38 +08:00
wanghao19920907 95421580c4 Optimize the numeric-to-big-integer conversion logic 2023-01-11 05:20:02 -08:00
opengauss-bot 3db81583c3
!2797 Single-table query without index optimization, part 3
Merge pull request !2797 from zhengshaoyu/dev
2023-01-10 09:04:27 +00:00
zhengshaoyu 7683deb42f SeqScan optimization 2023-01-10 15:19:34 +08:00
wanghao19920907 c883406e78 Prune the targetlist of vectorized plans based on whether vectorization is enabled, keeping only the columns actually used, to reduce row-to-vector conversion overhead 2023-01-09 22:24:13 -08:00
opengauss-bot d5dab2f8e3
!2792 Single-table query without index optimization, part 2
Merge pull request !2792 from zhengshaoyu/dev
2023-01-10 03:33:36 +00:00
zhengshaoyu a9d16cae4e printtup optimization: optimize outputfunctioncall 2023-01-09 19:22:49 +08:00
zhengshaoyu c943a592e7 Revert "Revert "printtup optimization""
This reverts commit e025e2b018.
2023-01-09 18:48:43 +08:00
opengauss-bot 041b338788
!2776 Single-table query without index optimization
Merge pull request !2776 from zhengshaoyu/dev
2023-01-09 01:30:16 +00:00
zhengshaoyu e025e2b018 Revert "printtup optimization"
This reverts commit 69348ac2c2.
2023-01-08 10:35:16 +08:00
zhengshaoyu e7e3ccb0e1 printtup optimization: handle whitespace 2023-01-08 10:23:27 +08:00
zhengshaoyu 69348ac2c2 printtup optimization 2023-01-07 17:15:55 +08:00
zhengshaoyu 91594ce000 printtup optimization 2023-01-07 15:23:14 +08:00
zhengshaoyu 6e13bb8517 Seqscan and printtup optimization; single-table query without index optimization 2023-01-07 14:58:47 +08:00
zhengshaoyu 7bebe1e789 fix gsql \timing time display issue 2023-01-07 14:14:39 +08:00
opengauss-bot 63bb0fd266
!2665 indexscan optimization
Merge pull request !2665 from ZYM/dev
2022-12-24 02:16:37 +00:00
ZYM 5ac138d548 indexscan optimization 2022-12-22 04:12:51 -08:00
opengauss-bot 7f7ac666a6
!2654 IUD OPTIMIZE
Merge pull request !2654 from wuchenglin/dev_iud_1219
2022-12-20 09:04:08 +00:00
wuchenglin 84cc71d10e [Huawei] IUD OPTIMIZE
Offering: GaussDB Kernel

More detail: optimize iud performance
2022-12-19 19:51:16 -08:00
opengauss-bot 80f61fafad
!2338 Merge PG PR: fix PrivateRefCount-related issues
Merge pull request !2338 from ab2020c/dev_pinbuffer
2022-12-19 21:44:39 +00:00
lancer 8d7e8763bf fixed b25acff from https://gitee.com/april01xxx/openGauss-server/pulls/2651
Add the GUC parameter disable_memory_stats to control STANDARD MEMORY CONTEXT allocation;
when it is enabled, the logic is simplified and some memory-tracking statistics are dropped.
2022-12-19 10:48:33 +00:00
April01xxx 1419ab8ef5 Optimize the lookup of built-in functions (Internal Language implementation): replace hash lookup with an OID-indexed table lookup. 2022-12-06 11:48:41 +08:00
April01xxx f8ff9f3afe Fix an infinite loop caused by integer offset overflow in some cases.
The issue is already fixed on the mainline; merge it into the dev branch so regression tests do not hang.
2022-12-06 11:44:36 +08:00
April01xxx 3a4a64bfda Optimize hint warning handling: when there are no warnings, avoid traversing
the Query Tree during planning, which gives a slight gain for batched row-by-row INSERT INTO.
2022-12-06 10:54:36 +08:00
April01xxx e32e7daed3 Prune the check for cursors (refcursor) during function expression evaluation,
reducing built-in function call overhead in INSERT INTO scenarios.
2022-12-06 10:22:31 +08:00
opengauss-bot 233ccd0218
!2478 Numeric expression optimization
Merge pull request !2478 from 夏自豪/dev_0.1
2022-11-25 01:43:23 +00:00
opengauss-bot 63313bed87
!2462 TPCH query rewrite optimization: sublink pull-up
Merge pull request !2462 from haruworms/ddd
2022-11-25 01:42:44 +00:00
opengauss-bot 5bacdb5bbc
!2451 printtup optimization
Merge pull request !2451 from liuquanyi@huawei/dev
2022-11-25 01:42:02 +00:00
夏自豪 cf7c73619c
numeric expression
Signed-off-by: 夏自豪 <xiazihao3@huawei.com>
2022-11-24 01:53:32 +00:00
opengauss-bot 4d8aec12ae
!2456 Communication optimization
Merge pull request !2456 from yyl/dev_1.1.0
2022-11-22 06:45:45 +00:00
opengauss-bot 497547faa9
!2452 seqscan optimization
Merge pull request !2452 from wuchenglin/dev-seqscan_parser
2022-11-22 06:41:41 +00:00
h00502768 0d264cebad TPCH query optimization 2022-11-20 23:39:20 -08:00
yyl 37d263e5b9 Communication optimization 2022-11-20 17:43:09 -08:00
wuchenglin fabd0159e4 [Huawei] seqscan optimization
Offering: GaussDB Kernel

    More detail: seqscan optimization
2022-11-17 17:58:22 -08:00
liuquanyi@huawei 66748cf39d
printtup优化
Signed-off-by: liuquanyi@huawei <liuquanyi2@huawei.com>
2022-11-18 01:08:52 +00:00
liuquanyi@huawei 460b3d15b5
edition3
Signed-off-by: liuquanyi@huawei <liuquanyi2@huawei.com>
2022-11-17 10:57:06 +00:00
liuquanyi@huawei 640a73772e
second edition
Signed-off-by: liuquanyi@huawei <liuquanyi2@huawei.com>
2022-11-17 10:34:16 +00:00
liuquanyi@huawei e9f02d394b
first edition
Signed-off-by: liuquanyi@huawei <liuquanyi2@huawei.com>
2022-11-17 08:42:53 +00:00
opengauss-bot e20304beae
!2314 Automatically adjust plan parallelism to the number of currently available threads to prevent "No free procs" errors
Merge pull request !2314 from xiyanziran/dev
2022-11-17 02:34:04 +00:00
Vastdata xyzr 565cc85196 Automatically adjust plan parallelism to the number of currently available threads to prevent "No free procs" errors 2022-11-15 22:07:43 -05:00
opengauss-bot 1ae5fbfc0e
!2330 For numerics used multiple times in an aggregate, detoast up front to avoid repeated detoasting
Merge pull request !2330 from cc_db_dev/pre_detoast
2022-11-09 02:24:32 +00:00
opengauss-bot 589734a686
!2332 Merge PG PR: optimize nbtree high key "continuescan"
Merge pull request !2332 from ab2020c/dev_continuescan
2022-11-08 07:00:59 +00:00
opengauss-bot dc3b2db447
!2331 Merge PG PR: Fix predicate-locking of HOT updated rows
Merge pull request !2331 from ab2020c/dev_predicate_locking
2022-11-08 07:00:02 +00:00
opengauss-bot 94b084b2cf
!2328 Add some atomic-operation implementations for x86
Merge pull request !2328 from ab2020c/dev_automic
2022-11-08 06:57:54 +00:00
ab2020c 5eba530510 Adjust the PinBuffer_Locked function call; fix a regression test failure 2022-10-28 15:15:57 +08:00
ab2020c 63bf4c2b3a Fix various shortcomings of the new PrivateRefCount infrastructure 2022-10-27 18:35:56 +08:00
ab2020c 6ceebeb770 Add nbtree high key continuescan optimization 2022-10-26 17:44:13 +08:00
ab2020c 18639107af Fix predicate-locking of HOT updated rows 2022-10-26 16:42:38 +08:00
cc_db_dev f9905e4724 Pre-detoast duplicated Vars 2022-10-26 16:20:33 +08:00
ab2020c a2cd79b36d Add some atomic-operation implementations for x86 2022-10-26 11:17:53 +08:00
opengauss-bot 809e2ebbae
!2307 Improve numeric division performance
Merge pull request !2307 from cc_db_dev/improve_div
2022-10-18 09:20:02 +00:00
opengauss-bot f21869b87a
!2299 hashjoin detail optimizations
Merge pull request !2299 from ljy/perf_hashjoin3
2022-10-18 09:19:42 +00:00
opengauss-bot ebbf5ec8b2
!2297 Merge PG optimization: increase the number of hashjoin buckets when necessary
Merge pull request !2297 from ljy/perf_hashjoin2
2022-10-18 09:19:22 +00:00
opengauss-bot 164d34c2a2
!2296 Optimize hashjoin's hash strategy
Merge pull request !2296 from ljy/perf_hashjoin1
2022-10-18 09:19:07 +00:00
opengauss-bot 8a6e620831
!2211 Improve external sort performance
Merge pull request !2211 from Oreo/pjr_commit_sort3
2022-10-18 09:18:59 +00:00
cc_db_dev cf96c5fa83 Improve division performance, based on
SHA-1: d996d648f333b04ae3da3c5853120f6f37601fb2

* Simplify the inner loop of numeric division in div_var().
2022-10-17 18:49:37 +08:00
cc_db_dev 964fac0c6f Merge branch 'dev' of gitee.com:opengauss/openGauss-server into pjr_commit_sort3
Signed-off-by: cc_db_dev <chenjh2@vastdata.com.cn>
2022-10-17 03:33:56 +00:00
pujr 1e78dbcdc5 Merged with reference to the following PG optimization
SHA-1: 3856cf9607f41245ec9462519c53f1109e781fc5

* Remove should_free arguments to tuplesort routines.

Since commit e94568ecc10f2638e542ae34f2990b821bbf90ac, the answer is
always "false", and we do not need to complicate the API by arranging
to return a constant value.

Peter Geoghegan

Discussion: http://postgr.es/m/CAM3SWZQWZZ_N=DmmL7tKy_OUjGH_5mN=N=A6h7kHyyDvEhg2DA@mail.gmail.com
2022-10-17 11:26:08 +08:00
ljy b665aea10c Merge remote-tracking branch 'gauss/dev' into perf_hashjoin3 2022-10-16 22:31:27 -04:00
opengauss-bot 709e1dadf5
!2304 Resolve the build conflict introduced by earlier PRs
Merge pull request !2304 from wanghao19920907/dev
2022-10-17 01:59:12 +00:00
wanghao19920907 26a00db02d Resolve conflicts 2022-10-13 21:33:39 -07:00
ljy 9c9864a283 Merge remote-tracking branch 'gauss/dev' into perf_hashjoin3 2022-10-13 22:45:31 -04:00
ljy 0bae05818b Make use of compiler builtins and/or assembly for CLZ, CTZ, POPCNT 2022-10-13 06:21:43 -04:00
ljy 20cad27c28 Remove redundant CHECK_FOR_INTERRUPTS calls and hashjoin hash-value computation 2022-10-12 23:31:07 -04:00
opengauss-bot 3b36e1dc68
!2295 Fix a performance bottleneck in init_var_from_num
Merge pull request !2295 from wanghao19920907/numeric_short
2022-10-12 02:46:06 +00:00
opengauss-bot 5b0f39d6bb
!2281 Reduce the size of the bufferdesc struct to improve performance
Merge pull request !2281 from cc_db_dev/bufferdesc_imporve
2022-10-12 02:44:55 +00:00
opengauss-bot 60ecb35543
!2264 Optimize const expressions among function-expression arguments
Merge pull request !2264 from wanghao19920907/const
2022-10-12 02:44:13 +00:00
opengauss-bot 350dc97148
!2227 Revert the tts_nvalid change in the ustore engine and adapt expression assertions to ustore
Merge pull request !2227 from wanghao19920907/fix_bug
2022-10-12 02:42:04 +00:00
opengauss-bot d0b4b7834e
!2165 Remove *_FIRST-related expressions from the new expression framework
Merge pull request !2165 from wanghao19920907/first
2022-10-12 02:40:32 +00:00
opengauss-bot 7b39c6f636
!2129 Improve agg operator performance by reducing the number of aggregation steps
Merge pull request !2129 from cc_db_dev/fewer_step
2022-10-12 02:39:34 +00:00
opengauss-bot 2011dccec7
!2109 Use a better-performing hash table in agg
Merge pull request !2109 from cc_db_dev/simple_hash_table
2022-10-12 02:38:51 +00:00
ljy ecc17c2faf Increase number of hash join buckets for underestimate 2022-10-11 08:17:27 -04:00
ljy 3cdbf57624 Rotate instead of shifting hash join batch number 2022-10-11 04:51:42 -04:00
opengauss-bot e717a1ead8
!2228 The new expression framework no longer uses the targetlist in planstate; fix an earlier omission
Merge pull request !2228 from wanghao19920907/fix_bug2
2022-10-11 07:30:59 +00:00
opengauss-bot b79be12ee3
!2220 Add assertions; change some tupledesc->attrs[n] uses to TupleDescAttr
Merge pull request !2220 from ab2020c/dev_indexattr
2022-10-11 07:26:32 +00:00
wanghao19920907 d4a2ea2050 Optimize numeric var initialization performance 2022-10-08 01:29:38 -07:00
cc_db_dev 6ecf784271 Reduce bufferdesc size to improve performance 2022-10-08 11:25:14 +08:00
wanghao19920907 ca5f9c1067 Optimize const expressions among function-expression arguments; reduce the number of steps 2022-09-28 18:48:39 -07:00
ab2020c 07f7ca2df8 Fix a code conflict in the submission 2022-09-27 06:42:27 +00:00
opengauss-bot 786262989d
!2150 Optimize Relation-related function calls
Merge pull request !2150 from ljy/perf_relation_rebase
2022-09-27 06:18:45 +00:00
opengauss-bot d03aeeb37e
!2148 Flatten struct tupleDesc; optimize tupleDesc-related function calls
Merge pull request !2148 from ljy/perf_tuple_desc_rebase
2022-09-27 06:18:12 +00:00
opengauss-bot d82f611310
!2147 Optimize TupleTableSlots and related function calls
Merge pull request !2147 from ljy/perf_slot_rebase
2022-09-27 06:17:55 +00:00
opengauss-bot 46b157373a
!2218 Merge PG PR: centralize the logic for protective copying of utility statements
Merge pull request !2218 from ab2020c/dev_indexscan
2022-09-27 06:15:55 +00:00
wanghao19920907 004748b0d7 The new expression framework no longer uses the targetlist in planstate; fix an earlier omission 2022-09-23 00:43:19 -07:00
wanghao19920907 a872c9b739 Revert the tts_nvalid change in the ustore engine and adapt expression assertions to ustore 2022-09-22 23:44:14 -07:00
ab2020c 3fdb245d5a Fix regression test failures 2022-09-22 17:54:48 +08:00
ab2020c 6c4b32aa33 Add assertions; change some tupledesc->attrs[n] uses to TupleDescAttr 2022-09-22 16:02:49 +08:00
ab2020c 4396131a82 Fix regression test failures 2022-09-22 15:28:19 +08:00
ab2020c 661c7ca0ff Merge PG PR: centralize the logic for protective copying of utility statements 2022-09-22 14:52:25 +08:00
pujr 2fb21c8701 Introduce slab memory management for external sort 2022-09-21 18:08:37 +08:00
pujr 6f7eef3db6 Merge a PG external-sort optimization 2022-09-21 15:47:08 +08:00
wanghao19920907 13d6b03c17 Remove *_FIRST-related expressions 2022-09-14 08:53:39 -07:00
lijunyun 3c9568f172 Optimize Relation-related function calls 2022-09-09 16:54:11 +08:00
lijunyun 32dbfa6463 Flatten struct tupleDesc; optimize tupleDesc-related function calls 2022-09-09 11:48:59 +08:00
lijunyun 1d710b6ce7 Move TupleTableSlots boolean member into one flag variable; optimize TupleTableSlots-related functions 2022-09-09 10:43:07 +08:00
cc_db_dev f4a7e8251f Merge 2742c45080077ed3b08b810bb96341499b86d530 2022-09-05 12:17:52 +08:00
cc_db_dev aa42cd0664 Use a better-performing hash table in agg 2022-09-02 17:51:53 +08:00
opengauss-bot 55a8dc30ed
!2101 Executor optimization
Merge pull request !2101 from ljy/perf_executor_rebase
2022-09-01 01:58:05 +00:00
opengauss-bot 45303e42f1
!2099 Address the SRF performance degradation caused by plpgsql and the slowdown caused by resname
Merge pull request !2099 from ljy/perf_plgpsql_resname_rebase
2022-09-01 01:57:46 +00:00
opengauss-bot 5e62b6d33a
!2097 Remove the expression isDone state to simplify the code
Merge pull request !2097 from ljy/remove_is_done_rebase
2022-09-01 01:56:34 +00:00
lijunyun 6604fd498c Executor optimization 2022-09-01 07:03:59 +08:00
lijunyun ee9e1c03af Address the SRF performance degradation caused by plpgsql and the slowdown from printing column names 2022-08-31 22:56:16 +08:00
lijunyun 47793df932 Merge remote-tracking branch 'gauss/dev' into remove_is_done_rebase
Conflicts:
	src/gausskernel/runtime/executor/nodeAgg.cpp
2022-08-31 15:58:51 +08:00
opengauss-bot 8ab353cce1
!2102 Merge some PG optimizations that use compiler built-in bit-operation instructions
Merge pull request !2102 from wanghao19920907/expr
2022-08-31 07:13:15 +00:00
opengauss-bot 5bbebb308d
!2096 Fix an SRF memory issue
Merge pull request !2096 from ljy/fix_srf_memory_rbase
2022-08-31 07:12:10 +00:00
opengauss-bot a79e1d54fa
!2090 Improve numeric multiplication performance
Merge pull request !2090 from cc_db_dev/improve_mul_numeric
2022-08-31 07:11:49 +00:00
opengauss-bot ad4088f131
!2087 Merge parser-phase performance improvement PRs
Merge pull request !2087 from Oreo/pjr_scan
2022-08-31 07:11:26 +00:00
opengauss-bot 7f040a0e40
!2086 Improve AGG operator performance
Merge pull request !2086 from cc_db_dev/improve_agg_expr_step
2022-08-31 06:23:48 +00:00
opengauss-bot d2b84ee198
!2061 Improve do_numeric_accum performance to speed up sum and avg aggregates
Merge pull request !2061 from cc_db_dev/improve_numeric
2022-08-31 06:23:11 +00:00
cc_db_dev 39ef9fee30 69c3936a1499b772a749ae629fc59b2d72722332 804163bc25e979fcd91b02e58fa2d1c6b587cc65
(1) Flatten agg computation with the expression framework; (2) merge aggregate functions that share the same transfn
2022-08-30 19:01:33 +08:00
pujr eab31941d1 Fix an incorrect change in gram.y 2022-08-30 14:14:04 +08:00
wanghao19920907 8f829f0ff3 Merge some PG bit-operation instruction-level optimizations 2022-08-29 21:10:26 -07:00
wanghao19920907 13da8d6725 Merge some PG bit-operation instruction-level optimizations 2022-08-29 21:08:52 -07:00
wanghao19920907 a8de00f253 Merge some PG bit-operation instruction-level optimizations 2022-08-29 21:05:22 -07:00
lijunyun 91f6f93819 Remove the expression isDone state 2022-08-29 18:13:33 +08:00
lijunyun df1bc33e83 Fix an SRF memory issue 2022-08-29 17:11:20 +08:00
cc_db_dev ec375cf00e Improve numeric_mul performance 2022-08-29 11:23:12 +08:00
opengauss-bot dc952af4a8
!2079 Merge PG's expression-evaluation framework optimizations
Merge pull request !2079 from wanghao19920907/expr
2022-08-26 09:11:43 +00:00
pujr bc287c614c Merge a gp community PR: change the keyword lookup method to hashing
* Use perfect hashing, instead of binary search, for keyword lookup.

We've been speculating for a long time that hash-based keyword lookup
ought to be faster than binary search, but up to now we hadn't found
a suitable tool for generating the hash function.  Joerg Sonnenberger
provided the inspiration, and sample code, to show us that rolling our
own generator wasn't a ridiculous idea.  Hence, do that.

The method used here requires a lookup table of approximately 4 bytes
per keyword, but that's less than what we saved in the predecessor commit
afb0d0712, so it's not a big problem.  The time savings is indeed
significant: preliminary testing suggests that the total time for raw
parsing (flex + bison phases) drops by ~20%.

Patch by me, but it owes its existence to Joerg Sonnenberger;
thanks also to John Naylor for review.

Discussion: https://postgr.es/m/20190103163340.GA15803@britannica.bec.de
2022-08-26 16:02:56 +08:00
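The commit above replaces binary search with a generated perfect hash for keyword lookup. A minimal sketch of the idea, with illustrative names rather than the actual openGauss/PostgreSQL structures: the generator guarantees that every keyword hashes to a distinct slot, so a lookup costs one hash computation plus at most one string comparison.

#include <cstddef>
#include <cstring>

/* Hypothetical generated tables: all keywords concatenated NUL-separated,
 * one offset per keyword, and a generated perfect-hash function. */
struct KeywordTable {
    const char *kw_strings;                      /* "abort\0absolute\0..." */
    const unsigned short *kw_offsets;            /* start of each keyword  */
    int (*hash)(const char *key, size_t keylen); /* generated perfect hash */
    int num_keywords;
};

/* Returns the keyword's index, or -1 if `text` is not a keyword. */
static int keyword_lookup(const KeywordTable *tbl, const char *text, size_t len)
{
    int h = tbl->hash(text, len); /* perfect hash: at most one candidate slot */
    if (h < 0 || h >= tbl->num_keywords)
        return -1;
    const char *kw = tbl->kw_strings + tbl->kw_offsets[h];
    return (strncmp(kw, text, len) == 0 && kw[len] == '\0') ? h : -1;
}
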
wanghao19920907 d77491d25d fix pg_regress 2022-08-26 14:58:01 +08:00
pujr 9cb7462667 Merge a PG community PR: change the data structure to reduce cache misses.
* Replace the data structure used for keyword lookup.

Previously, ScanKeywordLookup was passed an array of string pointers.
This had some performance deficiencies: the strings themselves might
be scattered all over the place depending on the compiler (and some
quick checking shows that at least with gcc-on-Linux, they indeed
weren't reliably close together).  That led to very cache-unfriendly
behavior as the binary search touched strings in many different pages.
Also, depending on the platform, the string pointers might need to
be adjusted at program start, so that they couldn't be simple constant
data.  And the ScanKeyword struct had been designed with an eye to
32-bit machines originally; on 64-bit it requires 16 bytes per
keyword, making it even more cache-unfriendly.

Redesign so that the keyword strings themselves are allocated
consecutively (as part of one big char-string constant), thereby
eliminating the touch-lots-of-unrelated-pages syndrome.  And get
rid of the ScanKeyword array in favor of three separate arrays:
uint16 offsets into the keyword array, uint16 token codes, and
uint8 keyword categories.  That reduces the overhead per keyword
to 5 bytes instead of 16 (even less in programs that only need
one of the token codes and categories); moreover, the binary search
only touches the offsets array, further reducing its cache footprint.
This also lets us put the token codes somewhere else than the
keyword strings are, which avoids some unpleasant build dependencies.

While we're at it, wrap the data used by ScanKeywordLookup into
a struct that can be treated as an opaque type by most callers.
That doesn't change things much right now, but it will make it
less painful to switch to a hash-based lookup method, as is being
discussed in the mailing list thread.

Most of the change here is associated with adding a generator
script that can build the new data structure from the same
list-of-PG_KEYWORD header representation we used before.
The PG_KEYWORD lists that plpgsql and ecpg used to embed in
their scanner .c files have to be moved into headers, and the
Makefiles have to be taught to invoke the generator script.
This work is also necessary if we're to consider hash-based lookup,
since the generator script is what would be responsible for
constructing a hash table.

Aside from saving a few kilobytes in each program that includes
the keyword table, this seems to speed up raw parsing (flex+bison)
by a few percent.  So it's worth doing even as it stands, though
we think we can gain even more with a follow-on patch to switch
to hash-based lookup.

John Naylor, with further hacking by me

Discussion: https://postgr.es/m/CAJVSVGXdFVU2sgym89XPL=Lv1zOS5=EHHQ8XWNzFL=mTXkKMLw@mail.gmail.com
2022-08-26 14:55:28 +08:00
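The redesign described above keeps all keyword text in one contiguous string and splits the metadata into three parallel arrays. A rough, self-contained sketch of that layout under assumed names and made-up token codes (not the literal openGauss tables):

#include <cstdint>
#include <cstring>

static const char keyword_strings[] = "abort\0add\0all\0alter";    /* all keywords, NUL-separated */
static const uint16_t keyword_offsets[] = {0, 6, 10, 14};          /* uint16 offsets into the string */
static const uint16_t keyword_tokens[] = {1001, 1002, 1003, 1004}; /* uint16 token codes (made up) */
static const uint8_t keyword_categories[] = {0, 0, 1, 1};          /* uint8 categories, e.g. reserved or not */
static const int num_keywords = 4;

/* Binary search touches only the small offsets array plus adjacent keyword
 * text, which is what makes this layout cache-friendly. */
static int keyword_token(const char *word)
{
    int lo = 0, hi = num_keywords - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        int cmp = strcmp(word, keyword_strings + keyword_offsets[mid]);
        if (cmp == 0)
            return keyword_tokens[mid];
        if (cmp < 0)
            hi = mid - 1;
        else
            lo = mid + 1;
    }
    return -1; /* not a keyword; keyword_categories would be consulted the same way */
}
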
wanghao19920907 a68f029622 merge conflicts 2022-08-25 15:15:44 +08:00
wanghao19920907 a239637118 parent 3a42dc9c8b
author wanghao19920907 <wanghao1@vastdata.com.cn> 1657104095 +0800
committer wanghao19920907 <wanghao1@vastdata.com.cn> 1661408092 +0800

parent 3a42dc9c8b
author wanghao19920907 <wanghao1@vastdata.com.cn> 1657104095 +0800
committer wanghao19920907 <wanghao1@vastdata.com.cn> 1661408072 +0800

parent 3a42dc9c8b
author wanghao19920907 <wanghao1@vastdata.com.cn> 1657104095 +0800
committer wanghao19920907 <wanghao1@vastdata.com.cn> 1661408003 +0800

1. Merge PG's expression-evaluation framework optimizations

1. Fix a bug in the expression framework exposed by the regression tests

1. Fix incorrect line endings

1. Add the HAVE_COMPUTED_GOTO macro

1. Fix a crash in aggregation

1. Fix abnormal results in the median.sql test case

1. Complete the merge of PG's expression-evaluation framework: building on the SRF merge, handle SRFs with the new framework and also merge the AGG-related optimizations

Optimize build options to improve performance

Optimize build options

fixed 09928a5 from https://gitee.com/april01xxx/openGauss-server/pulls/1820
Privilege check optimization: cache the role type to reduce unnecessary system catalog accesses.

When fetching the CSN, call the appropriate clog status function so that cachedCommitLSN is set correctly, improving the efficiency of TransactionIdGetCommitLSN calls made from SetHintBits

fix test

Optimize SRF

fix mot
2022-08-25 14:16:34 +08:00
opengauss-bot 830340def0
!2068 SRF execution optimization
Merge pull request !2068 from lijunyun/srf_dev_rebase
2022-08-24 03:24:09 +00:00
lijunyun 202045e11d Merge remote-tracking branch 'gauss/dev' into srf_dev_rebase
Conflicts:
	src/test/regress/expected/bypass_simplequery_support.out
	src/test/regress/expected/sqlbypass_partition.out
2022-08-23 16:05:07 +08:00
lijunyun 32acc25fdd SRF execution optimization 2022-08-23 16:03:16 +08:00
opengauss-bot fba0ec6143
!1998 Optimize the inner-loop tuple count of hashjoin, nestloop, and mergejoin under specific conditions
Merge pull request !1998 from cc_db_dev/inner_uniuqe_pr
2022-08-22 08:46:17 +00:00
cc_db_dev 72c07837ab 9cca11c915e458323d0e746c68203f2c11da0302 Speed up sum and avg 2022-08-22 16:32:33 +08:00
opengauss-bot 529f5e505e
!1885 Improve numeric sum(), avg(), stddev(), variance() performance
Merge pull request !1885 from cc_db_dev/improve_agg
2022-08-04 07:41:20 +00:00
cc_db_dev cbdb7d0e0e Optimize the inner-loop tuple count of hashjoin, nestloop, and mergejoin under specific conditions 2022-08-03 10:12:16 +08:00
cc_db_dev ed746739c2 Merge branch 'dev' of gitee.com:opengauss/openGauss-server into improve_agg 2022-08-02 10:31:18 +00:00
opengauss-bot 7941581269
!1957 Change some test cases to explain (costs off)
Merge pull request !1957 from lijunyun/explain_costs_off
2022-07-22 03:29:56 +00:00
lijunyun ef4092ff09 fix test 2022-07-21 09:54:39 +08:00
lijunyun 8d4f0c4164 fix test 2022-07-20 18:10:15 +08:00
lijunyun e2be91e0c0 Change some regression test cases to explain (costs off) 2022-07-20 17:01:47 +08:00
cc_db_dev 1052c12246 Improve the performance of numeric aggregates such as sum() and avg() 2022-07-18 16:14:37 +08:00
opengauss-bot 9e9fe453bf
!1801 When fetching the CSN, call the appropriate clog status function so that cachedCommitLSN is set correctly
Merge pull request !1801 from cc_db_dev/mvcc_op
2022-07-18 02:49:29 +00:00
opengauss-bot aff104a2ac
!1822 For queries with multiple aggregate functions, reduce the number of projections in the nodeAgg operator
Merge pull request !1822 from cc_db_dev/hashagg_allproject
2022-07-12 06:54:18 +00:00
cc_db_dev e0870fb47a Reduce the number of projections in the agg operator 2022-07-07 16:57:50 +08:00
opengauss-bot 53541c813b
!1889 Simplify the built-in namespace check
Merge pull request !1889 from april01xxx/cherry-pick-1655980576
2022-07-07 02:30:56 +00:00
opengauss-bot 9d0d29044b
!1821 Privilege check optimization
Merge pull request !1821 from april01xxx/cherry-pick-1655178813
2022-07-07 02:25:05 +00:00
opengauss-bot 38a87bdb66
!1800 Optimize build options to improve database performance
Merge pull request !1800 from cc_db_dev/complie_op
2022-07-07 02:24:52 +00:00
opengauss-bot c028907375
!1830 Reduce how often transition functions are initialized in the nodeAgg operator
Merge pull request !1830 from cc_db_dev/hashagg_funcinit
2022-07-07 02:23:27 +00:00
April01xxx 3a42dc9c8b fixed 7481d37 from https://gitee.com/april01xxx/openGauss-server/pulls/1888
Simplify the built-in namespace check.
2022-06-23 10:36:17 +00:00
opengauss-bot 9d891410f9
!1832 Recycle bin optimization
Merge pull request !1832 from april01xxx/cherry-pick-1655353269
2022-06-23 09:49:06 +00:00
cc_db_dev 85a773b916 Reduce how often the transfn is initialized in hashagg 2022-06-22 10:00:11 +08:00
opengauss-bot c17fdb15b1
!1799 The CPU-usage collection function should sit outside execution timing, so that explain analyze results are more reasonable and easier to compare against PG in performance analysis
Merge pull request !1799 from cc_db_dev/master
2022-06-21 02:20:22 +00:00
April01xxx 51da7c4d17 fixed 047ac20 from https://gitee.com/april01xxx/openGauss-server/pulls/1813
Recycle bin optimization: check the object name first and only then whether the recycle bin is empty,
reducing unnecessary system catalog accesses.
2022-06-16 04:21:09 +00:00
April01xxx fb197cfea6 fixed 09928a5 from https://gitee.com/april01xxx/openGauss-server/pulls/1820
Privilege check optimization: cache the role type to reduce unnecessary system catalog accesses.
2022-06-14 03:53:33 +00:00
yupeng fe358bcb2d improve agg 2022-06-07 18:37:26 -07:00
opengauss-bot 92be2cd901
!1779 Fix the unused_oids analysis results
Merge pull request !1779 from 仲夏十三/unused
2022-05-31 06:53:34 +00:00
opengauss-bot 846b21330f
!1673 Fix the build error where hint_gram.hpp cannot be found after maintainer-clean
Merge pull request !1673 from xiyanziran/master
2022-05-31 06:51:17 +00:00
opengauss-bot f6fe8ca0a7
!1776 When using LIKE INCLUDING ALL/RELOPTIONS, creation succeeds even if the reloptions include compresstype; compresstype can be used together with segment=off.
Merge pull request !1776 from 吴岳川/I56B4V
2022-05-31 06:48:44 +00:00
opengauss-bot 0b93d2ea1f
!1750 During data backup, add verification of uncompressed pages to page verification.
Merge pull request !1750 from 吴岳川/I58B0E
2022-05-31 06:47:08 +00:00
opengauss-bot cbe8618e0c
!1780 bugfix: mdtruncate of compressed tables is not actually applied on the standby
Merge pull request !1780 from 吴岳川/I57V03
2022-05-31 02:36:57 +00:00
cc_db_dev 80eab2aa8d When fetching the CSN, call the appropriate clog status function so that cachedCommitLSN is set correctly, improving the efficiency of TransactionIdGetCommitLSN calls made from SetHintBits 2022-05-30 06:20:39 -04:00
wuyuechuan ef5c7afd4c bugfix: mdtruncate redo can not read compress_opt 2022-05-30 17:36:23 +08:00
ganyang 6edbfdce01 fix unused script 2022-05-30 17:16:36 +08:00
opengauss-bot 934efefbb6
!1727 Support automatic installation, loading, and upgrade of the dolphin plugin
Merge pull request !1727 from 仲夏十三/dolphin
2022-05-30 08:44:02 +00:00
wuyuechuan 8b6d9bd1da 1. (like...including reloptions) will succeed even if reloptions contains compresstype
2. compresstype can be used with the (segment=off) reloption
2022-05-30 15:30:03 +08:00
cc_db_dev c72fd89343 Optimize build options 2022-05-30 00:01:38 -04:00
cc_db_dev 96383be785 Optimize build options to improve performance 2022-05-29 23:32:14 -04:00
opengauss-bot 72f4d68de6
!1772 Fix gs_dump failures for non-sysadmin users in some scenarios
Merge pull request !1772 from pengjiong/drop_ext
2022-05-28 11:52:08 +00:00
TotaJ 62e3bed09b Fix gs_dump 2022-05-28 17:58:05 +08:00
opengauss-bot 7d4d95a5ad
!1764 Add wm_concat to the restriction on functions that use the internal type
Merge pull request !1764 from 吕辉/wm_concat
2022-05-28 07:00:10 +00:00
ganyang 4b87c61c23 Support automatic installation and loading of the dolphin plugin 2022-05-28 11:28:54 +08:00
opengauss-bot 240f61c595
!1740 Set compressByteConvert\compressDiffConvert to true only when the user's input is true, not merely when the option is supplied
Merge pull request !1740 from 吴岳川/I54KW0
2022-05-28 02:02:39 +00:00
opengauss-bot a0d1a547f8
!1741 [Compression] During replay, if the table file has already been created but the table's _pca and related files have not, the flags get modified and the O_CREATE flag is lost, causing standby replay to fail
Merge pull request !1741 from 吴岳川/I56GGM
2022-05-28 01:43:32 +00:00
lvhui e2163187ca add wm_concat in InternalAggIsSupported 2022-05-26 15:34:47 +08:00
wuyuechuan df87af4701 bugfix: backup failed when uncompressed page in compressed table 2022-05-20 17:45:39 +08:00
wuyuechuan 5af581f12a use `RetryDataFileIdOpenFile` to prevent the flags from being modified 2022-05-20 11:57:25 +08:00
wuyuechuan dbcf479205 set compressByteConvert/compressDiffConvert to true when defElem is set to true 2022-05-18 14:41:15 +08:00
vastdata-xyzr 9fbaa99062 Fix the build error where hint_gram.hpp cannot be found after maintainer-clean 2022-04-13 11:34:05 +08:00
638 changed files with 36026 additions and 18038 deletions

View File

@ -93,6 +93,7 @@ install:
$(MAKE) install_pldebugger
$(MAKE) -C contrib/postgres_fdw $@
$(MAKE) -C contrib/hstore $@
@if test -d contrib/dolphin; then $(MAKE) -C contrib/dolphin $@; fi
+@echo "openGauss installation complete."
endif
endif

View File

@ -60,6 +60,8 @@
./share/postgresql/extension/hstore.control
./share/postgresql/extension/security_plugin.control
./share/postgresql/extension/security_plugin--1.0.sql
./share/postgresql/extension/dolphin.control
./share/postgresql/extension/dolphin--1.0.sql
./share/postgresql/extension/file_fdw--1.0.sql
./share/postgresql/extension/plpgsql.control
./share/postgresql/extension/dist_fdw.control
@ -748,6 +750,7 @@
./lib/postgresql/pg_plugin
./lib/postgresql/proc_srclib
./lib/postgresql/security_plugin.so
./lib/postgresql/dolphin.so
./lib/postgresql/pg_upgrade_support.so
./lib/postgresql/java/pljava.jar
./lib/postgresql/postgres_fdw.so

View File

@ -60,6 +60,8 @@
./share/postgresql/extension/hstore.control
./share/postgresql/extension/security_plugin.control
./share/postgresql/extension/security_plugin--1.0.sql
./share/postgresql/extension/dolphin.control
./share/postgresql/extension/dolphin--1.0.sql
./share/postgresql/extension/file_fdw--1.0.sql
./share/postgresql/extension/plpgsql.control
./share/postgresql/extension/dist_fdw.control
@ -749,6 +751,7 @@
./lib/postgresql/pg_plugin
./lib/postgresql/proc_srclib
./lib/postgresql/security_plugin.so
./lib/postgresql/dolphin.so
./lib/postgresql/pg_upgrade_support.so
./lib/postgresql/java/pljava.jar
./lib/postgresql/postgres_fdw.so

View File

@ -60,6 +60,8 @@
./share/postgresql/extension/hstore.control
./share/postgresql/extension/security_plugin.control
./share/postgresql/extension/security_plugin--1.0.sql
./share/postgresql/extension/dolphin.control
./share/postgresql/extension/dolphin--1.0.sql
./share/postgresql/extension/file_fdw--1.0.sql
./share/postgresql/extension/plpgsql.control
./share/postgresql/extension/dist_fdw.control
@ -748,6 +750,7 @@
./lib/postgresql/pg_plugin
./lib/postgresql/proc_srclib
./lib/postgresql/security_plugin.so
./lib/postgresql/dolphin.so
./lib/postgresql/pg_upgrade_support.so
./lib/postgresql/java/pljava.jar
./lib/postgresql/postgres_fdw.so

View File

@ -100,7 +100,7 @@ elseif($ENV{DEBUG_TYPE} STREQUAL "release")
#close something for release version.
set(ENABLE_LLT OFF)
set(ENABLE_UT OFF)
set(OPTIMIZE_LEVEL -O2 -g3)
set(OPTIMIZE_LEVEL -O2)
elseif($ENV{DEBUG_TYPE} STREQUAL "memcheck")
message("DEBUG_TYPE:$ENV{DEBUG_TYPE}")
set(ENABLE_MEMORY_CHECK ON)
@ -134,9 +134,9 @@ endif()
set(PROTECT_OPTIONS -fwrapv -std=c++14 -fnon-call-exceptions ${OPTIMIZE_LEVEL})
set(WARNING_OPTIONS -Wall -Wendif-labels -Werror -Wformat-security)
set(OPTIMIZE_OPTIONS -pipe -pthread -fno-aggressive-loop-optimizations -fno-expensive-optimizations -fno-omit-frame-pointer -fno-strict-aliasing -freg-struct-return)
set(OPTIMIZE_OPTIONS -pipe -pthread -fno-aggressive-loop-optimizations -fno-strict-aliasing -freg-struct-return)
set(CHECK_OPTIONS -Wmissing-format-attribute -Wno-attributes -Wno-unused-but-set-variable -Wno-write-strings -Wpointer-arith)
set(MACRO_OPTIONS -D_GLIBCXX_USE_CXX11_ABI=0 -DENABLE_GSTRACE -D_GNU_SOURCE -DPGXC -D_POSIX_PTHREAD_SEMANTICS -D_REENTRANT -DSTREAMPLAN -D_THREAD_SAFE ${DB_COMMON_DEFINE})
set(MACRO_OPTIONS -D_GLIBCXX_USE_CXX11_ABI=0 -D_GNU_SOURCE -DPGXC -D_POSIX_PTHREAD_SEMANTICS -DSTREAMPLAN ${DB_COMMON_DEFINE})
# libraries need secure options during compling
set(LIB_SECURE_OPTIONS -fPIC -fno-common -fstack-protector)

View File

@ -909,6 +909,8 @@
* (--enable-multiple-nodes) */
#cmakedefine ENABLE_MULTIPLE_NODES
#cmakedefine ENABLE_TRACE_COLUMN_DEFAULT_VALUE
/* Define to 1 if you want to generate gauss product as privategauss nodes.
* * (--enable-privategauss) */
#cmakedefine ENABLE_PRIVATEGAUSS

View File

@ -121,6 +121,57 @@ fi])# PGAC_C_FUNCNAME_SUPPORT
# PGAC_C_COMPUTED_GOTO
# -----------------------
# Check if the C compiler knows computed gotos (gcc extension, also
# available in at least clang). If so, define HAVE_COMPUTED_GOTO.
#
# Checking whether computed gotos are supported syntax-wise ought to
# be enough, as the syntax is otherwise illegal.
AC_DEFUN([PGAC_C_COMPUTED_GOTO],
[AC_CACHE_CHECK(for computed goto support, pgac_cv_computed_goto,
[AC_COMPILE_IFELSE([AC_LANG_PROGRAM([],
[[void *labeladdrs[] = {&&my_label};
goto *labeladdrs[0];
my_label:
return 1;
]])],
[pgac_cv_computed_goto=yes],
[pgac_cv_computed_goto=no])])
if test x"$pgac_cv_computed_goto" = xyes ; then
AC_DEFINE(HAVE_COMPUTED_GOTO, 1,
[Define to 1 if your compiler handles computed gotos.])
fi])# PGAC_C_COMPUTED_GOTO
# PGAC_CHECK_BUILTIN_FUNC
# -----------------------
# This is similar to AC_CHECK_FUNCS(), except that it will work for compiler
# builtin functions, as that usually fails to.
# The first argument is the function name, eg [__builtin_clzl], and the
# second is its argument list, eg [unsigned long x]. The current coding
# works only for a single argument named x; we might generalize that later.
# It's assumed that the function's result type is coercible to int.
# On success, we define "HAVEfuncname" (there's usually more than enough
# underscores already, so we don't add another one).
AC_DEFUN([PGAC_CHECK_BUILTIN_FUNC],
[AC_CACHE_CHECK(for $1, pgac_cv$1,
[AC_LINK_IFELSE([AC_LANG_PROGRAM([
int
call$1($2)
{
return $1(x);
}], [])],
[pgac_cv$1=yes],
[pgac_cv$1=no])])
if test x"${pgac_cv$1}" = xyes ; then
AC_DEFINE_UNQUOTED(AS_TR_CPP([HAVE$1]), 1,
[Define to 1 if your compiler understands $1.])
fi])# PGAC_CHECK_BUILTIN_FUNC
# PGAC_PROG_CC_CFLAGS_OPT
# -----------------------
# Given a string, check if the compiler supports the string as a
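The PGAC_C_COMPUTED_GOTO check above defines HAVE_COMPUTED_GOTO when the compiler supports GCC's labels-as-values extension; the expression-framework commits in this list mention adding that macro. A minimal, hypothetical sketch of the threaded-dispatch style an interpreter can use when the macro is set (not the actual openGauss expression evaluator):

enum Op { OP_ADD, OP_SUB, OP_DONE };

/* Each step jumps directly to the next opcode's label instead of looping
 * back through a switch, which removes one branch per step. */
static int run(const Op *ops, int acc)
{
#ifdef HAVE_COMPUTED_GOTO
    static const void *dispatch[] = {&&do_add, &&do_sub, &&do_done};
#define NEXT() goto *dispatch[*ops++]
    NEXT();
do_add:
    acc += 1;
    NEXT();
do_sub:
    acc -= 1;
    NEXT();
do_done:
    return acc;
#undef NEXT
#else
    for (;;) {
        switch (*ops++) {
            case OP_ADD: acc += 1; break;
            case OP_SUB: acc -= 1; break;
            case OP_DONE: return acc;
        }
    }
#endif
}
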

configure (vendored): 362 changes
View File

@ -2915,6 +2915,20 @@ _ACEOF
fi
if test "$enable_debug" = yes; then
cat >>confdefs.h <<\_ACEOF
#define ENABLE_TRACE_COLUMN_DEFAULT_VALUE true
_ACEOF
else
cat >>confdefs.h <<\_ACEOF
#define ENABLE_TRACE_COLUMN_DEFAULT_VALUE false
_ACEOF
fi
#
# '-memory-check'-like feature can be enabled
#
@ -15535,6 +15549,42 @@ _ACEOF
fi
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for computed goto support" >&5
$as_echo_n "checking for computed goto support... " >&6; }
if ${pgac_cv_computed_goto+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
void *labeladdrs[] = {&&my_label};
goto *labeladdrs[0];
my_label:
return 1;
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
pgac_cv_computed_goto=yes
else
pgac_cv_computed_goto=no
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_computed_goto" >&5
$as_echo "$pgac_cv_computed_goto" >&6; }
if test x"$pgac_cv_computed_goto" = xyes ; then
$as_echo "#define HAVE_COMPUTED_GOTO 1" >>confdefs.h
fi
{ $as_echo "$as_me:$LINENO: checking whether struct tm is in sys/time.h or time.h" >&5
$as_echo_n "checking whether struct tm is in sys/time.h or time.h... " >&6; }
if test "${ac_cv_struct_tm+set}" = set; then
@ -18202,9 +18252,47 @@ fi
# On PPC, check if assembler supports LWARX instruction's mutex hint bit
case $host_cpu in
x86_64)
# On x86_64, check if we can compile a popcntq instruction
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether assembler supports x86_64 popcntq" >&5
$as_echo_n "checking whether assembler supports x86_64 popcntq... " >&6; }
if ${pgac_cv_have_x86_64_popcntq+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
long long x = 1; long long r;
__asm__ __volatile__ (" popcntq %1,%0\n" : "=q"(r) : "rm"(x));
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
pgac_cv_have_x86_64_popcntq=yes
else
pgac_cv_have_x86_64_popcntq=no
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_have_x86_64_popcntq" >&5
$as_echo "$pgac_cv_have_x86_64_popcntq" >&6; }
if test x"$pgac_cv_have_x86_64_popcntq" = xyes ; then
$as_echo "#define HAVE_X86_64_POPCNTQ 1" >>confdefs.h
fi
;;
ppc*|powerpc*)
{ $as_echo "$as_me:$LINENO: checking whether assembler supports lwarx hint bit" >&5
$as_echo_n "checking whether assembler supports lwarx hint bit... " >&6; }
if ${pgac_cv_have_ppc_mutex_hint+:} false; then :
$as_echo_n "(cached) " >&6
else
cat >conftest.$ac_ext <<_ACEOF
/* confdefs.h. */
_ACEOF
@ -18248,6 +18336,7 @@ sed 's/^/| /' conftest.$ac_ext >&5
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
fi
{ $as_echo "$as_me:$LINENO: result: $pgac_cv_have_ppc_mutex_hint" >&5
$as_echo "$pgac_cv_have_ppc_mutex_hint" >&6; }
if test x"$pgac_cv_have_ppc_mutex_hint" = xyes ; then
@ -18260,6 +18349,46 @@ _ACEOF
;;
esac
# We assume that we needn't test all widths of these explicitly:
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_clz" >&5
$as_echo_n "checking for __builtin_clz... " >&6; }
if ${pgac_cv__builtin_clz+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
call__builtin_clz(unsigned int x)
{
return __builtin_clz(x);
}
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
pgac_cv__builtin_clz=yes
else
pgac_cv__builtin_clz=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__builtin_clz" >&5
$as_echo "$pgac_cv__builtin_clz" >&6; }
if test x"${pgac_cv__builtin_clz}" = xyes ; then
cat >>confdefs.h <<_ACEOF
#define HAVE__BUILTIN_CLZ 1
_ACEOF
fi
# Check largefile support. You might think this is a system service not a
# compiler characteristic, but you'd be wrong. We must check this before
# probing existence of related functions such as fseeko, since the largefile
@ -24827,6 +24956,237 @@ cat >>confdefs.h <<_ACEOF
#define SIZEOF_VOID_P $ac_cv_sizeof_void_p
_ACEOF
# These typically are compiler builtins, for which AC_CHECK_FUNCS fails.
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_bswap16" >&5
$as_echo_n "checking for __builtin_bswap16... " >&6; }
if ${pgac_cv__builtin_bswap16+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
call__builtin_bswap16(int x)
{
return __builtin_bswap16(x);
}
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
pgac_cv__builtin_bswap16=yes
else
pgac_cv__builtin_bswap16=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__builtin_bswap16" >&5
$as_echo "$pgac_cv__builtin_bswap16" >&6; }
if test x"${pgac_cv__builtin_bswap16}" = xyes ; then
cat >>confdefs.h <<_ACEOF
#define HAVE__BUILTIN_BSWAP16 1
_ACEOF
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_bswap32" >&5
$as_echo_n "checking for __builtin_bswap32... " >&6; }
if ${pgac_cv__builtin_bswap32+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
call__builtin_bswap32(int x)
{
return __builtin_bswap32(x);
}
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
pgac_cv__builtin_bswap32=yes
else
pgac_cv__builtin_bswap32=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__builtin_bswap32" >&5
$as_echo "$pgac_cv__builtin_bswap32" >&6; }
if test x"${pgac_cv__builtin_bswap32}" = xyes ; then
cat >>confdefs.h <<_ACEOF
#define HAVE__BUILTIN_BSWAP32 1
_ACEOF
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_bswap64" >&5
$as_echo_n "checking for __builtin_bswap64... " >&6; }
if ${pgac_cv__builtin_bswap64+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
call__builtin_bswap64(long int x)
{
return __builtin_bswap64(x);
}
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
pgac_cv__builtin_bswap64=yes
else
pgac_cv__builtin_bswap64=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__builtin_bswap64" >&5
$as_echo "$pgac_cv__builtin_bswap64" >&6; }
if test x"${pgac_cv__builtin_bswap64}" = xyes ; then
cat >>confdefs.h <<_ACEOF
#define HAVE__BUILTIN_BSWAP64 1
_ACEOF
fi
# We assume that we needn't test all widths of these explicitly:
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_clz" >&5
$as_echo_n "checking for __builtin_clz... " >&6; }
if ${pgac_cv__builtin_clz+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
call__builtin_clz(unsigned int x)
{
return __builtin_clz(x);
}
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
pgac_cv__builtin_clz=yes
else
pgac_cv__builtin_clz=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__builtin_clz" >&5
$as_echo "$pgac_cv__builtin_clz" >&6; }
if test x"${pgac_cv__builtin_clz}" = xyes ; then
cat >>confdefs.h <<_ACEOF
#define HAVE__BUILTIN_CLZ 1
_ACEOF
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_ctz" >&5
$as_echo_n "checking for __builtin_ctz... " >&6; }
if ${pgac_cv__builtin_ctz+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
call__builtin_ctz(unsigned int x)
{
return __builtin_ctz(x);
}
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
pgac_cv__builtin_ctz=yes
else
pgac_cv__builtin_ctz=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__builtin_ctz" >&5
$as_echo "$pgac_cv__builtin_ctz" >&6; }
if test x"${pgac_cv__builtin_ctz}" = xyes ; then
cat >>confdefs.h <<_ACEOF
#define HAVE__BUILTIN_CTZ 1
_ACEOF
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_popcount" >&5
$as_echo_n "checking for __builtin_popcount... " >&6; }
if ${pgac_cv__builtin_popcount+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
call__builtin_popcount(unsigned int x)
{
return __builtin_popcount(x);
}
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
pgac_cv__builtin_popcount=yes
else
pgac_cv__builtin_popcount=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__builtin_popcount" >&5
$as_echo "$pgac_cv__builtin_popcount" >&6; }
if test x"${pgac_cv__builtin_popcount}" = xyes ; then
cat >>confdefs.h <<_ACEOF
#define HAVE__BUILTIN_POPCOUNT 1
_ACEOF
fi
# The cast to long int works around a bug in the HP C Compiler
# version HP92453-01 B.11.11.23709.GP, which incorrectly rejects
@ -29836,7 +30196,7 @@ platform_version=`sh ./src/get_PlatForm_str.sh`
is_euler=`echo $platform_version | grep euleros`
# New gcc version should be compatible to old one
if test "$CC_VERSION" = "$NEW_GCC"; then
CFLAGS=" -std=c++11 -D_GLIBCXX_USE_CXX11_ABI=0 $CFLAGS -fno-aggressive-loop-optimizations -Wno-attributes -fno-omit-frame-pointer -fno-expensive-optimizations"
CFLAGS=" -D_GLIBCXX_USE_CXX11_ABI=0 $CFLAGS -Wno-attributes "
CFLAGS="$CFLAGS -Wno-unused-but-set-variable"
if test "$is_euler" != ""; then
if test "${enable_cassert}" = no -o "${enable_memory_check}" = no -a "${enable_ut}" = no; then

View File

@ -1173,6 +1173,19 @@ fi
# On PPC, check if assembler supports LWARX instruction's mutex hint bit
case $host_cpu in
x86_64)
# On x86_64, check if we can compile a popcntq instruction
AC_CACHE_CHECK([whether assembler supports x86_64 popcntq],
[pgac_cv_have_x86_64_popcntq],
[AC_COMPILE_IFELSE([AC_LANG_PROGRAM([],
[long long x = 1; long long r;
__asm__ __volatile__ (" popcntq %1,%0\n" : "=q"(r) : "rm"(x));])],
[pgac_cv_have_x86_64_popcntq=yes],
[pgac_cv_have_x86_64_popcntq=no])])
if test x"$pgac_cv_have_x86_64_popcntq" = xyes ; then
AC_DEFINE(HAVE_X86_64_POPCNTQ, 1, [Define to 1 if the assembler supports X86_64's POPCNTQ instruction.])
fi
;;
ppc*|powerpc*)
AC_MSG_CHECKING([whether assembler supports lwarx hint bit])
AC_TRY_COMPILE([],
@ -1221,6 +1234,9 @@ LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
AC_CHECK_FUNCS([cbrt dlopen fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll pstat readlink setproctitle setsid sigprocmask symlink sync_file_range towlower utime utimes waitpid wcstombs wcstombs_l])
# We assume that we needn't test all widths of these explicitly:
PGAC_CHECK_BUILTIN_FUNC([__builtin_clz], [unsigned int x])
AC_REPLACE_FUNCS(fseeko)
case $host_os in
# NetBSD uses a custom fseeko/ftello built on fsetpos/fgetpos

View File

@ -1917,13 +1917,13 @@ static char* get_sql_insert(Relation rel, int* pkattnums, int pknumatts, char**
needComma = false;
for (i = 0; i < natts; i++) {
if (tupdesc->attrs[i]->attisdropped)
if (tupdesc->attrs[i].attisdropped)
continue;
if (needComma)
appendStringInfo(&buf, ",");
appendStringInfoString(&buf, quote_ident_cstr(NameStr(tupdesc->attrs[i]->attname)));
appendStringInfoString(&buf, quote_ident_cstr(NameStr(tupdesc->attrs[i].attname)));
needComma = true;
}
@ -1934,7 +1934,7 @@ static char* get_sql_insert(Relation rel, int* pkattnums, int pknumatts, char**
*/
needComma = false;
for (i = 0; i < natts; i++) {
if (tupdesc->attrs[i]->attisdropped)
if (tupdesc->attrs[i].attisdropped)
continue;
if (needComma)
@ -1980,7 +1980,7 @@ static char* get_sql_delete(Relation rel, int* pkattnums, int pknumatts, char**
if (i > 0)
appendStringInfo(&buf, " AND ");
appendStringInfoString(&buf, quote_ident_cstr(NameStr(tupdesc->attrs[pkattnum]->attname)));
appendStringInfoString(&buf, quote_ident_cstr(NameStr(tupdesc->attrs[pkattnum].attname)));
if (tgt_pkattvals[i] != NULL)
appendStringInfo(&buf, " = %s", quote_literal_cstr(tgt_pkattvals[i]));
@ -2022,13 +2022,13 @@ static char* get_sql_update(Relation rel, int* pkattnums, int pknumatts, char**
*/
needComma = false;
for (i = 0; i < natts; i++) {
if (tupdesc->attrs[i]->attisdropped)
if (tupdesc->attrs[i].attisdropped)
continue;
if (needComma)
appendStringInfo(&buf, ", ");
appendStringInfo(&buf, "%s = ", quote_ident_cstr(NameStr(tupdesc->attrs[i]->attname)));
appendStringInfo(&buf, "%s = ", quote_ident_cstr(NameStr(tupdesc->attrs[i].attname)));
key = get_attnum_pk_pos(pkattnums, pknumatts, i);
@ -2053,7 +2053,7 @@ static char* get_sql_update(Relation rel, int* pkattnums, int pknumatts, char**
if (i > 0)
appendStringInfo(&buf, " AND ");
appendStringInfo(&buf, "%s", quote_ident_cstr(NameStr(tupdesc->attrs[pkattnum]->attname)));
appendStringInfo(&buf, "%s", quote_ident_cstr(NameStr(tupdesc->attrs[pkattnum].attname)));
val = tgt_pkattvals[i];
@ -2135,10 +2135,10 @@ static HeapTuple get_tuple_of_interest(Relation rel, int* pkattnums, int pknumat
if (i > 0)
appendStringInfoString(&buf, ", ");
if (tupdesc->attrs[i]->attisdropped)
if (tupdesc->attrs[i].attisdropped)
appendStringInfoString(&buf, "NULL");
else
appendStringInfoString(&buf, quote_ident_cstr(NameStr(tupdesc->attrs[i]->attname)));
appendStringInfoString(&buf, quote_ident_cstr(NameStr(tupdesc->attrs[i].attname)));
}
appendStringInfo(&buf, " FROM %s WHERE ", relname);
@ -2149,7 +2149,7 @@ static HeapTuple get_tuple_of_interest(Relation rel, int* pkattnums, int pknumat
if (i > 0)
appendStringInfo(&buf, " AND ");
appendStringInfoString(&buf, quote_ident_cstr(NameStr(tupdesc->attrs[pkattnum]->attname)));
appendStringInfoString(&buf, quote_ident_cstr(NameStr(tupdesc->attrs[pkattnum].attname)));
if (src_pkattvals[i] != NULL)
appendStringInfo(&buf, " = %s", quote_literal_cstr(src_pkattvals[i]));
@ -2523,7 +2523,7 @@ static void validate_pkattnums(
lnum = 0;
for (j = 0; j < natts; j++) {
/* dropped columns don't count */
if (tupdesc->attrs[j]->attisdropped)
if (tupdesc->attrs[j].attisdropped)
continue;
if (++lnum == pkattnum)
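
The pattern running through these contrib diffs (tupdesc->attrs[i]->attname becoming tupdesc->attrs[i].attname, and pointers taken explicitly as Form_pg_attribute attr = &tupdesc->attrs[i]) follows from the "flatten struct tupleDesc" commit listed above: the descriptor stores its attribute structs inline rather than as an array of pointers. A schematic before/after sketch with heavily simplified stand-in types (the real FormData_pg_attribute has many more fields):

/* Simplified stand-in for the catalog attribute struct. */
struct FormData_pg_attribute {
    char attname[64];
    unsigned int atttypid;
    bool attisdropped;
};
typedef FormData_pg_attribute *Form_pg_attribute;

/* Before flattening: one pointer per attribute, each a separate allocation,
 * so every access pays an extra indirection (attrs[i]->attname). */
struct TupleDescOld {
    int natts;
    FormData_pg_attribute **attrs;
};

/* After flattening: attribute structs live inline in one array, so
 * attrs[i].attname is a direct, cache-friendly access, and code that still
 * wants a pointer takes it explicitly: Form_pg_attribute a = &desc->attrs[i]. */
struct TupleDescNew {
    int natts;
    FormData_pg_attribute *attrs;
};
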

View File

@ -375,7 +375,7 @@ static List* get_file_fdw_attribute_options(Oid relid)
/* Retrieve FDW options for all user-defined attributes. */
for (attnum = 1; attnum <= natts; attnum++) {
Form_pg_attribute attr = tupleDesc->attrs[attnum - 1];
Form_pg_attribute attr = &tupleDesc->attrs[attnum - 1];
List* options = NIL;
ListCell* lc = NULL;
@ -708,7 +708,7 @@ static void estimate_size(PlannerInfo* root, RelOptInfo* baserel, FileFdwPlanSta
*/
int tuple_width;
tuple_width = MAXALIGN(baserel->width) + MAXALIGN(sizeof(HeapTupleHeaderData));
tuple_width = MAXALIGN(baserel->reltarget->width) + MAXALIGN(sizeof(HeapTupleHeaderData));
ntuples = clamp_row_est((double)stat_buf.st_size / (double)tuple_width);
baserel->tuples = ntuples;

View File

@ -929,7 +929,7 @@ static void gcdeparseTargetList(StringInfo buf, PlannerInfo* root, Index rtindex
have_wholerow = bms_is_member(0 - FirstLowInvalidHeapAttributeNumber, attrs_used);
for (i = 1; i <= tupdesc->natts; i++) {
Form_pg_attribute attr = tupdesc->attrs[i - 1];
Form_pg_attribute attr = &tupdesc->attrs[i - 1];
/* Ignore dropped attributes. */
if (attr->attisdropped)
@ -1122,7 +1122,7 @@ static void gcDeparseSubqueryTargetList(deparse_expr_cxt* context)
/* Should only be called in these cases. */
Assert(IS_SIMPLE_REL(foreignrel) || IS_JOIN_REL(foreignrel));
foreach (lc, foreignrel->reltargetlist) {
foreach (lc, foreignrel->reltarget->exprs) {
Node* node = (Node*)lfirst(lc);
if (!first)
@ -2213,7 +2213,7 @@ static void get_relation_column_alias_ids(Var* node, RelOptInfo* foreignrel, int
/* Get the column alias ID */
i = 1;
foreach (lc, foreignrel->reltargetlist) {
foreach (lc, foreignrel->reltarget->exprs) {
if (equal(lfirst(lc), (Node*)node)) {
*colno = i;
return;

View File

@ -745,8 +745,8 @@ static void gcBeginForeignScan(ForeignScanState* node, int eflags)
fsstate->resultSlot->tts_isnull[i] = true;
}
fsstate->resultSlot->tts_isempty = false;
fsstate->scanSlot->tts_isempty = false;
fsstate->resultSlot->tts_flags &= ~TTS_FLAG_EMPTY;
fsstate->scanSlot->tts_flags &= ~TTS_FLAG_EMPTY;
fsstate->attinmeta = TupleDescGetAttInMetadata(fsstate->tupdesc);
@ -862,7 +862,7 @@ static void postgresConstructResultSlotWithArray(ForeignScanState* node)
for (scanAttr = 0, resultAttr = 0; resultAttr < resultDesc->natts; resultAttr++, scanAttr += map) {
Assert(list_length(colmap) == resultDesc->natts);
Oid typoid = resultDesc->attrs[resultAttr]->atttypid;
Oid typoid = resultDesc->attrs[resultAttr].atttypid;
Value* val = (Value*)list_nth(colmap, resultAttr);
map = val->val.ival;
@ -917,7 +917,7 @@ static void postgresConstructResultSlotWithArray(ForeignScanState* node)
}
resultSlot->tts_nvalid = resultDesc->natts;
resultSlot->tts_isempty = false;
resultSlot->tts_flags &= ~TTS_FLAG_EMPTY;
}
static void postgresMapResultFromScanSlot(ForeignScanState* node)
@ -956,7 +956,7 @@ static TupleTableSlot* gcIterateNormalForeignScan(ForeignScanState* node)
/* reset tupleslot on the begin */
(void)ExecClearTuple(fsstate->resultSlot);
fsstate->resultSlot->tts_isempty = false;
fsstate->resultSlot->tts_flags &= ~TTS_FLAG_EMPTY;
TupleTableSlot* slot = node->ss.ss_ScanTupleSlot;
@ -1811,17 +1811,17 @@ static void gcfdw_fetch_remote_table_info(
pq_sendint(&retbuf, tupdesc->natts, 4);
for (int i = 0; i < tupdesc->natts; i++) {
att_name_len = strlen(tupdesc->attrs[i]->attname.data);
att_name_len = strlen(tupdesc->attrs[i].attname.data);
pq_sendint(&retbuf, att_name_len, 4);
pq_sendbytes(&retbuf, tupdesc->attrs[i]->attname.data, att_name_len);
pq_sendbytes(&retbuf, tupdesc->attrs[i].attname.data, att_name_len);
Assert(InvalidOid != tupdesc->attrs[i]->atttypid);
Assert(InvalidOid != tupdesc->attrs[i].atttypid);
type_name = get_typename(tupdesc->attrs[i]->atttypid);
type_name = get_typename(tupdesc->attrs[i].atttypid);
type_name_len = strlen(type_name);
pq_sendint(&retbuf, type_name_len, 4);
pq_sendbytes(&retbuf, type_name, type_name_len);
pq_sendint(&retbuf, tupdesc->attrs[i]->atttypmod, 4);
pq_sendint(&retbuf, tupdesc->attrs[i].atttypmod, 4);
pfree(type_name);
}
@ -2044,7 +2044,7 @@ static void prepare_query_params(PlanState* node, List* fdw_exprs, int numParams
* benefit, and it'd require gc_fdw to know more than is desirable
* about Param evaluation.)
*/
*param_exprs = (List *)ExecInitExpr((Expr *)fdw_exprs, node);
*param_exprs = ExecInitExprList(fdw_exprs, node);
/* Allocate buffer for text form of query parameters. */
*param_values = (const char **)palloc0(numParams * sizeof(char *));
@ -2066,7 +2066,7 @@ static void process_query_params(
bool isNull = false;
/* Evaluate the parameter expression */
expr_value = ExecEvalExpr(expr_state, econtext, &isNull, NULL);
expr_value = ExecEvalExpr(expr_state, econtext, &isNull);
/*
* Get string representation of each parameter value by invoking
@ -2366,7 +2366,7 @@ static void conversion_error_callback(void *arg)
TupleDesc tupdesc = RelationGetDescr(errpos->rel);
if (errpos->cur_attno > 0 && errpos->cur_attno <= tupdesc->natts) {
attname = NameStr(tupdesc->attrs[errpos->cur_attno - 1]->attname);
attname = NameStr(tupdesc->attrs[errpos->cur_attno - 1].attname);
} else if (errpos->cur_attno == SelfItemPointerAttributeNumber) {
attname = "ctid";
} else if (errpos->cur_attno == ObjectIdAttributeNumber) {
@ -2586,7 +2586,7 @@ static void GcFdwCopyRemoteInfo(PgFdwRemoteInfo* new_remote_info, PgFdwRemoteInf
bool hasSpecialArrayType(TupleDesc desc)
{
for (int i = 0; i < desc->natts; i++) {
Oid typoid = desc->attrs[i]->atttypid;
Oid typoid = desc->attrs[i].atttypid;
if (INT8ARRAYOID == typoid || FLOAT8ARRAYOID == typoid || FLOAT4ARRAYOID == typoid || NUMERICARRAY == typoid) {
return true;
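
The tts_isempty / TTS_FLAG_EMPTY changes in this file come from the commit that moves the slot's boolean members into a single flag variable. A hedged sketch of that pattern; the bit values and helper names here are assumptions, not the exact openGauss definitions:

#include <cstdint>

#define TTS_FLAG_EMPTY (1 << 1)      /* slot currently holds no tuple */
#define TTS_FLAG_SHOULDFREE (1 << 2) /* slot owns its tuple's memory  */

struct TupleTableSlotSketch {
    uint16_t tts_flags; /* replaces several separate boolean members */
};

#define TTS_EMPTY(slot) (((slot)->tts_flags & TTS_FLAG_EMPTY) != 0)

/* Clearing the bit is the replacement for the old `slot->tts_isempty = false;`
 * assignments seen in the diff above. */
static inline void mark_slot_not_empty(TupleTableSlotSketch *slot)
{
    slot->tts_flags &= ~TTS_FLAG_EMPTY;
}
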

View File

@ -769,7 +769,7 @@ static void estimate_size(PlannerInfo* root, RelOptInfo* baserel, HdfsFdwPlanSta
statBuffer.st_size = 10 * BLCKSZ;
}
tupleWidth = MAXALIGN((unsigned int)baserel->width) + MAXALIGN(sizeof(HeapTupleHeaderData));
tupleWidth = MAXALIGN((unsigned int)baserel->reltarget->width) + MAXALIGN(sizeof(HeapTupleHeaderData));
fdw_private->tuplesCount = clamp_row_est((double)statBuffer.st_size / (double)tupleWidth);
baserel->tuples = fdw_private->tuplesCount;
}
@ -1327,7 +1327,7 @@ static TupleTableSlot* HdfsIterateForeignScan(ForeignScanState* scanState)
* @hdfs
* Optimize foreign scan by using informational constraint.
*/
if (((ForeignScan*)scanState->ss.ps.plan)->scan.predicate_pushdown_optimized && false == tupleSlot->tts_isempty) {
if (((ForeignScan*)scanState->ss.ps.plan)->scan.predicate_pushdown_optimized && !TTS_EMPTY(tupleSlot)) {
/*
* If we find a suitable tuple, set is_scan_end value is true.
* It means that we do not find suitable tuple in the next iteration,
@ -1656,7 +1656,7 @@ int HdfsAcquireSampleRows(Relation relation, int logLevel, HeapTuple* sampleRows
SplitMap* fileMap = (SplitMap*)additionalData;
fileList = fileMap->splits;
columnList = CreateColList((Form_pg_attribute*)tupleDescriptor->attrs, columnCount);
columnList = CreateColList(tupleDescriptor->attrs, columnCount);
unsigned int totalFileNumbers = list_length(fileList);
/* description: change file -> stride: Jason's advice */
@ -1681,7 +1681,7 @@ int HdfsAcquireSampleRows(Relation relation, int logLevel, HeapTuple* sampleRows
* columnList changed in HdfsBeginForeignScan
*/
list_free(columnList);
columnList = CreateColList((Form_pg_attribute*)tupleDescriptor->attrs, columnCount);
columnList = CreateColList(tupleDescriptor->attrs, columnCount);
/* Put file information into SplitInfo struct */
SplitInfo* splitinfo = (SplitInfo*)list_nth(fileList, targFile);
@ -1767,7 +1767,7 @@ int HdfsAcquireSampleRows(Relation relation, int logLevel, HeapTuple* sampleRows
(void)MemoryContextSwitchTo(oldContext);
/* if there are no more records to read, break */
if (scanTupleSlot->tts_isempty) {
if (TTS_EMPTY(scanTupleSlot)) {
break;
}

View File

@ -705,7 +705,7 @@ List* CNSchedulingForAnalyze(unsigned int* totalFilesNum, unsigned int* numOfDns
srvType = getServerType(foreignTableId);
tupleDescriptor = RelationGetDescr(relation);
columnList = CreateColList((Form_pg_attribute*)tupleDescriptor->attrs, tupleDescriptor->natts);
columnList = CreateColList(tupleDescriptor->attrs, tupleDescriptor->natts);
RelationClose(relation);
@ -2261,7 +2261,7 @@ static bool PartitionFilterClause(SplitInfo* split, List* scanClauses, Var* valu
static void CollectPartPruneInfo(List*& prunningResult, int sum, int notprunning, int colno, Oid relOid)
{
Relation rel = heap_open(relOid, AccessShareLock);
char* attName = NameStr(rel->rd_att->attrs[colno - 1]->attname);
char* attName = NameStr(rel->rd_att->attrs[colno - 1].attname);
/*
* Add 16 here because we need to add some description words, three separate characters

View File

@ -700,15 +700,15 @@ Datum hstore_from_record(PG_FUNCTION_ARGS)
for (i = 0, j = 0; i < ncolumns; ++i) {
ColumnIOData* column_info = &my_extra->columns[i];
Oid column_type = tupdesc->attrs[i]->atttypid;
Oid column_type = tupdesc->attrs[i].atttypid;
char* value = NULL;
/* Ignore dropped columns in datatype */
if (tupdesc->attrs[i]->attisdropped)
if (tupdesc->attrs[i].attisdropped)
continue;
pairs[j].key = NameStr(tupdesc->attrs[i]->attname);
pairs[j].keylen = hstoreCheckKeyLen(strlen(NameStr(tupdesc->attrs[i]->attname)));
pairs[j].key = NameStr(tupdesc->attrs[i].attname);
pairs[j].keylen = hstoreCheckKeyLen(strlen(NameStr(tupdesc->attrs[i].attname)));
if (nulls == NULL || nulls[i]) {
pairs[j].val = NULL;
@ -862,18 +862,18 @@ Datum hstore_populate_record(PG_FUNCTION_ARGS)
for (i = 0; i < ncolumns; ++i) {
ColumnIOData* column_info = &my_extra->columns[i];
Oid column_type = tupdesc->attrs[i]->atttypid;
Oid column_type = tupdesc->attrs[i].atttypid;
char* value = NULL;
int idx;
int vallen;
/* Ignore dropped columns in datatype */
if (tupdesc->attrs[i]->attisdropped) {
if (tupdesc->attrs[i].attisdropped) {
nulls[i] = true;
continue;
}
idx = hstoreFindKey(hs, 0, NameStr(tupdesc->attrs[i]->attname), strlen(NameStr(tupdesc->attrs[i]->attname)));
idx = hstoreFindKey(hs, 0, NameStr(tupdesc->attrs[i].attname), strlen(NameStr(tupdesc->attrs[i].attname)));
/*
* we can't just skip here if the key wasn't found since we might have
@ -901,7 +901,7 @@ Datum hstore_populate_record(PG_FUNCTION_ARGS)
* checks are done
*/
values[i] =
InputFunctionCall(&column_info->proc, NULL, column_info->typioparam, tupdesc->attrs[i]->atttypmod);
InputFunctionCall(&column_info->proc, NULL, column_info->typioparam, tupdesc->attrs[i].atttypmod);
nulls[i] = true;
} else {
vallen = HS_VALLEN(entries, idx);
@ -911,7 +911,7 @@ Datum hstore_populate_record(PG_FUNCTION_ARGS)
value[vallen] = 0;
values[i] =
InputFunctionCall(&column_info->proc, value, column_info->typioparam, tupdesc->attrs[i]->atttypmod);
InputFunctionCall(&column_info->proc, value, column_info->typioparam, tupdesc->attrs[i].atttypmod);
nulls[i] = false;
}
}

View File

@ -6,6 +6,7 @@
#include "access/gist.h"
#include "access/skey.h"
#include "port/pg_bitutils.h"
#include "_int.h"
@ -29,264 +30,6 @@ extern "C" Datum g_intbig_picksplit(PG_FUNCTION_ARGS);
extern "C" Datum g_intbig_union(PG_FUNCTION_ARGS);
extern "C" Datum g_intbig_same(PG_FUNCTION_ARGS);
/* Number of one-bits in an unsigned byte */
static const uint8 number_of_ones[256] = {
    0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4,
    1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
    1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
    1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
    3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
    4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8};
PG_FUNCTION_INFO_V1(_intbig_in);
extern "C" Datum _intbig_in(PG_FUNCTION_ARGS);
@ -428,11 +171,7 @@ Datum g_intbig_compress(PG_FUNCTION_ARGS)
static int4 sizebitvec(BITVECP sign)
{
int4 size = 0, i;
LOOPBYTE
size += number_of_ones[(unsigned char)sign[i]];
return size;
return pg_popcount(sign, SIGLEN);
}
static int hemdistsign(BITVECP a, BITVECP b)
@ -442,7 +181,8 @@ static int hemdistsign(BITVECP a, BITVECP b)
LOOPBYTE
{
diff = (unsigned char)(a[i] ^ b[i]);
dist += number_of_ones[diff];
/* Using the popcount functions here isn't likely to win */
dist += pg_number_of_ones[diff];
}
return dist;
}
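The per-module 256-entry byte lookup tables are dropped in favour of the shared bit utilities from port/pg_bitutils.h: pg_popcount() counts set bits over the whole signature, while hemdistsign() keeps a byte-at-a-time loop over the exported pg_number_of_ones[] table, since per-byte popcount calls aren't likely to win. A minimal sketch under the same assumptions (SIGLEN and BITVECP here are hypothetical stand-ins for the module's own definitions):

    #include "postgres.h"
    #include "port/pg_bitutils.h"

    /* Hypothetical stand-ins for the module's own signature definitions. */
    #define SIGLEN 16                      /* signature length in bytes */
    typedef char *BITVECP;

    /* Whole-vector population count now delegates to pg_popcount(). */
    static int
    sizebitvec(BITVECP sign)
    {
        return (int) pg_popcount((const char *) sign, SIGLEN);
    }

    /* Per-byte Hamming distance keeps the single-byte lookup table. */
    static int
    hemdistsign(BITVECP a, BITVECP b)
    {
        int dist = 0;

        for (int i = 0; i < SIGLEN; i++)
            dist += pg_number_of_ones[(unsigned char) (a[i] ^ b[i])];
        return dist;
    }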

View File

@ -1530,7 +1530,7 @@ static void fill_pglog_planstate_from_logft_rel(pglogPlanState* pg_log, Relation
Assert(PGLOG_ATTR_MAX == tupdesc->natts);
for (int i = 0; i < PGLOG_ATTR_MAX; ++i) {
Form_pg_attribute att = tupdesc->attrs[i];
Form_pg_attribute att = &tupdesc->attrs[i];
pg_log->allattr_typmod[i] = att->atttypmod;
getTypeInputInfo(att->atttypid, &in_func_oid, &pg_log->allattr_typioparam[i]);
fmgr_info(in_func_oid, &pg_log->allattr_fmgrinfo[i]);

View File

@ -10,6 +10,8 @@
#include "access/gist.h"
#include "access/skey.h"
#include "port/pg_bitutils.h"
#include "crc32.h"
#include "ltree.h"
@ -34,264 +36,6 @@ extern "C" Datum _ltree_consistent(PG_FUNCTION_ARGS);
#define GETENTRY(vec, pos) ((ltree_gist*)DatumGetPointer((vec)->vector[(pos)].key))
#define NEXTVAL(x) ((ltree*)((char*)(x) + INTALIGN(VARSIZE(x))))
/* Number of one-bits in an unsigned byte */
static const uint8 number_of_ones[256] = {
    0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4,
    1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
    1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
    1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
    3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
    4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8};
#define WISH_F(a, b, c) (double)(-(double)(((a) - (b)) * ((a) - (b)) * ((a) - (b))) * (c))
static void hashing(BITVECP sign, ltree* t)
@ -431,11 +175,7 @@ Datum _ltree_union(PG_FUNCTION_ARGS)
static int4 sizebitvec(BITVECP sign)
{
int4 size = 0, i;
ALOOPBYTE
size += number_of_ones[(unsigned char)sign[i]];
return size;
return pg_popcount((const char *) sign, ASIGLEN);
}
static int hemdistsign(BITVECP a, BITVECP b)
@ -445,7 +185,8 @@ static int hemdistsign(BITVECP a, BITVECP b)
ALOOPBYTE
{
diff = (unsigned char)(a[i] ^ b[i]);
dist += number_of_ones[diff];
/* Using the popcount functions here isn't likely to win */
dist += pg_number_of_ones[diff];
}
return dist;
}

View File

@ -234,7 +234,7 @@ static void TupleToJsoninfo(
Datum origval = 0; /* possibly toasted Datum */
bool isnull = false; /* column is null? */
Form_pg_attribute attr = tupdesc->attrs[natt]; /* the attribute itself */
Form_pg_attribute attr = &tupdesc->attrs[natt]; /* the attribute itself */
/*
* don't print dropped columns, we can't be sure everything is

View File

@ -435,7 +435,7 @@ static void parse_compress_meta(StringInfo outputBuf, char* page_content, Relati
int size = PageGetSpecialSize(page_header);
TupleDesc desc = RelationGetDescr(rel);
Form_pg_attribute* att = desc->attrs;
FormData_pg_attribute* att = desc->attrs;
int attrno;
int attrnum = desc->natts;
@ -444,7 +444,7 @@ static void parse_compress_meta(StringInfo outputBuf, char* page_content, Relati
char mode = 0;
for (attrno = 0; attrno < attrnum && cmprsOff < size; ++attrno) {
Form_pg_attribute thisatt = att[attrno];
Form_pg_attribute thisatt = &att[attrno];
int metaSize = 0;
metaInfo = PageCompress::FetchAttrCmprMeta(start + cmprsOff, thisatt->attlen, &metaSize, &mode);

View File

@ -223,7 +223,7 @@ static void pgss_ExecutorStart(QueryDesc* queryDesc, int eflags);
static void pgss_ExecutorRun(QueryDesc* queryDesc, ScanDirection direction, long count);
static void pgss_ExecutorFinish(QueryDesc* queryDesc);
static void pgss_ExecutorEnd(QueryDesc* queryDesc);
static void pgss_ProcessUtility(Node* parsetree, const char* queryString, ParamListInfo params, bool isTopLevel,
static void pgss_ProcessUtility(Node* parsetree, const char* queryString, bool readOnlyTree, ParamListInfo params, bool isTopLevel,
DestReceiver* dest,
#ifdef PGXC
bool sentToRemote,
@ -722,7 +722,7 @@ static void pgss_ExecutorEnd(QueryDesc* queryDesc)
/*
* ProcessUtility hook
*/
static void pgss_ProcessUtility(Node* parsetree, const char* queryString, ParamListInfo params, bool isTopLevel,
static void pgss_ProcessUtility(Node* parsetree, const char* queryString, bool readOnlyTree, ParamListInfo params, bool isTopLevel,
DestReceiver* dest,
#ifdef PGXC
bool sentToRemote,
@ -756,6 +756,7 @@ static void pgss_ProcessUtility(Node* parsetree, const char* queryString, ParamL
if (prev_ProcessUtility)
prev_ProcessUtility(parsetree,
queryString,
readOnlyTree,
params,
isTopLevel,
dest,
@ -766,6 +767,7 @@ static void pgss_ProcessUtility(Node* parsetree, const char* queryString, ParamL
else
standard_ProcessUtility(parsetree,
queryString,
readOnlyTree,
params,
isTopLevel,
dest,
@ -819,6 +821,7 @@ static void pgss_ProcessUtility(Node* parsetree, const char* queryString, ParamL
if (prev_ProcessUtility)
prev_ProcessUtility(parsetree,
queryString,
readOnlyTree,
params,
isTopLevel,
dest,
@ -829,6 +832,7 @@ static void pgss_ProcessUtility(Node* parsetree, const char* queryString, ParamL
else
standard_ProcessUtility(parsetree,
queryString,
readOnlyTree,
params,
isTopLevel,
dest,
@ -1858,7 +1862,7 @@ static void fill_in_constant_lengths(pgssJumbleState* jstate, const char* query)
locs = jstate->clocations;
/* initialize the flex scanner --- should match raw_parser() */
yyscanner = scanner_init(query, &yyextra, ScanKeywords, NumScanKeywords);
yyscanner = scanner_init(query, &yyextra, &ScanKeywords, ScanKeywordTokens);
/* Search for each constant, in sequence */
for (i = 0; i < jstate->clocations_count; i++) {

View File

@ -3,6 +3,7 @@
*/
#include "postgres.h"
#include "knl/knl_variable.h"
#include "port/pg_bitutils.h"
#include "trgm.h"
@ -40,264 +41,6 @@ extern "C" Datum gtrgm_picksplit(PG_FUNCTION_ARGS);
#define GETENTRY(vec, pos) ((TRGM*)DatumGetPointer((vec)->vector[(pos)].key))
/* Number of one-bits in an unsigned byte */
static const uint8 number_of_ones[256] = {
    0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4,
    1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
    1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
    1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
    2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
    3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
    3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
    4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8};
Datum gtrgm_in(PG_FUNCTION_ARGS)
{
elog(ERROR, "not implemented");
@ -677,11 +420,7 @@ Datum gtrgm_same(PG_FUNCTION_ARGS)
static int4 sizebitvec(BITVECP sign)
{
int4 size = 0, i;
LOOPBYTE
size += number_of_ones[(unsigned char)sign[i]];
return size;
return pg_popcount(sign, SIGLEN);
}
static int hemdistsign(BITVECP a, BITVECP b)
@ -691,7 +430,8 @@ static int hemdistsign(BITVECP a, BITVECP b)
LOOPBYTE
{
diff = (unsigned char)(a[i] ^ b[i]);
dist += number_of_ones[diff];
/* Using the popcount functions here isn't likely to win */
dist += pg_number_of_ones[diff];
}
return dist;
}

View File

@ -790,7 +790,7 @@ static void deparseTargetList(StringInfo buf, RangeTblEntry *rte, Index rtindex,
bool first = true;
for (i = 1; i <= tupdesc->natts; i++) {
Form_pg_attribute attr = tupdesc->attrs[i - 1];
Form_pg_attribute attr = &tupdesc->attrs[i - 1];
/* Ignore dropped attributes. */
if (attr->attisdropped) {
@ -1065,7 +1065,7 @@ void deparseAnalyzeSql(StringInfo buf, Relation rel, List **retrieved_attrs)
appendStringInfoString(buf, "SELECT ");
for (i = 0; i < tupdesc->natts; i++) {
/* Ignore dropped columns. */
if (tupdesc->attrs[i]->attisdropped) {
if (tupdesc->attrs[i].attisdropped) {
continue;
}
@ -1075,7 +1075,7 @@ void deparseAnalyzeSql(StringInfo buf, Relation rel, List **retrieved_attrs)
first = false;
/* Use attribute name or column_name option. */
char *colname = NameStr(tupdesc->attrs[i]->attname);
char *colname = NameStr(tupdesc->attrs[i].attname);
List *options = GetForeignColumnOptions(relid, i + 1);
foreach (lc, options) {

View File

@ -390,7 +390,7 @@ static void postgresGetForeignRelSize(PlannerInfo *root, RelOptInfo *baserel, Oi
* columns used in them. Doesn't seem worth detecting that case though.)
*/
fpinfo->attrs_used = NULL;
pull_varattnos((Node *)baserel->reltargetlist, baserel->relid, &fpinfo->attrs_used);
pull_varattnos((Node *)baserel->reltarget->exprs, baserel->relid, &fpinfo->attrs_used);
foreach (lc, fpinfo->local_conds) {
RestrictInfo *rinfo = (RestrictInfo *)lfirst(lc);
@ -424,7 +424,7 @@ static void postgresGetForeignRelSize(PlannerInfo *root, RelOptInfo *baserel, Oi
/* Report estimated baserel size to planner. */
baserel->rows = fpinfo->rows;
baserel->width = fpinfo->width;
baserel->reltarget->width = fpinfo->width;
} else {
/*
* If the foreign table has never been ANALYZEd, it will have relpages
@ -437,7 +437,7 @@ static void postgresGetForeignRelSize(PlannerInfo *root, RelOptInfo *baserel, Oi
*/
if (baserel->pages == 0 && baserel->tuples == 0) {
baserel->pages = 10;
baserel->tuples = (double)(10 * BLCKSZ) / (baserel->width + sizeof(HeapTupleHeaderData));
baserel->tuples = (double)(10 * BLCKSZ) / (baserel->reltarget->width + sizeof(HeapTupleHeaderData));
}
/* Estimate baserel size as best we can with local statistics. */
@ -773,7 +773,7 @@ static void postgresBeginForeignScan(ForeignScanState *node, int eflags)
* benefit, and it'd require postgres_fdw to know more than is desirable
* about Param evaluation.)
*/
fsstate->param_exprs = (List *)ExecInitExpr((Expr *)fsplan->fdw_exprs, (PlanState *)node);
fsstate->param_exprs = ExecInitExprList(fsplan->fdw_exprs, (PlanState *)node);
/*
* Allocate buffer for text form of query parameters, if any.
@ -967,7 +967,7 @@ static List *postgresPlanForeignModify(PlannerInfo *root, ModifyTable *plan, Ind
int attnum;
for (attnum = 1; attnum <= tupdesc->natts; attnum++) {
Form_pg_attribute attr = tupdesc->attrs[attnum - 1];
Form_pg_attribute attr = &tupdesc->attrs[attnum - 1];
if (!attr->attisdropped) {
targetAttrs = lappend_int(targetAttrs, attnum);
@ -1124,7 +1124,7 @@ static PgFdwModifyState *createForeignModify(EState *estate, RangeTblEntry *rte,
/* Set up for remaining transmittable parameters */
foreach (lc, fmstate->target_attrs) {
int attnum = lfirst_int(lc);
Form_pg_attribute attr = tupdesc->attrs[attnum - 1];
Form_pg_attribute attr = &tupdesc->attrs[attnum - 1];
Assert(!attr->attisdropped);
@ -1161,7 +1161,7 @@ static TupleTableSlot *postgresExecForeignInsert(EState *estate, ResultRelInfo *
initStringInfo(&sql);
/* We transmit all columns that are defined in the foreign table. */
for (int attnum = 1; attnum <= tupdesc->natts; attnum++) {
Form_pg_attribute attr = tupdesc->attrs[attnum - 1];
Form_pg_attribute attr = &tupdesc->attrs[attnum - 1];
if (!attr->attisdropped) {
targetAttrs = lappend_int(targetAttrs, attnum);
@ -1535,7 +1535,7 @@ static void estimate_path_cost_size(PlannerInfo *root, RelOptInfo *baserel, List
/* Use rows/width estimates made by set_baserel_size_estimates. */
rows = baserel->rows;
width = baserel->width;
width = baserel->reltarget->width;
/*
* Back into an estimate of the number of retrieved rows. Just in
@ -1665,7 +1665,7 @@ static void create_cursor(ForeignScanState *node)
bool isNull = false;
/* Evaluate the parameter expression */
expr_value = ExecEvalExpr(expr_state, econtext, &isNull, NULL);
expr_value = ExecEvalExpr(expr_state, econtext, &isNull);
/*
* Get string representation of each parameter value by invoking
@ -2360,7 +2360,7 @@ static void conversion_error_callback(void *arg)
ConversionLocation *errpos = (ConversionLocation *)arg;
TupleDesc tupdesc = RelationGetDescr(errpos->rel);
if (errpos->cur_attno > 0 && errpos->cur_attno <= tupdesc->natts) {
errcontext("column \"%s\" of foreign table \"%s\"", NameStr(tupdesc->attrs[errpos->cur_attno - 1]->attname),
errcontext("column \"%s\" of foreign table \"%s\"", NameStr(tupdesc->attrs[errpos->cur_attno - 1].attname),
RelationGetRelationName(errpos->rel));
}
}
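Several planner and executor interfaces change together in postgres_fdw: a base relation's output expressions and width now live in baserel->reltarget (a PathTarget) instead of reltargetlist/width on RelOptInfo, expression lists are initialized with ExecInitExprList(), and ExecEvalExpr() drops its isDone argument. A minimal sketch of the updated executor-side pattern (the helper name and arguments are illustrative):

    #include "postgres.h"
    #include "executor/executor.h"
    #include "nodes/execnodes.h"
    #include "nodes/pg_list.h"

    /*
     * Initialize a list of parameter expressions for one plan node and
     * evaluate each of them with the three-argument ExecEvalExpr().
     */
    static void
    eval_param_exprs(List *fdw_exprs, PlanState *ps, ExprContext *econtext)
    {
        List     *states = ExecInitExprList(fdw_exprs, ps);
        ListCell *lc;

        foreach(lc, states)
        {
            ExprState *state = (ExprState *) lfirst(lc);
            bool       isNull = false;
            Datum      value = ExecEvalExpr(state, econtext, &isNull);

            (void) value;       /* a real caller would serialize the value here */
        }
    }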

View File

@ -811,7 +811,7 @@ static void light_unified_audit_executor(const Query *query)
access_audit_policy_run(query->rtable, query->commandType);
}
static void gsaudit_ProcessUtility_hook(Node *parsetree, const char *queryString, ParamListInfoData *params,
static void gsaudit_ProcessUtility_hook(Node *parsetree, const char *queryString, bool readOnlyTree, ParamListInfoData *params,
bool isTopLevel, DestReceiver *dest, bool sentToRemote, char *completionTag, bool isCTAS = false)
{
/* do nothing when enable_security_policy is off */
@ -819,10 +819,10 @@ static void gsaudit_ProcessUtility_hook(Node *parsetree, const char *queryString
u_sess->proc_cxt.IsInnerMaintenanceTools || IsConnFromCoord() ||
!is_audit_policy_exist_load_policy_info()) {
if (next_ProcessUtility_hook) {
next_ProcessUtility_hook(parsetree, queryString, params, isTopLevel, dest, sentToRemote, completionTag,
next_ProcessUtility_hook(parsetree, queryString, readOnlyTree, params, isTopLevel, dest, sentToRemote, completionTag,
false);
} else {
standard_ProcessUtility(parsetree, queryString, params, isTopLevel, dest, sentToRemote, completionTag,
standard_ProcessUtility(parsetree, queryString, readOnlyTree, params, isTopLevel, dest, sentToRemote, completionTag,
false);
}
return;
@ -1615,10 +1615,10 @@ static void gsaudit_ProcessUtility_hook(Node *parsetree, const char *queryString
PG_TRY();
{
if (next_ProcessUtility_hook) {
next_ProcessUtility_hook(parsetree, queryString, params, isTopLevel, dest, sentToRemote, completionTag,
next_ProcessUtility_hook(parsetree, queryString, readOnlyTree, params, isTopLevel, dest, sentToRemote, completionTag,
false);
} else {
standard_ProcessUtility(parsetree, queryString, params, isTopLevel, dest, sentToRemote, completionTag,
standard_ProcessUtility(parsetree, queryString, readOnlyTree, params, isTopLevel, dest, sentToRemote, completionTag,
false);
}
flush_access_logs(AUDIT_OK);

View File

@ -242,7 +242,7 @@ static void sepgsql_executor_start(QueryDesc* queryDesc, int eflags)
* It tries to rough-grained control on utility commands; some of them can
* break whole of the things if nefarious user would use.
*/
static void sepgsql_utility_command(Node* parsetree, const char* queryString, ParamListInfo params, bool isTopLevel,
static void sepgsql_utility_command(Node* parsetree, const char* queryString, bool readOnlyTree, ParamListInfo params, bool isTopLevel,
DestReceiver* dest,
#ifdef PGXC
bool sentToRemote,
@ -302,6 +302,7 @@ static void sepgsql_utility_command(Node* parsetree, const char* queryString, Pa
if (next_ProcessUtility_hook)
(*next_ProcessUtility_hook)(parsetree,
queryString,
readOnlyTree,
params,
isTopLevel,
dest,
@ -312,6 +313,7 @@ static void sepgsql_utility_command(Node* parsetree, const char* queryString, Pa
else
standard_ProcessUtility(parsetree,
queryString,
readOnlyTree,
params,
isTopLevel,
dest,

View File

@ -312,7 +312,7 @@ Datum /* have to return HeapTuple to Executor */
snprintf(sql, sizeof(sql), "INSERT INTO %s VALUES (", relname);
for (i = 1; i <= natts; i++) {
ctypes[i - 1] = SPI_gettypeid(tupdesc, i);
if (!(tupdesc->attrs[i - 1]->attisdropped)) /* skip dropped columns */
if (!(tupdesc->attrs[i - 1].attisdropped)) /* skip dropped columns */
{
snprintf(sql + strlen(sql), sizeof(sql) - strlen(sql), "%c$%d", separ, i);
separ = ',';

View File

@ -302,7 +302,7 @@ static void TupleToStringinfo(StringInfo s, TupleDesc tupdesc, HeapTuple tuple,
Oid typoutput = 0; /* output function */
Datum origval = 0; /* possibly toasted Datum */
Form_pg_attribute attr = tupdesc->attrs[natt]; /* the attribute itself */
Form_pg_attribute attr = &tupdesc->attrs[natt]; /* the attribute itself */
if (attr->attisdropped || attr->attnum < 0) {
continue;
@ -358,7 +358,7 @@ static void TupleToStringinfoUpd(StringInfo s, TupleDesc tupdesc, HeapTuple tupl
bool isnull = false; /* column is null? */
bool typisvarlena = false;
Form_pg_attribute attr = tupdesc->attrs[natt]; /* the attribute itself */
Form_pg_attribute attr = &tupdesc->attrs[natt]; /* the attribute itself */
if (attr->attisdropped || attr->attnum < 0) {
continue;

View File

@ -1337,35 +1337,35 @@ static void validateConnectbyTupleDesc(TupleDesc tupdesc, bool show_branch, bool
}
/* check that the types of the first two columns match */
if (tupdesc->attrs[0]->atttypid != tupdesc->attrs[1]->atttypid)
if (tupdesc->attrs[0].atttypid != tupdesc->attrs[1].atttypid)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("invalid return type"),
errdetail("First two columns must be the same type.")));
/* check that the type of the third column is INT4 */
if (tupdesc->attrs[2]->atttypid != INT4OID)
if (tupdesc->attrs[2].atttypid != INT4OID)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("invalid return type"),
errdetail("Third column must be type %s.", format_type_be(INT4OID))));
/* check that the type of the fourth column is TEXT if applicable */
if (show_branch && tupdesc->attrs[3]->atttypid != TEXTOID)
if (show_branch && tupdesc->attrs[3].atttypid != TEXTOID)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
errmsg("invalid return type"),
errdetail("Fourth column must be type %s.", format_type_be(TEXTOID))));
/* check that the type of the fifth column is INT4 */
if (show_branch && show_serial && tupdesc->attrs[4]->atttypid != INT4OID)
if (show_branch && show_serial && tupdesc->attrs[4].atttypid != INT4OID)
elog(ERROR,
"query-specified return tuple not valid for Connectby: "
"fifth column must be type %s",
format_type_be(INT4OID));
/* check that the type of the fifth column is INT4 */
if (!show_branch && show_serial && tupdesc->attrs[3]->atttypid != INT4OID)
if (!show_branch && show_serial && tupdesc->attrs[3].atttypid != INT4OID)
elog(ERROR,
"query-specified return tuple not valid for Connectby: "
"fourth column must be type %s",
@ -1383,8 +1383,8 @@ static bool compatConnectbyTupleDescs(TupleDesc ret_tupdesc, TupleDesc sql_tupde
Oid sql_atttypid;
/* check the key_fld types match */
ret_atttypid = ret_tupdesc->attrs[0]->atttypid;
sql_atttypid = sql_tupdesc->attrs[0]->atttypid;
ret_atttypid = ret_tupdesc->attrs[0].atttypid;
sql_atttypid = sql_tupdesc->attrs[0].atttypid;
if (ret_atttypid != sql_atttypid)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@ -1393,8 +1393,8 @@ static bool compatConnectbyTupleDescs(TupleDesc ret_tupdesc, TupleDesc sql_tupde
"not match return key field datatype.")));
/* check the parent_key_fld types match */
ret_atttypid = ret_tupdesc->attrs[1]->atttypid;
sql_atttypid = sql_tupdesc->attrs[1]->atttypid;
ret_atttypid = ret_tupdesc->attrs[1].atttypid;
sql_atttypid = sql_tupdesc->attrs[1].atttypid;
if (ret_atttypid != sql_atttypid)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@ -1421,8 +1421,8 @@ static bool compatCrosstabTupleDescs(TupleDesc ret_tupdesc, TupleDesc sql_tupdes
return false;
/* check the rowid types match */
ret_atttypid = ret_tupdesc->attrs[0]->atttypid;
sql_atttypid = sql_tupdesc->attrs[0]->atttypid;
ret_atttypid = ret_tupdesc->attrs[0].atttypid;
sql_atttypid = sql_tupdesc->attrs[0].atttypid;
if (ret_atttypid != sql_atttypid)
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),

View File

@ -224,7 +224,7 @@ static void tuple_to_stringinfo(StringInfo s, TupleDesc tupdesc, HeapTuple tuple
Datum origval; /* possibly toasted Datum */
bool isnull = true; /* column is null? */
attr = tupdesc->attrs[natt];
attr = &tupdesc->attrs[natt];
/*
* don't print dropped columns, we can't be sure everything is

View File

@ -823,15 +823,13 @@ CXXFLAGS = @CFLAGS@
ifeq ($(enable_memory_check), yes)
CXXFLAGS += -fsanitize=address -fsanitize=leak -fno-omit-frame-pointer
else
CXXFLAGS += -fstack-protector
#CXXFLAGS += -fstack-protector
endif
ifeq ($(enable_thread_check), yes)
CXXFLAGS += -fsanitize=thread -fno-omit-frame-pointer
endif
CXXFLAGS += -Wl,-z,relro,-z,now
CXXFLAGS += -Wl,-z,noexecstack
CXXFLAGS += -std=c++14
override CXXFLAGS += $(PTHREAD_CFLAGS)
# Kind-of compilers

View File

@ -203,6 +203,8 @@ enable_partitionwise|bool|0,0|NULL|NULL|
enable_pbe_optimization|bool|0,0|NULL|NULL|
enable_prevent_job_task_startup|bool|0,0|NULL|It is not recommended to enable this parameter except for scaling out.|
enable_security_policy|bool|0,0|NULL|NULL|
enable_seqscan_fusion|bool|0,0|NULL|NULL|
enable_parser_fusion|bool|0,0|NULL|NULL|
use_elastic_search|bool|0,0|NULL|NULL|
elastic_search_ip_addr|string|0,0|NULL|NULL
enable_resource_track|bool|0,0|NULL|NULL|
@ -656,7 +658,8 @@ enable_auto_explain|bool|0,0|NULL|NULL|
auto_explain_level|enum|off,log,notice|NULL|NULL|
cost_weight_index|real|1e-10,1e+10|NULL|NULL|
default_limit_rows|real|-100,1.79769e+308|NULL|NULL|
sql_beta_feature|enum|partition_fdw_on,partition_opfusion,index_cost_with_leaf_pages_only,canonical_pathkey,join_sel_with_cast_func,no_unique_index_first,sel_semi_poisson,sel_expr_instr,param_path_gen,rand_cost_opt,param_path_opt,page_est_opt,a_style_coerce,predpush_same_level,none|NULL|NULL|
sql_beta_feature|enum|partition_fdw_on,partition_opfusion,index_cost_with_leaf_pages_only,canonical_pathkey,join_sel_with_cast_func,no_unique_index_first,sel_semi_poisson,sel_expr_instr,param_path_gen,rand_cost_opt,param_path_opt,page_est_opt,a_style_coerce,predpush_same_level,sublink_pullup_enhanced,none|NULL|NULL|
sql_fusion_engine|enum|iud_checksum_remove,iud_node_context_remove,iud_is_system_class_remove_package,iud_errorrel_remove,iud_block_chain_remove,iud_trigger_remove,iud_memory_context_track_remove,iud_instr_time_remove,iud_markdrop_remove,iud_code_optimize,iud_report_remove,iud_pending,none|NULL|NULL|
max_logical_replication_workers|int|0,262143|NULL|Maximum number of logical replication worker processes.|
walwriter_sleep_threshold|int64|1,50000|NULL|NULL|
walwriter_cpu_bind|int|-1,2147483647|NULL|NULL|
@ -826,4 +829,5 @@ cost_weight_index|real|1e-10,1e+10|NULL|NULL|
default_limit_rows|real|-100,1.79769e+308|NULL|NULL|
enable_auto_explain|bool|0,0|NULL|NULL|
auto_explain_level|enum|off,log,notice|NULL|NULL|
enable_indexscan_optimization|bool|0,0|NULL|NULL|
[end]

View File

@ -32,10 +32,6 @@
#include "gauss_sft.h"
#endif
/* Globals from keywords.c */
extern ScanKeyword FEScanKeywords[];
extern int NumFEScanKeywords;
/* Globals exported by this file */
int quote_all_identifiers = 0;
const char* progname = NULL;
@ -152,9 +148,9 @@ const char* fmtId(const char* rawid)
* Note: ScanKeywordLookup() does case-insensitive comparison, but
* that's fine, since we already know we have all-lower-case.
*/
const ScanKeyword* keyword = ScanKeywordLookup(rawid, FEScanKeywords, NumFEScanKeywords);
int kwnum = ScanKeywordLookup(rawid, &ScanKeywords);
if (keyword != NULL && keyword->category != UNRESERVED_KEYWORD)
if (kwnum >= 0 && ScanKeywordCategories[kwnum] != UNRESERVED_KEYWORD)
need_quotes = true;
}
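Keyword lookup on the client side moves from the private FEScanKeywords array to the shared ScanKeywords list: ScanKeywordLookup() now returns an index (or -1), and the parallel ScanKeywordCategories[] array tells whether the match is reserved. A minimal sketch of the new quoting test, assuming the identifier has already been downcased (the include paths are illustrative):

    #include "postgres_fe.h"
    #include "parser/keywords.h"
    #include "parser/kwlist_d.h"

    /* Quote the identifier if it matches any non-unreserved keyword. */
    static bool
    identifier_needs_quotes(const char *rawid)
    {
        int kwnum = ScanKeywordLookup(rawid, &ScanKeywords);

        return (kwnum >= 0 && ScanKeywordCategories[kwnum] != UNRESERVED_KEYWORD);
    }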

View File

@ -17,14 +17,17 @@
#include "parser/keywords.h"
#include "parser/kwlist_d.h"
#define PG_KEYWORD(kwname,value,category) category,
const uint8 ScanKeywordCategories[SCANKEYWORDS_NUM_KEYWORDS] = {
#include "parser/kwlist.h"
};
#undef PG_KEYWORD
/*
* We don't need the token number, so leave it out to avoid requiring other
* backend headers.
*/
#define PG_KEYWORD(a, b, c) {a, 0, c},
ScanKeyword FEScanKeywords[] = {
#include "parser/kwlist.h"
};
int NumFEScanKeywords = lengthof(FEScanKeywords);

View File

@ -4454,16 +4454,7 @@ void getSubscriptions(Archive *fout)
}
if (!isExecUserSuperRole(fout)) {
res = ExecuteSqlQuery(fout,
"SELECT count(*) FROM pg_subscription "
"WHERE subdbid = (SELECT oid FROM pg_catalog.pg_database"
" WHERE datname = current_database())",
PGRES_TUPLES_OK);
uint64 n = (res != NULL) ? strtoul(PQgetvalue(res, 0, 0), NULL, 10) : 0;
if (n > 0) {
write_msg(NULL, "WARNING: subscriptions not dumped because current user is not a superuser\n");
}
PQclear(res);
return;
}
@ -10795,6 +10786,11 @@ static void dumpDirectory(Archive* fout)
char* dirpath = NULL;
char* diracl = NULL;
if (!isExecUserSuperRole(fout)) {
write_msg(NULL, "WARNING: directory not dumped because current user is not a superuser\n");
return;
}
/* Make sure we are in proper schema */
selectSourceSchema(fout, "pg_catalog");
@ -21404,6 +21400,11 @@ static void dumpSynonym(Archive* fout)
PQExpBuffer q;
PQExpBuffer delq;
if (!isExecUserSuperRole(fout)) {
write_msg(NULL, "WARNING: synonym not dumped because current user is not a superuser\n");
return;
}
selectSourceSchema(fout, "pg_catalog");
query = createPQExpBuffer();
printfPQExpBuffer(query,

View File

@ -1205,7 +1205,27 @@ bool SendQuery(const char* query, bool is_print, bool print_error)
else if (!PQsendQuery(pset.db, query))
results = NULL;
if (is_explain) {
OK = GetPrintResult(&results, is_explain, is_print, query, print_error);
if (pset.timing && is_print) {
INSTR_TIME_SET_CURRENT(after);
INSTR_TIME_SUBTRACT(after, before);
elapsed_msec = INSTR_TIME_GET_MILLISEC(after);
}
} else {
OK = ProcessResult(&results, is_explain, print_error);
if (pset.timing && is_print) {
INSTR_TIME_SET_CURRENT(after);
INSTR_TIME_SUBTRACT(after, before);
elapsed_msec = INSTR_TIME_GET_MILLISEC(after);
}
/* but printing results isn't: */
if (OK && is_print && results) {
OK = PrintQueryResults(results);
/* record the set stmts when needed. */
RecordGucStmt(results, query);
}
}
#ifndef WIN32
/* Clear password related memory to avoid leaks when core. */
if (pset.cur_cmd_interactive) {
@ -1216,12 +1236,6 @@ bool SendQuery(const char* query, bool is_print, bool print_error)
}
#endif
if (pset.timing && is_print) {
INSTR_TIME_SET_CURRENT(after);
INSTR_TIME_SUBTRACT(after, before);
elapsed_msec = INSTR_TIME_GET_MILLISEC(after);
}
// For EXPLAIN PERFORMANCE command, the query is sent by PQsendQuery.
// But PQsendQuery doesn't wait for it to finish and then goes to the do-while
// loop to process results. It is more reasonable to put ResetCancelConn here

View File

@ -5087,6 +5087,7 @@ AclMode pg_class_aclmask(Oid table_oid, Oid roleid, AclMode mask, AclMaskHow how
errcause("System error."), erraction("Contact engineer to support.")));
classForm = (Form_pg_class)GETSTRUCT(tuple);
#ifdef ENABLE_MULTIPLE_NODES
/* Check current user has privilige to this group */
if (IS_PGXC_COORDINATOR && !IsInitdb && check_nodegroup && is_pgxc_class_table(table_oid) &&
roleid != classForm->relowner) {
@ -5103,6 +5104,7 @@ AclMode pg_class_aclmask(Oid table_oid, Oid roleid, AclMode mask, AclMaskHow how
}
}
}
#endif
/*
* Deny anyone permission to update a system catalog unless
@ -5114,9 +5116,8 @@ AclMode pg_class_aclmask(Oid table_oid, Oid roleid, AclMode mask, AclMaskHow how
* themselves. ACL_USAGE is if we ever have system sequences.
*/
if (!is_ddl_privileges && (mask & (ACL_INSERT | ACL_UPDATE | ACL_DELETE | ACL_TRUNCATE | ACL_USAGE))
&& IsSystemClass(classForm) &&
classForm->relkind != RELKIND_VIEW && classForm->relkind != RELKIND_CONTQUERY && !has_rolcatupdate(roleid) &&
!g_instance.attr.attr_common.allowSystemTableMods) {
&& !g_instance.attr.attr_common.allowSystemTableMods && IsSystemClass(classForm) &&
classForm->relkind != RELKIND_VIEW && classForm->relkind != RELKIND_CONTQUERY && !has_rolcatupdate(roleid)) {
#ifdef ACLDEBUG
elog(DEBUG2, "permission denied for system catalog update");
#endif
@ -5127,13 +5128,14 @@ AclMode pg_class_aclmask(Oid table_oid, Oid roleid, AclMode mask, AclMaskHow how
* initial user and monitorsdmin bypass all permission-checking.
*/
Oid namespaceId = classForm->relnamespace;
if (IsMonitorSpace(namespaceId) && (roleid == INITIAL_USER_ID || isMonitoradmin(roleid))) {
bool isMonitorNs = IsMonitorSpace(namespaceId);
if (isMonitorNs && (roleid == INITIAL_USER_ID || isMonitoradmin(roleid))) {
ReleaseSysCache(tuple);
return mask;
}
/* Blockchain hist table cannot be modified */
if (table_oid == GsGlobalChainRelationId || classForm->relnamespace == PG_BLOCKCHAIN_NAMESPACE) {
if (table_oid == GsGlobalChainRelationId || namespaceId == PG_BLOCKCHAIN_NAMESPACE) {
if (isRelSuperuser() || isAuditadmin(roleid)) {
mask &= ~(ACL_INSERT | ACL_UPDATE | ACL_DELETE | ACL_TRUNCATE | ACL_USAGE | ACL_REFERENCES);
} else {
@ -5145,7 +5147,7 @@ AclMode pg_class_aclmask(Oid table_oid, Oid roleid, AclMode mask, AclMaskHow how
/* Otherwise, superusers bypass all permission-checking, except access independent role's objects. */
/* Database Security: Support separation of privilege. */
if (!is_ddl_privileges && !IsMonitorSpace(namespaceId) && (superuser_arg(roleid) || systemDBA_arg(roleid)) &&
if (!is_ddl_privileges && !isMonitorNs && (superuser_arg(roleid) || systemDBA_arg(roleid)) &&
((classForm->relowner == roleid) || !is_role_independent(classForm->relowner) ||
independent_priv_aclcheck(mask, classForm->relkind))) {
#ifdef ACLDEBUG
@ -5155,7 +5157,7 @@ AclMode pg_class_aclmask(Oid table_oid, Oid roleid, AclMode mask, AclMaskHow how
return mask;
}
if (is_security_policy_relation(table_oid) && isPolicyadmin(roleid)) {
if (isPolicyadmin(roleid) && is_security_policy_relation(table_oid)) {
ReleaseSysCache(tuple);
return mask;
}

View File

@ -2030,11 +2030,11 @@
),
AddFuncGroup(
"cursor_to_xml", 1,
AddBuiltinFunc(_0(2925), _1("cursor_to_xml"), _2(5), _3(false), _4(false), _5(cursor_to_xml), _6(142), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(100), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('s'), _19(0), _20(5, 1790, 23, 16, 16, 25), _21(NULL), _22(NULL), _23(5, "cursor", "count", "nulls", "tableforest", "targetns"), _24(NULL), _25("cursor_to_xml"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("map rows from cursor to XML"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(CURSORTOXMLOID), _1("cursor_to_xml"), _2(5), _3(false), _4(false), _5(cursor_to_xml), _6(142), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(100), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('s'), _19(0), _20(5, 1790, 23, 16, 16, 25), _21(NULL), _22(NULL), _23(5, "cursor", "count", "nulls", "tableforest", "targetns"), _24(NULL), _25("cursor_to_xml"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("map rows from cursor to XML"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"cursor_to_xmlschema", 1,
AddBuiltinFunc(_0(2928), _1("cursor_to_xmlschema"), _2(4), _3(false), _4(false), _5(cursor_to_xmlschema), _6(142), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(100), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('s'), _19(0), _20(4, 1790, 16, 16, 25), _21(NULL), _22(NULL), _23(4, "cursor", "nulls", "tableforest", "targetns"), _24(NULL), _25("cursor_to_xmlschema"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("map cursor structure to XML Schema"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(CURSORTOXMLSCHEMAOID), _1("cursor_to_xmlschema"), _2(4), _3(false), _4(false), _5(cursor_to_xmlschema), _6(142), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(100), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('s'), _19(0), _20(4, 1790, 16, 16, 25), _21(NULL), _22(NULL), _23(4, "cursor", "nulls", "tableforest", "targetns"), _24(NULL), _25("cursor_to_xmlschema"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("map cursor structure to XML Schema"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"database_to_xml", 1,
@ -4941,7 +4941,7 @@
),
AddFuncGroup(
"int2_accum", 1,
AddBuiltinFunc(_0(1834), _1("int2_accum"), _2(2), _3(true), _4(false), _5(int2_accum), _6(1231), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 1231, 21), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("int2_accum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate transition function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(1834), _1("int2_accum"), _2(2), _3(false), _4(false), _5(int2_accum), _6(2281), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 2281, 21), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("int2_accum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate transition function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"int2_avg_accum", 1,
@ -5213,7 +5213,7 @@
),
AddFuncGroup(
"int4_accum", 1,
AddBuiltinFunc(_0(1835), _1("int4_accum"), _2(2), _3(true), _4(false), _5(int4_accum), _6(1231), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 1231, 23), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("int4_accum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate transition function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(1835), _1("int4_accum"), _2(2), _3(false), _4(false), _5(int4_accum), _6(2281), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 2281, 23), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("int4_accum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate transition function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"int4_avg_accum", 1,
@ -5461,7 +5461,7 @@
),
AddFuncGroup(
"int8_accum", 1,
AddBuiltinFunc(_0(1836), _1("int8_accum"), _2(2), _3(true), _4(false), _5(int8_accum), _6(1231), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 1231, 20), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("int8_accum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate transition function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(1836), _1("int8_accum"), _2(2), _3(false), _4(false), _5(int8_accum), _6(2281), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 2281, 20), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("int8_accum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate transition function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"int8_avg", 1,
@ -5469,7 +5469,7 @@
),
AddFuncGroup(
"int8_avg_accum", 1,
AddBuiltinFunc(_0(2746), _1("int8_avg_accum"), _2(2), _3(true), _4(false), _5(int8_avg_accum), _6(1231), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 1231, 20), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("int8_avg_accum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33(NULL), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(2746), _1("int8_avg_accum"), _2(2), _3(false), _4(false), _5(int8_avg_accum), _6(2281), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 2281, 20), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("int8_avg_accum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33(NULL), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"int8_avg_collect", 1,
@ -6986,7 +6986,7 @@
),
AddFuncGroup(
"numeric_accum", 1,
AddBuiltinFunc(_0(1833), _1("numeric_accum"), _2(2), _3(true), _4(false), _5(numeric_accum), _6(1231), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 1231, 1700), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_accum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate transition function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(1833), _1("numeric_accum"), _2(2), _3(false), _4(false), _5(numeric_accum), _6(2281), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 2281, 1700), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_accum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate transition function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"numeric_add", 1,
@ -6994,11 +6994,11 @@
),
AddFuncGroup(
"numeric_avg", 1,
AddBuiltinFunc(_0(1837), _1("numeric_avg"), _2(1), _3(true), _4(false), _5(numeric_avg), _6(1700), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(1, 1231), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_avg"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate final function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(1837), _1("numeric_avg"), _2(1), _3(false), _4(false), _5(numeric_avg), _6(1700), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(1, 2281), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_avg"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate final function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"numeric_avg_accum", 1,
AddBuiltinFunc(_0(2858), _1("numeric_avg_accum"), _2(2), _3(true), _4(false), _5(numeric_avg_accum), _6(1231), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 1231, 1700), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_avg_accum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate transition function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(2858), _1("numeric_avg_accum"), _2(2), _3(false), _4(false), _5(numeric_avg_accum), _6(2281), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 2281, 1700), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_avg_accum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate transition function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"numeric_avg_collect", 1,
@ -7130,16 +7130,20 @@
),
AddFuncGroup(
"numeric_stddev_pop", 1,
AddBuiltinFunc(_0(2596), _1("numeric_stddev_pop"), _2(1), _3(true), _4(false), _5(numeric_stddev_pop), _6(1700), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(1, 1231), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_stddev_pop"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate final function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(2596), _1("numeric_stddev_pop"), _2(1), _3(false), _4(false), _5(numeric_stddev_pop), _6(1700), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(1, 2281), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_stddev_pop"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate final function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"numeric_stddev_samp", 1,
AddBuiltinFunc(_0(1839), _1("numeric_stddev_samp"), _2(1), _3(true), _4(false), _5(numeric_stddev_samp), _6(1700), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(1, 1231), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_stddev_samp"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate final function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(1839), _1("numeric_stddev_samp"), _2(1), _3(false), _4(false), _5(numeric_stddev_samp), _6(1700), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(1, 2281), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_stddev_samp"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate final function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"numeric_sub", 1,
AddBuiltinFunc(_0(1725), _1("numeric_sub"), _2(2), _3(true), _4(false), _5(numeric_sub), _6(1700), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(2, 1700, 1700), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_sub"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33(NULL), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"numeric_sum", 1,
AddBuiltinFunc(_0(5435), _1("numeric_sum"), _2(1), _3(false), _4(false), _5(numeric_sum), _6(1700), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(1, 2281), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_sum"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33(NULL), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"numeric_text", 1,
AddBuiltinFunc(_0(4171), _1("numeric_text"), _2(1), _3(true), _4(false), _5(numeric_text), _6(25), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(1, 1700), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_text"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33(NULL), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
@ -7158,11 +7162,11 @@
),
AddFuncGroup(
"numeric_var_pop", 1,
AddBuiltinFunc(_0(2514), _1("numeric_var_pop"), _2(1), _3(true), _4(false), _5(numeric_var_pop), _6(1700), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(1, 1231), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_var_pop"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate final function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(2514), _1("numeric_var_pop"), _2(1), _3(false), _4(false), _5(numeric_var_pop), _6(1700), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(1, 2281), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_var_pop"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate final function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"numeric_var_samp", 1,
AddBuiltinFunc(_0(1838), _1("numeric_var_samp"), _2(1), _3(true), _4(false), _5(numeric_var_samp), _6(1700), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(1, 1231), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_var_samp"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate final function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
AddBuiltinFunc(_0(1838), _1("numeric_var_samp"), _2(1), _3(false), _4(false), _5(numeric_var_samp), _6(1700), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('i'), _19(0), _20(1, 2281), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("numeric_var_samp"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33("aggregate final function"), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"numeric_varchar", 1,
@ -7601,7 +7605,7 @@
),
AddFuncGroup(
"pg_cancel_session", 1,
AddBuiltinFunc(_0(3991), _1("pg_cancel_session"), _2(2), _3(true), _4(false), _5(pg_cancel_session), _6(16), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('v'), _19(0), _20(2, 20, 20), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("pg_cancel_session"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33(NULL))
AddBuiltinFunc(_0(3991), _1("pg_cancel_session"), _2(2), _3(true), _4(false), _5(pg_cancel_session), _6(16), _7(PG_CATALOG_NAMESPACE), _8(BOOTSTRAP_SUPERUSERID), _9(INTERNALlanguageId), _10(1), _11(0), _12(0), _13(0), _14(false), _15(false), _16(false), _17(false), _18('v'), _19(0), _20(2, 20, 20), _21(NULL), _22(NULL), _23(NULL), _24(NULL), _25("pg_cancel_session"), _26(NULL), _27(NULL), _28(NULL), _29(0), _30(false), _31(NULL), _32(false), _33(NULL), _34('f'), _35(NULL), _36(0), _37(false), _38(NULL), _39(NULL), _40(0))
),
AddFuncGroup(
"pg_cbm_force_track", 1,

View File

@ -645,6 +645,10 @@ bool IsSystemClass(Form_pg_class reltuple)
{
Oid relnamespace = reltuple->relnamespace;
if (ENABLE_SQL_FUSION_ENGINE(IUD_IS_SYSTEM_CLASS_REMOVE_PACKAGE)) {
return IsSystemNamespace(relnamespace) || IsToastNamespace(relnamespace);
}
return IsSystemNamespace(relnamespace) || IsToastNamespace(relnamespace) || IsPackageSchemaOid(relnamespace);
}
@ -682,6 +686,13 @@ bool IsCatalogClass(Oid relid, Form_pg_class reltuple)
{
Oid relnamespace = reltuple->relnamespace;
/* Optimize if judgment */
if (ENABLE_SQL_FUSION_ENGINE(IUD_CODE_OPTIMIZE)) {
if ((relid < FirstNormalObjectId) && (IsSystemNamespace(relnamespace) || IsToastNamespace(relnamespace))) {
return true;
}
return false;
}
/*
* Never consider relations outside pg_catalog/pg_toast to be catalog
* relations.
@ -1087,6 +1098,7 @@ Oid GetNewRelFileNode(Oid reltablespace, Relation pg_class, char relpersistence)
bool IsPackageSchemaOid(Oid relnamespace)
{
#ifdef ENABLE_MULTIPLE_NODES
const char* packageSchemaList[] = {
"dbe_lob",
"dbe_random",
@ -1102,7 +1114,7 @@ bool IsPackageSchemaOid(Oid relnamespace)
"dbe_perf",
"dbe_session"
};
int schemaNum = 10;
int schemaNum = 13;
char* schemaName = get_namespace_name(relnamespace);
if (schemaName == NULL) {
return false;
@ -1114,10 +1126,14 @@ bool IsPackageSchemaOid(Oid relnamespace)
}
}
return false;
#else
return (relnamespace == PG_PKG_SERVICE_NAMESPACE || relnamespace == PG_DBEPERF_NAMESPACE);
#endif
}
bool IsPackageSchemaName(const char* schemaName)
{
#ifdef ENABLE_MULTIPLE_NODES
const char* packageSchemaList[] = {
"dbe_lob",
"dbe_random",
@ -1128,13 +1144,20 @@ bool IsPackageSchemaName(const char* schemaName)
"dbe_sql",
"dbe_file",
"pkg_service",
"pkg_util"
"pkg_util",
"dbe_match",
"dbe_perf",
"dbe_session"
};
int schemaNum = 10;
int schemaNum = 13;
for (int i = 0; i < schemaNum; ++i) {
if (strcmp(schemaName, packageSchemaList[i]) == 0) {
return true;
}
}
return false;
#else
return (strcmp(schemaName, "dbe_perf") == 0
|| strcmp(schemaName, "pkg_service") == 0);
#endif
}

View File

@ -729,11 +729,11 @@ void CheckAttributeNamesTypes(TupleDesc tupdesc, char relkind, bool allow_system
*/
if (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE && relkind != RELKIND_CONTQUERY) {
for (i = 0; i < natts; i++) {
if (SystemAttributeByName(NameStr(tupdesc->attrs[i]->attname), tupdesc->tdhasoid) != NULL)
if (SystemAttributeByName(NameStr(tupdesc->attrs[i].attname), tupdesc->tdhasoid) != NULL)
ereport(ERROR,
(errcode(ERRCODE_DUPLICATE_COLUMN),
errmsg("column name \"%s\" conflicts with a system column name",
NameStr(tupdesc->attrs[i]->attname))));
NameStr(tupdesc->attrs[i].attname))));
}
}
@ -742,10 +742,10 @@ void CheckAttributeNamesTypes(TupleDesc tupdesc, char relkind, bool allow_system
*/
for (i = 1; i < natts; i++) {
for (j = 0; j < i; j++) {
if (strcmp(NameStr(tupdesc->attrs[j]->attname), NameStr(tupdesc->attrs[i]->attname)) == 0)
if (strcmp(NameStr(tupdesc->attrs[j].attname), NameStr(tupdesc->attrs[i].attname)) == 0)
ereport(ERROR,
(errcode(ERRCODE_DUPLICATE_COLUMN),
errmsg("column name \"%s\" specified more than once", NameStr(tupdesc->attrs[j]->attname))));
errmsg("column name \"%s\" specified more than once", NameStr(tupdesc->attrs[j].attname))));
}
}
@ -753,9 +753,9 @@ void CheckAttributeNamesTypes(TupleDesc tupdesc, char relkind, bool allow_system
* next check the attribute types
*/
for (i = 0; i < natts; i++) {
CheckAttributeType(NameStr(tupdesc->attrs[i]->attname),
tupdesc->attrs[i]->atttypid,
tupdesc->attrs[i]->attcollation,
CheckAttributeType(NameStr(tupdesc->attrs[i].attname),
tupdesc->attrs[i].atttypid,
tupdesc->attrs[i].attcollation,
NIL, /* assume we're creating a new rowtype */
allow_system_table_mods);
}
@ -846,7 +846,7 @@ void CheckAttributeType(
tupdesc = RelationGetDescr(relation);
for (i = 0; i < tupdesc->natts; i++) {
Form_pg_attribute attr = tupdesc->attrs[i];
Form_pg_attribute attr = &tupdesc->attrs[i];
if (attr->attisdropped)
continue;
@ -1003,7 +1003,7 @@ static void AddNewAttributeTuples(Oid new_rel_oid, TupleDesc tupdesc, char relki
* add dependencies on their datatypes and collations.
*/
for (i = 0; i < natts; i++) {
attr = tupdesc->attrs[i];
attr = &tupdesc->attrs[i];
/* Fill in the correct relation OID */
attr->attrelid = new_rel_oid;
/* Make sure these are OK, too */
@ -1339,13 +1339,13 @@ static List* GetDistColsPos(DistributeBy* distributeBy, TupleDesc desc)
List* pos = NULL;
ListCell* cell = NULL;
char* colname = NULL;
Form_pg_attribute *attrs = desc->attrs;
FormData_pg_attribute *attrs = desc->attrs;
foreach (cell, distributeBy->colname) {
colname = strVal((Value*)lfirst(cell));
for (i = 0; i < desc->natts; i++) {
if (strcmp(colname, attrs[i]->attname.data) == 0) {
if (strcmp(colname, attrs[i].attname.data) == 0) {
break;
}
}
@ -1478,7 +1478,7 @@ static int GetTotalBoundariesNum(List* sliceList)
return result;
}
static void CheckDuplicateListSlices(List* pos, Form_pg_attribute* attrs, DistributeBy *distby)
static void CheckDuplicateListSlices(List* pos, FormData_pg_attribute* attrs, DistributeBy *distby)
{
List* boundary = NULL;
List* sliceList = NULL;
@ -1546,7 +1546,7 @@ static void CheckOneBoundaryValue(List* boundary, List* posList, TupleDesc desc)
Const* targetConst = NULL;
ListCell* boundaryCell = NULL;
ListCell* posCell = NULL;
Form_pg_attribute* attrs = desc->attrs;
FormData_pg_attribute* attrs = desc->attrs;
forboth(boundaryCell, boundary, posCell, posList) {
srcConst = (Const*)lfirst(boundaryCell);
@ -1555,7 +1555,7 @@ static void CheckOneBoundaryValue(List* boundary, List* posList, TupleDesc desc)
}
pos = lfirst_int(posCell);
targetConst = (Const*)GetTargetValue(attrs[pos], srcConst, false);
targetConst = (Const*)GetTargetValue(&attrs[pos], srcConst, false);
if (!PointerIsValid(targetConst)) {
ereport(ERROR,
(errcode(ERRCODE_INVALID_OPERATION),
@ -1796,7 +1796,7 @@ static void CheckSliceReferenceValidity(Oid relid, DistributeBy *distributeby, T
baseKeyType = get_atttype(distributeby->referenceoid, baseKeyIdx);
keyIdx = get_attnum(relid, colname);
keyType = descriptor->attrs[keyIdx - 1]->atttypid;
keyType = descriptor->attrs[keyIdx - 1].atttypid;
if (baseKeyType != keyType) {
FreeRelationLocInfo(baseLocInfo);
@ -2015,14 +2015,14 @@ static void CheckDistributeKeyAndType(Oid relid, DistributeBy *distributeby,
}
if (distributeby->disttype == DISTTYPE_LIST || distributeby->disttype == DISTTYPE_RANGE) {
if (!IsTypeDistributableForSlice(descriptor->attrs[localAttrNum - 1]->atttypid)) {
if (!IsTypeDistributableForSlice(descriptor->attrs[localAttrNum - 1].atttypid)) {
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("Column %s is not a %s distributable data type", colname,
GetDistributeTypeName(distributeby->disttype))));
}
} else {
if (!IsTypeDistributable(descriptor->attrs[localAttrNum - 1]->atttypid)) {
if (!IsTypeDistributable(descriptor->attrs[localAttrNum - 1].atttypid)) {
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("Column %s is not a %s distributable data type", colname,
@ -2054,14 +2054,14 @@ void GetRelationDistributionItems(Oid relid, DistributeBy* distributeby, TupleDe
* one based on primary key or foreign key, use first column with
* a supported data type.
*/
Form_pg_attribute attr;
FormData_pg_attribute attr;
int i;
local_locatortype = LOCATOR_TYPE_HASH;
for (i = 0; i < descriptor->natts; i++) {
attr = descriptor->attrs[i];
if (IsTypeDistributable(attr->atttypid)) {
if (IsTypeDistributable(attr.atttypid)) {
/* distribute on this column */
local_attnum = i + 1;
attnum[0] = local_attnum;
@ -2077,7 +2077,7 @@ void GetRelationDistributionItems(Oid relid, DistributeBy* distributeby, TupleDe
(errcode(ERRCODE_SUCCESSFUL_COMPLETION),
errmsg("The 'DISTRIBUTE BY' clause is not specified. Using '%s' as the distribution column "
"by default.",
attr->attname.data),
attr.attname.data),
errhint(
"Please use 'DISTRIBUTE BY' clause to specify suitable data distribution column.")));
break;
@ -2194,7 +2194,7 @@ HashBucketInfo* GetRelationBucketInfo(DistributeBy* distributeby,
bucketkey = buildint2vector(NULL, 1);
for (i = 0; i < nattr; i++) {
attr = tupledsc->attrs[i];
attr = &tupledsc->attrs[i];
if (IsTypeDistributable(attr->atttypid)) {
bucketkey->values[0] = attr->attnum;
bucketinfo->bucketcol = bucketkey;
@ -2231,7 +2231,7 @@ HashBucketInfo* GetRelationBucketInfo(DistributeBy* distributeby,
foreach (cell, distributeby->colname) {
colname = strVal(lfirst(cell));
for (j = 0; j < nattr; j++) {
attr = tupledsc->attrs[j];
attr = &tupledsc->attrs[j];
if (strcmp(colname, attr->attname.data) == 0) {
local_attnum = attr->attnum;
break;
@ -3931,7 +3931,7 @@ List* AddRelationNewConstraints(
*/
foreach (cell, newColDefaults) {
RawColumnDefault* colDef = (RawColumnDefault*)lfirst(cell);
Form_pg_attribute atp = rel->rd_att->attrs[colDef->attnum - 1];
Form_pg_attribute atp = &rel->rd_att->attrs[colDef->attnum - 1];
expr = cookDefault(pstate, colDef->raw_default, atp->atttypid, atp->atttypmod, NameStr(atp->attname),
colDef->generatedCol);
@ -4325,7 +4325,7 @@ Node *cookDefault(ParseState *pstate, Node *raw_default, Oid atttypid, int32 att
* Transform raw parsetree to executable expression.
*/
pstate->p_expr_kind = generatedCol ? EXPR_KIND_GENERATED_COLUMN : EXPR_KIND_COLUMN_DEFAULT;
expr = transformExpr(pstate, raw_default);
expr = transformExpr(pstate, raw_default, pstate->p_expr_kind);
pstate->p_expr_kind = EXPR_KIND_NONE;
if (generatedCol == ATTRIBUTE_GENERATED_STORED)
@ -4354,11 +4354,10 @@ Node *cookDefault(ParseState *pstate, Node *raw_default, Oid atttypid, int32 att
ExcludeRownumExpr(pstate, expr);
#endif
/*
* It can't return a set either.
* transformExpr() should have already rejected subqueries, aggregates,
* window functions, and SRFs, based on the EXPR_KIND_ for a default
* expression.
*/
if (expression_returns_set(expr))
ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH),
errmsg("%s expression must not return a set", generatedCol ? "generated column" : "default")));
/*
* No subplans or aggregates, either...
@ -4412,7 +4411,7 @@ static Node* cookConstraint(ParseState* pstate, Node* raw_constraint, char* reln
/*
* Transform raw parsetree to executable expression.
*/
expr = transformExpr(pstate, raw_constraint);
expr = transformExpr(pstate, raw_constraint, EXPR_KIND_CHECK_CONSTRAINT);
/*
* Make sure it yields a boolean result.
@ -4980,7 +4979,7 @@ int2vector* buildPartitionKey(List* keys, TupleDesc tupledsc)
int partkeyNum = keys->length;
char* columName = NULL;
bool finded = false;
Form_pg_attribute* attrs = tupledsc->attrs;
FormData_pg_attribute* attrs = tupledsc->attrs;
int2vector* partkey = NULL;
partkey = buildint2vector(NULL, partkeyNum);
@ -4989,8 +4988,8 @@ int2vector* buildPartitionKey(List* keys, TupleDesc tupledsc)
columName = ((Value*)linitial(col->fields))->val.str;
finded = false;
for (j = 0; j < attnum; j++) {
if (strcmp(columName, attrs[j]->attname.data) == 0) {
partkey->values[i] = attrs[j]->attnum;
if (strcmp(columName, attrs[j].attname.data) == 0) {
partkey->values[i] = attrs[j].attnum;
finded = true;
break;
}
@ -6829,7 +6828,7 @@ static void IsPartitionKeyContainTimestampwithzoneType(const PartitionState *par
char *columName = NULL;
int partKeyIdx = 0;
int attnum = tupledesc->natts;
Form_pg_attribute *attrs = tupledesc->attrs;
FormData_pg_attribute *attrs = tupledesc->attrs;
foreach (partKeyCell, partTableState->partitionKey) {
col = (ColumnRef *)lfirst(partKeyCell);
@ -6837,7 +6836,7 @@ static void IsPartitionKeyContainTimestampwithzoneType(const PartitionState *par
isTimestamptz[partKeyIdx] = false;
for (int i = 0; i < attnum; i++) {
if (TIMESTAMPTZOID == attrs[i]->atttypid && 0 == strcmp(columName, attrs[i]->attname.data)) {
if (TIMESTAMPTZOID == attrs[i].atttypid && 0 == strcmp(columName, attrs[i].attname.data)) {
isTimestamptz[partKeyIdx] = true;
break;
}
@ -7518,15 +7517,15 @@ char* make_column_map(TupleDesc tuple_desc)
{
#define COLS_IN_BYTE 8
Form_pg_attribute* attrs = tuple_desc->attrs;
FormData_pg_attribute* attrs = tuple_desc->attrs;
char* col_map = (char*)palloc0((MaxHeapAttributeNumber + COLS_IN_BYTE) / COLS_IN_BYTE);
int col_cnt;
Assert(tuple_desc->natts > 0);
for (col_cnt = 0; col_cnt < tuple_desc->natts; col_cnt++) {
if (!attrs[col_cnt]->attisdropped && attrs[col_cnt]->attnum > 0) {
col_map[attrs[col_cnt]->attnum >> 3] |= (1 << (attrs[col_cnt]->attnum % COLS_IN_BYTE));
if (!attrs[col_cnt].attisdropped && attrs[col_cnt].attnum > 0) {
col_map[attrs[col_cnt].attnum >> 3] |= (1 << (attrs[col_cnt].attnum % COLS_IN_BYTE));
}
}
@ -7550,7 +7549,7 @@ bool* CheckPartkeyHasTimestampwithzone(Relation partTableRel, bool isForSubParti
int16* attnums = NULL;
int relationAttNumber = 0;
TupleDesc relationTupleDesc = NULL;
Form_pg_attribute* relationAtts = NULL;
FormData_pg_attribute* relationAtts = NULL;
pgPartRel = relation_open(PartitionRelationId, AccessShareLock);
@ -7607,7 +7606,7 @@ bool* CheckPartkeyHasTimestampwithzone(Relation partTableRel, bool isForSubParti
for (int i = 0; i < n_key_column; i++) {
int attnum = (int)(attnums[i]);
if (attnum >= 1 && attnum <= relationAttNumber) {
if (relationAtts[attnum - 1]->atttypid == TIMESTAMPTZOID) {
if (relationAtts[attnum - 1].atttypid == TIMESTAMPTZOID) {
isTimestamptz[i] = true;
}
} else {
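Most of the hunks above, and many in the files that follow, apply one mechanical change: TupleDesc->attrs goes from an array of Form_pg_attribute pointers to an inline array of FormData_pg_attribute structs, so element access switches from attrs[i]->field to attrs[i].field, and code that still needs a Form_pg_attribute pointer takes &attrs[i]. A minimal standalone sketch of the new access pattern, using simplified stand-in types rather than the real catalog structs:

#include <stdio.h>

/* simplified stand-ins for FormData_pg_attribute / Form_pg_attribute */
typedef struct DemoAttribute {
    int  atttypid;
    char attname[16];
} DemoAttribute;
typedef DemoAttribute *DemoAttributePtr;

int main(void)
{
    /* new layout: attribute structs are stored inline in the descriptor */
    DemoAttribute attrs[2] = {{23, "id"}, {25, "name"}};

    for (int i = 0; i < 2; i++) {
        /* struct-array access uses '.', not '->' */
        printf("%s: %d\n", attrs[i].attname, attrs[i].atttypid);

        /* callers that still expect a pointer take the element's address */
        DemoAttributePtr attr = &attrs[i];
        (void) attr;
    }
    return 0;
}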

View File

@ -304,7 +304,7 @@ static TupleDesc ConstructTupleDescriptor(Relation heapRelation, IndexInfo* inde
/*
* allocate the new tuple descriptor
*/
indexTupDesc = CreateTemplateTupleDesc(numatts, false, TAM_HEAP);
indexTupDesc = CreateTemplateTupleDesc(numatts, false);
/*
* For simple index columns, we copy the pg_attribute row from the parent
@ -313,7 +313,7 @@ static TupleDesc ConstructTupleDescriptor(Relation heapRelation, IndexInfo* inde
*/
for (i = 0; i < numatts; i++) {
AttrNumber atnum = indexInfo->ii_KeyAttrNumbers[i];
Form_pg_attribute to = indexTupDesc->attrs[i];
Form_pg_attribute to = &indexTupDesc->attrs[i];
HeapTuple tuple;
Form_pg_type typeTup;
Form_pg_opclass opclassTup;
@ -336,7 +336,7 @@ static TupleDesc ConstructTupleDescriptor(Relation heapRelation, IndexInfo* inde
if (atnum > natts) /* safety check */
ereport(
ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), errmsg("invalid column number %d", atnum)));
from = heapTupDesc->attrs[AttrNumberGetAttrOffset(atnum)];
from = &heapTupDesc->attrs[AttrNumberGetAttrOffset(atnum)];
}
/*
@ -499,7 +499,7 @@ static void InitializeAttributeOids(Relation indexRelation, int numatts, Oid ind
tupleDescriptor = RelationGetDescr(indexRelation);
for (i = 0; i < numatts; i += 1)
tupleDescriptor->attrs[i]->attrelid = indexoid;
tupleDescriptor->attrs[i].attrelid = indexoid;
}
/* ----------------------------------------------------------------
@ -530,10 +530,10 @@ static void AppendAttributeTuples(Relation indexRelation, int numatts)
* There used to be very grotty code here to set these fields, but I
* think it's unnecessary. They should be set already.
*/
Assert(indexTupDesc->attrs[i]->attnum == i + 1);
Assert(indexTupDesc->attrs[i]->attcacheoff == -1);
Assert(indexTupDesc->attrs[i].attnum == i + 1);
Assert(indexTupDesc->attrs[i].attcacheoff == -1);
InsertPgAttributeTuple(pg_attribute, indexTupDesc->attrs[i], indstate);
InsertPgAttributeTuple(pg_attribute, &indexTupDesc->attrs[i], indstate);
}
CatalogCloseIndexes(indstate);
@ -2070,7 +2070,7 @@ void FormIndexDatum(IndexInfo* indexInfo, TupleTableSlot* slot, EState* estate,
if (indexInfo->ii_Expressions != NIL && indexInfo->ii_ExpressionsState == NIL) {
/* First time through, set up expression evaluation state */
indexInfo->ii_ExpressionsState = (List*)ExecPrepareExpr((Expr*)indexInfo->ii_Expressions, estate);
indexInfo->ii_ExpressionsState = ExecPrepareExprList(indexInfo->ii_Expressions, estate);
/* Check caller has set up context correctly */
Assert(GetPerTupleExprContext(estate)->ecxt_scantuple == slot);
}
@ -2096,7 +2096,7 @@ void FormIndexDatum(IndexInfo* indexInfo, TupleTableSlot* slot, EState* estate,
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE), errmsg("wrong number of index expressions")));
iDatum = ExecEvalExprSwitchContext(
(ExprState*)lfirst(indexpr_item), GetPerTupleExprContext(estate), &isNull, NULL);
(ExprState*)lfirst(indexpr_item), GetPerTupleExprContext(estate), &isNull);
indexpr_item = lnext(indexpr_item);
}
values[i] = iDatum;
@ -3037,7 +3037,7 @@ double IndexBuildHeapScan(Relation heapRelation, Relation indexRelation, IndexIn
econtext->ecxt_scantuple = slot;
/* Set up execution state for predicate, if any. */
predicate = (List*)ExecPrepareExpr((Expr*)indexInfo->ii_Predicate, estate);
predicate = (List*)ExecPrepareQual(indexInfo->ii_Predicate, estate);
/*
* Prepare for scan of the base relation. In a normal index build, we use
@ -3336,7 +3336,7 @@ double IndexBuildHeapScan(Relation heapRelation, Relation indexRelation, IndexIn
* predicate.
*/
if (predicate != NIL) {
if (!ExecQual(predicate, econtext, false)) {
if (!ExecQual((ExprState*)predicate, econtext)) {
continue;
}
}
@ -3435,13 +3435,13 @@ double IndexBuildUHeapScan(Relation heapRelation, Relation indexRelation, IndexI
*/
estate = CreateExecutorState();
econtext = GetPerTupleExprContext(estate);
slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRelation), false, TAM_USTORE);
slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRelation), false, TableAmUstore);
/* Arrange for econtext's scan tuple to be the tuple under test */
econtext->ecxt_scantuple = slot;
/* Set up execution state for predicate, if any. */
predicate = (List *)ExecPrepareExpr((Expr *)indexInfo->ii_Predicate, estate);
predicate = (List *)ExecPrepareQual(indexInfo->ii_Predicate, estate);
if (indexInfo->ii_Concurrent) {
ereport(ERROR,
@ -3483,7 +3483,7 @@ double IndexBuildUHeapScan(Relation heapRelation, Relation indexRelation, IndexI
* predicate.
*/
if (predicate != NIL) {
if (!ExecQual(predicate, econtext, false)) {
if (!ExecQual((ExprState*)predicate, econtext)) {
continue;
}
}
@ -3693,7 +3693,7 @@ double IndexBuildVectorBatchScan(Relation heapRelation, Relation indexRelation,
econtext = GetPerTupleExprContext(estate);
slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRelation));
econtext->ecxt_scantuple = slot;
predicate = (List*)ExecPrepareExpr((Expr*)indexInfo->ii_Predicate, estate);
predicate = (List*)ExecPrepareQual(indexInfo->ii_Predicate, estate);
List* vars = pull_var_clause((Node*)indexInfo->ii_Expressions, PVC_RECURSE_AGGREGATES, PVC_RECURSE_PLACEHOLDERS);
@ -3728,7 +3728,7 @@ double IndexBuildVectorBatchScan(Relation heapRelation, Relation indexRelation,
(void)ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
if (predicate != NIL) {
if (!ExecQual(predicate, econtext, false)) {
if (!ExecQual((ExprState*)predicate, econtext)) {
continue;
}
}
@ -3813,7 +3813,7 @@ static void IndexCheckExclusion(Relation heapRelation, Relation indexRelation, I
econtext->ecxt_scantuple = slot;
/* Set up execution state for predicate, if any. */
predicate = (List*)ExecPrepareExpr((Expr*)indexInfo->ii_Predicate, estate);
predicate = (List*)ExecPrepareQual(indexInfo->ii_Predicate, estate);
/*
* Scan all live tuples in the base relation.
@ -3837,7 +3837,7 @@ static void IndexCheckExclusion(Relation heapRelation, Relation indexRelation, I
* In a partial index, ignore tuples that don't satisfy the predicate.
*/
if (predicate != NIL) {
if (!ExecQual(predicate, econtext, false)) {
if (!ExecQual((ExprState*)predicate, econtext)) {
continue;
}
}
@ -4061,7 +4061,7 @@ void validate_index_heapscan(
econtext->ecxt_scantuple = slot;
/* Set up execution state for predicate, if any. */
predicate = (List*)ExecPrepareExpr((Expr*)indexInfo->ii_Predicate, estate);
predicate = (List*)ExecPrepareQual(indexInfo->ii_Predicate, estate);
/*
* Prepare for scan of the base relation. We need just those tuples
@ -4163,7 +4163,7 @@ void validate_index_heapscan(
* predicate.
*/
if (predicate != NIL) {
if (!ExecQual(predicate, econtext, false)) {
if (!ExecQual((ExprState*)predicate, econtext)) {
continue;
}
}
@ -5424,10 +5424,10 @@ void ScanHeapInsertCBI(Relation parentRel, Relation heapRel, Relation idxRel, Oi
tupleDesc = heapRel->rd_att;
estate = CreateExecutorState();
econtext = GetPerTupleExprContext(estate);
slot = MakeSingleTupleTableSlot(RelationGetDescr(parentRel), false, parentRel->rd_tam_type);
slot = MakeSingleTupleTableSlot(RelationGetDescr(parentRel), false, parentRel->rd_tam_ops);
econtext->ecxt_scantuple = slot;
/* Set up execution state for predicate, if any. */
predicate = (List*)ExecPrepareExpr((Expr*)idxInfo->ii_Predicate, estate);
predicate = (List*)ExecPrepareQual(idxInfo->ii_Predicate, estate);
scan = scan_handler_tbl_beginscan(heapRel, SnapshotAny, 0, NULL, NULL, true);
if (scan == NULL) {
@ -5588,7 +5588,7 @@ void ScanHeapInsertCBI(Relation parentRel, Relation heapRel, Relation idxRel, Oi
* predicate.
*/
if (predicate != NIL) {
if (!ExecQual(predicate, econtext, false)) {
if (!ExecQual((ExprState*)predicate, econtext)) {
continue;
}
}
@ -5657,7 +5657,7 @@ void ScanPartitionInsertIndex(Relation partTableRel, Relation partRel, const Lis
if (PointerIsValid(indexRelList)) {
estate = CreateExecutorState();
slot = MakeSingleTupleTableSlot(RelationGetDescr(partTableRel), false, partTableRel->rd_tam_type);
slot = MakeSingleTupleTableSlot(RelationGetDescr(partTableRel), false, partTableRel->rd_tam_ops);
}
scan = scan_handler_tbl_beginscan(partRel, SnapshotNow, 0, NULL);
@ -5877,7 +5877,7 @@ void ScanPartitionDeleteGPITuples(Relation partTableRel, Relation partRel, const
if (PointerIsValid(indexRelList)) {
estate = CreateExecutorState();
slot = MakeSingleTupleTableSlot(RelationGetDescr(partTableRel), false, partTableRel->rd_tam_type);
slot = MakeSingleTupleTableSlot(RelationGetDescr(partTableRel), false, partTableRel->rd_tam_ops);
}
scan = scan_handler_tbl_beginscan(partRel, SnapshotNow, 0, NULL);
@ -6169,7 +6169,7 @@ TupleDesc GetPsortTupleDesc(TupleDesc indexTupDesc)
/* Add key columns */
for (int i = 0; i < numatts - 1; i++) {
Form_pg_attribute from = indexTupDesc->attrs[i];
Form_pg_attribute from = &indexTupDesc->attrs[i];
AttrNumber attId = i + 1;
TupleDescInitEntry(psortTupDesc, attId, from->attname.data, from->atttypid, from->atttypmod, from->attndims);
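The repeated ExecPrepareExpr()/ExecQual() replacements in this file track an executor API change in which a qual is prepared into a single ExprState: the partial-index predicate is built once with ExecPrepareQual() and evaluated with the two-argument ExecQual(), while the (List*)/(ExprState*) casts merely preserve the variables' historical List* declarations. The call pattern, shown as a fragment of the code above rather than a standalone program (estate, econtext and indexInfo come from the surrounding scan setup):

/* prepare the partial-index predicate once per scan */
ExprState *predicate = ExecPrepareQual(indexInfo->ii_Predicate, estate);

/* per tuple: skip rows that do not satisfy the predicate */
if (predicate != NULL && !ExecQual(predicate, econtext))
    continue;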

View File

@ -793,7 +793,11 @@ char* RelnameGetRelidExtended(const char* relname, Oid* relOid, Oid* refSynOid,
recomputeNamespacePath();
if (ENABLE_SQL_FUSION_ENGINE(IUD_CODE_OPTIMIZE)) {
tempActiveSearchPath = u_sess->catalog_cxt.activeSearchPath;
} else {
tempActiveSearchPath = list_copy(u_sess->catalog_cxt.activeSearchPath);
}
foreach (l, tempActiveSearchPath) {
Oid namespaceId = lfirst_oid(l);
@ -812,7 +816,9 @@ char* RelnameGetRelidExtended(const char* relname, Oid* relOid, Oid* refSynOid,
if (relOid != NULL && !OidIsValid(*relOid) && module_logging_is_on(MOD_SCHEMA)) {
AddSchemaSearchPathInfo(tempActiveSearchPath, detailInfo);
}
if (!ENABLE_SQL_FUSION_ENGINE(IUD_CODE_OPTIMIZE)) {
list_free_ext(tempActiveSearchPath);
}
/* return checking details. */
return errDetail;
@ -1058,6 +1064,11 @@ bool IsPlpgsqlLanguageOid(Oid langoid)
bool isNull = true;
char* langName = NULL;
if (langoid == INTERNALlanguageId || langoid == ClanguageId
|| langoid == SQLlanguageId || langoid == JavalanguageId) {
return false;
}
Relation relation = heap_open(LanguageRelationId, NoLock);
tp = SearchSysCache1(LANGOID, ObjectIdGetDatum(langoid));
if (!HeapTupleIsValid(tp)) {
@ -4508,7 +4519,7 @@ static void InitTempTableNamespace(void)
ret = snprintf_s(
str, sizeof(str), sizeof(str) - 1, "CREATE SCHEMA %s AUTHORIZATION \"%s\"", namespaceName, bootstrap_username);
securec_check_ss(ret, "\0", "\0");
ProcessUtility((Node*)create_stmt, str, NULL, false, None_Receiver, false, NULL);
ProcessUtility((Node*)create_stmt, str, false, NULL, false, None_Receiver, false, NULL);
if (IS_PGXC_COORDINATOR)
if (PoolManagerSetCommand(POOL_CMD_TEMP, namespaceName) < 0)
@ -4561,7 +4572,7 @@ static void InitTempTableNamespace(void)
toastNamespaceName,
bootstrap_username);
securec_check_ss(ret, "\0", "\0");
ProcessUtility((Node*)create_stmt, str, NULL, false, None_Receiver, false, NULL);
ProcessUtility((Node*)create_stmt, str, false, NULL, false, None_Receiver, false, NULL);
/* Advance command counter to make namespace visible */
CommandCounterIncrement();
@ -5384,7 +5395,7 @@ dropExistTempNamespace(char *namespaceName)
ereport(NOTICE, (errmsg("Deleting invalid temp schema %s.", namespaceName)));
ret = snprintf_s(str, sizeof(str), sizeof(str) - 1, "DROP SCHEMA %s CASCADE", namespaceName);
securec_check_ss(ret, "\0", "\0");
ProcessUtility((Node*)drop_stmt, str, NULL, false, None_Receiver, false, NULL);
ProcessUtility((Node*)drop_stmt, str, false, NULL, false, None_Receiver, false, NULL);
CommandCounterIncrement();
}

View File

@ -47,7 +47,8 @@ static void InternalAggIsSupported(const char *aggName)
"json_agg",
"json_object_agg",
"st_summarystatsagg",
"st_union"
"st_union",
"wm_concat"
};
uint len = lengthof(supportList);

View File

@ -998,7 +998,7 @@ int pgxc_find_primarykey(Oid relid, int16** indexed_col, bool check_is_immediate
* 1. skip expression index (it is an expression index when the index attribute is zero)
* 2. skip the index if the index key is null
*/
if (!idxKey || !rel->rd_att->attrs[idxKey - 1]->attnotnull) {
if (!idxKey || !rel->rd_att->attrs[idxKey - 1].attnotnull) {
equalPrimaryKey = false;
break;
}

View File

@ -888,8 +888,8 @@ static void get_interval_nextdate_by_spi(int4 job_id, bool ischeck, const char*
/* The result should be timestamp type or interval type. */
if (!(SPI_tuptable && SPI_tuptable->tupdesc &&
(SPI_tuptable->tupdesc->attrs[0]->atttypid == TIMESTAMPOID ||
SPI_tuptable->tupdesc->attrs[0]->atttypid == INTERVALOID))) {
(SPI_tuptable->tupdesc->attrs[0].atttypid == TIMESTAMPOID ||
SPI_tuptable->tupdesc->attrs[0].atttypid == INTERVALOID))) {
ereport(ERROR,
(errcode(ERRCODE_SPI_ERROR), errmsg("Execute job interval for get next_date error, job_id: %d.", job_id)));
}
@ -897,12 +897,12 @@ static void get_interval_nextdate_by_spi(int4 job_id, bool ischeck, const char*
/* We don't need to get the value if we are only checking that the interval is valid. */
if (!ischeck) {
/* If INTERVALOID, start_date+INTERVALOID=next_date */
if (INTERVALOID == SPI_tuptable->tupdesc->attrs[0]->atttypid) {
if (INTERVALOID == SPI_tuptable->tupdesc->attrs[0].atttypid) {
Datum new_interval = heap_getattr(SPI_tuptable->vals[0], 1, SPI_tuptable->tupdesc, &isnull);
MemoryContext oldcontext = MemoryContextSwitchTo(current_context);
new_interval = datumCopy(
new_interval, SPI_tuptable->tupdesc->attrs[0]->attbyval, SPI_tuptable->tupdesc->attrs[0]->attlen);
new_interval, SPI_tuptable->tupdesc->attrs[0].attbyval, SPI_tuptable->tupdesc->attrs[0].attlen);
*new_next_date = DirectFunctionCall2(timestamp_pl_interval, start_date, new_interval);
(void)MemoryContextSwitchTo(oldcontext);
} else {
@ -910,7 +910,7 @@ static void get_interval_nextdate_by_spi(int4 job_id, bool ischeck, const char*
}
} else {
/* The interval should be greater than the current time if it is a timestamp. */
if (TIMESTAMPOID == SPI_tuptable->tupdesc->attrs[0]->atttypid) {
if (TIMESTAMPOID == SPI_tuptable->tupdesc->attrs[0].atttypid) {
Datum check_next_date;
check_next_date = heap_getattr(SPI_tuptable->vals[0], 1, SPI_tuptable->tupdesc, &isnull);

View File

@ -106,8 +106,8 @@ static void GetDistribColsTzFlag(DistributeBy *distributeby, TupleDesc desc, boo
colname = strVal(lfirst(cell));
isTimestampTz[i] = false;
for (int j = 0; j < desc->natts; j++) {
if (desc->attrs[j]->atttypid == TIMESTAMPTZOID &&
strcmp(colname, desc->attrs[j]->attname.data) == 0) {
if (desc->attrs[j].atttypid == TIMESTAMPTZOID &&
strcmp(colname, desc->attrs[j].attname.data) == 0) {
isTimestampTz[i] = true;
break;
}

View File

@ -351,11 +351,11 @@ static void CStoreRelDropStorage(Relation rel, RelFileNode* rnode, Oid ownerid)
TupleDesc desc = RelationGetDescr(rel);
int nattrs = desc->natts;
Form_pg_attribute* attrs = desc->attrs;
FormData_pg_attribute* attrs = desc->attrs;
/* add all the cu files to the list of stuff to delete at commit */
for (int i = 0; i < nattrs; ++i) {
InsertStorageIntoPendingList(rnode, attrs[i]->attnum, rel->rd_backend, ownerid, true);
InsertStorageIntoPendingList(rnode, attrs[i].attnum, rel->rd_backend, ownerid, true);
}
}
@ -722,7 +722,7 @@ void RelationTruncate(Relation rel, BlockNumber nblocks)
XLogBeginInsert();
XLogRegisterData((char*)&xlrec, size);
lsn = XLogInsert(RM_SMGR_ID, XLOG_SMGR_TRUNCATE | XLR_SPECIAL_REL_UPDATE, rel->rd_node.bucketNode);
lsn = XLogInsert(RM_SMGR_ID, info, rel->rd_node.bucketNode);
/*
* Flush, because otherwise the truncation of the main relation might

View File

@ -205,9 +205,9 @@ static bool create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid, Da
* toast :-(. This is essential for chunk_data because type bytea is
* toastable; hit the other two just to be sure.
*/
tupdesc->attrs[0]->attstorage = 'p';
tupdesc->attrs[1]->attstorage = 'p';
tupdesc->attrs[2]->attstorage = 'p';
tupdesc->attrs[0].attstorage = 'p';
tupdesc->attrs[1].attstorage = 'p';
tupdesc->attrs[2].attstorage = 'p';
/*
* Toast tables for regular relations go in pg_toast; those for temp
@ -525,7 +525,7 @@ static bool needs_toast_table(Relation rel)
bool maxlength_unknown = false;
bool has_toastable_attrs = false;
TupleDesc tupdesc;
Form_pg_attribute* att = NULL;
FormData_pg_attribute* att = NULL;
int32 tuple_length;
int i;
@ -548,19 +548,19 @@ static bool needs_toast_table(Relation rel)
att = tupdesc->attrs;
for (i = 0; i < tupdesc->natts; i++) {
if (att[i]->attisdropped)
if (att[i].attisdropped)
continue;
data_length = att_align_nominal(data_length, att[i]->attalign);
if (att[i]->attlen > 0) {
data_length = att_align_nominal(data_length, att[i].attalign);
if (att[i].attlen > 0) {
/* Fixed-length types are never toastable */
data_length += att[i]->attlen;
data_length += att[i].attlen;
} else {
int32 maxlen = type_maximum_size(att[i]->atttypid, att[i]->atttypmod);
int32 maxlen = type_maximum_size(att[i].atttypid, att[i].atttypmod);
if (maxlen < 0)
maxlength_unknown = true;
else
data_length += maxlen;
if (att[i]->attstorage != 'p')
if (att[i].attstorage != 'p')
has_toastable_attrs = true;
}
}
@ -741,7 +741,7 @@ static void InitTempToastNamespace(void)
toastNamespaceName,
bootstrap_username);
securec_check_ss(ret, "\0", "\0");
ProcessUtility((Node*)create_stmt, str, NULL, false, None_Receiver, false, NULL);
ProcessUtility((Node*)create_stmt, str, false, NULL, false, None_Receiver, false, NULL);
/* Advance command counter to make namespace visible */
CommandCounterIncrement();
@ -790,9 +790,9 @@ bool create_toast_by_sid(Oid *toastOid)
* toast :-(. This is essential for chunk_data because type bytea is
* toastable; hit the other two just to be sure.
*/
tupdesc->attrs[0]->attstorage = 'p';
tupdesc->attrs[1]->attstorage = 'p';
tupdesc->attrs[2]->attstorage = 'p';
tupdesc->attrs[0].attstorage = 'p';
tupdesc->attrs[1].attstorage = 'p';
tupdesc->attrs[2].attstorage = 'p';
if (OidIsValid(u_sess->catalog_cxt.myTempToastNamespace)) {
namespaceid = GetTempToastNamespace();
} else {

View File

@ -1333,7 +1333,7 @@ Datum get_client_info(PG_FUNCTION_ARGS)
Tuplestorestate* tupstore = NULL;
const int COLUMN_NUM = 2;
MemoryContext oldcontext = MemoryContextSwitchTo(rsinfo->econtext->ecxt_per_query_memory);
tupdesc = CreateTemplateTupleDesc(COLUMN_NUM, false, TAM_HEAP);
tupdesc = CreateTemplateTupleDesc(COLUMN_NUM, false);
TupleDescInitEntry(tupdesc, (AttrNumber)1, "sid", INT8OID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)2, "client_info", TEXTOID, -1, 0);

View File

@ -228,8 +228,7 @@ void appendBinaryStringInfo(StringInfo str, const char* data, int datalen)
enlargeStringInfo(str, datalen);
/* OK, append the data */
errno_t rc = memcpy_s(str->data + str->len, (size_t)(str->maxlen - str->len), data, (size_t)datalen);
securec_check(rc, "\0", "\0");
memcpy(str->data + str->len, data, (size_t)datalen);
str->len += datalen;
/*
@ -276,24 +275,24 @@ void enlargeBuffer(int needed, // needed more bytes
* an overflow or infinite loop in the following.
*/
/* should not happen */
if (needed < 0) {
if (unlikely(needed < 0)) {
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE), errmsg("invalid string enlargement request size: %d", needed)));
}
if (((Size)len > MaxAllocSize) || ((Size)needed) >= (MaxAllocSize - (Size)len)) {
ereport(ERROR,
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
errmsg("out of memory"),
errdetail("Cannot enlarge buffer containing %d bytes by %d more bytes.", len, needed)));
}
needed += len + 1; /* total space required now */
/* Because of the above test, we now have needed <= MaxAllocSize */
if (needed <= (int)*maxlen) {
if (likely(needed <= (int)*maxlen)) {
return; /* got enough space already */
}
if (unlikely(((Size)len > MaxAllocSize) || ((Size)(needed - 1)) >= MaxAllocSize)) {
ereport(ERROR,
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
errmsg("out of memory"),
errdetail("Cannot enlarge buffer containing %d bytes by %d more bytes.", len, needed)));
}
/*
* We don't want to allocate just a little more space with each append;
* for efficiency, double the buffer size each time it overflows.
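The enlargeBuffer() rewrite reorders the checks so the common case comes first and annotates the branches with likely()/unlikely(): a request that already fits returns immediately, and the allocation-limit test is only reached when the buffer really has to grow. A small standalone sketch of the same branch-hint idiom, assuming the GCC/Clang __builtin_expect builtin (the server wraps it in its own likely/unlikely macros):

#include <stdio.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* returns -1 for an invalid request, 0 if the buffer already fits, 1 if it must grow */
static int check_enlarge(int needed, int len, int maxlen)
{
    if (unlikely(needed < 0))            /* should essentially never happen */
        return -1;
    needed += len + 1;                   /* total space required now */
    if (likely(needed <= maxlen))        /* common case: nothing to do */
        return 0;
    return 1;                            /* caller doubles the buffer, as enlargeBuffer does */
}

int main(void)
{
    printf("%d\n", check_enlarge(10, 5, 64));   /* prints 0 */
    return 0;
}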

View File

@ -75,8 +75,11 @@
#endif /* USE_SSL */
#include "libpq/libpq.h"
#include "miscadmin.h"
#include "storage/latch.h"
#include "tcop/tcopprot.h"
#include "utils/memutils.h"
#include "storage/proc.h"
#include "libcomm/libcomm.h"
#include "miscadmin.h"
#include "cipher.h"
@ -223,18 +226,66 @@ void secure_close(Port* port)
#endif
}
ssize_t logic_read(Port *port, void *ptr, size_t len)
{
ssize_t n;
prepare_for_logic_conn_read();
int producer;
retry:
NetWorkTimePollStart(t_thrd.pgxc_cxt.GlobalNetInstr);
n = gs_wait_poll(&(port->gs_sock), 1, &producer, -1, false);
NetWorkTimePollEnd(t_thrd.pgxc_cxt.GlobalNetInstr);
/* no data but wake up, retry */
if (n == 0) {
logic_conn_read_check_ended();
goto retry;
}
if (n > 0) {
n = gs_recv(&(port->gs_sock), ptr, len);
LIBCOMM_DEBUG_LOG("secure_read to node %s[nid:%d,sid:%d] with msg:%c, len:%d.", port->remote_hostname,
port->gs_sock.idx, port->gs_sock.sid, ((char *)ptr)[0], (int)len);
}
logic_conn_read_check_ended();
return n;
}
ssize_t secure_read_ord(Port *port, void *ptr, size_t len)
{
ssize_t n;
/*
* Try to read from the socket without blocking. If it succeeds we're
* done, otherwise we'll wait for the socket using the latch mechanism.
*/
PGSTAT_INIT_TIME_RECORD();
PGSTAT_START_TIME_RECORD();
#ifdef WIN32
pgwin32_noblock = true;
#endif
n = recv(port->sock, ptr, len, 0);
#ifdef WIN32
pgwin32_noblock = false;
#endif
END_NET_RECV_INFO(n);
return n;
}
/*
* Read data from a secure connection.
*/
ssize_t secure_read(Port* port, void* ptr, size_t len)
{
ssize_t n;
int waitfor = 0;
readloop:
#ifdef USE_SSL
if (port->ssl != NULL) {
int err;
rloop:
errno = 0;
ERR_clear_error();
n = SSL_read(port->ssl, ptr, len);
@ -244,18 +295,15 @@ ssize_t secure_read(Port* port, void* ptr, size_t len)
port->count += n;
break;
case SSL_ERROR_WANT_READ:
case SSL_ERROR_WANT_WRITE:
if (port->noblock) {
waitfor = WL_SOCKET_READABLE;
errno = EWOULDBLOCK;
n = -1;
break;
case SSL_ERROR_WANT_WRITE:
waitfor = WL_SOCKET_WRITEABLE;
errno = EWOULDBLOCK;
n = -1;
break;
}
#ifdef WIN32
pgwin32_waitforsinglesocket(SSL_get_fd(port->ssl),
(err == SSL_ERROR_WANT_READ) ? (FD_READ | FD_CLOSE) : (FD_WRITE | FD_CLOSE),
INFINITE);
#endif
goto rloop;
case SSL_ERROR_SYSCALL:
/* leave it to caller to ereport the value of errno */
if (n != -1) {
@ -280,47 +328,112 @@ ssize_t secure_read(Port* port, void* ptr, size_t len)
n = -1;
break;
}
ereport(COMMERROR, (errmsg("SSL read")));
} else
#endif
{
if (port->is_logic_conn) {
prepare_for_logic_conn_read();
int producer;
retry:
NetWorkTimePollStart(t_thrd.pgxc_cxt.GlobalNetInstr);
n = gs_wait_poll(&(port->gs_sock), 1, &producer, -1, false);
NetWorkTimePollEnd(t_thrd.pgxc_cxt.GlobalNetInstr);
/* no data but wake up, retry */
if (n == 0) {
logic_conn_read_check_ended();
goto retry;
}
if (n > 0) {
n = gs_recv(&(port->gs_sock), ptr, len);
LIBCOMM_DEBUG_LOG("secure_read to node %s[nid:%d,sid:%d] with msg:%c, len:%d.",
port->remote_hostname,
port->gs_sock.idx,
port->gs_sock.sid,
((char*)ptr)[0],
(int)len);
}
logic_conn_read_check_ended();
n = logic_read(port, ptr, len);
} else {
prepare_for_client_read();
n = secure_read_ord(port, ptr, len);
waitfor = WL_SOCKET_READABLE;
}
}
/* In blocking mode, wait until the socket is ready */
if (!port->is_logic_conn && n < 0 && !port->noblock && (errno == EWOULDBLOCK || errno == EAGAIN) && t_thrd.proc) {
int w;
if (!waitfor)
{
ereport(COMMERROR, (errmsg("wait event should not be zero")));
/* for log printing, dn receive message */
IPC_PERFORMANCE_LOG_COLLECT(port->msgLog, ptr, n, port->remote_hostname, &port->gs_sock, SECURE_READ);
return n;
}
w = WaitLatchOrSocket(&t_thrd.proc->procLatch, WL_LATCH_SET | WL_POSTMASTER_DEATH | waitfor, port->sock, 0);
/*
* If the postmaster has died, it's not safe to continue running,
* because it is the postmaster's job to kill us if some other backend
* exits uncleanly. Moreover, we won't run very well in this state;
* helper processes like walwriter and the bgwriter will exit, so
* performance may be poor. Finally, if we don't exit, pg_ctl will
* be unable to restart the postmaster without manual intervention,
* so no new connections can be accepted. Exiting clears the deck
* for a postmaster restart.
*
* (Note that we only make this check when we would otherwise sleep
* on our latch. We might still continue running for a while if the
* postmaster is killed in mid-query, or even through multiple queries
* if we never have to wait for read. We don't want to burn too many
* cycles checking for this very rare condition, and this should cause
* us to exit quickly in most cases.)
*/
if (w & WL_POSTMASTER_DEATH)
ereport(FATAL,
(errcode(ERRCODE_ADMIN_SHUTDOWN), errmsg("terminating connection due to unexpected postmaster exit")));
if (w & WL_LATCH_SET) {
/* Handle interrupt */
ResetLatch(&t_thrd.proc->procLatch);
ProcessClientReadInterrupt(true);
/*
* We'll retry the read. Most likely it will return immediately
* because there's still no data available, and we'll wait
* for the socket to become ready again.
*/
}
goto readloop;
}
#ifdef USE_SSL
if (!port->is_logic_conn && n < 0 && !port->noblock && (errno == EWOULDBLOCK || errno == EAGAIN) && !t_thrd.proc) {
goto readloop;
}
#endif
/*
* Process interrupts that happened while (or before) receiving. Note that
* we signal that we're not blocking, which will prevent some types of
* interrupts from being processed.
*/
ProcessClientReadInterrupt(false);
/* for log printing, dn receive message */
IPC_PERFORMANCE_LOG_COLLECT(port->msgLog, ptr, n, port->remote_hostname, &port->gs_sock, SECURE_READ);
return n;
}
ssize_t logic_write(Port *port, void *ptr, size_t len)
{
ssize_t n;
n = gs_send(&(port->gs_sock), (char *)ptr, len, -1, TRUE);
LIBCOMM_DEBUG_LOG("secure_write to node[nid:%d,sid:%d] with msg:%c, len:%d.", port->gs_sock.idx, port->gs_sock.sid,
((char *)ptr)[0], (int)len);
/* for log printing, send message */
IPC_PERFORMANCE_LOG_COLLECT(port->msgLog, ptr, n, port->remote_hostname, &port->gs_sock, SECURE_WRITE);
return n;
}
ssize_t secure_write_ord(Port *port, void *ptr, size_t len)
{
PGSTAT_INIT_TIME_RECORD();
PGSTAT_START_TIME_RECORD();
/* CommProxy Interface Support */
n = comm_recv(port->sock, ptr, len, 0);
END_NET_RECV_INFO(n);
client_read_ended();
}
}
ssize_t n;
/* for log printing, dn receive message */
IPC_PERFORMANCE_LOG_COLLECT(port->msgLog, ptr, n, port->remote_hostname, &port->gs_sock, SECURE_READ);
#ifdef WIN32
pgwin32_noblock = true;
#endif
/* CommProxy Interface Support */
n = comm_send(port->sock, ptr, len, 0);
#ifdef WIN32
pgwin32_noblock = false;
#endif
PGSTAT_END_TIME_RECORD(NET_SEND_TIME);
END_NET_SEND_INFO(n);
/* for log printing, send message */
IPC_PERFORMANCE_LOG_COLLECT(port->msgLog, ptr, n, port->remote_hostname, NULL, SECURE_WRITE);
return n;
}
@ -330,11 +443,12 @@ ssize_t secure_read(Port* port, void* ptr, size_t len)
ssize_t secure_write(Port* port, void* ptr, size_t len)
{
ssize_t n;
int waitfor = 0;
StreamTimeSendStart(t_thrd.pgxc_cxt.GlobalNetInstr);
retry:
#ifdef USE_SSL
if (port->ssl != NULL) {
int err;
wloop:
errno = 0;
ERR_clear_error();
n = SSL_write(port->ssl, ptr, len);
@ -345,18 +459,15 @@ ssize_t secure_write(Port* port, void* ptr, size_t len)
port->count += n;
break;
case SSL_ERROR_WANT_READ:
case SSL_ERROR_WANT_WRITE:
if (port->noblock) {
waitfor = WL_SOCKET_READABLE;
errno = EWOULDBLOCK;
n = -1;
break;
case SSL_ERROR_WANT_WRITE:
waitfor = WL_SOCKET_WRITEABLE;
errno = EWOULDBLOCK;
n = -1;
break;
}
#ifdef WIN32
pgwin32_waitforsinglesocket(SSL_get_fd(port->ssl),
(err == SSL_ERROR_WANT_READ) ? (FD_READ | FD_CLOSE) : (FD_WRITE | FD_CLOSE),
INFINITE);
#endif
goto wloop;
case SSL_ERROR_SYSCALL:
/* leave it to caller to ereport the value of errno */
if (n != -1) {
@ -377,6 +488,7 @@ ssize_t secure_write(Port* port, void* ptr, size_t len)
n = -1;
break;
}
ereport(COMMERROR, (errmsg("SSL write")));
} else
#endif
/*
@ -401,28 +513,52 @@ ssize_t secure_write(Port* port, void* ptr, size_t len)
* as only one connection needed to send.
*/
else if (port->is_logic_conn) {
n = gs_send(&(port->gs_sock), (char*)ptr, len, -1, TRUE);
LIBCOMM_DEBUG_LOG("secure_write to node[nid:%d,sid:%d] with msg:%c, len:%d.",
port->gs_sock.idx,
port->gs_sock.sid,
((char*)ptr)[0],
(int)len);
/* for log printing, send message */
IPC_PERFORMANCE_LOG_COLLECT(port->msgLog, ptr, n, port->remote_hostname, &port->gs_sock, SECURE_WRITE);
n = logic_write(port, ptr, len);
} else {
PGSTAT_INIT_TIME_RECORD();
PGSTAT_START_TIME_RECORD();
/* CommProxy Interface Support */
n = comm_send(port->sock, ptr, len, 0);
PGSTAT_END_TIME_RECORD(NET_SEND_TIME);
END_NET_SEND_INFO(n);
/* for log printing, send message */
IPC_PERFORMANCE_LOG_COLLECT(port->msgLog, ptr, n, port->remote_hostname, NULL, SECURE_WRITE);
n = secure_write_ord(port, ptr, len);
waitfor = WL_SOCKET_WRITEABLE;
}
if (!StreamThreadAmI() && !port->is_logic_conn && n < 0 && !port->noblock &&
(errno == EWOULDBLOCK || errno == EAGAIN) && t_thrd.proc) {
int w;
if (!waitfor) {
StreamTimeSendEnd(t_thrd.pgxc_cxt.GlobalNetInstr);
ereport(COMMERROR, (errmsg("wait event should not be zero")));
return n;
}
w = WaitLatchOrSocket(&t_thrd.proc->procLatch, WL_LATCH_SET | WL_POSTMASTER_DEATH | waitfor, port->sock, 0);
/* See comments in secure_read. */
if (w & WL_POSTMASTER_DEATH)
ereport(FATAL,
(errcode(ERRCODE_ADMIN_SHUTDOWN), errmsg("terminating connection due to unexpected postmaster exit")));
if (w & WL_LATCH_SET) {
/* Handle interrupt. */
ResetLatch(&t_thrd.proc->procLatch);
ProcessClientWriteInterrupt(true);
/*
* We'll retry the write. Most likely it will return immediately
* because there's still no buffer space available, and we'll wait
* for the socket to become ready again.
*/
}
goto retry;
}
#ifdef USE_SSL
if (!StreamThreadAmI() && !port->is_logic_conn && n <0 && !port->noblock &&
(errno == EWOULDBLOCK || errno == EAGAIN) && !t_thrd.proc) {
goto retry;
}
#endif
/*
* Process interrupts that happened while (or before) sending. Note that
* we signal that we're not blocking, which will prevent some types of
* interrupts from being processed.
*/
ProcessClientWriteInterrupt(false);
StreamTimeSendEnd(t_thrd.pgxc_cxt.GlobalNetInstr);
return n;
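The rewritten secure_read()/secure_write() emulate blocking on a nonblocking socket with the latch machinery: the raw recv/send is attempted first, and if it would block (EWOULDBLOCK/EAGAIN) the backend waits in WaitLatchOrSocket() for either the socket event or its own latch, services interrupts on a latch wakeup, and retries. The control flow of the read side, reduced to a sketch of the code above (not compilable on its own):

readloop:
    n = secure_read_ord(port, ptr, len);        /* nonblocking recv */
    if (n < 0 && !port->noblock && (errno == EWOULDBLOCK || errno == EAGAIN)) {
        int w = WaitLatchOrSocket(&t_thrd.proc->procLatch,
                                  WL_LATCH_SET | WL_POSTMASTER_DEATH | WL_SOCKET_READABLE,
                                  port->sock, 0);
        if (w & WL_POSTMASTER_DEATH)
            ereport(FATAL, (errmsg("terminating connection due to unexpected postmaster exit")));
        if (w & WL_LATCH_SET) {
            ResetLatch(&t_thrd.proc->procLatch);
            ProcessClientReadInterrupt(true);    /* handle pending interrupts, then retry */
        }
        goto readloop;
    }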

View File

@ -452,6 +452,21 @@ void pq_init(void)
pq_disk_reset_tempfile_contextinfo();
on_proc_exit(pq_close, 0);
/*
* In backends we operate the underlying socket in
* nonblocking mode and use latches to implement blocking semantics if
* needed. That allows us to provide safely interruptible reads.
*
* Use COMMERROR on failure, because ERROR would try to send the error to
* the client, which might require changing the mode again, leading to
* infinite recursion.
*/
#ifndef WIN32
if (u_sess->proc_cxt.MyProcPort->sock != PGINVALID_SOCKET && !pg_set_noblock(u_sess->proc_cxt.MyProcPort->sock)){
ereport(COMMERROR, (errmsg("could not set socket to nonblocking mode: %m")));
}
#endif
}
/* --------------------------------
@ -1125,29 +1140,6 @@ void TouchSocketFile(void)
*/
void pq_set_nonblocking(bool nonblocking)
{
if (u_sess->proc_cxt.MyProcPort->noblock == nonblocking) {
return;
}
#ifdef WIN32
pgwin32_noblock = nonblocking ? 1 : 0;
#else
/*
* Use COMMERROR on failure, because ERROR would try to send the error to
* the client, which might require changing the mode again, leading to
* infinite recursion.
*/
if (nonblocking) {
if (!pg_set_noblock(u_sess->proc_cxt.MyProcPort->sock)) {
ereport(COMMERROR, (errmsg("fd:[%d] could not set socket to non-blocking mode: %m", u_sess->proc_cxt.MyProcPort->sock)));
}
} else {
if (!pg_set_block(u_sess->proc_cxt.MyProcPort->sock)) {
ereport(COMMERROR, (errmsg("fd:[%d] could not set socket to blocking mode: %m", u_sess->proc_cxt.MyProcPort->sock)));
}
}
#endif
u_sess->proc_cxt.MyProcPort->noblock = nonblocking;
}

View File

@ -137,10 +137,12 @@ void pq_sendbytes(StringInfo buf, const char* data, int datalen)
void pq_sendcountedtext(StringInfo buf, const char* str, int slen, bool countincludesself)
{
int extra = countincludesself ? 4 : 0;
char* p = NULL;
char* p = (char*)str;
if (unlikely(u_sess->mb_cxt.DatabaseEncoding->encoding != u_sess->mb_cxt.ClientEncoding->encoding)) {
p = pg_server_to_client(str, slen);
if (p != str) { /* actual conversion has been done? */
}
if (unlikely(p != str)) { /* actual conversion has been done? */
slen = strlen(p);
pq_sendint32(buf, slen + extra);
appendBinaryStringInfo(buf, p, slen);
@ -152,6 +154,32 @@ void pq_sendcountedtext(StringInfo buf, const char* str, int slen, bool countinc
}
}
void pq_sendcountedtext_printtup(StringInfo buf, const char* str, int slen)
{
char* p = (char*)str;
if (unlikely(u_sess->mb_cxt.DatabaseEncoding->encoding != u_sess->mb_cxt.ClientEncoding->encoding)) {
p = pg_server_to_client(str, slen);
}
if (unlikely(p != str)) { /* actual conversion has been done? */
slen = strlen(p);
enlargeStringInfo(buf, slen + sizeof(uint32));
pq_writeint32(buf, (uint32)slen);
memcpy(buf->data + buf->len, p, (size_t)slen);
buf->len += slen;
buf->data[buf->len] = '\0';
pfree(p);
p = NULL;
} else {
enlargeStringInfo(buf, slen + sizeof(uint32));
pq_writeint32(buf, (uint32)slen);
memcpy(buf->data + buf->len, str, (size_t)slen);
buf->len += slen;
buf->data[buf->len] = '\0';
}
}
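pq_sendcountedtext() and the new pq_sendcountedtext_printtup() share the same fast path: pg_server_to_client() is only called when the database and client encodings differ, and whether a converted copy was actually allocated is detected by comparing the returned pointer with the original string. A fragment of that logic from the code above (the u_sess encoding fields are abbreviated; not standalone code):

char *p = (char *) str;
if (unlikely(DatabaseEncoding->encoding != ClientEncoding->encoding))
    p = pg_server_to_client(str, slen);

if (unlikely(p != str)) {
    slen = strlen(p);        /* conversion allocated a new buffer; use and free it */
    /* ... write length and converted bytes ... */
    pfree(p);
} else {
    /* common case: write length and the original bytes directly */
}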
/* --------------------------------
* pq_sendtext - append a text string (with conversion)
*

View File

@ -20,6 +20,7 @@
*/
#include "postgres.h"
#include "knl/knl_variable.h"
#include "port/pg_bitutils.h"
#include "access/hash.h"
@ -49,35 +50,18 @@
#define HAS_MULTIPLE_ONES(x) ((bitmapword)RIGHTMOST_ONE(x) != (x))
/*
* Lookup tables to avoid need for bit-by-bit groveling
*
* rightmost_one_pos[x] gives the bit number (0-7) of the rightmost one bit
* in a nonzero byte value x. The entry for x=0 is never used.
*
* number_of_ones[x] gives the number of one-bits (0-8) in a byte value x.
*
* We could make these tables larger and reduce the number of iterations
* in the functions that use them, but bytewise shifts and masks are
* especially fast on many machines, so working a byte at a time seems best.
*/
static const uint8 rightmost_one_pos[256] = {0, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3,
0, 1, 0, 2, 0, 1, 0, 5, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1,
0, 6, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 5, 0, 1, 0, 2,
0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 7, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1,
0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 5, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 6, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1,
0, 3, 0, 1, 0, 2, 0, 1, 0, 5, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2,
0, 1, 0};
static const uint8 number_of_ones[256] = {0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3,
3, 4, 3, 4, 4, 5, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4,
4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4,
3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4,
4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6,
4, 5, 5, 6, 5, 6, 6, 7, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7,
7, 8};
/* Select appropriate bit-twiddling functions for bitmap word size */
#if BITS_PER_BITMAPWORD == 32
#define bmw_leftmost_one_pos(w) pg_leftmost_one_pos32(w)
#define bmw_rightmost_one_pos(w) pg_rightmost_one_pos32(w)
#define bmw_popcount(w) pg_popcount32(w)
#elif BITS_PER_BITMAPWORD == 64
#define bmw_leftmost_one_pos(w) pg_leftmost_one_pos64(w)
#define bmw_rightmost_one_pos(w) pg_rightmost_one_pos64(w)
#define bmw_popcount(w) pg_popcount64(w)
#else
#error "invalid BITS_PER_BITMAPWORD"
#endif
/*
* bms_copy - make a palloc'd copy of a bitmapset
@ -478,11 +462,7 @@ int bms_singleton_member(const Bitmapset* a)
(errmodule(MOD_CACHE), errcode(ERRCODE_DATA_EXCEPTION), errmsg("bitmapset has multiple members")));
}
result = wordnum * BITS_PER_BITMAPWORD;
while ((w & 255) == 0) {
w >>= 8;
result += 8;
}
result += rightmost_one_pos[w & 255];
result += bmw_rightmost_one_pos(w);
}
}
if (result < 0) {
@ -507,11 +487,9 @@ int bms_num_members(const Bitmapset* a)
for (wordnum = 0; wordnum < nwords; wordnum++) {
bitmapword w = a->words[wordnum];
/* we assume here that bitmapword is an unsigned type */
while (w != 0) {
result += number_of_ones[w & 255];
w >>= 8;
}
/* No need to count the bits in a zero word */
if (w != 0)
result += bmw_popcount(w);
}
return result;
}
@ -795,11 +773,7 @@ int bms_first_member(Bitmapset* a)
a->words[wordnum] &= ~w;
result = wordnum * BITS_PER_BITMAPWORD;
while ((w & 255) == 0) {
w >>= 8;
result += 8;
}
result += rightmost_one_pos[w & 255];
result += bmw_rightmost_one_pos(w);
return result;
}
}
@ -880,11 +854,7 @@ int bms_next_member(const Bitmapset* a, int prevbit)
int result;
result = wordnum * BITS_PER_BITMAPWORD;
while ((w & 255) == 0) {
w >>= 8;
result += 8;
}
result += rightmost_one_pos[w & 255];
result += bmw_rightmost_one_pos(w);
return result;
}
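The bitmapset changes retire the byte-at-a-time lookup tables in favour of the pg_bitutils helpers, selected by bitmapword width: bms_num_members() counts a whole word with bmw_popcount() (pg_popcount32/pg_popcount64), and the first/next-member loops find the low bit with bmw_rightmost_one_pos(). A small standalone illustration of the word-at-a-time idea, using the GCC/Clang builtins as stand-ins for pg_popcount64/pg_rightmost_one_pos64:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t w = 0xA4;   /* bits 2, 5 and 7 set */

    /* popcount: number of set bits in the whole word at once */
    printf("members in word: %d\n", __builtin_popcountll(w));   /* 3 */

    /* rightmost one position: index of the lowest set bit */
    printf("first member at bit: %d\n", __builtin_ctzll(w));    /* 2 */
    return 0;
}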

View File

@ -270,6 +270,21 @@ static BaseResult* _copyResult(const BaseResult* from)
return newnode;
}
/*
* _copyProjectSet
*/
static ProjectSet *_copyProjectSet(const ProjectSet *from)
{
ProjectSet *newnode = makeNode(ProjectSet);
/*
* copy node superclass fields
*/
CopyPlanFields((const Plan *)from, (Plan *)newnode);
return newnode;
}
/*
* _copyModifyTable
*/
@ -1140,6 +1155,7 @@ static void CopyJoinFields(const Join* from, Join* newnode)
CopyPlanFields((const Plan*)from, (Plan*)newnode);
COPY_SCALAR_FIELD(jointype);
COPY_SCALAR_FIELD(inner_unique);
COPY_NODE_FIELD(joinqual);
COPY_SCALAR_FIELD(optimizable);
COPY_NODE_FIELD(nulleqqual);
@ -1219,6 +1235,7 @@ static MergeJoin* _copyMergeJoin(const MergeJoin* from)
/*
* copy remainder of node
*/
COPY_SCALAR_FIELD(skip_mark_restore);
COPY_NODE_FIELD(mergeclauses);
numCols = list_length(from->mergeclauses);
if (numCols > 0) {
@ -1326,6 +1343,37 @@ static Sort* _copySort(const Sort* from)
return newnode;
}
/*
* CopySortGroupFields
*
* This function copies the fields of the SortGroup node.
*/
static void CopySortGroupFields(const SortGroup *from, SortGroup *newnode)
{
CopyPlanFields((const Plan *)from, (Plan *)newnode);
COPY_SCALAR_FIELD(numCols);
COPY_POINTER_FIELD(sortColIdx, from->numCols * sizeof(AttrNumber));
COPY_POINTER_FIELD(sortOperators, from->numCols * sizeof(Oid));
COPY_POINTER_FIELD(collations, from->numCols * sizeof(Oid));
COPY_POINTER_FIELD(nullsFirst, from->numCols * sizeof(bool));
}
/*
* _copySortGroup
*/
static SortGroup *_copySortGroup(const SortGroup *from)
{
SortGroup *newnode = makeNode(SortGroup);
/*
* copy node superclass fields
*/
CopySortGroupFields(from, newnode);
return newnode;
}
/*
* _copyGroup
*/
@ -4536,6 +4584,7 @@ static Query* _copyQuery(const Query* from)
COPY_SCALAR_FIELD(resultRelation);
COPY_SCALAR_FIELD(hasAggs);
COPY_SCALAR_FIELD(hasWindowFuncs);
COPY_SCALAR_FIELD(hasTargetSRFs);
COPY_SCALAR_FIELD(hasSubLinks);
COPY_SCALAR_FIELD(hasDistinctOn);
COPY_SCALAR_FIELD(hasRecursive);
@ -6770,6 +6819,9 @@ void* copyObject(const void* from)
case T_BaseResult:
retval = _copyResult((BaseResult*)from);
break;
case T_ProjectSet:
retval = _copyProjectSet((ProjectSet*)from);
break;
case T_ModifyTable:
retval = _copyModifyTable((ModifyTable*)from);
break;
@ -6890,6 +6942,9 @@ void* copyObject(const void* from)
case T_Sort:
retval = _copySort((Sort*)from);
break;
case T_SortGroup:
retval = _copySortGroup((SortGroup*)from);
break;
case T_Group:
retval = _copyGroup((Group*)from);
break;

View File

@ -838,6 +838,7 @@ static bool _equalQuery(const Query* a, const Query* b)
COMPARE_SCALAR_FIELD(resultRelation);
COMPARE_SCALAR_FIELD(hasAggs);
COMPARE_SCALAR_FIELD(hasWindowFuncs);
COMPARE_SCALAR_FIELD(hasTargetSRFs);
COMPARE_SCALAR_FIELD(hasSubLinks);
COMPARE_SCALAR_FIELD(hasDistinctOn);
COMPARE_SCALAR_FIELD(hasRecursive);

View File

@ -65,6 +65,7 @@ static const TagStr g_tagStrArr[] = {{T_Invalid, "Invalid"},
{T_HashJoin, "HashJoin"},
{T_Material, "Material"},
{T_Sort, "Sort"},
{T_SortGroup, "SortGroup"},
{T_Group, "Group"},
{T_Agg, "Agg"},
{T_WindowAgg, "WindowAgg"},
@ -132,6 +133,7 @@ static const TagStr g_tagStrArr[] = {{T_Invalid, "Invalid"},
{T_HashJoinState, "HashJoinState"},
{T_MaterialState, "MaterialState"},
{T_SortState, "SortState"},
{T_SortGroupState, "SortGroupState"},
{T_GroupState, "GroupState"},
{T_AggState, "AggState"},
{T_WindowAggState, "WindowAggState"},
@ -260,6 +262,7 @@ static const TagStr g_tagStrArr[] = {{T_Invalid, "Invalid"},
{T_EquivalenceClass, "EquivalenceClass"},
{T_EquivalenceMember, "EquivalenceMember"},
{T_PathKey, "PathKey"},
{T_PathTarget, "PathTarget"},
{T_RestrictInfo, "RestrictInfo"},
{T_PlaceHolderVar, "PlaceHolderVar"},
{T_SpecialJoinInfo, "SpecialJoinInfo"},
@ -272,6 +275,7 @@ static const TagStr g_tagStrArr[] = {{T_Invalid, "Invalid"},
{T_MergeAction, "MergeAction"},
{T_MemoryContext, "MemoryContext"},
{T_AllocSetContext, "AllocSetContext"},
{T_OptAllocSetContext, "OptAllocSetContext"},
{T_StackAllocSetContext, "StackAllocSetContext"},
{T_SharedAllocSetContext, "SharedAllocSetContext"},
{T_MemalignAllocSetContext, "MemalignAllocSetContext"},

View File

@ -725,6 +725,7 @@ static void _outJoinPlanInfo(StringInfo str, Join* node)
_outPlanInfo(str, (Plan*)node);
WRITE_ENUM_FIELD(jointype, JoinType);
WRITE_BOOL_FIELD(inner_unique);
WRITE_NODE_FIELD(joinqual);
WRITE_BOOL_FIELD(optimizable);
WRITE_NODE_FIELD(nulleqqual);
@ -747,6 +748,13 @@ static void _outResult(StringInfo str, BaseResult* node)
WRITE_NODE_FIELD(resconstantqual);
}
static void _outProjectSet(StringInfo str, const ProjectSet *node)
{
WRITE_NODE_TYPE("PROJECTSET");
_outPlanInfo(str, (Plan *)node);
}
static void _outModifyTable(StringInfo str, ModifyTable* node)
{
WRITE_NODE_TYPE("MODIFYTABLE");
@ -1549,6 +1557,7 @@ static void _outCommonJoinPart(StringInfo str, T* node)
_outJoinPlanInfo(str, (Join*)node);
WRITE_BOOL_FIELD(skip_mark_restore);
WRITE_NODE_FIELD(mergeclauses);
numCols = list_length(node->mergeclauses);
@ -2875,6 +2884,7 @@ static void _outMergeAction(StringInfo str, const MergeAction* node)
*
* Note we do NOT print the parent, else we'd be in infinite recursion.
* We can print the parent's relids for identification purposes, though.
* We print the pathtarget only if it's not the default one for the rel.
* We also do not print the whole of param_info, since it's printed by
* _outRelOptInfo; it's sufficient and less cluttering to print just the
* required outer relids.
@ -2884,6 +2894,12 @@ static void _outPathInfo(StringInfo str, Path* node)
WRITE_ENUM_FIELD(pathtype, NodeTag);
appendStringInfo(str, " :parent_relids ");
_outBitmapset(str, node->parent->relids);
if (node->pathtarget != node->parent->reltarget) {
WRITE_NODE_FIELD(pathtarget->exprs);
WRITE_FLOAT_FIELD(pathtarget->cost.startup, "%.2f");
WRITE_FLOAT_FIELD(pathtarget->cost.per_tuple, "%.2f");
WRITE_INT_FIELD(pathtarget->width);
}
appendStringInfo(str, " :required_outer ");
if (node->param_info) {
_outBitmapset(str, node->param_info->ppi_req_outer);
@ -2911,6 +2927,7 @@ static void _outJoinPathInfo(StringInfo str, JoinPath* node)
_outPathInfo(str, (Path*)node);
WRITE_ENUM_FIELD(jointype, JoinType);
WRITE_BOOL_FIELD(inner_unique);
WRITE_NODE_FIELD(outerjoinpath);
WRITE_NODE_FIELD(innerjoinpath);
WRITE_NODE_FIELD(joinrestrictinfo);
@ -3072,6 +3089,25 @@ static void _outMaterialPath(StringInfo str, MaterialPath* node)
WRITE_BOOL_FIELD(materialize_all);
}
static void _outProjectionPath(StringInfo str, const ProjectionPath *node)
{
WRITE_NODE_TYPE("PROJECTIONPATH");
_outPathInfo(str, (Path *)node);
WRITE_NODE_FIELD(subpath);
WRITE_BOOL_FIELD(dummypp);
}
static void _outProjectSetPath(StringInfo str, const ProjectSetPath *node)
{
WRITE_NODE_TYPE("PROJECTSETPATH");
_outPathInfo(str, (Path *)node);
WRITE_NODE_FIELD(subpath);
}
static void _outUniquePath(StringInfo str, UniquePath* node)
{
WRITE_NODE_TYPE("UNIQUEPATH");
@ -3102,6 +3138,7 @@ static void _outMergePath(StringInfo str, MergePath* node)
WRITE_NODE_FIELD(path_mergeclauses);
WRITE_NODE_FIELD(outersortkeys);
WRITE_NODE_FIELD(innersortkeys);
WRITE_BOOL_FIELD(skip_mark_restore);
WRITE_BOOL_FIELD(materialize_inner);
}
@ -3217,8 +3254,10 @@ static void _outRelOptInfo(StringInfo str, RelOptInfo* node)
WRITE_BOOL_FIELD(isPartitionedTable);
WRITE_ENUM_FIELD(partflag, PartitionFlag);
WRITE_FLOAT_FIELD(rows, "%.0f");
WRITE_INT_FIELD(width);
WRITE_NODE_FIELD(reltargetlist);
WRITE_NODE_FIELD(reltarget->exprs);
WRITE_FLOAT_FIELD(reltarget->cost.startup, "%.2f");
WRITE_FLOAT_FIELD(reltarget->cost.per_tuple, "%.2f");
WRITE_INT_FIELD(reltarget->width);
WRITE_NODE_FIELD(pathlist);
WRITE_NODE_FIELD(ppilist);
WRITE_NODE_FIELD(cheapest_startup_path);
@ -3242,6 +3281,7 @@ static void _outRelOptInfo(StringInfo str, RelOptInfo* node)
WRITE_NODE_FIELD(subplan);
WRITE_NODE_FIELD(subroot);
/* we don't try to print fdwroutine or fdw_private */
/* can't print unique_for_rels/non_unique_for_rels; BMSes aren't Nodes */
WRITE_NODE_FIELD(baserestrictinfo);
WRITE_UINT_FIELD(baserestrict_min_security);
WRITE_NODE_FIELD(joininfo);
@ -3323,6 +3363,23 @@ static void _outPathKey(StringInfo str, PathKey* node)
WRITE_BOOL_FIELD(pk_nulls_first);
}
static void _outPathTarget(StringInfo str, const PathTarget *node)
{
WRITE_NODE_TYPE("PATHTARGET");
WRITE_NODE_FIELD(exprs);
if (node->sortgrouprefs) {
int i;
appendStringInfoString(str, " :sortgrouprefs");
for (i = 0; i < list_length(node->exprs); i++)
appendStringInfo(str, " %u", node->sortgrouprefs[i]);
}
WRITE_FLOAT_FIELD(cost.startup, "%.2f");
WRITE_FLOAT_FIELD(cost.per_tuple, "%.2f");
WRITE_INT_FIELD(width);
}
static void _outParamPathInfo(StringInfo str, const ParamPathInfo* node)
{
WRITE_NODE_TYPE("PARAMPATHINFO");
@ -4306,6 +4363,7 @@ static void _outQuery(StringInfo str, Query* node)
WRITE_INT_FIELD(resultRelation);
WRITE_BOOL_FIELD(hasAggs);
WRITE_BOOL_FIELD(hasWindowFuncs);
WRITE_BOOL_FIELD(hasTargetSRFs);
WRITE_BOOL_FIELD(hasSubLinks);
WRITE_BOOL_FIELD(hasDistinctOn);
WRITE_BOOL_FIELD(hasRecursive);
@ -5644,6 +5702,9 @@ static void _outNode(StringInfo str, const void* obj)
case T_BaseResult:
_outResult(str, (BaseResult*)obj);
break;
case T_ProjectSet:
_outProjectSet(str, (ProjectSet*)obj);
break;
case T_ModifyTable:
_outModifyTable(str, (ModifyTable*)obj);
break;
@ -5997,6 +6058,12 @@ static void _outNode(StringInfo str, const void* obj)
case T_ResultPath:
_outResultPath(str, (ResultPath*)obj);
break;
case T_ProjectionPath:
_outProjectionPath(str, (ProjectionPath*) obj);
break;
case T_ProjectSetPath:
_outProjectSetPath(str, (ProjectSetPath*) obj);
break;
case T_MaterialPath:
_outMaterialPath(str, (MaterialPath*)obj);
break;
@ -6032,6 +6099,9 @@ static void _outNode(StringInfo str, const void* obj)
break;
case T_PathKey:
_outPathKey(str, (PathKey*)obj);
break;
case T_PathTarget:
_outPathTarget(str, (PathTarget*)obj);
break;
case T_ParamPathInfo:
_outParamPathInfo(str, (ParamPathInfo*)obj);

View File

@ -827,6 +827,7 @@ THR_LOCAL bool skip_read_extern_fields = false;
/* Read Join */ \
_readJoin(&local_node->join); \
\
READ_BOOL_FIELD(skip_mark_restore); \
READ_NODE_FIELD(mergeclauses); \
LIST_LENGTH(mergeclauses); \
READ_OID_ARRAY_LEN(mergeFamilies); \
@ -1410,6 +1411,7 @@ static Query* _readQuery(void)
READ_INT_FIELD(resultRelation);
READ_BOOL_FIELD(hasAggs);
READ_BOOL_FIELD(hasWindowFuncs);
READ_BOOL_FIELD(hasTargetSRFs);
READ_BOOL_FIELD(hasSubLinks);
READ_BOOL_FIELD(hasDistinctOn);
READ_BOOL_FIELD(hasRecursive);
@ -3956,6 +3958,20 @@ static ExecNodes* _readExecNodes(void)
READ_DONE();
}
/*
* _readProjectSet
*/
static ProjectSet *_readProjectSet(ProjectSet* local_node)
{
READ_LOCALS_NULL(ProjectSet);
READ_TEMP_LOCALS();
_readPlan(&local_node->plan);
length = 0;
READ_DONE();
}
static ModifyTable* _readModifyTable(ModifyTable* local_node)
{
READ_LOCALS_NULL(ModifyTable);
@ -4157,6 +4173,7 @@ static Join* _readJoin(Join* local_node)
_readPlan(&local_node->plan);
READ_ENUM_FIELD(jointype, JoinType);
READ_BOOL_FIELD(inner_unique);
READ_NODE_FIELD(joinqual);
READ_BOOL_FIELD(optimizable);
READ_NODE_FIELD(nulleqqual);
@ -5950,6 +5967,8 @@ Node* parseNodeString(void)
return_value = _readCteScan(NULL);
} else if (MATCH("WINDOWAGG", 9)) {
return_value = _readWindowAgg(NULL);
} else if (MATCH("PROJECTSET", 10)) {
return_value = _readProjectSet(NULL);
} else if (MATCH("MODIFYTABLE", 11)) {
return_value = _readModifyTable(NULL);
} else if (MATCH("MERGEWHENCLAUSE", 15)) {

View File

@ -81,6 +81,17 @@ else
sed -i 's/YY_NULL nullptr/YY_NULL 0/g' hint_gram.cpp
endif
# where to find gen_keywordlist.pl and subsidiary files
TOOLSDIR = $(top_srcdir)/src/tools
GEN_KEYWORDLIST = $(PERL) -I $(TOOLSDIR) $(TOOLSDIR)/gen_keywordlist.pl
GEN_KEYWORDLIST_DEPS = $(TOOLSDIR)/gen_keywordlist.pl $(TOOLSDIR)/PerfectHash.pm
distprep: kwlist_d.h
# generate SQL keyword lookup table to be included into keywords*.o.
kwlist_d.h: $(top_srcdir)/src/include/parser/kwlist.h $(GEN_KEYWORDLIST_DEPS)
$(GEN_KEYWORDLIST) --extern -o $(top_srcdir)/src/include/parser $(top_srcdir)/src/include/parser/kwlist.h
hint_scan.inc: hint_scan.l | scan.inc
ifdef FLEX
$(FLEX) $(FLEXFLAGS) -o'$@' $<
@ -89,12 +100,13 @@ else
@$(missing) flex $< $@
endif
hint_gram.o: hint_gram.hpp
hint_gram.o keywords.o parser.o: hint_gram.hpp kwlist_d.h
# gram.cpp, gram.hpp, and scan.inc are in the distribution tarball, so they
# are not cleaned here.
clean distclean maintainer-clean:
rm -f lex.backup
rm -f lex.backup hint_gram.cpp
rm -f kwlist_d.h
maintainer-check:

View File

@ -869,7 +869,7 @@ static Query* transformDeleteStmt(ParseState* pstate, DeleteStmt* stmt)
*/
transformFromClause(pstate, stmt->usingClause);
qual = transformWhereClause(pstate, stmt->whereClause, "WHERE");
qual = transformWhereClause(pstate, stmt->whereClause, EXPR_KIND_WHERE, "WHERE");
qry->returningList = transformReturningList(pstate, stmt->returningList);
if (pstate->p_target_relation && qry->returningList != NIL && RelationIsColStore(pstate->p_target_relation)) {
@ -899,6 +899,7 @@ static Query* transformDeleteStmt(ParseState* pstate, DeleteStmt* stmt)
qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
if (pstate->p_hasWindowFuncs)
parseCheckWindowFuncs(pstate, qry);
qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasAggs = pstate->p_hasAggs;
assign_query_collations(pstate, qry);
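The parse_analyze hunks thread an explicit expression kind through the parser: transformWhereClause(), transformLimitClause(), transformExpressionList() and transformExpr() now receive the kind of clause being analyzed (EXPR_KIND_WHERE, EXPR_KIND_LIMIT, EXPR_KIND_VALUES, ...), so kind-specific checks in the transform step replace ad-hoc checks afterwards, as in the cookDefault() hunk earlier where the expression_returns_set() test is dropped because the EXPR_KIND_ for a default already rejects SRFs. The call-site pattern, as a fragment taken from the surrounding hunks:

qual = transformWhereClause(pstate, stmt->whereClause, EXPR_KIND_WHERE, "WHERE");
qry->limitCount = transformLimitClause(pstate, stmt->limitClause, EXPR_KIND_LIMIT, "LIMIT");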
@ -909,7 +910,7 @@ static Query* transformDeleteStmt(ParseState* pstate, DeleteStmt* stmt)
qry->hintState = stmt->hintState;
qry->limitCount = transformLimitClause(pstate, stmt->limitClause, "LIMIT");
qry->limitCount = transformLimitClause(pstate, stmt->limitClause, EXPR_KIND_LIMIT, "LIMIT");
if (!IsSupportDeleteLimit(pstate->p_target_relation, (qry->limitCount != NULL))) {
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
@ -1421,7 +1422,11 @@ static Query* transformInsertStmt(ParseState* pstate, InsertStmt* stmt)
}
qry->resultRelation = setTargetTable(pstate, stmt->relation, false, false, targetPerms);
if (pstate->p_target_relation != NULL &&
if (
#ifndef ENABLE_MULTIPLE_NODES
!u_sess->attr.attr_common.enable_parser_fusion &&
#endif
pstate->p_target_relation != NULL &&
((unsigned int)RelationGetInternalMask(pstate->p_target_relation) & INTERNAL_MASK_DINSERT)) {
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
@ -1431,7 +1436,7 @@ static Query* transformInsertStmt(ParseState* pstate, InsertStmt* stmt)
#ifdef ENABLE_MULTIPLE_NODES
if (IS_PGXC_COORDINATOR && !IsConnFromCoord()) {
#endif
if (pstate->p_target_relation != NULL
if (!u_sess->attr.attr_common.enable_parser_fusion && pstate->p_target_relation != NULL
&& RelationIsMatview(pstate->p_target_relation) && !stmt->isRewritten) {
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
@ -1442,7 +1447,11 @@ static Query* transformInsertStmt(ParseState* pstate, InsertStmt* stmt)
}
#endif
if (pstate->p_target_relation != NULL && stmt->upsertClause != NULL) {
if (
#ifndef ENABLE_MULTIPLE_NODES
!u_sess->attr.attr_common.enable_parser_fusion &&
#endif
pstate->p_target_relation != NULL && stmt->upsertClause != NULL) {
/* non-supported upsert cases */
if (!u_sess->attr.attr_sql.enable_upsert_to_merge && RelationIsColumnFormat(pstate->p_target_relation)) {
ereport(ERROR, ((errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
@ -1474,7 +1483,11 @@ static Query* transformInsertStmt(ParseState* pstate, InsertStmt* stmt)
* We rely on gs_redis to guarantee that the DFS table stays read only during online expansion
* so we don't need to double check if target table is DFS table here anymore.
*/
if (!u_sess->attr.attr_sql.enable_cluster_resize && pstate->p_target_relation != NULL &&
if (
#ifndef ENABLE_MULTIPLE_NODES
!u_sess->attr.attr_common.enable_parser_fusion &&
#endif
!u_sess->attr.attr_sql.enable_cluster_resize && pstate->p_target_relation != NULL &&
RelationInClusterResizingWriteErrorMode(pstate->p_target_relation)) {
ereport(ERROR,
(errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),
@ -1679,7 +1692,7 @@ static Query* transformInsertStmt(ParseState* pstate, InsertStmt* stmt)
List* sublist = (List*)lfirst(lc);
/* Do basic expression transformation (same as a ROW() expr) */
sublist = transformExpressionList(pstate, sublist);
sublist = transformExpressionList(pstate, sublist, EXPR_KIND_VALUES);
/*
* All the sublists must be the same length, *after*
@ -1785,7 +1798,7 @@ static Query* transformInsertStmt(ParseState* pstate, InsertStmt* stmt)
AssertEreport(selectStmt->intoClause == NULL, MOD_OPT, "intoClause should not happen here");
/* Do basic expression transformation (same as a ROW() expr) */
exprList = transformExpressionList(pstate, (List*)linitial(valuesLists));
exprList = transformExpressionList(pstate, (List*)linitial(valuesLists), EXPR_KIND_VALUES_SINGLE);
/*
* If td_compatible_truncation equals true and no foreign table is found,
@ -1853,6 +1866,7 @@ static Query* transformInsertStmt(ParseState* pstate, InsertStmt* stmt)
qry->rtable = pstate->p_rtable;
qry->jointree = makeFromExpr(pstate->p_joinlist, NULL);
qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasSubLinks = pstate->p_hasSubLinks;
/* aggregates not allowed (but subselects are okay) */
if (pstate->p_hasAggs) {
@ -1924,7 +1938,7 @@ List* BuildExcludedTargetlist(Relation targetrel, Index exclRelIndex)
* underlying relation, hence we need entries for dropped columns too.
*/
for (attno = 0; attno < RelationGetNumberOfAttributes(targetrel); attno++) {
Form_pg_attribute attr = targetrel->rd_att->attrs[attno];
Form_pg_attribute attr = &targetrel->rd_att->attrs[attno];
char* name = NULL;
if (attr->attisdropped) {
@ -2057,10 +2071,10 @@ static UpsertExpr* transformUpsertClause(ParseState* pstate, UpsertClause* upser
addRTEtoQuery(pstate, exclRte, false, true, true);
addRTEtoQuery(pstate, pstate->p_target_rangetblentry, false, true, true);
updateTlist = transformTargetList(pstate, upsertClause->targetList);
updateTlist = transformTargetList(pstate, upsertClause->targetList, EXPR_KIND_UPDATE_TARGET);
/* Done with SELECT-like processing; move on to transforming the UPDATE SET target columns */
updateTlist = transformUpdateTargetList(pstate, updateTlist, upsertClause->targetList, relation);
updateWhere = transformWhereClause(pstate, upsertClause->whereClause, "WHERE");
updateWhere = transformWhereClause(pstate, upsertClause->whereClause, EXPR_KIND_WHERE, "WHERE");
#ifdef ENABLE_MULTIPLE_NODES
/* Do not support sublinks in update where clause for now */
if (ContainSubLink(updateWhere)) {
@ -2158,7 +2172,8 @@ List* transformInsertRow(ParseState* pstate, List* exprlist, List* stmtcols, Lis
col = (ResTarget*)lfirst(icols);
AssertEreport(IsA(col, ResTarget), MOD_OPT, "nodeType inconsistant");
expr = transformAssignedExpr(pstate, expr, col->name, lfirst_int(attnos), col->indirection, col->location);
expr = transformAssignedExpr(pstate, expr, EXPR_KIND_INSERT_TARGET,
col->name, lfirst_int(attnos), col->indirection, col->location);
result = lappend(result, expr);
@ -2286,7 +2301,7 @@ static Query* transformSelectStmt(ParseState* pstate, SelectStmt* stmt, bool isF
}
/* transform targetlist */
qry->targetList = transformTargetList(pstate, stmt->targetList);
qry->targetList = transformTargetList(pstate, stmt->targetList, EXPR_KIND_SELECT_TARGET);
/* Transform operator "(+)" to outer join */
if (stmt->hasPlus && stmt->whereClause != NULL) {
@ -2305,13 +2320,13 @@ static Query* transformSelectStmt(ParseState* pstate, SelectStmt* stmt, bool isF
* during transform Whereclause.
*/
setIgnorePlusFlag(pstate, true);
qual = transformWhereClause(pstate, stmt->whereClause, "WHERE");
qual = transformWhereClause(pstate, stmt->whereClause, EXPR_KIND_WHERE, "WHERE");
setIgnorePlusFlag(pstate, false);
/*
* Initial processing of HAVING clause is just like WHERE clause.
*/
qry->havingQual = transformWhereClause(pstate, stmt->havingClause, "HAVING");
qry->havingQual = transformWhereClause(pstate, stmt->havingClause, EXPR_KIND_HAVING, "HAVING");
/*
* Transform sorting/grouping stuff. Do ORDER BY first because both
@ -2319,8 +2334,12 @@ static Query* transformSelectStmt(ParseState* pstate, SelectStmt* stmt, bool isF
* that these functions can also change the targetList, so it's passed to
* them by reference.
*/
qry->sortClause = transformSortClause(
pstate, stmt->sortClause, &qry->targetList, true /* fix unknowns */, false /* allow SQL92 rules */);
qry->sortClause = transformSortClause(pstate,
stmt->sortClause,
&qry->targetList,
EXPR_KIND_ORDER_BY,
true /* fix unknowns */,
false /* allow SQL92 rules */);
/*
* Transform A_const to columnref type in group by clause, So that repeated group column
@ -2343,6 +2362,7 @@ static Query* transformSelectStmt(ParseState* pstate, SelectStmt* stmt, bool isF
&qry->groupingSets,
&qry->targetList,
qry->sortClause,
EXPR_KIND_GROUP_BY,
false /* allow SQL92 rules */);
if (stmt->distinctClause == NIL) {
@ -2360,8 +2380,8 @@ static Query* transformSelectStmt(ParseState* pstate, SelectStmt* stmt, bool isF
}
/* transform LIMIT */
qry->limitOffset = transformLimitClause(pstate, stmt->limitOffset, "OFFSET");
qry->limitCount = transformLimitClause(pstate, stmt->limitCount, "LIMIT");
qry->limitOffset = transformLimitClause(pstate, stmt->limitOffset, EXPR_KIND_OFFSET, "OFFSET");
qry->limitCount = transformLimitClause(pstate, stmt->limitCount, EXPR_KIND_LIMIT, "LIMIT");
/* transform window clauses after we have seen all window functions */
qry->windowClause = transformWindowDefinitions(pstate, pstate->p_windowdefs, &qry->targetList);
@ -2379,6 +2399,7 @@ static Query* transformSelectStmt(ParseState* pstate, SelectStmt* stmt, bool isF
if (pstate->p_hasWindowFuncs) {
parseCheckWindowFuncs(pstate, qry);
}
qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasAggs = pstate->p_hasAggs;
foreach (l, stmt->lockingClause) {
@ -2474,7 +2495,7 @@ static Query* transformValuesClause(ParseState* pstate, SelectStmt* stmt)
List* sublist = (List*)lfirst(lc);
/* Do basic expression transformation (same as a ROW() expr) */
sublist = transformExpressionList(pstate, sublist);
sublist = transformExpressionList(pstate, sublist, EXPR_KIND_VALUES);
/*
* All the sublists must be the same length, *after* transformation
@ -2596,11 +2617,15 @@ static Query* transformValuesClause(ParseState* pstate, SelectStmt* stmt)
* The grammar allows attaching ORDER BY, LIMIT, and FOR UPDATE to a
* VALUES, so cope.
*/
qry->sortClause = transformSortClause(
pstate, stmt->sortClause, &qry->targetList, true /* fix unknowns */, false /* allow SQL92 rules */);
qry->sortClause = transformSortClause(pstate,
stmt->sortClause,
&qry->targetList,
EXPR_KIND_ORDER_BY,
true /* fix unknowns */,
false /* allow SQL92 rules */);
qry->limitOffset = transformLimitClause(pstate, stmt->limitOffset, "OFFSET");
qry->limitCount = transformLimitClause(pstate, stmt->limitCount, "LIMIT");
qry->limitOffset = transformLimitClause(pstate, stmt->limitOffset, EXPR_KIND_OFFSET, "OFFSET");
qry->limitCount = transformLimitClause(pstate, stmt->limitCount, EXPR_KIND_LIMIT, "LIMIT");
if (stmt->lockingClause) {
ereport(ERROR,
@ -2827,7 +2852,7 @@ static Query* transformSetOperationStmt(ParseState* pstate, SelectStmt* stmt)
tllen = list_length(qry->targetList);
qry->sortClause = transformSortClause(
pstate, sortClause, &qry->targetList, false /* no unknowns expected */, false /* allow SQL92 rules */);
pstate, sortClause, &qry->targetList, EXPR_KIND_ORDER_BY, false /* no unknowns expected */, false /* allow SQL92 rules */);
pstate->p_rtable = list_truncate(pstate->p_rtable, sv_rtable_length);
pstate->p_relnamespace = sv_relnamespace;
@ -2842,8 +2867,8 @@ static Query* transformSetOperationStmt(ParseState* pstate, SelectStmt* stmt)
parser_errposition(pstate, exprLocation((const Node*)list_nth(qry->targetList, tllen)))));
}
qry->limitOffset = transformLimitClause(pstate, limitOffset, "OFFSET");
qry->limitCount = transformLimitClause(pstate, limitCount, "LIMIT");
qry->limitOffset = transformLimitClause(pstate, limitOffset, EXPR_KIND_OFFSET, "OFFSET");
qry->limitCount = transformLimitClause(pstate, limitCount, EXPR_KIND_LIMIT, "LIMIT");
qry->rtable = pstate->p_rtable;
qry->jointree = makeFromExpr(pstate->p_joinlist, NULL);
@ -2853,6 +2878,7 @@ static Query* transformSetOperationStmt(ParseState* pstate, SelectStmt* stmt)
if (pstate->p_hasWindowFuncs) {
parseCheckWindowFuncs(pstate, qry);
}
qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasAggs = pstate->p_hasAggs;
foreach (l, lockingClause) {
@ -3387,8 +3413,8 @@ static Query* transformUpdateStmt(ParseState* pstate, UpdateStmt* stmt)
*/
transformFromClause(pstate, stmt->fromClause);
qry->targetList = transformTargetList(pstate, stmt->targetList);
qual = transformWhereClause(pstate, stmt->whereClause, "WHERE");
qry->targetList = transformTargetList(pstate, stmt->targetList, EXPR_KIND_UPDATE_TARGET);
qual = transformWhereClause(pstate, stmt->whereClause, EXPR_KIND_WHERE, "WHERE");
qry->returningList = transformReturningList(pstate, stmt->returningList);
if (qry->returningList != NIL && RelationIsColStore(pstate->p_target_relation)) {
@ -3401,6 +3427,7 @@ static Query* transformUpdateStmt(ParseState* pstate, UpdateStmt* stmt)
qry->rtable = pstate->p_rtable;
qry->jointree = makeFromExpr(pstate->p_joinlist, qual);
qry->hasTargetSRFs = pstate->p_hasTargetSRFs;
qry->hasSubLinks = pstate->p_hasSubLinks;
/*
@ -3550,7 +3577,7 @@ static List* transformReturningList(ParseState* pstate, List* returningList)
pstate->p_hasWindowFuncs = false;
/* transform RETURNING identically to a SELECT targetlist */
rlist = transformTargetList(pstate, returningList);
rlist = transformTargetList(pstate, returningList, EXPR_KIND_RETURNING);
/* check for disallowed stuff */
@ -4263,7 +4290,7 @@ void CheckSelectLocking(Query* qry)
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("SELECT FOR UPDATE/SHARE%s is not allowed with window functions", NOKEYUPDATE_KEYSHARE_ERRMSG)));
}
if (expression_returns_set((Node*)qry->targetList)) {
if (qry->hasTargetSRFs) {
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("SELECT FOR UPDATE/SHARE%s is not allowed with set-returning functions in the target list",

View File

@ -300,17 +300,16 @@ xufailed [uU]&
* We will pass this along as a normal character string,
* but preceded with an internally-generated "NCHAR".
*/
const ScanKeyword *keyword;
int kwnum;
yyless(1); /* eat only 'n' this time */
keyword = ScanKeywordLookup("nchar",
yyextra->keywords,
yyextra->num_keywords);
if (keyword != NULL)
kwnum = ScanKeywordLookup("nchar",
yyextra->keywordlist);
if (kwnum >= 0)
{
yyextra->is_hint_str = true;
return keyword->value;
return yyextra->keyword_tokens[kwnum];
}
else
{

View File

@ -13,17 +13,17 @@
*
* -------------------------------------------------------------------------
*/
#include "postgres.h"
#include "knl/knl_variable.h"
#include "c.h"
#include "parser/keywords.h"
#include "nodes/parsenodes.h"
#include "parser/gramparse.h"
/* ScanKeywordList lookup data for SQL keywords */
#include "parser/kwlist_d.h"
#define PG_KEYWORD(a, b, c) {a, b, c},
#define PG_KEYWORD(kwname,value,category) category,
const ScanKeyword ScanKeywords[] = {
#include "parser/kwlist.h"
const uint8 ScanKeywordCategories[SCANKEYWORDS_NUM_KEYWORDS] = {
#include "parser/kwlist.h"
};
const int NumScanKeywords = lengthof(ScanKeywords);
#undef PG_KEYWORD
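The change above is the classic X-macro split: kwlist_d.h (generated by gen_keywordlist.pl) now owns the keyword strings and perfect-hash data, while this file re-includes kwlist.h with PG_KEYWORD cut down to its category argument. A small self-contained sketch of the pattern, using demo names only (DEMO_KWLIST and its entries are hypothetical, not the real kwlist.h contents):

/* Demo keyword list playing the role of kwlist.h (names are hypothetical). */
#define DEMO_KWLIST \
    DEMO_KEYWORD("abort", DEMO_ABORT_TOKEN, 0) \
    DEMO_KEYWORD("between", DEMO_BETWEEN_TOKEN, 2)

/* First expansion: keep only the category, as keywords.cpp does above. */
#define DEMO_KEYWORD(kwname, value, category) category,
static const unsigned char demo_categories[] = { DEMO_KWLIST };
#undef DEMO_KEYWORD

/* A second expansion elsewhere (the generated kwlist_d.h plays this role)
 * can reuse the same list for the keyword strings or the grammar tokens. */
#define DEMO_KEYWORD(kwname, value, category) kwname,
static const char *demo_names[] = { DEMO_KWLIST };
#undef DEMO_KEYWORD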

View File

@ -21,7 +21,7 @@
#include <ctype.h>
#include "parser/keywords.h"
#include "parser/kwlookup.h"
/*
* ScanKeywordLookup - see if a given word is a keyword
@ -35,56 +35,52 @@
* keywords are to be matched in this way even though non-keyword identifiers
* receive a different case-normalization mapping.
*/
const ScanKeyword* ScanKeywordLookup(const char* text, const ScanKeyword* keywords, int num_keywords)
int
ScanKeywordLookup(const char *str,
const ScanKeywordList *keywords)
{
int len, i;
char word[NAMEDATALEN] = {0};
const ScanKeyword* low = NULL;
const ScanKeyword* high = NULL;
if (text == NULL) {
return NULL;
}
len = strlen(text);
/* We assume all keywords are shorter than NAMEDATALEN. */
if (len >= NAMEDATALEN) {
return NULL;
}
size_t len;
int h;
const char *kw;
/*
* Apply an ASCII-only downcasing. We must not use tolower() since it may
* produce the wrong translation in some locales (eg, Turkish).
* Reject immediately if too long to be any keyword. This saves useless
* hashing and downcasing work on long strings.
*/
for (i = 0; i < len; i++) {
char ch = text[i];
len = strlen(str);
if (len > keywords->max_kw_len)
return -1;
if (ch >= 'A' && ch <= 'Z') {
/*
* Compute the hash function. We assume it was generated to produce
* case-insensitive results. Since it's a perfect hash, we need only
* match to the specific keyword it identifies.
*/
h = keywords->hash(str, len);
/* An out-of-range result implies no match */
if (h < 0 || h >= keywords->num_keywords)
return -1;
/*
* Compare character-by-character to see if we have a match, applying an
* ASCII-only downcasing to the input characters. We must not use
* tolower() since it may produce the wrong translation in some locales
* (eg, Turkish).
*/
kw = GetScanKeyword(h, keywords);
while (*str != '\0')
{
char ch = *str++;
if (ch >= 'A' && ch <= 'Z')
ch += 'a' - 'A';
if (ch != *kw++)
return -1;
}
word[i] = ch;
}
word[len] = '\0';
if (*kw != '\0')
return -1;
/*
* Now do a binary search using plain strcmp() comparison.
*/
low = keywords;
high = keywords + (num_keywords - 1);
while (low <= high) {
const ScanKeyword* middle = NULL;
int difference;
middle = low + (high - low) / 2;
difference = strcmp(middle->name, word);
if (difference == 0) {
return middle;
} else if (difference < 0) {
low = middle + 1;
} else {
high = middle - 1;
}
}
return NULL;
/* Success! */
return h;
}
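For reference, the ScanKeywordList consumed here is not shown in this diff; below is a hedged sketch of its shape, with field names taken from PostgreSQL's kwlookup.h, which the accesses above (keywords->hash(), keywords->max_kw_len, GetScanKeyword()) suggest the openGauss port follows.

#include "c.h"    /* for uint16; in-tree sketch only */

typedef int (*ScanKeywordHashFunc)(const void *key, size_t keylen);

typedef struct ScanKeywordList
{
    const char *kw_string_table;   /* all keyword strings, NUL-separated */
    const uint16 *kw_offsets;      /* offset of each keyword in that table */
    ScanKeywordHashFunc hash;      /* generated perfect-hash function */
    int num_keywords;              /* number of keywords */
    int max_kw_len;                /* length of the longest keyword */
} ScanKeywordList;

/* GetScanKeyword(n, list) then reduces to
 *     (list)->kw_string_table + (list)->kw_offsets[n]
 * and scanner call sites map the returned index through a parallel token
 * array, as the yyextra->keyword_tokens[kwnum] hunk further up shows. */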

View File

@ -154,6 +154,7 @@ void transformAggregateCall(ParseState* pstate, Aggref* agg, List* args, List* a
torder = transformSortClause(pstate,
aggorder,
&tlist,
EXPR_KIND_WINDOW_ORDER,
true, /* fix unknowns */
true); /* force SQL99 rules */
@ -1268,6 +1269,137 @@ void build_trans_aggregate_fnexprs(int agg_num_inputs, int agg_num_direct_inputs
/* finalfn is currently never treated as variadic */
}
/*
* Create an expression tree for the transition function of an aggregate.
* This is needed so that polymorphic functions can be used within an
* aggregate --- without the expression tree, such functions would not know
* the datatypes they are supposed to use. (The trees will never actually
* be executed, however, so we can skimp a bit on correctness.)
*
* agg_input_types and agg_state_type identify the input and transition-state
* types of the aggregate. These should be resolved to actual types (ie, none
* should ever be ANYELEMENT etc).
* agg_input_collation is the aggregate function's input collation.
*
* For an ordered-set aggregate, remember that agg_input_types describes
* the direct arguments followed by the aggregated arguments.
*
* transfn_oid identifies the transition function to be called.
*
* A pointer to the constructed tree is returned into *transfnexpr.
*/
void
build_aggregate_transfn_expr(Oid *agg_input_types,
int agg_num_inputs,
int agg_num_direct_inputs,
bool agg_variadic,
Oid agg_state_type,
Oid agg_input_collation,
Oid transfn_oid,
Expr **transfnexpr)
{
Param *argp;
List *args;
FuncExpr *fexpr;
int i;
/*
* Build arg list to use in the transfn FuncExpr node. We really only care
* that transfn can discover the actual argument types at runtime using
* get_fn_expr_argtype(), so it's okay to use Param nodes that don't
* correspond to any real Param.
*/
argp = makeNode(Param);
argp->paramkind = PARAM_EXEC;
argp->paramid = -1;
argp->paramtype = agg_state_type;
argp->paramtypmod = -1;
argp->paramcollid = agg_input_collation;
argp->location = -1;
args = list_make1(argp);
for (i = agg_num_direct_inputs; i < agg_num_inputs; i++)
{
argp = makeNode(Param);
argp->paramkind = PARAM_EXEC;
argp->paramid = -1;
argp->paramtype = agg_input_types[i];
argp->paramtypmod = -1;
argp->paramcollid = agg_input_collation;
argp->location = -1;
args = lappend(args, argp);
}
fexpr = makeFuncExpr(transfn_oid,
agg_state_type,
args,
InvalidOid,
agg_input_collation,
COERCE_EXPLICIT_CALL);
fexpr->funcvariadic = agg_variadic;
*transfnexpr = (Expr *) fexpr;
}
/*
* Like build_aggregate_transfn_expr, but creates an expression tree for the
* final function of an aggregate, rather than the transition function.
*/
void
build_aggregate_finalfn_expr(Oid *agg_input_types,
int num_finalfn_inputs,
Oid agg_state_type,
Oid agg_result_type,
Oid agg_input_collation,
Oid finalfn_oid,
Expr **finalfnexpr)
{
Param *argp;
List *args;
int i;
/*
* Build expr tree for final function
*/
argp = makeNode(Param);
argp->paramkind = PARAM_EXEC;
argp->paramid = -1;
argp->paramtype = agg_state_type;
argp->paramtypmod = -1;
argp->paramcollid = agg_input_collation;
argp->location = -1;
args = list_make1(argp);
/* finalfn may take additional args, which match agg's input types */
for (i = 0; i < num_finalfn_inputs - 1; i++)
{
argp = makeNode(Param);
argp->paramkind = PARAM_EXEC;
argp->paramid = -1;
argp->paramtype = agg_input_types[i];
argp->paramtypmod = -1;
argp->paramcollid = agg_input_collation;
argp->location = -1;
args = lappend(args, argp);
}
*finalfnexpr = (Expr *) makeFuncExpr(finalfn_oid,
agg_result_type,
args,
InvalidOid,
agg_input_collation,
COERCE_EXPLICIT_CALL);
/* finalfn is currently never treated as variadic */
}
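The dummy PARAM_EXEC Params built by these two helpers are never evaluated; they only let a polymorphic transition or final function recover its concrete argument types at run time. A minimal sketch of such a function, assuming the fmgr keeps PostgreSQL's get_fn_expr_argtype() helper (my_poly_transfn is a hypothetical name, not part of this change):

#include "postgres.h"
#include "fmgr.h"

/* Hypothetical polymorphic transition function: it learns the concrete type
 * of its second argument from the FuncExpr tree built above. */
Datum
my_poly_transfn(PG_FUNCTION_ARGS)
{
    Oid elemtype = get_fn_expr_argtype(fcinfo->flinfo, 1);

    if (!OidIsValid(elemtype))
        elog(ERROR, "could not determine input data type");

    /* ... fold PG_GETARG_DATUM(1) into the state now that its type is known ... */
    PG_RETURN_DATUM(PG_GETARG_DATUM(0));
}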
/*
* Expand a groupingSets clause to a flat list of grouping sets.
* The returned list is sorted by length, shortest sets first.
@ -1378,7 +1510,7 @@ Node* transformGroupingFunc(ParseState* pstate, GroupingFunc* p)
foreach (lc, args) {
Node* current_result = NULL;
current_result = transformExpr(pstate, (Node*)lfirst(lc));
current_result = transformExpr(pstate, (Node*)lfirst(lc), pstate->p_expr_kind);
/* acceptability of expressions is checked later */
result_list = lappend(result_list, current_result);
}

View File

@ -73,8 +73,8 @@ static TimeCapsuleClause* transformRangeTimeCapsule(ParseState* pstate, RangeTim
static void setNamespaceLateralState(List *l_namespace, bool lateral_only, bool lateral_ok);
static Node* buildMergedJoinVar(ParseState* pstate, JoinType jointype, Var* l_colvar, Var* r_colvar);
static void checkExprIsVarFree(ParseState* pstate, Node* n, const char* constructName);
static TargetEntry* findTargetlistEntrySQL92(ParseState* pstate, Node* node, List** tlist, int clause);
static TargetEntry* findTargetlistEntrySQL99(ParseState* pstate, Node* node, List** tlist);
static TargetEntry* findTargetlistEntrySQL92(ParseState* pstate, Node* node, List** tlist, int clause, ParseExprKind exprKind);
static TargetEntry* findTargetlistEntrySQL99(ParseState* pstate, Node* node, List** tlist, ParseExprKind exprKind);
static int get_matching_location(int sortgroupref, List* sortgrouprefs, List* exprs);
static List* addTargetToGroupList(
ParseState* pstate, TargetEntry* tle, List* grouplist, List* targetlist, int location, bool resolveUnknown);
@ -82,10 +82,10 @@ static WindowClause* findWindowClause(List* wclist, const char* name);
static Node* transformFrameOffset(ParseState* pstate, int frameOptions, Node* clause);
static Node* flatten_grouping_sets(Node* expr, bool toplevel, bool* hasGroupingSets);
static Node* transformGroupingSet(List** flatresult, ParseState* pstate, GroupingSet* gset, List** targetlist,
List* sortClause, bool useSQL99, bool toplevel);
List* sortClause, ParseExprKind exprKind, bool useSQL99, bool toplevel);
static Index transformGroupClauseExpr(List** flatresult, Bitmapset* seen_local, ParseState* pstate, Node* gexpr,
List** targetlist, List* sortClause, bool useSQL99, bool toplevel);
List** targetlist, List* sortClause, ParseExprKind exprKind, bool useSQL99, bool toplevel);
/*
* @Description: append from clause item to the left tree
@ -431,7 +431,7 @@ static Node* transformJoinUsingClause(
* transformJoinOnClause() does. Just invoke transformExpr() to fix up
* the operators, and we're done.
*/
result = transformExpr(pstate, result);
result = transformExpr(pstate, result, EXPR_KIND_JOIN_USING);
result = coerce_to_boolean(pstate, result, "JOIN/USING");
@ -466,7 +466,7 @@ Node* transformJoinOnClause(ParseState* pstate, JoinExpr* j, RangeTblEntry* l_rt
pstate->p_varnamespace = list_make2(makeNamespaceItem(l_rte, false, true),
makeNamespaceItem(r_rte, false, true));
result = transformWhereClause(pstate, j->quals, "JOIN/ON");
result = transformWhereClause(pstate, j->quals, EXPR_KIND_JOIN_ON, "JOIN/ON");
pstate->p_relnamespace = save_relnamespace;
pstate->p_varnamespace = save_varnamespace;
@ -589,7 +589,7 @@ static RangeTblEntry* transformRangeFunction(ParseState* pstate, RangeFunction*
/*
* Transform the raw expression.
*/
funcexpr = transformExpr(pstate, r->funccallnode);
funcexpr = transformExpr(pstate, r->funccallnode, EXPR_KIND_FROM_FUNCTION);
pstate->p_lateral_active = false;
@ -695,7 +695,7 @@ static TableSampleClause* transformRangeTableSample(ParseState* pstate, RangeTab
foreach (larg, rts->args) {
Node* arg = (Node*)lfirst(larg);
arg = transformExpr(pstate, arg);
arg = transformExpr(pstate, arg, EXPR_KIND_FROM_FUNCTION);
arg = coerce_to_specific_type(pstate, arg, FLOAT4OID, "TABLESAMPLE");
assign_expr_collations(pstate, arg);
fargs = lappend(fargs, arg);
@ -706,7 +706,7 @@ static TableSampleClause* transformRangeTableSample(ParseState* pstate, RangeTab
if (rts->repeatable != NULL) {
Node* arg = NULL;
arg = transformExpr(pstate, rts->repeatable);
arg = transformExpr(pstate, rts->repeatable, EXPR_KIND_FROM_FUNCTION);
arg = coerce_to_specific_type(pstate, arg, FLOAT8OID, "REPEATABLE");
assign_expr_collations(pstate, arg);
tablesample->repeatable = (Expr*)arg;
@ -1371,7 +1371,7 @@ static Node* buildMergedJoinVar(ParseState* pstate, JoinType jointype, Var* l_co
*
* constructName does not affect the semantics, but is used in error messages
*/
Node* transformWhereClause(ParseState* pstate, Node* clause, const char* constructName)
Node* transformWhereClause(ParseState* pstate, Node* clause, ParseExprKind exprKind, const char* constructName)
{
Node* qual = NULL;
@ -1379,7 +1379,7 @@ Node* transformWhereClause(ParseState* pstate, Node* clause, const char* constru
return NULL;
}
qual = transformExpr(pstate, clause);
qual = transformExpr(pstate, clause, exprKind);
qual = coerce_to_boolean(pstate, qual, constructName);
@ -1396,14 +1396,14 @@ Node* transformWhereClause(ParseState* pstate, Node* clause, const char* constru
*
* constructName does not affect the semantics, but is used in error messages
*/
Node* transformLimitClause(ParseState* pstate, Node* clause, const char* constructName)
Node* transformLimitClause(ParseState* pstate, Node* clause, ParseExprKind exprKind, const char* constructName)
{
Node* qual = NULL;
if (clause == NULL) {
return NULL;
}
qual = transformExpr(pstate, clause);
qual = transformExpr(pstate, clause, exprKind);
qual = coerce_to_specific_type(pstate, qual, INT8OID, constructName);
@ -1467,7 +1467,7 @@ static void checkExprIsVarFree(ParseState* pstate, Node* n, const char* construc
* tlist the target list (passed by reference so we can append to it)
* clause identifies clause type being processed
*/
static TargetEntry* findTargetlistEntrySQL92(ParseState* pstate, Node* node, List** tlist, int clause)
static TargetEntry* findTargetlistEntrySQL92(ParseState* pstate, Node* node, List** tlist, int clause, ParseExprKind exprKind)
{
ListCell* tl = NULL;
@ -1595,7 +1595,7 @@ static TargetEntry* findTargetlistEntrySQL92(ParseState* pstate, Node* node, Lis
/*
* Otherwise, we have an expression, so process it per SQL99 rules.
*/
return findTargetlistEntrySQL99(pstate, node, tlist);
return findTargetlistEntrySQL99(pstate, node, tlist, exprKind);
}
/*
@ -1610,7 +1610,7 @@ static TargetEntry* findTargetlistEntrySQL92(ParseState* pstate, Node* node, Lis
* node the ORDER BY, GROUP BY, etc expression to be matched
* tlist the target list (passed by reference so we can append to it)
*/
static TargetEntry* findTargetlistEntrySQL99(ParseState* pstate, Node* node, List** tlist)
static TargetEntry* findTargetlistEntrySQL99(ParseState* pstate, Node* node, List** tlist, ParseExprKind exprKind)
{
TargetEntry* target_result = NULL;
ListCell* tl = NULL;
@ -1623,7 +1623,7 @@ static TargetEntry* findTargetlistEntrySQL99(ParseState* pstate, Node* node, Lis
* resjunk target here, though the SQL92 cases above must ignore resjunk
* targets.
*/
expr = transformExpr(pstate, node);
expr = transformExpr(pstate, node, exprKind);
foreach (tl, *tlist) {
TargetEntry* tle = (TargetEntry*)lfirst(tl);
@ -1649,7 +1649,7 @@ static TargetEntry* findTargetlistEntrySQL99(ParseState* pstate, Node* node, Lis
* end of the target list. This target is given resjunk = TRUE so that it
* will not be projected into the final tuple.
*/
target_result = transformTargetEntry(pstate, node, expr, NULL, true);
target_result = transformTargetEntry(pstate, node, expr, exprKind, NULL, true);
*tlist = lappend(*tlist, target_result);
@ -1791,15 +1791,15 @@ static Node* flatten_grouping_sets(Node* expr, bool toplevel, bool* hasGroupingS
* toplevel false if within any grouping set
*/
static Index transformGroupClauseExpr(List** flatresult, Bitmapset* seen_local, ParseState* pstate, Node* gexpr,
List** targetlist, List* sortClause, bool useSQL99, bool toplevel)
List** targetlist, List* sortClause, ParseExprKind exprKind, bool useSQL99, bool toplevel)
{
TargetEntry* tle = NULL;
bool found = false;
if (useSQL99) {
tle = findTargetlistEntrySQL99(pstate, gexpr, targetlist);
tle = findTargetlistEntrySQL99(pstate, gexpr, targetlist, exprKind);
} else {
tle = findTargetlistEntrySQL92(pstate, gexpr, targetlist, GROUP_CLAUSE);
tle = findTargetlistEntrySQL92(pstate, gexpr, targetlist, GROUP_CLAUSE, exprKind);
}
if (tle->ressortgroupref > 0) {
@ -1889,7 +1889,7 @@ static Index transformGroupClauseExpr(List** flatresult, Bitmapset* seen_local,
* toplevel false if within any grouping set
*/
static List* transformGroupClauseList(List** flatresult, ParseState* pstate, List* list, List** targetlist,
List* sortClause, bool useSQL99, bool toplevel)
List* sortClause, ParseExprKind exprKind, bool useSQL99, bool toplevel)
{
Bitmapset* seen_local = NULL;
List* result = NIL;
@ -1899,7 +1899,7 @@ static List* transformGroupClauseList(List** flatresult, ParseState* pstate, Lis
Node* gexpr = (Node*)lfirst(gl);
Index ref =
transformGroupClauseExpr(flatresult, seen_local, pstate, gexpr, targetlist, sortClause, useSQL99, toplevel);
transformGroupClauseExpr(flatresult, seen_local, pstate, gexpr, targetlist, sortClause, exprKind, useSQL99, toplevel);
if (ref > 0) {
seen_local = bms_add_member(seen_local, ref);
result = lappend_int(result, ref);
@ -1929,7 +1929,7 @@ static List* transformGroupClauseList(List** flatresult, ParseState* pstate, Lis
* toplevel false if within any grouping set
*/
static Node* transformGroupingSet(List** flatresult, ParseState* pstate, GroupingSet* gset, List** targetlist,
List* sortClause, bool useSQL99, bool toplevel)
List* sortClause, ParseExprKind exprKind, bool useSQL99, bool toplevel)
{
ListCell* gl = NULL;
List* content = NIL;
@ -1940,16 +1940,16 @@ static Node* transformGroupingSet(List** flatresult, ParseState* pstate, Groupin
Node* n = (Node*)lfirst(gl);
if (IsA(n, List)) {
List* l = transformGroupClauseList(flatresult, pstate, (List*)n, targetlist, sortClause, useSQL99, false);
List* l = transformGroupClauseList(flatresult, pstate, (List*)n, targetlist, sortClause, exprKind, useSQL99, false);
content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE, l, exprLocation(n)));
} else if (IsA(n, GroupingSet)) {
GroupingSet* gset2 = (GroupingSet*)lfirst(gl);
content = lappend(
content, transformGroupingSet(flatresult, pstate, gset2, targetlist, sortClause, useSQL99, false));
content, transformGroupingSet(flatresult, pstate, gset2, targetlist, sortClause, exprKind, useSQL99, false));
} else {
Index ref = transformGroupClauseExpr(flatresult, NULL, pstate, n, targetlist, sortClause, useSQL99, false);
Index ref = transformGroupClauseExpr(flatresult, NULL, pstate, n, targetlist, sortClause, exprKind, useSQL99, false);
content = lappend(content, makeGroupingSet(GROUPING_SET_SIMPLE, list_make1_int(ref), exprLocation(n)));
}
@ -2006,7 +2006,7 @@ static Node* transformGroupingSet(List** flatresult, ParseState* pstate, Groupin
* useSQL99 SQL99 rather than SQL92 syntax
*/
List* transformGroupClause(
ParseState* pstate, List* grouplist, List** groupingSets, List** targetlist, List* sortClause, bool useSQL99)
ParseState* pstate, List* grouplist, List** groupingSets, List** targetlist, List* sortClause, ParseExprKind exprKind, bool useSQL99)
{
List* result = NIL;
List* flat_grouplist = NIL;
@ -2049,14 +2049,14 @@ List* transformGroupClause(
case GROUPING_SET_CUBE:
case GROUPING_SET_ROLLUP:
gsets = lappend(
gsets, transformGroupingSet(&result, pstate, gset, targetlist, sortClause, useSQL99, true));
gsets, transformGroupingSet(&result, pstate, gset, targetlist, sortClause, exprKind, useSQL99, true));
break;
default:
break;
}
} else {
Index ref =
transformGroupClauseExpr(&result, seen_local, pstate, gexpr, targetlist, sortClause, useSQL99, true);
transformGroupClauseExpr(&result, seen_local, pstate, gexpr, targetlist, sortClause, exprKind, useSQL99, true);
if (ref > 0) {
seen_local = bms_add_member(seen_local, ref);
if (hasGroupingSets) {
@ -2085,7 +2085,7 @@ List* transformGroupClause(
* This is also used for window and aggregate ORDER BY clauses (which act
* almost the same, but are always interpreted per SQL99 rules).
*/
List* transformSortClause(ParseState* pstate, List* orderlist, List** targetlist, bool resolveUnknown, bool useSQL99)
List* transformSortClause(ParseState* pstate, List* orderlist, List** targetlist, ParseExprKind exprKind, bool resolveUnknown, bool useSQL99)
{
List* sortlist = NIL;
ListCell* olitem = NULL;
@ -2095,9 +2095,9 @@ List* transformSortClause(ParseState* pstate, List* orderlist, List** targetlist
TargetEntry* tle = NULL;
if (useSQL99) {
tle = findTargetlistEntrySQL99(pstate, sortby->node, targetlist);
tle = findTargetlistEntrySQL99(pstate, sortby->node, targetlist, exprKind);
} else {
tle = findTargetlistEntrySQL92(pstate, sortby->node, targetlist, ORDER_CLAUSE);
tle = findTargetlistEntrySQL92(pstate, sortby->node, targetlist, ORDER_CLAUSE, exprKind);
}
sortlist = addTargetToSortList(pstate, tle, sortlist, *targetlist, sortby, resolveUnknown);
}
@ -2152,9 +2152,9 @@ List* transformWindowDefinitions(ParseState* pstate, List* windowdefs, List** ta
* including the special handling of nondefault operator semantics.
*/
orderClause = transformSortClause(
pstate, windef->orderClause, targetlist, true /* fix unknowns */, true /* force SQL99 rules */);
pstate, windef->orderClause, targetlist, EXPR_KIND_WINDOW_ORDER, true /* fix unknowns */, true /* force SQL99 rules */);
partitionClause = transformGroupClause(
pstate, windef->partitionClause, NULL, targetlist, orderClause, true /* force SQL99 rules */);
pstate, windef->partitionClause, NULL, targetlist, orderClause, EXPR_KIND_WINDOW_PARTITION, true /* force SQL99 rules */);
/*
* And prepare the new WindowClause.
@ -2328,7 +2328,7 @@ List* transformDistinctOnClause(ParseState* pstate, List* distinctlist, List** t
int sortgroupref;
TargetEntry* tle = NULL;
tle = findTargetlistEntrySQL92(pstate, dexpr, targetlist, DISTINCT_ON_CLAUSE);
tle = findTargetlistEntrySQL92(pstate, dexpr, targetlist, DISTINCT_ON_CLAUSE, EXPR_KIND_DISTINCT_ON);
sortgroupref = assignSortGroupRef(tle, *targetlist);
sortgrouprefs = lappend_int(sortgrouprefs, sortgroupref);
}
@ -2754,16 +2754,18 @@ static Node* transformFrameOffset(ParseState* pstate, int frameOptions, Node* cl
if (clause == NULL) {
return NULL;
}
/* Transform the raw expression tree */
node = transformExpr(pstate, clause);
if (frameOptions & FRAMEOPTION_ROWS) {
/* Transform the raw expression tree */
node = transformExpr(pstate, clause, EXPR_KIND_WINDOW_FRAME_ROWS);
/*
* Like LIMIT clause, simply coerce to int8
*/
constructName = "ROWS";
node = coerce_to_specific_type(pstate, node, INT8OID, constructName);
} else if (frameOptions & FRAMEOPTION_RANGE) {
/* Transform the raw expression tree */
node = transformExpr(pstate, clause, EXPR_KIND_WINDOW_FRAME_RANGE);
/*
* this needs a lot of thought to decide how to support in the context
* of Postgres' extensible datatype framework

View File

@ -905,7 +905,7 @@ static Node* coerce_record_to_complex(
Oid exprtype;
/* Fill in NULLs for dropped columns in rowtype */
if (tupdesc->attrs[i]->attisdropped) {
if (tupdesc->attrs[i].attisdropped) {
/*
* can't use atttypid here, but it doesn't really matter what type
* the Const claims to be.
@ -926,8 +926,8 @@ static Node* coerce_record_to_complex(
cexpr = coerce_to_target_type(pstate,
expr,
exprtype,
tupdesc->attrs[i]->atttypid,
tupdesc->attrs[i]->atttypmod,
tupdesc->attrs[i].atttypid,
tupdesc->attrs[i].atttypmod,
ccontext,
COERCE_IMPLICIT_CAST,
-1);
@ -937,7 +937,7 @@ static Node* coerce_record_to_complex(
errmsg("cannot cast type %s to %s", format_type_be(RECORDOID), format_type_be(targetTypeId)),
errdetail("Cannot cast type %s to %s in column %d.",
format_type_be(exprtype),
format_type_be(tupdesc->attrs[i]->atttypid),
format_type_be(tupdesc->attrs[i].atttypid),
ucolno),
parser_coercion_errposition(pstate, location, expr)));
}
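Several hunks in this comparison (here, in ParseComplexProjection and in the hint and merge code below) rewrite tupdesc->attrs[i]->field as tupdesc->attrs[i].field, i.e. they assume attrs is now an inline array of FormData_pg_attribute rather than an array of pointers. A hedged in-tree sketch of the resulting call-site pattern (header path assumed from the PostgreSQL layout):

#include "postgres.h"
#include "access/tupdesc.h"

/* Hedged sketch: with attrs as an inline FormData_pg_attribute array, call
 * sites take the element's address instead of reading a stored pointer. */
static Oid
attr_type_of(TupleDesc tupdesc, int attno)
{
    Form_pg_attribute attr = &tupdesc->attrs[attno];  /* was: tupdesc->attrs[attno] */

    return attr->atttypid;
}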

View File

@ -785,7 +785,7 @@ bool plus_outerjoin_preprocess(const OperatorPlusProcessContext* ctx, Node* expr
setIgnorePlusFlag(ps, true);
ps->p_plusjoin_rte_info = makePlusJoinInfo(true);
(void)transformExpr(ps, expr);
(void)transformExpr(ps, expr, EXPR_KIND_WHERE);
setIgnorePlusFlag(ps, false);

View File

@ -51,6 +51,7 @@
extern Node* makeAConst(Value* v, int location);
extern Value* makeStringValue(char* str);
static Node *transformExprRecurse(ParseState *pstate, Node *expr);
static Node* transformParamRef(ParseState* pstate, ParamRef* pref);
static Node* transformAExprOp(ParseState* pstate, A_Expr* a);
static Node* transformAExprAnd(ParseState* pstate, A_Expr* a);
@ -116,7 +117,24 @@ static Node* tryTransformFunc(ParseState* pstate, List* fields, int location);
* a Const. More care is needed for node types that are used as both
* input and output of transformExpr; see SubLink for example.
*/
Node* transformExpr(ParseState* pstate, Node* expr)
Node* transformExpr(ParseState* pstate, Node* expr, ParseExprKind exprKind)
{
Node *result;
ParseExprKind sv_expr_kind;
/* Save and restore identity of expression type we're parsing */
Assert(exprKind != EXPR_KIND_NONE);
sv_expr_kind = pstate->p_expr_kind;
pstate->p_expr_kind = exprKind;
result = transformExprRecurse(pstate, expr);
pstate->p_expr_kind = sv_expr_kind;
return result;
}
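transformExpr() is now a thin wrapper that stamps the expression context into pstate->p_expr_kind before recursing and restores it afterwards, so nested constructs inherit the outermost context. A hedged sketch of the calling convention a new clause transformer would follow (transformMyClause is hypothetical, not part of this change):

/* Hypothetical helper, shown only to illustrate the calling convention. */
static Node *
transformMyClause(ParseState *pstate, Node *clause)
{
    if (clause == NULL)
        return NULL;

    /* Name the context; EXPR_KIND_WHERE here makes checks such as
     * check_srf_call_placement() reject set-returning functions. */
    return transformExpr(pstate, clause, EXPR_KIND_WHERE);
}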
static Node *transformExprRecurse(ParseState *pstate, Node *expr)
{
Node* result = NULL;
@ -146,7 +164,7 @@ Node* transformExpr(ParseState* pstate, Node* expr)
case T_A_Indirection: {
A_Indirection* ind = (A_Indirection*)expr;
result = transformExpr(pstate, ind->arg);
result = transformExprRecurse(pstate, ind->arg);
result = transformIndirection(pstate, result, ind->indirection);
break;
}
@ -242,7 +260,7 @@ Node* transformExpr(ParseState* pstate, Node* expr)
case T_NamedArgExpr: {
NamedArgExpr* na = (NamedArgExpr*)expr;
na->arg = (Expr*)transformExpr(pstate, (Node*)na->arg);
na->arg = (Expr*)transformExprRecurse(pstate, (Node*)na->arg);
result = expr;
break;
}
@ -281,7 +299,7 @@ Node* transformExpr(ParseState* pstate, Node* expr)
case T_NullTest: {
NullTest* n = (NullTest*)expr;
n->arg = (Expr*)transformExpr(pstate, (Node*)n->arg);
n->arg = (Expr*)transformExprRecurse(pstate, (Node*)n->arg);
/* the argument can be any type, so don't coerce it */
n->argisrow = type_is_rowtype(exprType((Node*)n->arg));
result = expr;
@ -667,7 +685,7 @@ Node* transformColumnRef(ParseState* pstate, ColumnRef* cref)
((rte->alias && (strcmp(rte->alias->aliasname, colname) == 0)) ||
(strcmp(rte->relname, colname) == 0))) {
Node* row_expr = convertStarToCRef(rte, NULL, NULL, colname, cref->location);
node = transformExpr(pstate, row_expr);
node = transformExprRecurse(pstate, row_expr);
} else {
node = transformWholeRowRef(pstate, rte, cref->location);
}
@ -729,7 +747,7 @@ Node* transformColumnRef(ParseState* pstate, ColumnRef* cref)
if (IsA(field2, A_Star)) {
if (OrientedIsCOLorPAX(rte) || RelIsSpecifiedFTbl(rte, HDFS) || RelIsSpecifiedFTbl(rte, OBS)) {
Node* row_expr = convertStarToCRef(rte, NULL, NULL, relname, cref->location);
node = transformExpr(pstate, row_expr);
node = transformExprRecurse(pstate, row_expr);
} else {
node = transformWholeRowRef(pstate, rte, cref->location);
}
@ -799,7 +817,7 @@ Node* transformColumnRef(ParseState* pstate, ColumnRef* cref)
if (IsA(field3, A_Star)) {
if (OrientedIsCOLorPAX(rte) || RelIsSpecifiedFTbl(rte, HDFS) || RelIsSpecifiedFTbl(rte, OBS)) {
Node* row_expr = convertStarToCRef(rte, NULL, nspname, relname, cref->location);
node = transformExpr(pstate, row_expr);
node = transformExprRecurse(pstate, row_expr);
} else {
node = transformWholeRowRef(pstate, rte, cref->location);
}
@ -854,7 +872,7 @@ Node* transformColumnRef(ParseState* pstate, ColumnRef* cref)
if (IsA(field4, A_Star)) {
if (OrientedIsCOLorPAX(rte) || RelIsSpecifiedFTbl(rte, HDFS) || RelIsSpecifiedFTbl(rte, OBS)) {
Node* row_expr = convertStarToCRef(rte, catname, nspname, relname, cref->location);
node = transformExpr(pstate, row_expr);
node = transformExprRecurse(pstate, row_expr);
} else {
node = transformWholeRowRef(pstate, rte, cref->location);
}
@ -1104,7 +1122,7 @@ static Node* transformAExprOp(ParseState* pstate, A_Expr* a)
n->arg = exprIsNullConstant(lexpr) ? (Expr *)rexpr : (Expr *)lexpr;
result = transformExpr(pstate, (Node*)n);
result = transformExprRecurse(pstate, (Node*)n);
} else if (lexpr && IsA(lexpr, RowExpr) && rexpr && IsA(rexpr, SubLink) &&
((SubLink*)rexpr)->subLinkType == EXPR_SUBLINK) {
/*
@ -1118,19 +1136,19 @@ static Node* transformAExprOp(ParseState* pstate, A_Expr* a)
s->testexpr = lexpr;
s->operName = a->name;
s->location = a->location;
result = transformExpr(pstate, (Node*)s);
result = transformExprRecurse(pstate, (Node*)s);
} else if (lexpr && IsA(lexpr, RowExpr) && rexpr && IsA(rexpr, RowExpr)) {
/* "row op row" */
lexpr = transformExpr(pstate, lexpr);
rexpr = transformExpr(pstate, rexpr);
lexpr = transformExprRecurse(pstate, lexpr);
rexpr = transformExprRecurse(pstate, rexpr);
AssertEreport(IsA(lexpr, RowExpr), MOD_OPT, "");
AssertEreport(IsA(rexpr, RowExpr), MOD_OPT, "");
result = make_row_comparison_op(pstate, a->name, ((RowExpr*)lexpr)->args, ((RowExpr*)rexpr)->args, a->location);
} else {
/* Ordinary scalar operator */
lexpr = transformExpr(pstate, lexpr);
rexpr = transformExpr(pstate, rexpr);
lexpr = transformExprRecurse(pstate, lexpr);
rexpr = transformExprRecurse(pstate, rexpr);
result = (Node*)make_op(pstate, a->name, lexpr, rexpr, a->location);
}
@ -1140,8 +1158,8 @@ static Node* transformAExprOp(ParseState* pstate, A_Expr* a)
static Node* transformAExprAnd(ParseState* pstate, A_Expr* a)
{
Node* lexpr = transformExpr(pstate, a->lexpr);
Node* rexpr = transformExpr(pstate, a->rexpr);
Node* lexpr = transformExprRecurse(pstate, a->lexpr);
Node* rexpr = transformExprRecurse(pstate, a->rexpr);
lexpr = coerce_to_boolean(pstate, lexpr, "AND");
rexpr = coerce_to_boolean(pstate, rexpr, "AND");
@ -1151,8 +1169,8 @@ static Node* transformAExprAnd(ParseState* pstate, A_Expr* a)
static Node* transformAExprOr(ParseState* pstate, A_Expr* a)
{
Node* lexpr = transformExpr(pstate, a->lexpr);
Node* rexpr = transformExpr(pstate, a->rexpr);
Node* lexpr = transformExprRecurse(pstate, a->lexpr);
Node* rexpr = transformExprRecurse(pstate, a->rexpr);
lexpr = coerce_to_boolean(pstate, lexpr, "OR");
rexpr = coerce_to_boolean(pstate, rexpr, "OR");
@ -1162,7 +1180,7 @@ static Node* transformAExprOr(ParseState* pstate, A_Expr* a)
static Node* transformAExprNot(ParseState* pstate, A_Expr* a)
{
Node* rexpr = transformExpr(pstate, a->rexpr);
Node* rexpr = transformExprRecurse(pstate, a->rexpr);
rexpr = coerce_to_boolean(pstate, rexpr, "NOT");
@ -1171,24 +1189,24 @@ static Node* transformAExprNot(ParseState* pstate, A_Expr* a)
static Node* transformAExprOpAny(ParseState* pstate, A_Expr* a)
{
Node* lexpr = transformExpr(pstate, a->lexpr);
Node* rexpr = transformExpr(pstate, a->rexpr);
Node* lexpr = transformExprRecurse(pstate, a->lexpr);
Node* rexpr = transformExprRecurse(pstate, a->rexpr);
return (Node*)make_scalar_array_op(pstate, a->name, true, lexpr, rexpr, a->location);
}
static Node* transformAExprOpAll(ParseState* pstate, A_Expr* a)
{
Node* lexpr = transformExpr(pstate, a->lexpr);
Node* rexpr = transformExpr(pstate, a->rexpr);
Node* lexpr = transformExprRecurse(pstate, a->lexpr);
Node* rexpr = transformExprRecurse(pstate, a->rexpr);
return (Node*)make_scalar_array_op(pstate, a->name, false, lexpr, rexpr, a->location);
}
static Node* transformAExprDistinct(ParseState* pstate, A_Expr* a)
{
Node* lexpr = transformExpr(pstate, a->lexpr);
Node* rexpr = transformExpr(pstate, a->rexpr);
Node* lexpr = transformExprRecurse(pstate, a->lexpr);
Node* rexpr = transformExprRecurse(pstate, a->rexpr);
if (lexpr && IsA(lexpr, RowExpr) && rexpr && IsA(rexpr, RowExpr)) {
/* "row op row" */
@ -1201,8 +1219,8 @@ static Node* transformAExprDistinct(ParseState* pstate, A_Expr* a)
static Node* transformAExprNullIf(ParseState* pstate, A_Expr* a)
{
Node* lexpr = transformExpr(pstate, a->lexpr);
Node* rexpr = transformExpr(pstate, a->rexpr);
Node* lexpr = transformExprRecurse(pstate, a->lexpr);
Node* rexpr = transformExprRecurse(pstate, a->rexpr);
OpExpr* result = NULL;
result = (OpExpr*)make_op(pstate, a->name, lexpr, rexpr, a->location);
@ -1235,7 +1253,7 @@ static Node* transformAExprOf(ParseState* pstate, A_Expr* a)
* Checking an expression for match to a list of type names. Will result
* in a boolean constant node.
*/
Node* lexpr = transformExpr(pstate, a->lexpr);
Node* lexpr = transformExprRecurse(pstate, a->lexpr);
Const* result = NULL;
ListCell* telem = NULL;
Oid ltype, rtype;
@ -1296,11 +1314,11 @@ static Node* transformAExprIn(ParseState* pstate, A_Expr* a)
* First step: transform all the inputs, and detect whether any are
* RowExprs or contain Vars.
*/
lexpr = transformExpr(pstate, a->lexpr);
lexpr = transformExprRecurse(pstate, a->lexpr);
haveRowExpr = (lexpr && IsA(lexpr, RowExpr));
rexprs = rvars = rnonvars = NIL;
foreach (l, (List*)a->rexpr) {
Node* rexpr = (Node*)transformExpr(pstate, (Node*)lfirst(l));
Node* rexpr = (Node*)transformExprRecurse(pstate, (Node*)lfirst(l));
haveRowExpr = haveRowExpr || (rexpr && IsA(rexpr, RowExpr));
rexprs = lappend(rexprs, rexpr);
@ -1440,7 +1458,7 @@ static Node* transformFuncCall(ParseState* pstate, FuncCall* fn)
/* Transform the list of arguments ... */
targs = NIL;
foreach (args, fn->args) {
targs = lappend(targs, transformExpr(pstate, (Node*)lfirst(args)));
targs = lappend(targs, transformExprRecurse(pstate, (Node*)lfirst(args)));
}
if (fn->agg_within_group) {
@ -1448,7 +1466,7 @@ static Node* transformFuncCall(ParseState* pstate, FuncCall* fn)
foreach (args, fn->agg_order) {
SortBy* arg = (SortBy*)lfirst(args);
targs = lappend(targs, transformExpr(pstate, arg->node));
targs = lappend(targs, transformExpr(pstate, arg->node, EXPR_KIND_ORDER_BY));
}
}
@ -1514,6 +1532,7 @@ Oid getMultiFuncInfo(char* fun_expr, PLpgSQL_expr* expr)
if (nodeTag(parsetree) == T_SelectStmt) {
SelectStmt* stmt = (SelectStmt*)parsetree;
List* frmList = stmt->fromClause;
pstate->p_expr_kind = EXPR_KIND_FROM_FUNCTION;
ListCell* fl = NULL;
foreach (fl, frmList) {
Node* n = (Node*)lfirst(fl);
@ -1525,7 +1544,7 @@ Oid getMultiFuncInfo(char* fun_expr, PLpgSQL_expr* expr)
FuncCall* fn = (FuncCall*)(r->funccallnode);
foreach (args, fn->args) {
targs = lappend(targs, transformExpr(pstate, (Node*)lfirst(args)));
targs = lappend(targs, transformExprRecurse(pstate, (Node*)lfirst(args)));
}
Node* result = ParseFuncOrColumn(pstate, fn->funcname, targs, fn, fn->location, true);
@ -1563,7 +1582,7 @@ static Node* transformCaseExpr(ParseState* pstate, CaseExpr* c)
newc = makeNode(CaseExpr);
/* transform the test expression, if any */
arg = transformExpr(pstate, (Node*)c->arg);
arg = transformExprRecurse(pstate, (Node*)c->arg);
/* generate placeholder for test expression */
if (arg != NULL) {
@ -1609,12 +1628,12 @@ static Node* transformCaseExpr(ParseState* pstate, CaseExpr* c)
/* shorthand form was specified, so expand... */
warg = (Node*)makeSimpleA_Expr(AEXPR_OP, "=", (Node*)placeholder, warg, w->location);
}
neww->expr = (Expr*)transformExpr(pstate, warg);
neww->expr = (Expr*)transformExprRecurse(pstate, warg);
neww->expr = (Expr*)coerce_to_boolean(pstate, (Node*)neww->expr, "CASE/WHEN");
warg = (Node*)w->result;
neww->result = (Expr*)transformExpr(pstate, warg);
neww->result = (Expr*)transformExprRecurse(pstate, warg);
neww->location = w->location;
newargs = lappend(newargs, neww);
@ -1632,7 +1651,7 @@ static Node* transformCaseExpr(ParseState* pstate, CaseExpr* c)
n->location = -1;
defresult = (Node*)n;
}
newc->defresult = (Expr*)transformExpr(pstate, defresult);
newc->defresult = (Expr*)transformExprRecurse(pstate, defresult);
/*
* Note: default result is considered the most significant type in
@ -1727,7 +1746,7 @@ static Node* transformSubLink(ParseState* pstate, SubLink* sublink)
/*
* Transform lefthand expression, and convert to a list
*/
lefthand = transformExpr(pstate, sublink->testexpr);
lefthand = transformExprRecurse(pstate, sublink->testexpr);
if (lefthand && IsA(lefthand, RowExpr)) {
left_list = ((RowExpr*)lefthand)->args;
} else {
@ -1822,7 +1841,7 @@ static Node* transformArrayExpr(ParseState* pstate, A_ArrayExpr* a, Oid array_ty
AssertEreport(array_type == InvalidOid || array_type == exprType(newe), MOD_OPT, "");
newa->multidims = true;
} else {
newe = transformExpr(pstate, e);
newe = transformExprRecurse(pstate, e);
/*
* Check for sub-array expressions, if we haven't already found
@ -1934,7 +1953,7 @@ static Node* transformRowExpr(ParseState* pstate, RowExpr* r)
newr = makeNode(RowExpr);
/* Transform the field expressions */
newr->args = transformExpressionList(pstate, r->args);
newr->args = transformExpressionList(pstate, r->args, pstate->p_expr_kind);
/* Barring later casting, we consider the type RECORD */
newr->row_typeid = RECORDOID;
@ -1966,7 +1985,7 @@ static Node* transformCoalesceExpr(ParseState* pstate, CoalesceExpr* c)
Node* e = (Node*)lfirst(args);
Node* newe = NULL;
newe = transformExpr(pstate, e);
newe = transformExprRecurse(pstate, e);
newargs = lappend(newargs, newe);
}
@ -2005,7 +2024,7 @@ static Node* transformMinMaxExpr(ParseState* pstate, MinMaxExpr* m)
Node* e = (Node*)lfirst(args);
Node* newe = NULL;
newe = transformExpr(pstate, e);
newe = transformExprRecurse(pstate, e);
newargs = lappend(newargs, newe);
}
@ -2062,7 +2081,7 @@ static Node* transformXmlExpr(ParseState* pstate, XmlExpr* x)
AssertEreport(IsA(r, ResTarget), MOD_OPT, "");
expr = transformExpr(pstate, r->val);
expr = transformExprRecurse(pstate, r->val);
if (r->name) {
argname = map_sql_identifier_to_xml_name(r->name, false, false);
@ -2102,7 +2121,7 @@ static Node* transformXmlExpr(ParseState* pstate, XmlExpr* x)
Node* e = (Node*)lfirst(lc);
Node* newe = NULL;
newe = transformExpr(pstate, e);
newe = transformExprRecurse(pstate, e);
switch (x->op) {
case IS_XMLCONCAT:
newe = coerce_to_specific_type(pstate, newe, XMLOID, "XMLCONCAT");
@ -2158,7 +2177,7 @@ static Node* transformXmlSerialize(ParseState* pstate, XmlSerialize* xs)
xexpr = makeNode(XmlExpr);
xexpr->op = IS_XMLSERIALIZE;
xexpr->args = list_make1(coerce_to_specific_type(pstate, transformExpr(pstate, xs->expr), XMLOID, "XMLSERIALIZE"));
xexpr->args = list_make1(coerce_to_specific_type(pstate, transformExprRecurse(pstate, xs->expr), XMLOID, "XMLSERIALIZE"));
typenameTypeIdAndMod(pstate, xs->typname, &targetType, &targetTypmod);
@ -2215,7 +2234,7 @@ static Node* transformBooleanTest(ParseState* pstate, BooleanTest* b)
clausename = NULL; /* keep compiler quiet */
}
b->arg = (Expr*)transformExpr(pstate, (Node*)b->arg);
b->arg = (Expr*)transformExprRecurse(pstate, (Node*)b->arg);
b->arg = (Expr*)coerce_to_boolean(pstate, (Node*)b->arg, clausename);
@ -2369,7 +2388,7 @@ static Node* transformPredictByFunction(ParseState* pstate, PredictByFunction* p
n->location = p->model_args_location;
n->call_func = false;
return transformExpr(pstate, (Node*)n);
return transformExprRecurse(pstate, (Node*)n);
}
@ -2417,7 +2436,7 @@ static Node* transformWholeRowRef(ParseState* pstate, RangeTblEntry* rte, int lo
static Node* transformTypeCast(ParseState* pstate, TypeCast* tc)
{
Node* result = NULL;
Node* expr = transformExpr(pstate, tc->arg);
Node* expr = transformExprRecurse(pstate, tc->arg);
Oid inputType = exprType(expr);
Oid targetType;
int32 targetTypmod;
@ -2459,7 +2478,7 @@ static Node* transformCollateClause(ParseState* pstate, CollateClause* c)
Oid argtype;
newc = makeNode(CollateExpr);
newc->arg = (Expr*)transformExpr(pstate, c->arg);
newc->arg = (Expr*)transformExprRecurse(pstate, c->arg);
argtype = exprType((Node*)newc->arg);

View File

@ -27,6 +27,7 @@
#include "parser/parse_agg.h"
#include "parser/parse_clause.h"
#include "parser/parse_coerce.h"
#include "parser/parse_expr.h"
#include "parser/parse_func.h"
#include "parser/parse_relation.h"
#include "parser/parse_target.h"
@ -478,6 +479,10 @@ Node* ParseFuncOrColumn(ParseState* pstate, List* funcname, List* fargs, FuncCal
fargs = lappend(fargs, newa);
}
/* if it returns a set, check that's OK */
if (retset)
check_srf_call_placement(pstate, location);
/* build the appropriate output structure */
if (fdresult == FUNCDETAIL_NORMAL) {
FuncExpr* funcexpr = makeNode(FuncExpr);
@ -1920,7 +1925,7 @@ static Node* ParseComplexProjection(ParseState* pstate, char* funcname, Node* fi
AssertEreport(tupdesc, MOD_OPT, "");
for (i = 0; i < tupdesc->natts; i++) {
Form_pg_attribute att = tupdesc->attrs[i];
Form_pg_attribute att = &tupdesc->attrs[i];
if (strcmp(funcname, NameStr(att->attname)) == 0 && !att->attisdropped) {
/* Success, so generate a FieldSelect expression */
@ -2343,3 +2348,167 @@ static Oid cl_get_input_param_original_type(Oid func_id, int argno)
}
return ret;
}
/*
* check_srf_call_placement
* Verify that a set-returning function is called in a valid place,
* and throw a nice error if not.
*
* A side-effect is to set pstate->p_hasTargetSRFs true if appropriate.
*/
void
check_srf_call_placement(ParseState *pstate, int location)
{
const char *err;
bool errkind;
/*
* Check to see if the set-returning function is in an invalid place
* within the query. Basically, we don't allow SRFs anywhere except in
* the targetlist (which includes GROUP BY/ORDER BY expressions), VALUES,
* and functions in FROM.
*
* For brevity we support two schemes for reporting an error here: set
* "err" to a custom message, or set "errkind" true if the error context
* is sufficiently identified by what ParseExprKindName will return, *and*
* what it will return is just a SQL keyword. (Otherwise, use a custom
* message to avoid creating translation problems.)
*/
err = NULL;
errkind = false;
switch (pstate->p_expr_kind) {
case EXPR_KIND_NONE:
Assert(false); /* can't happen */
break;
case EXPR_KIND_OTHER:
/* Accept SRF here; caller must throw error if wanted */
break;
case EXPR_KIND_JOIN_ON:
case EXPR_KIND_JOIN_USING:
err = _("set-returning functions are not allowed in JOIN conditions");
break;
case EXPR_KIND_FROM_SUBSELECT:
/* can't get here, but just in case, throw an error */
errkind = true;
break;
case EXPR_KIND_FROM_FUNCTION:
/* okay ... but we can't check nesting here */
break;
case EXPR_KIND_WHERE:
errkind = true;
break;
case EXPR_KIND_POLICY:
err = _("set-returning functions are not allowed in policy expressions");
break;
case EXPR_KIND_HAVING:
errkind = true;
break;
case EXPR_KIND_FILTER:
errkind = true;
break;
case EXPR_KIND_WINDOW_PARTITION:
case EXPR_KIND_WINDOW_ORDER:
/* okay, these are effectively GROUP BY/ORDER BY */
pstate->p_hasTargetSRFs = true;
break;
case EXPR_KIND_WINDOW_FRAME_RANGE:
case EXPR_KIND_WINDOW_FRAME_ROWS:
err = _("set-returning functions are not allowed in window definitions");
break;
case EXPR_KIND_SELECT_TARGET:
case EXPR_KIND_INSERT_TARGET:
/* okay */
pstate->p_hasTargetSRFs = true;
break;
case EXPR_KIND_UPDATE_SOURCE:
case EXPR_KIND_UPDATE_TARGET:
/* disallowed because it would be ambiguous what to do */
errkind = true;
break;
case EXPR_KIND_GROUP_BY:
case EXPR_KIND_ORDER_BY:
/* okay */
pstate->p_hasTargetSRFs = true;
break;
case EXPR_KIND_DISTINCT_ON:
/* okay */
pstate->p_hasTargetSRFs = true;
break;
case EXPR_KIND_LIMIT:
case EXPR_KIND_OFFSET:
errkind = true;
break;
case EXPR_KIND_RETURNING:
errkind = true;
break;
case EXPR_KIND_VALUES:
/* SRFs are presently not supported by nodeValuesscan.c */
errkind = true;
break;
case EXPR_KIND_VALUES_SINGLE:
/* okay, since we process this like a SELECT tlist */
pstate->p_hasTargetSRFs = true;
break;
case EXPR_KIND_CHECK_CONSTRAINT:
case EXPR_KIND_DOMAIN_CHECK:
err = _("set-returning functions are not allowed in check constraints");
break;
case EXPR_KIND_COLUMN_DEFAULT:
case EXPR_KIND_FUNCTION_DEFAULT:
err = _("set-returning functions are not allowed in DEFAULT expressions");
break;
case EXPR_KIND_INDEX_EXPRESSION:
err = _("set-returning functions are not allowed in index expressions");
break;
case EXPR_KIND_INDEX_PREDICATE:
err = _("set-returning functions are not allowed in index predicates");
break;
case EXPR_KIND_ALTER_COL_TRANSFORM:
err = _("set-returning functions are not allowed in transform expressions");
break;
case EXPR_KIND_EXECUTE_PARAMETER:
err = _("set-returning functions are not allowed in EXECUTE parameters");
break;
case EXPR_KIND_TRIGGER_WHEN:
err = _("set-returning functions are not allowed in trigger WHEN conditions");
break;
case EXPR_KIND_PARTITION_EXPRESSION:
err = _("set-returning functions are not allowed in partition key expression");
break;
case EXPR_KIND_PARTITION_BOUND:
err = _("set-returning functions are not allowed in partition bound expression");
break;
case EXPR_KIND_CALL_ARGUMENT:
err = _("set-returning functions are not allowed in CALL procedure argument");
break;
case EXPR_KIND_COPY_WHERE:
err = _("set-returning functions are not allowed in WHERE condition in COPY FROM");
break;
case EXPR_KIND_GENERATED_COLUMN:
err = _("set-returning functions are not allowed in generation expression");
break;
case EXPR_KIND_WINDOW_FRAME_GROUPS:
err = _("set-returning functions are not allowed in window frame clause with GROUPS");
break;
case EXPR_KIND_MERGE_WHEN:
err = _("set-returning functions are not allowed in merge when expression");
break;
/*
* There is intentionally no default: case here, so that the
* compiler will warn if we add a new ParseExprKind without
* extending this switch. If we do see an unrecognized value at
* runtime, the behavior will be the same as for EXPR_KIND_OTHER,
* which is sane anyway.
*/
}
if (err)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg_internal("%s", err),
parser_errposition(pstate, location)));
if (errkind)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("set-returning functions are not allowed"),
parser_errposition(pstate, location)));
}
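
The switch above relies on two conventions: a context-specific message goes through err, everything else falls back to the generic errkind report, and there is deliberately no default: label so the compiler flags any new ParseExprKind that is not handled. A minimal self-contained sketch of that idiom (the names here are illustrative, not the openGauss ones):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for ParseExprKind. */
typedef enum { KIND_SELECT_TARGET, KIND_LIMIT, KIND_INDEX_EXPRESSION } ExprKind;

static void check_placement(ExprKind kind)
{
    const char *err = NULL;   /* context-specific message, if any */
    int errkind = 0;          /* fall back to the generic message */

    switch (kind) {
        case KIND_SELECT_TARGET:
            /* okay */
            break;
        case KIND_LIMIT:
            errkind = 1;
            break;
        case KIND_INDEX_EXPRESSION:
            err = "set-returning functions are not allowed in index expressions";
            break;
        /*
         * Intentionally no default: with -Wswitch the compiler warns when a
         * new ExprKind value is added but not listed here.
         */
    }
    if (err != NULL) {
        fprintf(stderr, "ERROR: %s\n", err);
        exit(1);
    }
    if (errkind) {
        fprintf(stderr, "ERROR: set-returning functions are not allowed here\n");
        exit(1);
    }
}

int main(void)
{
    check_placement(KIND_SELECT_TARGET);    /* accepted */
    check_placement(KIND_INDEX_EXPRESSION); /* reports the specific message */
    return 0;
}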

View File

@ -1490,6 +1490,10 @@ HintState* create_hintstate(const char* hints)
hstate->cache_plan_hint = keep_last_hint_cell(hstate->cache_plan_hint);
}
if (hstate && hstate->hint_warning != NULL) {
u_sess->parser_cxt.has_hintwarning = true;
}
pfree_ext(hint_str);
return hstate;
}
@ -2687,13 +2691,13 @@ static void set_colinfo_by_relation(Oid relid, int location, SkewColumnInfo* col
relation = heap_open(relid, AccessShareLock);
Assert((location + 1) == relation->rd_att->attrs[location]->attnum);
Assert((location + 1) == relation->rd_att->attrs[location].attnum);
/* Set column info. */
column_info->relation_Oid = relation->rd_att->attrs[location]->attrelid;
column_info->relation_Oid = relation->rd_att->attrs[location].attrelid;
column_info->column_name = column_name;
column_info->attnum = relation->rd_att->attrs[location]->attnum;
column_info->column_typid = relation->rd_att->attrs[location]->atttypid;
column_info->attnum = relation->rd_att->attrs[location].attnum;
column_info->column_typid = relation->rd_att->attrs[location].atttypid;
column_info->expr = NULL;
heap_close(relation, AccessShareLock);
@ -3791,8 +3795,7 @@ bool permit_predpush(PlannerInfo *root)
return !predpushHint->negative;
}
const unsigned int G_NUM_SET_HINT_WHITE_LIST = 33;
const char* G_SET_HINT_WHITE_LIST[G_NUM_SET_HINT_WHITE_LIST] = {
const char* G_SET_HINT_WHITE_LIST[] = {
/* keep in the ascending alphabetical order of frequency */
(char*)"best_agg_plan",
(char*)"cost_weight_index",
@ -3818,6 +3821,7 @@ const char* G_SET_HINT_WHITE_LIST[G_NUM_SET_HINT_WHITE_LIST] = {
(char*)"enable_remotesort",
(char*)"enable_seqscan",
(char*)"enable_sort",
(char*)"enable_sortgroup_agg",
(char*)"enable_stream_operator",
(char*)"enable_stream_recursive",
(char*)"enable_tidscan",
@ -3828,6 +3832,8 @@ const char* G_SET_HINT_WHITE_LIST[G_NUM_SET_HINT_WHITE_LIST] = {
(char*)"seq_page_cost",
(char*)"try_vector_engine_strategy"};
const unsigned int G_NUM_SET_HINT_WHITE_LIST = sizeof(G_SET_HINT_WHITE_LIST) / sizeof(G_SET_HINT_WHITE_LIST[0]);
static int param_str_cmp(const void *s1, const void *s2)
{
const char *key = (const char *)s1;
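
The hunk above replaces the hand-maintained count of 33 with a sizeof-derived length, so adding an entry such as enable_sortgroup_agg cannot leave the count stale; together with a string comparator like param_str_cmp, the sorted list can then be probed with bsearch(). A small self-contained sketch of that pattern, assuming plain strcmp ordering rather than the real comparison:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Whitelist kept sorted so bsearch() can be used. */
static const char *white_list[] = {
    "best_agg_plan",
    "cost_weight_index",
    "enable_seqscan",
    "enable_sort",
    "enable_sortgroup_agg",
};

/* Derived from the array itself; no constant to keep in sync by hand. */
static const unsigned int num_white_list = sizeof(white_list) / sizeof(white_list[0]);

/* s1 is the search key (a plain string), s2 points at an array element (char *). */
static int param_cmp(const void *s1, const void *s2)
{
    const char *key = (const char *)s1;
    const char *elem = *(const char *const *)s2;
    return strcmp(key, elem);
}

static int in_white_list(const char *name)
{
    return bsearch(name, white_list, num_white_list, sizeof(white_list[0]), param_cmp) != NULL;
}

int main(void)
{
    printf("%d %d\n", in_white_list("enable_sortgroup_agg"), in_white_list("work_mem")); /* 1 0 */
    return 0;
}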

View File

@ -289,7 +289,7 @@ static Node* build_equal_expr(
RangeSubselect* range_subselect = (RangeSubselect*)stmt->source_relation;
char* target_aliasname = stmt->relation->alias->aliasname;
int attrno = index_info->ii_KeyAttrNumbers[index];
char* attname = pstrdup(NameStr(target_relation->rd_att->attrs[attrno - 1]->attname));
char* attname = pstrdup(NameStr(target_relation->rd_att->attrs[attrno - 1].attname));
char* source_aliasname = range_subselect->alias->aliasname;
/* build the left expr of the equal expr, which comes from the target relation's index */
@ -677,7 +677,7 @@ static List* transformUpdateTargetList(ParseState* pstate, List* origTlist)
ListCell* origTargetList = NULL;
ListCell* tl = NULL;
tlist = transformTargetList(pstate, origTlist);
tlist = transformTargetList(pstate, origTlist, EXPR_KIND_UPDATE_SOURCE);
/* Prepare to assign non-conflicting resnos to resjunk attributes */
if (pstate->p_next_resno <= RelationGetNumberOfAttributes(pstate->p_target_relation)) {
@ -1185,7 +1185,7 @@ Query* transformMergeStmt(ParseState* pstate, MergeStmt* stmt)
* are evaluated separately during execution to decide which of the
* WHEN MATCHED or WHEN NOT MATCHED actions to execute.
*/
action->qual = transformWhereClause(pstate, mergeWhenClause->condition, "WHEN");
action->qual = transformWhereClause(pstate, mergeWhenClause->condition, EXPR_KIND_MERGE_WHEN, "WHEN");
pstate->p_varnamespace = save_varnamespace;
pstate->p_is_insert = true;
@ -1223,7 +1223,7 @@ Query* transformMergeStmt(ParseState* pstate, MergeStmt* stmt)
* Do basic expression transformation (same as a ROW()
* expr, but allow SetToDefault at top level)
*/
exprList = transformExpressionList(pstate, mergeWhenClause->values);
exprList = transformExpressionList(pstate, mergeWhenClause->values, EXPR_KIND_VALUES_SINGLE);
/*
* If td_compatible_truncation equal true and no foreign table found,
@ -1290,7 +1290,7 @@ Query* transformMergeStmt(ParseState* pstate, MergeStmt* stmt)
* are evaluated separately during execution to decide which of the
* WHEN MATCHED or WHEN NOT MATCHED actions to execute.
*/
action->qual = transformWhereClause(pstate, mergeWhenClause->condition, "WHEN");
action->qual = transformWhereClause(pstate, mergeWhenClause->condition, EXPR_KIND_MERGE_WHEN, "WHEN");
pstate->p_varnamespace = save_varnamespace;
pstate->use_level = false;
@ -1651,7 +1651,7 @@ static void check_target_table_columns(ParseState* pstate, bool is_insert_update
"with column (%s) of unstable default value.",
is_insert_update ? "INSERT ... ON DUPLICATE KEY UPDATE" : "MERGE INTO",
RelationGetRelationName(target_relation),
NameStr(target_relation->rd_att->attrs[attrno - 1]->attname))));
NameStr(target_relation->rd_att->attrs[attrno - 1].attname))));
}
}
list_free_deep(rel_valid_cols);
@ -1814,7 +1814,7 @@ static Bitmapset* get_relation_default_attno_bitmap(Relation relation)
Bitmapset* bitmap = NULL;
Form_pg_attribute attr = NULL;
for (int i = 0; i < RelationGetNumberOfAttributes(relation); i++) {
attr = relation->rd_att->attrs[i];
attr = &relation->rd_att->attrs[i];
if (attr->atthasdef && !attr->attisdropped) {
bitmap = bms_add_member(bitmap, attr->attnum);

View File

@ -346,7 +346,7 @@ ArrayRef* transformArraySubscripts(ParseState* pstate, Node* arrayBase, Oid arra
AssertEreport(IsA(ai, A_Indices), MOD_OPT, "");
if (isSlice) {
if (ai->lidx) {
subexpr = transformExpr(pstate, ai->lidx);
subexpr = transformExpr(pstate, ai->lidx, pstate->p_expr_kind);
/* If it's not int4 already, try to coerce */
subexpr = coerce_to_target_type(
pstate, subexpr, exprType(subexpr), INT4OID, -1, COERCION_ASSIGNMENT, COERCE_IMPLICIT_CAST, -1);
@ -363,7 +363,7 @@ ArrayRef* transformArraySubscripts(ParseState* pstate, Node* arrayBase, Oid arra
}
lowerIndexpr = lappend(lowerIndexpr, subexpr);
}
subexpr = transformExpr(pstate, ai->uidx);
subexpr = transformExpr(pstate, ai->uidx, pstate->p_expr_kind);
if (get_typecategory(arrayType) == TYPCATEGORY_TABLEOF_VARCHAR) {
isIndexByVarchar = true;
}

View File

@ -770,6 +770,10 @@ Expr* make_op(ParseState* pstate, List* opname, Node* ltree, Node* rtree, int lo
result->args = args;
result->location = location;
/* if it returns a set, check that's OK */
if (result->opretset)
check_srf_call_placement(pstate, location);
ReleaseSysCache(tup);
return (Expr*)result;

View File

@ -895,7 +895,7 @@ static void buildRelationAliases(TupleDesc tupdesc, Alias* alias, Alias* eref)
}
for (varattno = 0; varattno < maxattrs; varattno++) {
Form_pg_attribute attr = tupdesc->attrs[varattno];
Form_pg_attribute attr = &tupdesc->attrs[varattno];
Value* attrname = NULL;
if (attr->attisdropped) {
@ -1120,7 +1120,9 @@ Relation parserOpenTable(ParseState *pstate, const RangeVar *relation, int lockm
cancel_parser_errposition_callback(&pcbstate);
/* Forbit DQL/DML on recyclebin object */
if (!ENABLE_SQL_FUSION_ENGINE(IUD_PENDING)) {
TrForbidAccessRbObject(RelationRelationId, RelationGetRelid(rel), relation->relname);
}
/* check wlm session info whether is valid in this database */
if (!CheckWLMSessionInfoTableValid(relation->relname) && !u_sess->attr.attr_common.IsInplaceUpgrade) {
@ -2237,7 +2239,7 @@ static void expandTupleDesc(TupleDesc tupdesc, Alias* eref, int rtindex, int sub
int varattno;
for (varattno = 0; varattno < maxattrs; varattno++) {
Form_pg_attribute attr = tupdesc->attrs[varattno];
Form_pg_attribute attr = &tupdesc->attrs[varattno];
if (attr->attisdropped) {
if (include_dropped) {
@ -2517,7 +2519,7 @@ void get_rte_attribute_type(RangeTblEntry* rte, AttrNumber attnum, Oid* vartype,
errmsg("column %d of relation \"%s\" does not exist", attnum, rte->eref->aliasname)));
}
att_tup = tupdesc->attrs[attnum - 1];
att_tup = &tupdesc->attrs[attnum - 1];
/*
* If dropped column, pretend it ain't there. See notes
@ -2759,7 +2761,7 @@ int attnameAttNum(Relation rd, const char* attname, bool sysColOK)
int i;
for (i = 0; i < rd->rd_rel->relnatts; i++) {
Form_pg_attribute att = rd->rd_att->attrs[i];
Form_pg_attribute att = &rd->rd_att->attrs[i];
if (namestrcmp(&(att->attname), attname) == 0 && !att->attisdropped) {
return i + 1;
@ -2823,7 +2825,7 @@ Name attnumAttName(Relation rd, int attid)
if (attid > rd->rd_att->natts) {
ereport(ERROR, (errcode(ERRCODE_INVALID_ATTRIBUTE), errmsg("invalid attribute number %d", attid)));
}
return &rd->rd_att->attrs[attid - 1]->attname;
return &rd->rd_att->attrs[attid - 1].attname;
}
/*
@ -2845,7 +2847,7 @@ Oid attnumTypeId(Relation rd, int attid)
if (attid > rd->rd_att->natts) {
ereport(ERROR, (errcode(ERRCODE_INVALID_ATTRIBUTE), errmsg("invalid attribute number %d", attid)));
}
return rd->rd_att->attrs[attid - 1]->atttypid;
return rd->rd_att->attrs[attid - 1].atttypid;
}
/*
@ -2862,7 +2864,7 @@ Oid attnumCollationId(Relation rd, int attid)
if (attid > rd->rd_att->natts) {
ereport(ERROR, (errcode(ERRCODE_INVALID_ATTRIBUTE), errmsg("invalid attribute number %d", attid)));
}
return rd->rd_att->attrs[attid - 1]->attcollation;
return rd->rd_att->attrs[attid - 1].attcollation;
}
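
A large share of the hunks in this compare view are the same mechanical change: tupdesc->attrs[i] used to be a pointer (Form_pg_attribute) and is now an inline array element (FormData_pg_attribute), so field access becomes attrs[i].field and &attrs[i] is taken wherever a pointer is still wanted. A minimal sketch of the two layouts, with simplified stand-in structs rather than the real catalog types:

#include <stdio.h>

/* Simplified stand-in for FormData_pg_attribute. */
typedef struct AttrData {
    int  attnum;
    int  atttypid;
    char attisdropped;
} AttrData;
typedef AttrData *Attr;   /* stand-in for Form_pg_attribute */

/* Old-style descriptor: array of pointers to separately allocated attributes. */
typedef struct OldDesc {
    int   natts;
    Attr *attrs;          /* accessed as attrs[i]->attnum */
} OldDesc;

/* New-style descriptor: attributes stored inline in one flat array. */
typedef struct NewDesc {
    int       natts;
    AttrData *attrs;      /* accessed as attrs[i].attnum, or &attrs[i] for a pointer */
} NewDesc;

static int first_live_attnum(const NewDesc *desc)
{
    for (int i = 0; i < desc->natts; i++) {
        const AttrData *attr = &desc->attrs[i];   /* was: desc->attrs[i] */
        if (!attr->attisdropped)
            return attr->attnum;                  /* uses of attr itself are unchanged */
    }
    return -1;
}

int main(void)
{
    AttrData cols[2] = {{1, 23, 1}, {2, 25, 0}};
    NewDesc desc = {2, cols};
    printf("%d\n", first_live_attnum(&desc));     /* prints 2 */
    return 0;
}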
/*

View File

@ -1537,7 +1537,7 @@ static void CreateStartWithCTE(ParseState *pstate, Query *qry,
*/
pstate->p_hasStartWith = false;
common_expr->swoptions->connect_by_level_quals =
transformWhereClause(pstate, context->connectByLevelExpr, "LEVEL/ROWNUM quals");
transformWhereClause(pstate, context->connectByLevelExpr, EXPR_KIND_SELECT_TARGET, "LEVEL/ROWNUM quals");
/* need to fix the collations in the quals as well */
assign_expr_collations(pstate, common_expr->swoptions->connect_by_level_quals);

View File

@ -44,7 +44,7 @@ static Node* transformAssignmentSubscripts(ParseState* pstate, Node* basenode, c
int location);
static List* ExpandColumnRefStar(ParseState* pstate, ColumnRef* cref, bool targetlist);
static List* ExpandAllTables(ParseState* pstate, int location);
static List* ExpandIndirectionStar(ParseState* pstate, A_Indirection* ind, bool targetlist);
static List* ExpandIndirectionStar(ParseState* pstate, A_Indirection* ind, bool targetlist, ParseExprKind exprKind);
static List* ExpandSingleTable(ParseState* pstate, RangeTblEntry* rte, int location, bool targetlist);
static List* ExpandRowReference(ParseState* pstate, Node* expr, bool targetlist);
static int FigureColnameInternal(Node* node, char** name);
@ -81,7 +81,7 @@ static char* find_last_field_name(List* field)
* resjunk true if the target should be marked resjunk, ie, it is not
* wanted in the final projected tuple.
*/
TargetEntry* transformTargetEntry(ParseState* pstate, Node* node, Node* expr, char* colname, bool resjunk)
TargetEntry* transformTargetEntry(ParseState* pstate, Node* node, Node* expr, ParseExprKind exprKind, char* colname, bool resjunk)
{
/* Generate a suitable name for column shown in error case */
if (colname == NULL && !resjunk) {
@ -91,7 +91,7 @@ TargetEntry* transformTargetEntry(ParseState* pstate, Node* node, Node* expr, ch
/* Transform the node if caller didn't do it already */
if (expr == NULL) {
expr = transformExpr(pstate, node);
expr = transformExpr(pstate, node, exprKind);
}
ELOG_FIELD_NAME_END;
@ -114,7 +114,7 @@ TargetEntry* transformTargetEntry(ParseState* pstate, Node* node, Node* expr, ch
* At this point, we don't care whether we are doing SELECT, INSERT,
* or UPDATE; we just transform the given expressions (the "val" fields).
*/
List* transformTargetList(ParseState* pstate, List* targetlist)
List* transformTargetList(ParseState* pstate, List* targetlist, ParseExprKind exprKind)
{
List* p_target = NIL;
ListCell* o_target = NULL;
@ -146,7 +146,7 @@ List* transformTargetList(ParseState* pstate, List* targetlist)
if (IsA(llast(ind->indirection), A_Star)) {
/* It is something.*, expand into multiple items */
p_target = list_concat(p_target, ExpandIndirectionStar(pstate, ind, true));
p_target = list_concat(p_target, ExpandIndirectionStar(pstate, ind, true, exprKind));
continue;
}
}
@ -154,7 +154,7 @@ List* transformTargetList(ParseState* pstate, List* targetlist)
/*
* Not "something.*", so transform as a single expression
*/
p_target = lappend(p_target, transformTargetEntry(pstate, res->val, NULL, res->name, false));
p_target = lappend(p_target, transformTargetEntry(pstate, res->val, NULL, exprKind, res->name, false));
pstate->p_target_list = p_target;
}
@ -169,7 +169,7 @@ List* transformTargetList(ParseState* pstate, List* targetlist)
* and the output elements are likewise just expressions without TargetEntry
* decoration. We use this for ROW() and VALUES() constructs.
*/
List* transformExpressionList(ParseState* pstate, List* exprlist)
List* transformExpressionList(ParseState* pstate, List* exprlist, ParseExprKind exprKind)
{
List* result = NIL;
ListCell* lc = NULL;
@ -195,7 +195,7 @@ List* transformExpressionList(ParseState* pstate, List* exprlist)
if (IsA(llast(ind->indirection), A_Star)) {
/* It is something.*, expand into multiple items */
result = list_concat(result, ExpandIndirectionStar(pstate, ind, false));
result = list_concat(result, ExpandIndirectionStar(pstate, ind, false, exprKind));
continue;
}
}
@ -203,7 +203,7 @@ List* transformExpressionList(ParseState* pstate, List* exprlist)
/*
* Not "something.*", so transform as a single expression
*/
result = lappend(result, transformExpr(pstate, e));
result = lappend(result, transformExpr(pstate, e, exprKind));
}
return result;
@ -364,7 +364,8 @@ static void markTargetListOrigin(ParseState* pstate, TargetEntry* tle, Var* var,
* omits the column name list. So we should usually prefer to use
* exprLocation(expr) for errors that can happen in a default INSERT.
*/
Expr* transformAssignedExpr(ParseState* pstate, Expr* expr, char* colname, int attrno, List* indirection, int location)
Expr* transformAssignedExpr(ParseState* pstate, Expr* expr, ParseExprKind exprKind,
char* colname, int attrno, List* indirection, int location)
{
Oid type_id; /* type of value provided */
int32 type_mod; /* typmod of value provided */
@ -372,6 +373,16 @@ Expr* transformAssignedExpr(ParseState* pstate, Expr* expr, char* colname, int a
int32 attrtypmod;
Oid attrcollation; /* collation of target column */
Relation rd = pstate->p_target_relation;
ParseExprKind sv_expr_kind;
/*
* Save and restore identity of expression type we're parsing. We must
* set p_expr_kind here because we can parse subscripts without going
* through transformExpr().
*/
Assert(exprKind != EXPR_KIND_NONE);
sv_expr_kind = pstate->p_expr_kind;
pstate->p_expr_kind = exprKind;
AssertEreport(rd != NULL, MOD_OPT, "");
/*
@ -387,8 +398,8 @@ Expr* transformAssignedExpr(ParseState* pstate, Expr* expr, char* colname, int a
parser_errposition(pstate, location)));
}
attrtype = attnumTypeId(rd, attrno);
attrtypmod = rd->rd_att->attrs[attrno - 1]->atttypmod;
attrcollation = rd->rd_att->attrs[attrno - 1]->attcollation;
attrtypmod = rd->rd_att->attrs[attrno - 1].atttypmod;
attrcollation = rd->rd_att->attrs[attrno - 1].attcollation;
/*
* If the expression is a DEFAULT placeholder, insert the attribute's
@ -528,6 +539,8 @@ Expr* transformAssignedExpr(ParseState* pstate, Expr* expr, char* colname, int a
ELOG_FIELD_NAME_END;
pstate->p_expr_kind = sv_expr_kind;
return expr;
}
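
The new block above saves and restores p_expr_kind around the assignment transformation because, as the comment notes, subscripts can be parsed without going through transformExpr(), so nested code has to read the expression kind from the ParseState itself. A tiny sketch of the save/set/restore idiom, using a made-up, much-reduced state struct:

#include <assert.h>
#include <stdio.h>

/* Hypothetical, much-reduced parse state. */
typedef enum { EXPR_KIND_NONE, EXPR_KIND_UPDATE_TARGET, EXPR_KIND_INSERT_TARGET } ExprKind;
typedef struct ParseStateLite { ExprKind p_expr_kind; } ParseStateLite;

/* Sub-steps (e.g. subscript parsing) read the kind from the state, not a parameter. */
static void parse_subscripts(ParseStateLite *pstate)
{
    printf("parsing subscripts in kind %d\n", pstate->p_expr_kind);
}

static void transform_assigned_expr(ParseStateLite *pstate, ExprKind kind)
{
    assert(kind != EXPR_KIND_NONE);
    ExprKind saved = pstate->p_expr_kind;   /* save the caller's kind */
    pstate->p_expr_kind = kind;             /* expose ours to nested code */

    parse_subscripts(pstate);

    pstate->p_expr_kind = saved;            /* restore on the way out */
}

int main(void)
{
    ParseStateLite pstate = { EXPR_KIND_NONE };
    transform_assigned_expr(&pstate, EXPR_KIND_UPDATE_TARGET);
    return 0;
}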
@ -550,7 +563,8 @@ void updateTargetListEntry(
ParseState* pstate, TargetEntry* tle, char* colname, int attrno, List* indirection, int location)
{
/* Fix up expression as needed */
tle->expr = transformAssignedExpr(pstate, tle->expr, colname, attrno, indirection, location);
tle->expr = transformAssignedExpr(pstate, tle->expr, EXPR_KIND_UPDATE_TARGET,
colname, attrno, indirection, location);
/*
* Set the resno to identify the target column --- the rewriter and
@ -843,7 +857,7 @@ List* checkInsertTargets(ParseState* pstate, List* cols, List** attrnos)
errmsg("pstate->p_target_relation is NULL unexpectedly")));
}
Form_pg_attribute* attr = pstate->p_target_relation->rd_att->attrs;
FormData_pg_attribute* attr = pstate->p_target_relation->rd_att->attrs;
int numcol = RelationGetNumberOfAttributes(pstate->p_target_relation);
int i;
is_blockchain_rel = pstate->p_target_relation->rd_isblockchain;
@ -851,7 +865,7 @@ List* checkInsertTargets(ParseState* pstate, List* cols, List** attrnos)
for (i = 0; i < numcol; i++) {
ResTarget* col = NULL;
if (attr[i]->attisdropped) {
if (attr[i].attisdropped) {
continue;
}
/* If the hidden column in timeseries relation, skip it */
@ -860,7 +874,7 @@ List* checkInsertTargets(ParseState* pstate, List* cols, List** attrnos)
}
col = makeNode(ResTarget);
col->name = pstrdup(NameStr(attr[i]->attname));
col->name = pstrdup(NameStr(attr[i].attname));
if (is_blockchain_rel && strcmp(col->name, "hash") == 0) {
continue;
}
@ -1120,7 +1134,7 @@ static List* ExpandAllTables(ParseState* pstate, int location)
* target list (where we want TargetEntry nodes in the result) and foo.* in
* a ROW() or VALUES() construct (where we want just bare expressions).
*/
static List* ExpandIndirectionStar(ParseState* pstate, A_Indirection* ind, bool targetlist)
static List* ExpandIndirectionStar(ParseState* pstate, A_Indirection* ind, bool targetlist, ParseExprKind exprKind)
{
Node* expr = NULL;
@ -1129,7 +1143,7 @@ static List* ExpandIndirectionStar(ParseState* pstate, A_Indirection* ind, bool
ind->indirection = list_truncate(ind->indirection, list_length(ind->indirection) - 1);
/* And transform that */
expr = transformExpr(pstate, (Node*)ind);
expr = transformExpr(pstate, (Node*)ind, exprKind);
/* Expand the rowtype expression into individual fields */
return ExpandRowReference(pstate, expr, targetlist);
@ -1249,7 +1263,7 @@ static List* ExpandRowReference(ParseState* pstate, Node* expr, bool targetlist)
/* Generate a list of references to the individual fields */
numAttrs = tupleDesc->natts;
for (i = 0; i < numAttrs; i++) {
Form_pg_attribute att = tupleDesc->attrs[i];
Form_pg_attribute att = &tupleDesc->attrs[i];
FieldSelect* fselect = NULL;
if (att->attisdropped) {

View File

@ -11,10 +11,6 @@
* Hence these functions are now called at the start of execution of their
* respective utility commands.
*
* NOTE: in general we must avoid scribbling on the passed-in raw parse
* tree, since it might be in a plan cache. The simplest solution is
* a quick copyObject() call before manipulating the query tree.
*
*
* Portions Copyright (c) 1996-2012, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
@ -169,7 +165,7 @@ static void check_partition_name_less_than(List* partitionList, bool isPartition
static void check_partition_name_start_end(List* partitionList, bool isPartition);
/* for range partition: start/end syntax */
static void precheck_start_end_defstate(List* pos, Form_pg_attribute* attrs,
static void precheck_start_end_defstate(List* pos, FormData_pg_attribute* attrs,
RangePartitionStartEndDefState* defState, bool isPartition);
static Datum get_partition_arg_value(Node* node, bool* isnull);
static Datum evaluate_opexpr(
@ -216,11 +212,6 @@ Oid *namespaceid, bool isFirstNode)
Oid existing_relid;
bool is_ledger_nsp = false;
bool is_row_table = is_ledger_rowstore(stmt->options);
/*
* We must not scribble on the passed-in CreateStmt, so copy it. (This is
* overkill, but easy.)
*/
stmt = (CreateStmt*)copyObject(stmt);
if (uuids != NIL) {
list_free_deep(stmt->uuids);
@ -1271,7 +1262,7 @@ static DistributeBy* GetHideTagDistribution(TupleDesc tupleDesc)
DistributeBy* distributeby = makeNode(DistributeBy);
distributeby->disttype = DISTTYPE_HASH;
for (int attno = 1; attno <= tupleDesc->natts; attno++) {
Form_pg_attribute attribute = tupleDesc->attrs[attno - 1];
Form_pg_attribute attribute = &tupleDesc->attrs[attno - 1];
char* attributeName = NameStr(attribute->attname);
if (attribute->attkvtype == ATT_KV_TAG) {
distributeby->colname = lappend(distributeby->colname, makeString(attributeName));
@ -1549,7 +1540,7 @@ static void transformTableLikeClause(
*/
bool hideTag = false;
for (parent_attno = 1; parent_attno <= tupleDesc->natts; parent_attno++) {
Form_pg_attribute attribute = tupleDesc->attrs[parent_attno - 1];
Form_pg_attribute attribute = &tupleDesc->attrs[parent_attno - 1];
char* attributeName = NameStr(attribute->attname);
ColumnDef* def = NULL;
@ -1815,7 +1806,7 @@ static void transformTableLikeClause(
for (pckNum = 0; pckNum < tupleDesc->constr->clusterKeyNum; pckNum++) {
AttrNumber attrNum = tupleDesc->constr->clusterKeys[pckNum];
Form_pg_attribute attribute = tupleDesc->attrs[attrNum - 1];
Form_pg_attribute attribute = &tupleDesc->attrs[attrNum - 1];
char* attrName = NameStr(attribute->attname);
n->contype = CONSTR_CLUSTER;
@ -2080,7 +2071,7 @@ static void transformTableLikePartitionKeys(
ColumnRef* c = NULL;
Relation partitionRel = NULL;
TupleDesc relationTupleDesc = NULL;
Form_pg_attribute* relationAtts = NULL;
FormData_pg_attribute* relationAtts = NULL;
int relationAttNumber = 0;
Datum partkey_raw = (Datum)0;
ArrayType* partkey_columns = NULL;
@ -2131,7 +2122,7 @@ static void transformTableLikePartitionKeys(
int attnum = (int)(attnums[i]);
if (attnum >= 1 && attnum <= relationAttNumber) {
c = makeNode(ColumnRef);
c->fields = list_make1(makeString(pstrdup(NameStr(relationAtts[attnum - 1]->attname))));
c->fields = list_make1(makeString(pstrdup(NameStr(relationAtts[attnum - 1].attname))));
*partKeyColumns = lappend(*partKeyColumns, c);
*partKeyPosList = lappend_int(*partKeyPosList, attnum - 1);
} else {
@ -2222,7 +2213,7 @@ static void transformTableLikePartitionBoundaries(
Value* boundaryValue = NULL;
Datum boundaryDatum = (Datum)0;
Node* boundaryNode = NULL;
Form_pg_attribute* relation_atts = NULL;
FormData_pg_attribute* relation_atts = NULL;
Form_pg_attribute att = NULL;
int partKeyPos = 0;
int16 typlen = 0;
@ -2247,7 +2238,7 @@ static void transformTableLikePartitionBoundaries(
{
boundaryValue = (Value*)lfirst(boundaryCell);
partKeyPos = (int)lfirst_int(partKeyCell);
att = relation_atts[partKeyPos];
att = &relation_atts[partKeyPos];
/* get the oid/mod/collation/ of partition key */
typid = att->atttypid;
@ -2305,7 +2296,7 @@ static void transformOfType(CreateStmtContext* cxt, TypeName* ofTypename)
tupdesc = lookup_rowtype_tupdesc(ofTypeId, -1);
for (i = 0; i < tupdesc->natts; i++) {
Form_pg_attribute attr = tupdesc->attrs[i];
Form_pg_attribute attr = &tupdesc->attrs[i];
ColumnDef* n = NULL;
if (attr->attisdropped)
@ -2416,7 +2407,7 @@ IndexStmt* generateClonedIndexStmt(
CreateStmtContext* cxt, Relation source_idx, const AttrNumber* attmap, int attmap_length, Relation rel, TransformTableType transformType)
{
Oid source_relid = RelationGetRelid(source_idx);
Form_pg_attribute* attrs = RelationGetDescr(source_idx)->attrs;
FormData_pg_attribute* attrs = RelationGetDescr(source_idx)->attrs;
HeapTuple ht_idxrel;
HeapTuple ht_idx;
Form_pg_class idxrelrec;
@ -2628,7 +2619,7 @@ IndexStmt* generateClonedIndexStmt(
}
/* Copy the original index column name */
iparam->indexcolname = pstrdup(NameStr(attrs[keyno]->attname));
iparam->indexcolname = pstrdup(NameStr(attrs[keyno].attname));
/* Add the collation name, if non-default */
iparam->collation = get_collation(indcollation->values[keyno], keycoltype);
@ -3178,7 +3169,7 @@ static IndexStmt* transformIndexConstraint(Constraint* constraint, CreateStmtCon
*/
if (attnum > 0) {
AssertEreport(attnum <= heap_rel->rd_att->natts, MOD_OPT, "");
attform = heap_rel->rd_att->attrs[attnum - 1];
attform = &heap_rel->rd_att->attrs[attnum - 1];
} else
attform = SystemAttributeDefinition(attnum, heap_rel->rd_rel->relhasoids, RELATION_HAS_BUCKET(heap_rel), RELATION_HAS_UIDS(heap_rel));
attname = pstrdup(NameStr(attform->attname));
@ -3283,7 +3274,7 @@ static IndexStmt* transformIndexConstraint(Constraint* constraint, CreateStmtCon
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("inherited relation \"%s\" is not a table", inh->relname)));
for (count = 0; count < rel->rd_att->natts; count++) {
Form_pg_attribute inhattr = rel->rd_att->attrs[count];
Form_pg_attribute inhattr = &rel->rd_att->attrs[count];
char* inhname = NameStr(inhattr->attname);
if (inhattr->attisdropped)
@ -3405,7 +3396,7 @@ static IndexStmt* transformIndexConstraint(Constraint* constraint, CreateStmtCon
errmsg("inherited relation \"%s\" is not a table or foreign table", inh->relname)));
}
for (count = 0; count < rel->rd_att->natts; count++) {
Form_pg_attribute inhattr = rel->rd_att->attrs[count];
Form_pg_attribute inhattr = &rel->rd_att->attrs[count];
char* inhname = NameStr(inhattr->attname);
if (inhattr->attisdropped)
@ -3580,12 +3571,6 @@ IndexStmt* transformIndexStmt(Oid relid, IndexStmt* stmt, const char* queryStrin
ListCell* l = NULL;
int crossbucketopt = -1; /* -1 means the SQL statement doesn't contain crossbucket option */
/*
* We must not scribble on the passed-in IndexStmt, so copy it. (This is
* overkill, but easy.)
*/
stmt = (IndexStmt*)copyObject(stmt);
/* Set up pstate */
pstate = make_parsestate(NULL);
pstate->p_sourcetext = queryString;
@ -3728,7 +3713,7 @@ IndexStmt* transformIndexStmt(Oid relid, IndexStmt* stmt, const char* queryStrin
/* take care of the where clause */
if (stmt->whereClause) {
stmt->whereClause = transformWhereClause(pstate, stmt->whereClause, "WHERE");
stmt->whereClause = transformWhereClause(pstate, stmt->whereClause, EXPR_KIND_INDEX_PREDICATE, "WHERE");
/* we have to fix its collations too */
assign_expr_collations(pstate, stmt->whereClause);
}
@ -3743,7 +3728,7 @@ IndexStmt* transformIndexStmt(Oid relid, IndexStmt* stmt, const char* queryStrin
ielem->indexcolname = FigureIndexColname(ielem->expr);
/* Now do parse transformation of the expression */
ielem->expr = transformExpr(pstate, ielem->expr);
ielem->expr = transformExpr(pstate, ielem->expr, EXPR_KIND_INDEX_EXPRESSION);
/* We have to fix its collations too */
assign_expr_collations(pstate, ielem->expr);
@ -3756,8 +3741,6 @@ IndexStmt* transformIndexStmt(Oid relid, IndexStmt* stmt, const char* queryStrin
#ifndef ENABLE_MULTIPLE_NODES
ExcludeRownumExpr(pstate, (Node*)ielem->expr);
#endif
if (expression_returns_set(ielem->expr))
ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("index expression cannot return a set")));
}
if (IsElementExisted(indexElements, ielem)) {
@ -3819,9 +3802,6 @@ static bool IsElementExisted(List* indexElements, IndexElem* ielem)
*
* actions and whereClause are output parameters that receive the
* transformed results.
*
* Note that we must not scribble on the passed-in RuleStmt, so we do
* copyObject() on the actions and WHERE clause.
*/
void transformRuleStmt(RuleStmt* stmt, const char* queryString, List** actions, Node** whereClause)
{
@ -3897,7 +3877,7 @@ void transformRuleStmt(RuleStmt* stmt, const char* queryString, List** actions,
}
/* take care of the where clause */
*whereClause = transformWhereClause(pstate, (Node*)copyObject(stmt->whereClause), "WHERE");
*whereClause = transformWhereClause(pstate, stmt->whereClause, EXPR_KIND_WHERE, "WHERE");
/* we have to fix its collations too */
assign_expr_collations(pstate, *whereClause);
@ -3968,7 +3948,7 @@ void transformRuleStmt(RuleStmt* stmt, const char* queryString, List** actions,
addRTEtoQuery(sub_pstate, newrte, false, true, false);
/* Transform the rule action statement */
top_subqry = transformStmt(sub_pstate, (Node*)copyObject(action));
top_subqry = transformStmt(sub_pstate, action);
/*
* We cannot support utility-statement actions (eg NOTIFY) with
* nonempty rule WHERE conditions, because there's no way to make
@ -4131,11 +4111,6 @@ List* transformAlterTableStmt(Oid relid, AlterTableStmt* stmt, const char* query
SplitPartitionState* splitDefState = NULL;
ListCell* cell = NULL;
/*
* We must not scribble on the passed-in AlterTableStmt, so copy it. (This
* is overkill, but easy.)
*/
stmt = (AlterTableStmt*)copyObject(stmt);
/* Caller is responsible for locking the relation */
rel = relation_open(relid, NoLock);
if (IS_FOREIGNTABLE(rel) || IS_STREAM_TABLE(rel)) {
@ -5564,7 +5539,7 @@ List* transformListPartitionValue(ParseState* pstate, List* boundary, bool needC
/* scan value of partition key of per partition */
foreach (valueCell, boundary) {
elem = (Node*)lfirst(valueCell);
result = transformIntoConst(pstate, elem);
result = transformIntoConst(pstate, EXPR_KIND_PARTITION_BOUND, elem);
if (PointerIsValid(result) && needCheck && ((Const*)result)->constisnull && !((Const*)result)->ismaxvalue) {
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@ -5644,7 +5619,7 @@ List* transformRangePartitionValueInternal(ParseState* pstate, List* boundary, b
/* scan max value of partition key of per partition */
foreach (valueCell, boundary) {
maxElem = (Node*)lfirst(valueCell);
result = transformIntoConst(pstate, maxElem, isPartition);
result = transformIntoConst(pstate, EXPR_KIND_PARTITION_BOUND, maxElem, isPartition);
if (PointerIsValid(result) && needCheck && ((Const*)result)->constisnull && !((Const*)result)->ismaxvalue) {
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@ -5671,12 +5646,12 @@ List* transformRangePartitionValueInternal(ParseState* pstate, List* boundary, b
* Return :
* Notes :
*/
Node* transformIntoConst(ParseState* pstate, Node* maxElem, bool isPartition)
Node* transformIntoConst(ParseState* pstate, ParseExprKind exprKind, Node* maxElem, bool isPartition)
{
Node* result = NULL;
FuncExpr* funcexpr = NULL;
/* transform expression first */
maxElem = transformExpr(pstate, maxElem);
maxElem = transformExpr(pstate, maxElem, exprKind);
/* then, evaluate expression */
switch (nodeTag(maxElem)) {
@ -6111,7 +6086,7 @@ static Oid get_split_partition_oid(Relation partTableRel, SplitPartitionState* s
* precheck_start_end_defstate
* precheck start/end value of a range partition defstate
*/
static void precheck_start_end_defstate(List* pos, Form_pg_attribute* attrs,
static void precheck_start_end_defstate(List* pos, FormData_pg_attribute* attrs,
RangePartitionStartEndDefState* defState, bool isPartition)
{
ListCell* cell = NULL;
@ -6126,7 +6101,7 @@ static void precheck_start_end_defstate(List* pos, Form_pg_attribute* attrs,
foreach (cell, pos) {
int i = lfirst_int(cell);
switch (attrs[i]->atttypid) {
switch (attrs[i].atttypid) {
case INT2OID:
case INT4OID:
case INT8OID:
@ -6815,7 +6790,7 @@ static List* DividePartitionStartEndInterval(ParseState* pstate, Form_pg_attribu
*
 * RETURN: a new partition list (written in "less/than" syntax).
*/
List* transformRangePartStartEndStmt(ParseState* pstate, List* partitionList, List* pos, Form_pg_attribute* attrs,
List* transformRangePartStartEndStmt(ParseState* pstate, List* partitionList, List* pos, FormData_pg_attribute* attrs,
int32 existPartNum, Const* lowBound, Const* upBound, bool needFree, bool isPartition)
{
ListCell* cell = NULL;
@ -6889,7 +6864,7 @@ List* transformRangePartStartEndStmt(ParseState* pstate, List* partitionList, Li
/* check: datatype of partition key */
foreach (cell, pos) {
i = lfirst_int(cell);
attr = attrs[i];
attr = &attrs[i];
target_type = attr->atttypid;
switch (target_type) {
@ -6912,7 +6887,7 @@ List* transformRangePartStartEndStmt(ParseState* pstate, List* partitionList, Li
ereport(ERROR,
(errcode(ERRCODE_DATATYPE_MISMATCH),
errmsg("datatype of column \"%s\" is unsupported for %s key in start/end clause.",
NameStr(attrs[i]->attname), (isPartition ? "partition" : "distribution")),
NameStr(attrs[i].attname), (isPartition ? "partition" : "distribution")),
errhint("Valid datatypes are: smallint, int, bigint, float4/real, float8/double, numeric, date "
"and timestamp [with time zone].")));
break;

View File

@ -49,7 +49,9 @@ List* raw_parser(const char* str, List** query_string_locationlist)
core_yyscan_t yyscanner;
base_yy_extra_type yyextra;
int yyresult;
#ifndef ENABLE_MULTIPLE_NODES
if (!u_sess->attr.attr_common.enable_parser_fusion) {
#endif
/* reset u_sess->parser_cxt.stmt_contains_operator_plus */
resetOperatorPlusFlag();
@ -58,9 +60,12 @@ List* raw_parser(const char* str, List** query_string_locationlist)
/* reset u_sess->parser_cxt.isCreateFuncOrProc */
resetCreateFuncFlag();
#ifndef ENABLE_MULTIPLE_NODES
}
#endif
/* initialize the flex scanner */
yyscanner = scanner_init(str, &yyextra.core_yy_extra, ScanKeywords, NumScanKeywords);
yyscanner = scanner_init(str, &yyextra.core_yy_extra, &ScanKeywords, ScanKeywordTokens);
/* base_yylex() only needs this much initialization */
yyextra.lookahead_num = 0;

View File

@ -46,6 +46,23 @@
#undef fprintf
#define fprintf(file, fmt, msg) ereport(ERROR, (errmsg_internal("%s", msg)))
/*
* Constant data exported from this file. This array maps from the
* zero-based keyword numbers returned by ScanKeywordLookup to the
* Bison token numbers needed by gram.y. This is exported because
* callers need to pass it to scanner_init, if they are using the
* standard keyword list ScanKeywords.
*/
#define PG_KEYWORD(kwname, value, category) value,
const uint16 ScanKeywordTokens[] = {
#include "parser/kwlist.h"
};
#undef PG_KEYWORD
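
With this change the keyword table is split into parallel pieces: ScanKeywordLookup returns a zero-based keyword number (or -1), GetScanKeyword recovers the spelling, and ScanKeywordTokens[kwnum] supplies the Bison token, as the scanner rules further down show. A minimal sketch of the parallel-array lookup, using a plain sorted array in place of the real ScanKeywordList:

#include <stdio.h>
#include <string.h>

/* Keyword spellings, kept sorted so a search can return an index. */
static const char *kw_names[] = { "begin", "create", "select", "trigger" };
/* Parallel array: kw_tokens[i] is the parser token for kw_names[i]. */
enum { TOK_BEGIN = 100, TOK_CREATE, TOK_SELECT, TOK_TRIGGER };
static const int kw_tokens[] = { TOK_BEGIN, TOK_CREATE, TOK_SELECT, TOK_TRIGGER };
static const int kw_count = sizeof(kw_names) / sizeof(kw_names[0]);

/* Returns the keyword number, or -1 if text is not a keyword. */
static int keyword_lookup(const char *text)
{
    int lo = 0, hi = kw_count - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        int cmp = strcmp(text, kw_names[mid]);
        if (cmp == 0)
            return mid;
        if (cmp < 0)
            hi = mid - 1;
        else
            lo = mid + 1;
    }
    return -1;
}

int main(void)
{
    int kwnum = keyword_lookup("create");
    if (kwnum >= 0)
        printf("keyword %s -> token %d\n", kw_names[kwnum], kw_tokens[kwnum]);
    return 0;
}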
/*
* Set the type of YYSTYPE.
*/
@ -514,19 +531,18 @@ other .
* We will pass this along as a normal character string,
* but preceded with an internally-generated "NCHAR".
*/
const ScanKeyword *keyword;
int kwnum;
SET_YYLLOC();
yyless(1); /* eat only 'n' this time */
keyword = ScanKeywordLookup("nchar",
yyextra->keywords,
yyextra->num_keywords);
if (keyword != NULL)
kwnum = ScanKeywordLookup("nchar",
yyextra->keywordlist);
if (kwnum >= 0)
{
yylval->keyword = keyword->name;
yyextra->is_hint_str = false;
return keyword->value;
yylval->keyword = GetScanKeyword(kwnum, yyextra->keywordlist);
return yyextra->keyword_tokens[kwnum];
}
else
{
@ -1033,41 +1049,41 @@ other .
{identifier} {
const ScanKeyword *keyword;
int kwnum;
char *ident;
SET_YYLLOC();
/* Is it a keyword? */
keyword = ScanKeywordLookup(yytext,
yyextra->keywords,
yyextra->num_keywords);
kwnum = ScanKeywordLookup(yytext, yyextra->keywordlist);
yyextra->is_hint_str = false;
bool isPlpgsqlKeyword = yyextra->isPlpgsqlKeyWord;
if (keyword != NULL)
if (kwnum >= 0)
{
yylval->keyword = keyword->name;
yylval->keyword = GetScanKeyword(kwnum,
yyextra->keywordlist);
uint16 token = yyextra->keyword_tokens[kwnum];
/* Find the CREATE PROCEDURE syntax and set dolqstart. */
if (keyword->value == CREATE)
if (token == CREATE)
{
yyextra->is_createstmt = true;
}
else if (keyword->value == TRIGGER && yyextra->is_createstmt)
else if (token == TRIGGER && yyextra->is_createstmt)
{
/* Create trigger don't need set dolqstart */
yyextra->is_createstmt = false;
}
else if ((keyword->value == (isPlpgsqlKeyword? yyextra->plKeywordValue->procedure : PROCEDURE) ||
keyword->value == (isPlpgsqlKeyword? yyextra->plKeywordValue->function : FUNCTION))
else if ((token == (isPlpgsqlKeyword? yyextra->plKeywordValue->procedure : PROCEDURE) ||
token == (isPlpgsqlKeyword? yyextra->plKeywordValue->function : FUNCTION))
&& (yyextra->is_createstmt))
{
/* Make yyextra->dolqstart not NULL means its in a proc with $$. */
yyextra->dolqstart = "";
}
else if (keyword->value == (isPlpgsqlKeyword? yyextra->plKeywordValue->begin : BEGIN_P))
else if (token == (isPlpgsqlKeyword? yyextra->plKeywordValue->begin : BEGIN_P))
{
if (!(u_sess->parser_cxt.isCreateFuncOrProc || u_sess->plsql_cxt.curr_compile_context != NULL)) {
/* cases that have to be a trans stmt and fall quickly */
@ -1079,16 +1095,16 @@ other .
return BEGIN_NON_ANOYBLOCK;
}
}
else if (keyword->value == (isPlpgsqlKeyword? yyextra->plKeywordValue->select : SELECT) ||
keyword->value == (isPlpgsqlKeyword? yyextra->plKeywordValue->update : UPDATE) ||
keyword->value == (isPlpgsqlKeyword? yyextra->plKeywordValue->insert : INSERT) ||
keyword->value == (isPlpgsqlKeyword? yyextra->plKeywordValue->Delete : DELETE_P) ||
keyword->value == MERGE)
else if (token == (isPlpgsqlKeyword? yyextra->plKeywordValue->select : SELECT) ||
token == (isPlpgsqlKeyword? yyextra->plKeywordValue->update : UPDATE) ||
token == (isPlpgsqlKeyword? yyextra->plKeywordValue->insert : INSERT) ||
token == (isPlpgsqlKeyword? yyextra->plKeywordValue->Delete : DELETE_P) ||
token == MERGE)
{
yyextra->is_hint_str = true;
}
return keyword->value;
return token;
}
/*
@ -1235,8 +1251,8 @@ scanner_yyerror(const char *message, core_yyscan_t yyscanner)
core_yyscan_t
scanner_init(const char *str,
core_yy_extra_type *yyext,
const ScanKeyword *keywords,
int num_keywords)
const ScanKeywordList *keywordlist,
const uint16 *keyword_tokens)
{
Size slen = strlen(str);
yyscan_t scanner;
@ -1248,8 +1264,8 @@ scanner_init(const char *str,
core_yyset_extra(yyext, scanner);
yyext->keywords = keywords;
yyext->num_keywords = num_keywords;
yyext->keywordlist = keywordlist;
yyext->keyword_tokens = keyword_tokens;
yyext->in_slash_proc_body = false;
yyext->paren_depth = 0;
yyext->query_string_locationlist = NIL;

View File

@ -160,12 +160,12 @@ char** CopyOps_RawDataToArrayField(TupleDesc tupdesc, char* message, int len)
char* line_end_ptr = NULL;
int fields = tupdesc->natts;
char** raw_fields = NULL;
Form_pg_attribute* attr = tupdesc->attrs;
FormData_pg_attribute* attr = tupdesc->attrs;
errno_t rc = 0;
/* Adjust number of fields depending on dropped attributes */
for (fieldno = 0; fieldno < tupdesc->natts; fieldno++) {
if (attr[fieldno]->attisdropped)
if (attr[fieldno].attisdropped)
fields--;
}
@ -356,7 +356,7 @@ char* CopyOps_BuildOneRowTo(TupleDesc tupdesc, Datum* values, const bool* nulls,
char* res = NULL;
int i;
FmgrInfo* out_functions = NULL;
Form_pg_attribute* attr = tupdesc->attrs;
FormData_pg_attribute* attr = tupdesc->attrs;
StringInfo buf;
/* Get info about the columns we need to process. */
@ -366,10 +366,10 @@ char* CopyOps_BuildOneRowTo(TupleDesc tupdesc, Datum* values, const bool* nulls,
bool isvarlena = false;
/* Do not need any information for dropped attributes */
if (attr[i]->attisdropped)
if (attr[i].attisdropped)
continue;
getTypeOutputInfo(attr[i]->atttypid, &out_func_oid, &isvarlena);
getTypeOutputInfo(attr[i].atttypid, &out_func_oid, &isvarlena);
fmgr_info(out_func_oid, &out_functions[i]);
}
@ -381,7 +381,7 @@ char* CopyOps_BuildOneRowTo(TupleDesc tupdesc, Datum* values, const bool* nulls,
bool isnull = nulls[i];
/* Do not need any information for dropped attributes */
if (attr[i]->attisdropped)
if (attr[i].attisdropped)
continue;
if (need_delim)

View File

@ -139,7 +139,7 @@ void RemoteCopy_BuildStatement(
if (state->is_from) {
for (attnum = 1; attnum <= tupDesc->natts; attnum++) {
/* Don't let dropped attributes go into the column list */
if (tupDesc->attrs[attnum - 1]->attisdropped)
if (tupDesc->attrs[attnum - 1].attisdropped)
continue;
if (!list_member_int(attnums, attnum)) {
@ -148,7 +148,7 @@ void RemoteCopy_BuildStatement(
if (defexpr && ((!pgxc_is_expr_shippable(expression_planner(defexpr), NULL)) ||
(list_member_int(state->idx_dist_by_col, attnum - 1)))) {
appendStringInfoString(
&state->query_buf, quote_identifier(NameStr(tupDesc->attrs[attnum - 1]->attname)));
&state->query_buf, quote_identifier(NameStr(tupDesc->attrs[attnum - 1].attname)));
appendStringInfoString(&state->query_buf, ", ");
}
}

View File

@ -484,7 +484,7 @@ static void distrib_copy_from(RedistribState* distribState, ExecNodes* exec_node
while (contains_tuple) {
char* data = NULL;
int len;
Form_pg_attribute* attr = tupdesc->attrs;
FormData_pg_attribute* attr = tupdesc->attrs;
TupleTableSlot* slot = NULL;
ExecNodes* local_execnodes = NULL;
@ -505,7 +505,7 @@ static void distrib_copy_from(RedistribState* distribState, ExecNodes* exec_node
/* Find value of distribution column if necessary */
for (int i = 0; i < tupdesc->natts; i++) {
att_type[i] = attr[i]->atttypid;
att_type[i] = attr[i].atttypid;
}
local_execnodes = GetRelationNodes(copyState->rel_loc,

View File

@ -650,7 +650,7 @@ static void HandleCopyDataRow(RemoteQueryState* combiner, char* msg_body, size_t
bool* nulls = NULL;
TupleDesc tupdesc = combiner->tuple_desc;
int i, dropped;
Form_pg_attribute* attr = tupdesc->attrs;
FormData_pg_attribute* attr = tupdesc->attrs;
FmgrInfo* in_functions = NULL;
Oid* typioparams = NULL;
char** fields = NULL;
@ -665,10 +665,10 @@ static void HandleCopyDataRow(RemoteQueryState* combiner, char* msg_body, size_t
Oid in_func_oid;
/* Do not need any information for dropped attributes */
if (attr[i]->attisdropped)
if (attr[i].attisdropped)
continue;
getTypeInputInfo(attr[i]->atttypid, &in_func_oid, &typioparams[i]);
getTypeInputInfo(attr[i].atttypid, &in_func_oid, &typioparams[i]);
fmgr_info(in_func_oid, &in_functions[i]);
}
@ -683,14 +683,14 @@ static void HandleCopyDataRow(RemoteQueryState* combiner, char* msg_body, size_t
for (i = 0; i < tupdesc->natts; i++) {
char* string = fields[i - dropped];
/* Do not need any information for dropped attributes */
if (attr[i]->attisdropped) {
if (attr[i].attisdropped) {
dropped++;
nulls[i] = true; /* Consider dropped parameter as NULL */
continue;
}
/* Find value */
values[i] = InputFunctionCall(&in_functions[i], string, typioparams[i], attr[i]->atttypmod);
values[i] = InputFunctionCall(&in_functions[i], string, typioparams[i], attr[i].atttypmod);
/* Setup value with NULL flag if necessary */
if (string == NULL)
nulls[i] = true;
@ -3770,8 +3770,7 @@ RemoteQueryState* ExecInitRemoteQuery(RemoteQuery* node, EState* estate, int efl
ExecAssignExprContext(estate, &remotestate->ss.ps);
/* Initialise child expressions */
remotestate->ss.ps.targetlist = (List*)ExecInitExpr((Expr*)node->scan.plan.targetlist, (PlanState*)remotestate);
remotestate->ss.ps.qual = (List*)ExecInitExpr((Expr*)node->scan.plan.qual, (PlanState*)remotestate);
remotestate->ss.ps.qual = (List*)ExecInitQual(node->scan.plan.qual, (PlanState*)remotestate);
/* check for unsupported flags */
Assert(!(eflags & (EXEC_FLAG_MARK)));
@ -3789,9 +3788,6 @@ RemoteQueryState* ExecInitRemoteQuery(RemoteQuery* node, EState* estate, int efl
ExecInitScanTupleSlot(estate, &remotestate->ss);
scan_type = ExecTypeFromTL(node->base_tlist, false);
ExecAssignScanType(&remotestate->ss, scan_type);
remotestate->ss.ps.ps_TupFromTlist = false;
/*
* If there are parameters supplied, get them into a form to be sent to the
* Datanodes with bind message. We should not have had done this before.
@ -3883,7 +3879,7 @@ PGXCNodeAllHandles* get_exec_connections(
ExprState* estate = ExecInitExpr(expr, (PlanState*)planstate);
Datum partvalue = ExecEvalExpr(estate, planstate->ss.ps.ps_ExprContext, &isnull, NULL);
Datum partvalue = ExecEvalExpr(estate, planstate->ss.ps.ps_ExprContext, &isnull);
MemoryContextSwitchTo(oldContext);
values[i] = partvalue;
@ -7931,10 +7927,10 @@ static void SetDataRowForIntParams(
tdesc = dataSlot->tts_tupleDescriptor;
int numatts = tdesc->natts;
for (attindex = 0; attindex < numatts; attindex++) {
rq_state->rqs_param_types[attindex] = tdesc->attrs[attindex]->atttypid;
rq_state->rqs_param_types[attindex] = tdesc->attrs[attindex].atttypid;
/* For unknown param type(maybe a const), we need to convert it to text */
if (tdesc->attrs[attindex]->atttypid == UNKNOWNOID) {
if (tdesc->attrs[attindex].atttypid == UNKNOWNOID) {
rq_state->rqs_param_types[attindex] = TEXTOID;
}
}
@ -8013,7 +8009,7 @@ static void SetDataRowForIntParams(
appendBinaryStringInfo(&buf, (char*)&n32, 4);
} else
/* It should switch memctx to ExprContext for makenode in ExecInitExpr */
pgxc_append_param_val(&buf, dataSlot->tts_values[attindex], tdesc->attrs[attindex]->atttypid);
pgxc_append_param_val(&buf, dataSlot->tts_values[attindex], tdesc->attrs[attindex].atttypid);
}
}
@ -9104,7 +9100,7 @@ static void FetchGlobalStatisticsFromDN(int dn_conn_count, PGXCNodeHandle** pgxc
bool typisvarlena = false;
char *corrs = NULL, *tmp = NULL;
getTypeOutputInfo(
scanslot->tts_tupleDescriptor->attrs[k]->atttypid, &foutoid, &typisvarlena);
scanslot->tts_tupleDescriptor->attrs[k].atttypid, &foutoid, &typisvarlena);
corrs = OidOutputFunctionCall(foutoid, scanslot->tts_values[k]);
while (corrs != NULL) {
if (*corrs == '{')
@ -9368,9 +9364,9 @@ bool PgfdwGetRelAttnum(int2vector* keys, PGFDWTableAnalyze* info)
for (int i = 0; i < tupdesc->natts; i++) {
for (int j = 0; j < attnum; j++) {
tup_attname = tupdesc->attrs[i]->attname.data;
tup_attname = tupdesc->attrs[i].attname.data;
if (tup_attname && strcmp(tup_attname, att_name[j]) == 0) {
real_attnum[total] = tupdesc->attrs[i]->attnum;
real_attnum[total] = tupdesc->attrs[i].attnum;
total++;
break;
}
@ -9415,9 +9411,9 @@ bool PgfdwGetRelAttnum(TupleTableSlot* slot, PGFDWTableAnalyze* info)
TupleDesc tupdesc = RelationGetDescr(rel);
for (int i = 0; i < tupdesc->natts; i++) {
tup_attname = tupdesc->attrs[i]->attname.data;
tup_attname = tupdesc->attrs[i].attname.data;
if (tup_attname && strcmp(tup_attname, att_name) == 0) {
real_attnum = tupdesc->attrs[i]->attnum;
real_attnum = tupdesc->attrs[i].attnum;
break;
}
}

View File

@ -489,7 +489,7 @@ Datum pg_pool_validate(PG_FUNCTION_ARGS)
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* construct a tuple descriptor for the result row. */
tupdesc = CreateTemplateTupleDesc(2, false, TAM_HEAP);
tupdesc = CreateTemplateTupleDesc(2, false);
TupleDescInitEntry(tupdesc, (AttrNumber)1, "pid", INT8OID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)2, "node_name", TEXTOID, -1, 0);
funcctx->tuple_desc = BlessTupleDesc(tupdesc);

View File

@ -196,9 +196,9 @@ int WaitLatch(volatile Latch* latch, int wakeEvents, long timeout)
* Like WaitLatch, but with an extra socket argument for WL_SOCKET_*
* conditions.
*
* When waiting on a socket, WL_SOCKET_READABLE *must* be included in
* 'wakeEvents'; WL_SOCKET_WRITEABLE is optional. The reason for this is
* that EOF and error conditions are reported only via WL_SOCKET_READABLE.
* When waiting on a socket, EOF and error conditions are reported by
* returning the socket as readable/writable or both, depending on
 * WL_SOCKET_READABLE/WL_SOCKET_WRITEABLE being specified.
*/
int WaitLatchOrSocket(volatile Latch* latch, int wakeEvents, pgsocket sock, long timeout)
{
@ -222,8 +222,6 @@ int WaitLatchOrSocket(volatile Latch* latch, int wakeEvents, pgsocket sock, long
wakeEvents &= ~(WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE);
Assert(wakeEvents != 0); /* must have at least one wake event */
/* Cannot specify WL_SOCKET_WRITEABLE without WL_SOCKET_READABLE */
Assert((wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) != WL_SOCKET_WRITEABLE);
if ((wakeEvents & WL_LATCH_SET) && latch->owner_pid != t_thrd.proc_cxt.MyProcPid)
ereport(ERROR, (errcode(ERRCODE_INVALID_OPERATION), errmsg("cannot wait on a latch owned by another process")));
@ -278,7 +276,16 @@ int WaitLatchOrSocket(volatile Latch* latch, int wakeEvents, pgsocket sock, long
break;
}
/* Must wait ... we use poll(2) if available, otherwise select(2) */
/*
* Must wait ... we use poll(2) if available, otherwise select(2).
*
* On at least older linux kernels select(), in violation of POSIX,
* doesn't reliably return a socket as writable if closed - but we
 * rely on that. So far all the known cases of this problem are on
 * platforms that also provide a poll() implementation without that
* bug. If we find one where that's not the case, we'll need to add a
* workaround.
*/
#ifdef HAVE_POLL
nfds = 0;
if (wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) {
@ -320,11 +327,19 @@ int WaitLatchOrSocket(volatile Latch* latch, int wakeEvents, pgsocket sock, long
result |= WL_TIMEOUT;
} else {
/* at least one event occurred, so check revents values */
if ((wakeEvents & WL_SOCKET_READABLE) && (pfds[0].revents & (POLLIN | POLLHUP | POLLERR | POLLNVAL))) {
if ((wakeEvents & WL_SOCKET_READABLE) && (pfds[0].revents & POLLIN)) {
/* data available in socket, or EOF/error condition */
result |= WL_SOCKET_READABLE;
}
if ((wakeEvents & WL_SOCKET_WRITEABLE) && (pfds[0].revents & POLLOUT)) {
/* socket is writable */
result |= WL_SOCKET_WRITEABLE;
}
if (pfds[0].revents & (POLLHUP | POLLERR | POLLNVAL)) {
/* EOF/error condition */
if (wakeEvents & WL_SOCKET_READABLE)
result |= WL_SOCKET_READABLE;
if (wakeEvents & WL_SOCKET_WRITEABLE)
result |= WL_SOCKET_WRITEABLE;
}
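
The reworked poll branch above reports plain readability and writability separately and maps POLLHUP/POLLERR/POLLNVAL onto whichever events the caller asked for, which is what lets a caller wait on WL_SOCKET_WRITEABLE alone and still learn about EOF or errors. A self-contained sketch of that mapping (the WANT_* flags are illustrative stand-ins for the WL_SOCKET_* bits):

#include <poll.h>
#include <stdio.h>

#define WANT_READABLE  (1 << 0)
#define WANT_WRITEABLE (1 << 1)

/*
 * Map poll(2) revents to the caller-visible result: plain POLLIN/POLLOUT map
 * to the matching flag, while HUP/ERR/NVAL are reported through every flag
 * the caller asked for, so a writability-only wait still sees EOF and errors.
 */
static int map_revents(short revents, int wake_events)
{
    int result = 0;

    if ((wake_events & WANT_READABLE) && (revents & POLLIN))
        result |= WANT_READABLE;
    if ((wake_events & WANT_WRITEABLE) && (revents & POLLOUT))
        result |= WANT_WRITEABLE;
    if (revents & (POLLHUP | POLLERR | POLLNVAL)) {
        if (wake_events & WANT_READABLE)
            result |= WANT_READABLE;
        if (wake_events & WANT_WRITEABLE)
            result |= WANT_WRITEABLE;
    }
    return result;
}

int main(void)
{
    /* A hung-up peer is now visible even to a writability-only waiter. */
    printf("%d\n", map_revents(POLLHUP, WANT_WRITEABLE));  /* prints 2 */
    return 0;
}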
@ -396,6 +411,7 @@ int WaitLatchOrSocket(volatile Latch* latch, int wakeEvents, pgsocket sock, long
result |= WL_SOCKET_READABLE;
}
if ((wakeEvents & WL_SOCKET_WRITEABLE) && FD_ISSET(sock, &output_mask)) {
/* socket is writable, or EOF */
result |= WL_SOCKET_WRITEABLE;
}
if ((wakeEvents & WL_POSTMASTER_DEATH) &&

View File

@ -109,8 +109,6 @@ int WaitLatchOrSocket(volatile Latch* latch, int wakeEvents, pgsocket sock, long
wakeEvents &= ~(WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE);
Assert(wakeEvents != 0); /* must have at least one wake event */
/* Cannot specify WL_SOCKET_WRITEABLE without WL_SOCKET_READABLE */
Assert((wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) != WL_SOCKET_WRITEABLE);
if ((wakeEvents & WL_LATCH_SET) && latch->owner_pid != t_thrd.proc_cxt.MyProcPid)
ereport(ERROR, (errcode(ERRCODE_INVALID_OPERATION), errmsg("cannot wait on a latch owned by another process")));
@ -141,10 +139,10 @@ int WaitLatchOrSocket(volatile Latch* latch, int wakeEvents, pgsocket sock, long
numevents = 2;
if (wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) {
/* Need an event object to represent events on the socket */
int flags = 0;
int flags = FD_CLOSE; /* always check for errors/EOF */
if (wakeEvents & WL_SOCKET_READABLE)
flags |= (FD_READ | FD_CLOSE);
flags |= FD_READ;
if (wakeEvents & WL_SOCKET_WRITEABLE)
flags |= FD_WRITE;
@ -214,12 +212,20 @@ int WaitLatchOrSocket(volatile Latch* latch, int wakeEvents, pgsocket sock, long
ereport(ERROR,
(errcode(ERRCODE_SYSTEM_ERROR),
errmsg("failed to enumerate network events: error code %u", WSAGetLastError())));
if ((wakeEvents & WL_SOCKET_READABLE) && (resEvents.lNetworkEvents & (FD_READ | FD_CLOSE))) {
if ((wakeEvents & WL_SOCKET_READABLE) && (resEvents.lNetworkEvents & FD_READ)) {
result |= WL_SOCKET_READABLE;
}
if ((wakeEvents & WL_SOCKET_WRITEABLE) && (resEvents.lNetworkEvents & FD_WRITE)) {
result |= WL_SOCKET_WRITEABLE;
}
if (resEvents.lNetworkEvents & FD_CLOSE) {
if (wakeEvents & WL_SOCKET_READABLE) {
result |= WL_SOCKET_READABLE;
}
if (wakeEvents & WL_SOCKET_WRITEABLE) {
result |= WL_SOCKET_WRITEABLE;
}
}
} else if ((wakeEvents & WL_POSTMASTER_DEATH) && rc == WAIT_OBJECT_0 + pmdeath_eventno) {
/*
* Postmaster apparently died. Since the consequences of falsely

View File

@ -47,7 +47,7 @@ static void tt_setup_firstcall(FuncCallContext* funcctx, Oid prsid)
st->list = (LexDescr*)DatumGetPointer(OidFunctionCall1(prs->lextypeOid, (Datum)0));
funcctx->user_fctx = (void*)st;
tupdesc = CreateTemplateTupleDesc(3, false, TAM_HEAP);
tupdesc = CreateTemplateTupleDesc(3, false);
TupleDescInitEntry(tupdesc, (AttrNumber)1, "tokid", INT4OID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)2, "alias", TEXTOID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)3, "description", TEXTOID, -1, 0);
@ -185,7 +185,7 @@ static void prs_setup_firstcall(FuncCallContext* funcctx, Oid prsid, text* txt)
st->cur = 0;
funcctx->user_fctx = (void*)st;
tupdesc = CreateTemplateTupleDesc(2, false, TAM_HEAP);
tupdesc = CreateTemplateTupleDesc(2, false);
TupleDescInitEntry(tupdesc, (AttrNumber)1, "tokid", INT4OID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)2, "token", TEXTOID, -1, 0);

View File

@ -86,7 +86,7 @@ my $prorettype; #[6]
foreach my $row (@{ $catalog{builtindata} })
{
if ($row =~ /_0\(([0-9A-Z]+)\),\s+_1\(\"(\S+)\"\),\s+_2\((\d+)\),\s+_3\((\w+)\),\s+_4\((\w+)\),\s+.+?_6\((\d+)\),.+?_25\(\"(\w+)\"\),/)
if ($row =~ /_0\(([0-9A-Z_]+)\),\s+_1\(\"(\S+)\"\),\s+_2\((\d+)\),\s+_3\((\w+)\),\s+_4\((\w+)\),\s+.+?_6\((\w+)\),.+?_9\(INTERNALlanguageId\),.+?_25\(\"(.+?)\"\),/)
{
$foid = $1;
$funcName = $2;
@ -218,12 +218,58 @@ foreach my $s (sort { $a->{proname} cmp $b->{proname} } @fmgr)
# Create the fmgr_builtins table
print T "\nconst FmgrBuiltin fmgr_builtins[] = {\n";
my @fmgr_builtin_oid_index;
my $last_builtin_oid = 0;
foreach my $s (sort { $a->{prosrc} cmp $b->{prosrc} } @fmgr)
{
print T
" { $s->{oid}, \"$s->{prosrc}\", $s->{nargs}, $s->{strict}, $s->{retset}, $s->{prosrc}, $s->{prorettype} },\n";
$nfmgrfuncs = $nfmgrfuncs + 1;
" { $s->{oid}, \"$s->{prosrc}\", $s->{nargs}, $s->{strict}, $s->{retset}, $s->{prosrc}, $s->{prorettype} }";
$fmgr_builtin_oid_index[$s->{oid}] = $nfmgrfuncs++;
if ($nfmgrfuncs <= $#fmgr)
{
print T ",\n";
}
else
{
print T "\n";
}
$last_builtin_oid = $s->{oid};
}
print T "};\n";
printf T qq|
const int fmgr_nbuiltins = (sizeof(fmgr_builtins) / sizeof(FmgrBuiltin));
const Oid fmgr_last_builtin_oid = %u;
|, $last_builtin_oid;
# Create fmgr_builtin_oid_index table.
printf T qq|
const uint16 fmgr_builtin_oid_index[%u] = {
|, $last_builtin_oid + 1;
for (my $i = 0; $i <= $last_builtin_oid; $i++)
{
my $oid = $fmgr_builtin_oid_index[$i];
# fmgr_builtin_oid_index is sparse, map nonexistent functions to
# InvalidOidBuiltinMapping
if (not defined $oid)
{
$oid = 'InvalidOidBuiltinMapping';
}
if ($i == $last_builtin_oid)
{
print T " $oid\n";
}
else
{
print T " $oid,\n";
}
}
print T "};\n";
print H "\n#define nBuiltinFuncs $nfuncs\n";
print H "\n#define NFMGRFUNCS $nfmgrfuncs\n";
@ -232,17 +278,6 @@ print H "\n#define NFMGRFUNCS $nfmgrfuncs\n";
# And add the file footers.
print H "\n#endif /* FMGROIDS_H */\n";
print T
qq| /* dummy entry is easier than getting rid of comma after last real one */
/* (not that there has ever been anything wrong with *having* a
comma after the last field in an array initializer) */
{ 0, NULL, 0, false, false, NULL, InvalidOid}
};
/* Note fmgr_nbuiltins excludes the dummy entry */
const int fmgr_nbuiltins = (sizeof(fmgr_builtins) / sizeof(FmgrBuiltin)) - 1;
|;
close(H);
close(T);
close(B);
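
Besides fmgr_builtins[], the generator now emits fmgr_last_builtin_oid and a dense fmgr_builtin_oid_index[] keyed by function OID, so a built-in (INTERNAL language) function can be resolved with a bounds check plus one array access rather than a search over the table. A minimal sketch of that lookup shape, with tiny illustrative arrays in place of the generated ones:

#include <stdio.h>
#include <stdint.h>

/* Sentinel for "this OID is not a builtin"; stand-in for InvalidOidBuiltinMapping. */
#define INV UINT16_MAX

typedef struct Builtin { unsigned int oid; const char *name; } Builtin;

/* What the generator emits: the builtin table ... */
static const Builtin builtins[] = { {2, "two"}, {5, "five"} };
/* ... plus a dense index keyed by OID: slot [oid] is a position in builtins[] or the sentinel. */
static const uint16_t oid_index[] = { INV, INV, 0, INV, INV, 1 };
static const unsigned int last_builtin_oid = 5;

static const Builtin *lookup_builtin(unsigned int oid)
{
    if (oid > last_builtin_oid || oid_index[oid] == INV)
        return NULL;                     /* not a builtin: fall back to the catalog path */
    return &builtins[oid_index[oid]];    /* O(1) array access */
}

int main(void)
{
    printf("%s\n", lookup_builtin(5) ? lookup_builtin(5)->name : "none");   /* five */
    printf("%s\n", lookup_builtin(3) ? lookup_builtin(3)->name : "none");   /* none */
    return 0;
}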

View File

@ -39,7 +39,7 @@ OBJS = acl.o arrayfuncs.o array_selfuncs.o array_typanalyze.o \
tsquery_op.o tsquery_rewrite.o tsquery_util.o tsrank.o \
tsvector.o tsvector_op.o tsvector_parser.o \
txid.o uuid.o windowfuncs.o xml.o extended_statistics.o clientlogic_bytea.o clientlogicsettings.o \
median_aggs.o expr_distinct.o nlssort.o memory_func.o first_last_agg.o
median_aggs.o expr_distinct.o nlssort.o memory_func.o first_last_agg.o expandeddatum.o
like.o: like.cpp like_match.cpp

View File

@ -1731,7 +1731,7 @@ Datum aclexplode(PG_FUNCTION_ARGS)
* build tupdesc for result tuples (matches out parameters in pg_proc
* entry)
*/
tupdesc = CreateTemplateTupleDesc(4, false, TAM_HEAP);
tupdesc = CreateTemplateTupleDesc(4, false);
TupleDescInitEntry(tupdesc, (AttrNumber)1, "grantor", OIDOID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)2, "grantee", OIDOID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)3, "privilege_type", TEXTOID, -1, 0);
@ -5768,43 +5768,6 @@ static Oid get_role_oid_or_public(const char* rolname)
return get_role_oid(rolname, false);
}
/*
* @Description: check whether role is independent role.
* @in roleid : the role need to be check.
* @return : true for independent and false for noindependent.
*/
bool is_role_independent(Oid roleid)
{
HeapTuple rtup = NULL;
bool isNull = false;
bool flag = false;
Relation relation = heap_open(AuthIdRelationId, AccessShareLock);
TupleDesc pg_authid_dsc = RelationGetDescr(relation);
/* Look up the information in pg_authid. */
rtup = SearchSysCache1(AUTHOID, ObjectIdGetDatum(roleid));
if (HeapTupleIsValid(rtup)) {
/*
* For upgrade reason, we must get field value through heap_getattr function
* although it is a char type value.
*/
Datum authidrolkindDatum = heap_getattr(rtup, Anum_pg_authid_rolkind, pg_authid_dsc, &isNull);
if (DatumGetChar(authidrolkindDatum) == ROLKIND_INDEPENDENT)
flag = true;
else
flag = false;
ReleaseSysCache(rtup);
}
heap_close(relation, AccessShareLock);
return flag;
}
/*
* @Description: check whether role is iamauth role whose password has been disabled.
* @in roleid : the role need to be check.

View File

@ -212,6 +212,28 @@ Datum date_out(PG_FUNCTION_ARGS)
PG_RETURN_CSTRING(result);
}
char* output_date_out(DateADT date)
{
char* result = NULL;
struct pg_tm tt, *tm = &tt;
u_sess->utils_cxt.dateoutput_buffer[0] = '\0';
if (DATE_NOT_FINITE(date))
EncodeSpecialDate(date, u_sess->utils_cxt.dateoutput_buffer, MAXDATELEN + 1);
else {
if (unlikely(date > 0 && (INT_MAX - date < POSTGRES_EPOCH_JDATE))) {
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("input julian date is overflow")));
}
j2date(date + POSTGRES_EPOCH_JDATE, &(tm->tm_year), &(tm->tm_mon), &(tm->tm_mday));
EncodeDateOnly(tm, u_sess->time_cxt.DateStyle, u_sess->utils_cxt.dateoutput_buffer);
}
return u_sess->utils_cxt.dateoutput_buffer;
}
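
output_date_out() formats into the preallocated per-session buffer u_sess->utils_cxt.dateoutput_buffer rather than allocating a fresh string for every value, which appears to be one piece of the date/time output speed-up. A minimal sketch of the buffer-reuse pattern, with the usual caveat that the result is only valid until the next call:

#include <stdio.h>

#define OUTBUF_LEN 64

/* Stand-in for the per-session buffer; one per backend, not shared across threads. */
static char date_output_buffer[OUTBUF_LEN];

/* Format a date into the reusable buffer; no allocation per call. */
static const char *fast_date_out(int year, int month, int day)
{
    snprintf(date_output_buffer, OUTBUF_LEN, "%04d-%02d-%02d", year, month, day);
    return date_output_buffer;   /* valid only until the next fast_date_out() call */
}

int main(void)
{
    puts(fast_date_out(2023, 2, 13));
    puts(fast_date_out(1999, 12, 31));   /* overwrites the previous result */
    return 0;
}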
/*
* date_recv - converts external binary format to date
*/

View File

@ -43,9 +43,12 @@ static int DecodeTimezone(const char* str, int* tzp);
static const datetkn* datebsearch(const char* key, const datetkn* base, int nel);
static int DecodeDate(char* str, unsigned int fmask, unsigned int* tmask, bool* is2digits, struct pg_tm* tm);
static int ValidateDate(unsigned int fmask, bool isjulian, bool is2digits, bool bc, struct pg_tm* tm);
static void TrimTrailingZeros(char* str);
static void AppendTrailingZeros(char* str);
static void AppendSeconds(char* cp, int sec, fsec_t fsec, int precision, bool fillzeros);
#ifndef HAVE_INT64_TIMESTAMP
static char *TrimTrailingZeros(char *str);
#endif /* HAVE_INT64_TIMESTAMP */
static char *AppendSeconds(char *cp, int sec, fsec_t fsec, int precision, bool fillzeros);
static void AdjustFractSeconds(double frac, struct pg_tm* tm, fsec_t* fsec, int scale);
static void AdjustFractDays(double frac, struct pg_tm* tm, fsec_t* fsec, int scale);
@ -366,15 +369,17 @@ void GetCurrentTimeUsec(struct pg_tm* tm, fsec_t* fsec, int* tzp)
* but conversations on the lists suggest this isn't desired
* since showing '0.10' is misleading with values of precision(1).
*/
static void TrimTrailingZeros(char* str)
#ifndef HAVE_INT64_TIMESTAMP
static char* TrimTrailingZeros(char* str)
{
int len = strlen(str);
while (len > 1 && *(str + len - 1) == '0' && *(str + len - 2) != '.') {
len--;
*(str + len) = '\0';
}
return str + len;
}
#endif
/*
* Append sections and fractional seconds (if any) at *cp.
@ -382,34 +387,84 @@ static void TrimTrailingZeros(char* str)
* pad to two integral-seconds digits.
* Note that any sign is stripped from the input seconds values.
*/
static void AppendSeconds(char* cp, int sec, fsec_t fsec, int precision, bool fillzeros)
static char* AppendSeconds(char* cp, int sec, fsec_t fsec, int precision, bool fillzeros)
{
errno_t rc;
if (fsec == 0) {
if (fillzeros)
rc = sprintf_s(cp, MAXDATELEN, "%02d", abs(sec));
else
rc = sprintf_s(cp, MAXDATELEN, "%d", abs(sec));
securec_check_ss(rc, "\0", "\0");
} else {
Assert(precision >= 0);
#ifdef HAVE_INT64_TIMESTAMP
/* fsec_t is just an int32 */
if (fillzeros)
rc = sprintf_s(cp, MAXDATELEN, "%02d.%0*d", abs(sec), precision, (int)Abs(fsec));
cp = pg_ltostr_zeropad(cp, Abs(sec), 2);
else
rc = sprintf_s(cp, MAXDATELEN, "%d.%0*d", abs(sec), precision, (int)Abs(fsec));
#else
if (fillzeros)
rc = sprintf_s(cp, MAXDATELEN, "%0*.*f", precision + 3, precision, fabs(sec + fsec));
cp = pg_ltostr(cp, Abs(sec));
if (fsec != 0)
{
int32 value = Abs(fsec);
char *end = &cp[precision + 1];
bool gotnonzero = false;
*cp++ = '.';
/*
* Append the fractional seconds part. Note that we don't want any
* trailing zeros here, so since we're building the number in reverse
* we'll skip appending zeros until we've output a non-zero digit.
*/
while (precision--)
{
int32 oldval = value;
int32 remainder;
value /= 10;
remainder = oldval - value * 10;
/* check if we got a non-zero */
if (remainder)
gotnonzero = true;
if (gotnonzero)
cp[precision] = '0' + remainder;
else
rc = sprintf_s(cp, MAXDATELEN, "%.*f", precision, fabs(sec + fsec));
#endif
securec_check_ss(rc, "\0", "\0");
TrimTrailingZeros(cp);
end = &cp[precision];
}
/*
* If we still have a non-zero value then precision must have not been
* enough to print the number. We punt the problem to pg_ltostr(),
* which will generate a correct answer in the minimum valid width.
*/
if (value)
return pg_ltostr(cp, Abs(fsec));
return end;
}
else
return cp;
#else
/* fsec_t is a double */
if (fsec == 0)
{
if (fillzeros)
return pg_ltostr_zeropad(cp, Abs(sec), 2);
else
return pg_ltostr(cp, Abs(sec));
}
else
{
if (fillzeros)
sprintf(cp, "%0*.*f", precision + 3, precision, fabs(sec + fsec));
else
sprintf(cp, "%.*f", precision, fabs(sec + fsec));
return TrimTrailingZeros(cp);
}
#endif /* HAVE_INT64_TIMESTAMP */
}
/* Variant of above that's specialized to timestamp case */
static void AppendTimestampSeconds(char* cp, struct pg_tm* tm, fsec_t fsec)
static char* AppendTimestampSeconds(char* cp, struct pg_tm* tm, fsec_t fsec)
{
/*
* In float mode, don't print fractional seconds before 1 AD, since it's
@ -419,7 +474,7 @@ static void AppendTimestampSeconds(char* cp, struct pg_tm* tm, fsec_t fsec)
if (tm->tm_year <= 0)
fsec = 0;
#endif
AppendSeconds(cp, tm->tm_sec, fsec, MAX_TIMESTAMP_PRECISION, true);
return AppendSeconds(cp, tm->tm_sec, fsec, MAX_TIMESTAMP_PRECISION, true);
}
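
The rewritten AppendSeconds() builds the fractional part digit-by-digit in reverse and skips trailing zeros as it goes, so the integer-timestamp path no longer needs a separate TrimTrailingZeros() pass. A self-contained sketch of just that trick (names are illustrative; the real code also punts to pg_ltostr() when precision is too small for the value, which is omitted here, and assumes fsec >= 0):

#include <stdio.h>

/* Write up to 'precision' fractional digits of 'fsec' (in units of
 * 10^-precision seconds) after a '.', dropping trailing zeros.
 * Returns a pointer just past the last digit kept. */
static char *append_fraction(char *cp, int fsec, int precision)
{
    char *end = cp + precision + 1;   /* worst case: '.' plus all digits */
    bool gotnonzero = false;

    *cp++ = '.';
    while (precision--) {
        int remainder = fsec % 10;
        fsec /= 10;
        if (remainder)
            gotnonzero = true;
        if (gotnonzero)
            cp[precision] = '0' + remainder;   /* fill from the right */
        else
            end = cp + precision;              /* trailing zero: shrink the result */
    }
    return end;
}

int main(void)
{
    char buf[16];
    char *end = append_fraction(buf, 123000, 6);  /* 0.123000 seconds */
    *end = '\0';
    printf("%s\n", buf);   /* prints ".123" */
    return 0;
}
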
/*
@ -3265,27 +3320,35 @@ static const datetkn* datebsearch(const char* key, const datetkn* base, int nel)
/* EncodeTimezone()
* Append representation of a numeric timezone offset to str.
*/
static void EncodeTimezone(char* str, int tz, int style)
static char* EncodeTimezone(char* str, int tz, int style)
{
int hour, min, sec;
errno_t rc;
sec = abs(tz);
min = sec / SECS_PER_MINUTE;
sec -= min * SECS_PER_MINUTE;
hour = min / MINS_PER_HOUR;
min -= hour * MINS_PER_HOUR;
str += strlen(str);
/* TZ is negated compared to sign we wish to display ... */
*str++ = ((tz <= 0) ? '+' : '-');
if (sec != 0)
rc = sprintf_s(str, MAXDATELEN, "%02d:%02d:%02d", hour, min, sec);
{
str = pg_ltostr_zeropad(str, hour, 2);
*str++ = ':';
str = pg_ltostr_zeropad(str, min, 2);
*str++ = ':';
str = pg_ltostr_zeropad(str, sec, 2);
}
else if (min != 0 || style == USE_XSD_DATES)
rc = sprintf_s(str, MAXDATELEN, "%02d:%02d", hour, min);
{
str = pg_ltostr_zeropad(str, hour, 2);
*str++ = ':';
str = pg_ltostr_zeropad(str, min, 2);
}
else
rc = sprintf_s(str, MAXDATELEN, "%02d", hour);
securec_check_ss(rc, "\0", "\0");
str = pg_ltostr_zeropad(str, hour, 2);
return str;
}
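
EncodeTimezone() now returns the advanced write pointer so callers can keep appending without strlen() scans; the digits come from pg_ltostr_zeropad(), whose implementation is not part of this diff. A hedged, self-contained stand-in with the same contract (write the value zero-padded to minwidth, return a pointer past the last byte written) to show the calling convention only:

#include <stdio.h>

/* Illustrative stand-in for pg_ltostr_zeropad(); the real helper
 * avoids sprintf entirely. Only the contract matters here. */
static char *ltostr_zeropad_sketch(char *str, int value, int minwidth)
{
    int n = sprintf(str, "%0*d", minwidth, value);
    return str + n;
}

int main(void)
{
    char buf[32];
    char *p = buf;
    /* assemble "+05:30" the way EncodeTimezone now does */
    *p++ = '+';
    p = ltostr_zeropad_sketch(p, 5, 2);
    *p++ = ':';
    p = ltostr_zeropad_sketch(p, 30, 2);
    *p = '\0';
    printf("%s\n", buf);   /* prints "+05:30" */
    return 0;
}
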
/* EncodeDateOnly()
@ -3293,71 +3356,71 @@ static void EncodeTimezone(char* str, int tz, int style)
*/
void EncodeDateOnly(struct pg_tm* tm, int style, char* str)
{
errno_t rc;
Assert(tm->tm_mon >= 1 && tm->tm_mon <= MONTHS_PER_YEAR);
size_t str_len = 0;
switch (style) {
case USE_ISO_DATES:
case USE_XSD_DATES:
/* compatible with ISO date formats */
if (tm->tm_year > 0)
rc = sprintf_s(str, MAXDATELEN + 1, "%04d-%02d-%02d", tm->tm_year, tm->tm_mon, tm->tm_mday);
else
rc = sprintf_s(
str, MAXDATELEN + 1, "%04d-%02d-%02d %s", -(tm->tm_year - 1), tm->tm_mon, tm->tm_mday, "BC");
securec_check_ss(rc, "\0", "\0");
str = pg_ltostr_zeropad(str,
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1), 4);
*str++ = '-';
str = pg_ltostr_zeropad(str, tm->tm_mon, 2);
*str++ = '-';
str = pg_ltostr_zeropad(str, tm->tm_mday, 2);
break;
case USE_SQL_DATES:
/* compatible with A db/Ingres date formats */
if (u_sess->time_cxt.DateOrder == DATEORDER_DMY)
rc = sprintf_s(str, MAXDATELEN + 1, "%02d/%02d", tm->tm_mday, tm->tm_mon);
else
rc = sprintf_s(str, MAXDATELEN + 1, "%02d/%02d", tm->tm_mon, tm->tm_mday);
securec_check_ss(rc, "\0", "\0");
str_len = strlen(str);
if (tm->tm_year > 0)
rc = sprintf_s(str + str_len, MAXDATELEN + 1 - str_len, "/%04d", tm->tm_year);
else
rc = sprintf_s(str + str_len, MAXDATELEN + 1 - str_len, "/%04d %s", -(tm->tm_year - 1), "BC");
securec_check_ss(rc, "\0", "\0");
if (u_sess->time_cxt.DateOrder == DATEORDER_DMY) {
str = pg_ltostr_zeropad(str, tm->tm_mday, 2);
*str++ = '/';
str = pg_ltostr_zeropad(str, tm->tm_mon, 2);
}
else {
str = pg_ltostr_zeropad(str, tm->tm_mon, 2);
*str++ = '/';
str = pg_ltostr_zeropad(str, tm->tm_mday, 2);
}
*str++ = '/';
str = pg_ltostr_zeropad(str,
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1), 4);
break;
case USE_GERMAN_DATES:
/* German-style date format */
rc = sprintf_s(str, MAXDATELEN + 1, "%02d.%02d", tm->tm_mday, tm->tm_mon);
securec_check_ss(rc, "\0", "\0");
str_len = strlen(str);
if (tm->tm_year > 0)
rc = sprintf_s(str + str_len, MAXDATELEN + 1 - str_len, ".%04d", tm->tm_year);
else
rc = sprintf_s(str + str_len, MAXDATELEN + 1 - str_len, ".%04d %s", -(tm->tm_year - 1), "BC");
securec_check_ss(rc, "\0", "\0");
str = pg_ltostr_zeropad(str, tm->tm_mday, 2);
*str++ = '.';
str = pg_ltostr_zeropad(str, tm->tm_mon, 2);
*str++ = '.';
str = pg_ltostr_zeropad(str,
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1), 4);
break;
case USE_POSTGRES_DATES:
default:
/* traditional date-only style for openGauss */
if (u_sess->time_cxt.DateOrder == DATEORDER_DMY)
rc = sprintf_s(str, MAXDATELEN + 1, "%02d-%02d", tm->tm_mday, tm->tm_mon);
else
rc = sprintf_s(str, MAXDATELEN + 1, "%02d-%02d", tm->tm_mon, tm->tm_mday);
securec_check_ss(rc, "\0", "\0");
str_len = strlen(str);
if (tm->tm_year > 0)
rc = sprintf_s(str + str_len, MAXDATELEN + 1 - str_len, "-%04d", tm->tm_year);
else
rc = sprintf_s(str + str_len, MAXDATELEN + 1 - str_len, "-%04d %s", -(tm->tm_year - 1), "BC");
securec_check_ss(rc, "\0", "\0");
if (u_sess->time_cxt.DateOrder == DATEORDER_DMY) {
str = pg_ltostr_zeropad(str, tm->tm_mday, 2);
*str++ = '-';
str = pg_ltostr_zeropad(str, tm->tm_mon, 2);
}
else {
str = pg_ltostr_zeropad(str, tm->tm_mon, 2);
*str++ = '-';
str = pg_ltostr_zeropad(str, tm->tm_mday, 2);
}
*str++ = '-';
str = pg_ltostr_zeropad(str,
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1), 4);
break;
}
if (tm->tm_year <= 0)
{
memcpy(str, " BC", 3); /* Don't copy NUL */
str += 3;
}
*str = '\0';
}
/* EncodeTimeOnly()
@ -3370,17 +3433,14 @@ void EncodeDateOnly(struct pg_tm* tm, int style, char* str)
*/
void EncodeTimeOnly(struct pg_tm* tm, fsec_t fsec, bool print_tz, int tz, int style, char* str)
{
errno_t rc = EOK;
/*The length of str is defined where the function is called.*/
rc = sprintf_s(str, MAXDATELEN + 1, "%02d:%02d:", tm->tm_hour, tm->tm_min);
securec_check_ss(rc, "\0", "\0");
str += strlen(str);
AppendSeconds(str, tm->tm_sec, fsec, MAX_TIME_PRECISION, true);
str = pg_ltostr_zeropad(str, tm->tm_hour, 2);
*str++ = ':';
str = pg_ltostr_zeropad(str, tm->tm_min, 2);
*str++ = ':';
str = AppendSeconds(str, tm->tm_sec, fsec, MAX_TIME_PRECISION, true);
if (print_tz)
EncodeTimezone(str, tz, style);
str = EncodeTimezone(str, tz, style);
*str = '\0';
}
/* EncodeDateTime()
@ -3402,7 +3462,6 @@ void EncodeTimeOnly(struct pg_tm* tm, fsec_t fsec, bool print_tz, int tz, int st
void EncodeDateTime(struct pg_tm* tm, fsec_t fsec, bool print_tz, int tz, const char* tzn, int style, char* str)
{
int day;
errno_t rc = EOK;
Assert(tm->tm_mon >= 1 && tm->tm_mon <= MONTHS_PER_YEAR);
/*
@ -3415,137 +3474,114 @@ void EncodeDateTime(struct pg_tm* tm, fsec_t fsec, bool print_tz, int tz, const
case USE_ISO_DATES:
case USE_XSD_DATES:
/* Compatible with ISO-8601 date formats */
if (style == USE_ISO_DATES)
rc = sprintf_s(str,
MAXDATELEN + 1,
"%04d-%02d-%02d %02d:%02d:",
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1),
tm->tm_mon,
tm->tm_mday,
tm->tm_hour,
tm->tm_min);
else
rc = sprintf_s(str,
MAXDATELEN + 1,
"%04d-%02d-%02dT%02d:%02d:",
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1),
tm->tm_mon,
tm->tm_mday,
tm->tm_hour,
tm->tm_min);
securec_check_ss(rc, "\0", "\0");
AppendTimestampSeconds(str + strlen(str), tm, fsec);
str = pg_ltostr_zeropad(str,
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1), 4);
*str++ = '-';
str = pg_ltostr_zeropad(str, tm->tm_mon, 2);
*str++ = '-';
str = pg_ltostr_zeropad(str, tm->tm_mday, 2);
*str++ = (style == USE_ISO_DATES) ? ' ' : 'T';
str = pg_ltostr_zeropad(str, tm->tm_hour, 2);
*str++ = ':';
str = pg_ltostr_zeropad(str, tm->tm_min, 2);
*str++ = ':';
str = AppendTimestampSeconds(str, tm, fsec);
if (print_tz)
EncodeTimezone(str, tz, style);
if (tm->tm_year <= 0) {
rc = sprintf_s(str + strlen(str), MAXDATELEN + 1 - strlen(str), " BC");
securec_check_ss(rc, "\0", "\0");
}
str = EncodeTimezone(str, tz, style);
break;
case USE_SQL_DATES:
/* Compatible with A db/Ingres date formats */
if (u_sess->time_cxt.DateOrder == DATEORDER_DMY)
rc = sprintf_s(str, MAXDATELEN + 1, "%02d/%02d", tm->tm_mday, tm->tm_mon);
else
rc = sprintf_s(str, MAXDATELEN + 1, "%02d/%02d", tm->tm_mon, tm->tm_mday);
securec_check_ss(rc, "\0", "\0");
rc = sprintf_s(str + 5,
MAXDATELEN - 4,
"/%04d %02d:%02d:",
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1),
tm->tm_hour,
tm->tm_min);
securec_check_ss(rc, "\0", "\0");
AppendTimestampSeconds(str + strlen(str), tm, fsec);
if (u_sess->time_cxt.DateOrder == DATEORDER_DMY) {
str = pg_ltostr_zeropad(str, tm->tm_mday, 2);
*str++ = '/';
str = pg_ltostr_zeropad(str, tm->tm_mon, 2);
} else {
str = pg_ltostr_zeropad(str, tm->tm_mon, 2);
*str++ = '/';
str = pg_ltostr_zeropad(str, tm->tm_mday, 2);
}
*str++ = '/';
str = pg_ltostr_zeropad(str,
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1), 4);
*str++ = ' ';
str = pg_ltostr_zeropad(str, tm->tm_hour, 2);
*str++ = ':';
str = pg_ltostr_zeropad(str, tm->tm_min, 2);
*str++ = ':';
str = AppendTimestampSeconds(str, tm, fsec);
/*
* Note: the uses of %.*s in this function would be risky if the
* timezone names ever contain non-ASCII characters. However, all
* TZ abbreviations in the Olson database are plain ASCII.
* TZ abbreviations in the IANA database are plain ASCII.
*/
if (print_tz) {
if (NULL != tzn) {
rc = sprintf_s(str + strlen(str), MAXDATELEN + 1 - strlen(str), " %.*s", MAXTZLEN, tzn);
securec_check_ss(rc, "\0", "\0");
sprintf(str, " %.*s", MAXTZLEN, tzn);
str += strlen(str);
} else
EncodeTimezone(str, tz, style);
}
if (tm->tm_year <= 0) {
rc = sprintf_s(str + strlen(str), MAXDATELEN + 1 - strlen(str), " BC");
securec_check_ss(rc, "\0", "\0");
str = EncodeTimezone(str, tz, style);
}
break;
case USE_GERMAN_DATES:
/* German variant on European style */
rc = sprintf_s(str, MAXDATELEN + 1, "%02d.%02d", tm->tm_mday, tm->tm_mon);
securec_check_ss(rc, "\0", "\0");
rc = sprintf_s(str + 5,
MAXDATELEN - 4,
".%04d %02d:%02d:",
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1),
tm->tm_hour,
tm->tm_min);
securec_check_ss(rc, "\0", "\0");
AppendTimestampSeconds(str + strlen(str), tm, fsec);
str = pg_ltostr_zeropad(str, tm->tm_mday, 2);
*str++ = '.';
str = pg_ltostr_zeropad(str, tm->tm_mon, 2);
*str++ = '.';
str = pg_ltostr_zeropad(str,
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1), 4);
*str++ = ' ';
str = pg_ltostr_zeropad(str, tm->tm_hour, 2);
*str++ = ':';
str = pg_ltostr_zeropad(str, tm->tm_min, 2);
*str++ = ':';
str = AppendTimestampSeconds(str, tm, fsec);
if (print_tz) {
if (NULL != tzn) {
rc = sprintf_s(str + strlen(str), MAXDATELEN + 1 - strlen(str), " %.*s", MAXTZLEN, tzn);
securec_check_ss(rc, "\0", "\0");
sprintf(str, " %.*s", MAXTZLEN, tzn);
str += strlen(str);
} else
EncodeTimezone(str, tz, style);
}
if (tm->tm_year <= 0) {
rc = sprintf_s(str + strlen(str), MAXDATELEN + 1 - strlen(str), " BC");
securec_check_ss(rc, "\0", "\0");
str = EncodeTimezone(str, tz, style);
}
break;
case USE_POSTGRES_DATES:
default:
/* Backward-compatible with traditional openGauss abstime dates */
day = date2j(tm->tm_year, tm->tm_mon, tm->tm_mday);
tm->tm_wday = j2day(day);
rc = strncpy_s(str, MAXDATELEN + 1, days[tm->tm_wday], 3);
securec_check(rc, "\0", "\0");
rc = strcpy_s(str + 3, MAXDATELEN - 2, " ");
securec_check(rc, "\0", "\0");
if (u_sess->time_cxt.DateOrder == DATEORDER_DMY)
rc = sprintf_s(str + 4, MAXDATELEN - 3, "%02d %3s", tm->tm_mday, months[tm->tm_mon - 1]);
else
rc = sprintf_s(str + 4, MAXDATELEN - 3, "%3s %02d", months[tm->tm_mon - 1], tm->tm_mday);
securec_check_ss(rc, "\0", "\0");
rc = sprintf_s(str + 10, MAXDATELEN - 9, " %02d:%02d:", tm->tm_hour, tm->tm_min);
securec_check_ss(rc, "\0", "\0");
AppendTimestampSeconds(str + strlen(str), tm, fsec);
rc = sprintf_s(str + strlen(str),
MAXDATELEN + 1 - strlen(str),
" %04d",
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1));
securec_check_ss(rc, "\0", "\0");
memcpy(str, days[tm->tm_wday], 3);
str += 3;
*str++ = ' ';
if (u_sess->time_cxt.DateOrder == DATEORDER_DMY) {
str = pg_ltostr_zeropad(str, tm->tm_mday, 2);
*str++ = ' ';
memcpy(str, months[tm->tm_mon - 1], 3);
str += 3;
} else {
memcpy(str, months[tm->tm_mon - 1], 3);
str += 3;
*str++ = ' ';
str = pg_ltostr_zeropad(str, tm->tm_mday, 2);
}
*str++ = ' ';
str = pg_ltostr_zeropad(str, tm->tm_hour, 2);
*str++ = ':';
str = pg_ltostr_zeropad(str, tm->tm_min, 2);
*str++ = ':';
str = AppendTimestampSeconds(str, tm, fsec);
*str++ = ' ';
str = pg_ltostr_zeropad(str,
(tm->tm_year > 0) ? tm->tm_year : -(tm->tm_year - 1), 4);
if (print_tz) {
if (NULL != tzn) {
rc = sprintf_s(str + strlen(str), MAXDATELEN + 1 - strlen(str), " %.*s", MAXTZLEN, tzn);
securec_check_ss(rc, "\0", "\0");
sprintf(str, " %.*s", MAXTZLEN, tzn);
str += strlen(str);
} else {
/*
* We have a time zone, but no string version. Use the
@ -3553,18 +3589,17 @@ void EncodeDateTime(struct pg_tm* tm, fsec_t fsec, bool print_tz, int tz, const
* avoid formatting something which would be rejected by
* the date/time parser later. - thomas 2001-10-19
*/
rc = sprintf_s(str + strlen(str), MAXDATELEN + 1 - strlen(str), " ");
securec_check_ss(rc, "\0", "\0");
EncodeTimezone(str, tz, style);
*str++ = ' ';
str = EncodeTimezone(str, tz, style);
}
}
if (tm->tm_year <= 0) {
rc = sprintf_s(str + strlen(str), MAXDATELEN + 1 - strlen(str), " BC");
securec_check_ss(rc, "\0", "\0");
}
break;
}
if (tm->tm_year <= 0) {
memcpy(str, " BC", 3); /* Don't copy NUL */
str += 3;
}
*str = '\0';
}
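
EncodeDateTime() keeps its external contract (fill a caller-supplied buffer); only the internal assembly moved from sprintf_s to pointer-advancing appends, with the " BC" suffix and NUL terminator now added once at the end for every style. A hedged sketch of the call shape; the pg_tm/fsec/tz values are assumed to come from the existing timestamp-decoding paths, and the example output is illustrative:

/* Sketch: format a broken-down timestamp into a stack buffer. */
char buf[MAXDATELEN + 1];
EncodeDateTime(tm, fsec, true /* print_tz */, tz, NULL /* tzn */, USE_ISO_DATES, buf);
/* buf now holds something like "2023-01-12 18:57:38+08" */
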
/*
@ -3745,7 +3780,8 @@ void EncodeInterval(struct pg_tm* tm, fsec_t fsec, int style, char* str)
abs(min));
securec_check_ss(rc, "\0", "\0");
cp += strlen(cp);
AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, true);
cp = AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, true);
*cp = '\0';
}
/* the format for has_year_month */
else if (has_year_month) {
@ -3826,7 +3862,8 @@ void EncodeInterval(struct pg_tm* tm, fsec_t fsec, int style, char* str)
abs(min));
securec_check_ss(rc, "\0", "\0");
cp += strlen(cp);
AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, true);
cp = AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, true);
*cp = '\0';
} else if (has_year_month) {
rc = sprintf_s(cp, curlen, "%d-%d", year, mon);
securec_check_ss(rc, "\0", "\0");
@ -3834,12 +3871,14 @@ void EncodeInterval(struct pg_tm* tm, fsec_t fsec, int style, char* str)
rc = sprintf_s(cp, curlen, "%d %d:%02d:", mday, hour, min);
securec_check_ss(rc, "\0", "\0");
cp += strlen(cp);
AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, true);
cp = AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, true);
*cp = '\0';
} else {
rc = sprintf_s(cp, curlen, "%d:%02d:", hour, min);
securec_check_ss(rc, "\0", "\0");
cp += strlen(cp);
AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, true);
cp = AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, true);
*cp = '\0';
}
} break;
@ -3862,8 +3901,7 @@ void EncodeInterval(struct pg_tm* tm, fsec_t fsec, int style, char* str)
if (sec != 0 || fsec != 0) {
if (sec < 0 || fsec < 0)
*cp++ = '-';
AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, false);
cp += strlen(cp);
cp = AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, false);
*cp++ = 'S';
*cp++ = '\0';
}
@ -3892,7 +3930,8 @@ void EncodeInterval(struct pg_tm* tm, fsec_t fsec, int style, char* str)
abs(min));
securec_check_ss(rc, "\0", "\0");
cp += strlen(cp);
AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, true);
cp = AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, true);
*cp = '\0';
}
break;
@ -3922,12 +3961,8 @@ void EncodeInterval(struct pg_tm* tm, fsec_t fsec, int style, char* str)
*cp++ = '-';
curlen--;
}
AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, false);
int len = strlen(cp);
cp += len;
curlen -= len;
rc = sprintf_s(cp, curlen, " sec%s", (abs(sec) != 1 || fsec != 0) ? "s" : "");
securec_check_ss(rc, "\0", "\0");
cp = AppendSeconds(cp, sec, fsec, MAX_INTERVAL_PRECISION, false);
sprintf(cp, " sec%s", (abs(sec) != 1 || fsec != 0) ? "s" : "");
is_zero = FALSE;
}
/* identically zero? then put in a unitless zero... */
@ -4091,7 +4126,7 @@ Datum pg_timezone_abbrevs(PG_FUNCTION_ARGS)
* build tupdesc for result tuples. This must match this function's
* pg_proc entry!
*/
tupdesc = CreateTemplateTupleDesc(3, false, TAM_HEAP);
tupdesc = CreateTemplateTupleDesc(3, false);
TupleDescInitEntry(tupdesc, (AttrNumber)1, "abbrev", TEXTOID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)2, "utc_offset", INTERVALOID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)3, "is_dst", BOOLOID, -1, 0);
@ -4182,7 +4217,7 @@ Datum pg_timezone_names(PG_FUNCTION_ARGS)
* build tupdesc for result tuples. This must match this function's
* pg_proc entry!
*/
tupdesc = CreateTemplateTupleDesc(4, false, TAM_HEAP);
tupdesc = CreateTemplateTupleDesc(4, false);
TupleDescInitEntry(tupdesc, (AttrNumber)1, "name", TEXTOID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)2, "abbrev", TEXTOID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)3, "utc_offset", INTERVALOID, -1, 0);

View File

@ -313,7 +313,7 @@ static double calculate_coltable_compress_ratio(Relation onerel)
CStoreScanDesc cstoreScanDesc = NULL;
TupleDesc tupdesc = onerel->rd_att;
int attrNum = tupdesc->natts;
Form_pg_attribute* attrs = tupdesc->attrs;
FormData_pg_attribute* attrs = tupdesc->attrs;
CUDesc cuDesc;
CU* cuPtr = NULL;
double total_source_size = 0;
@ -328,7 +328,7 @@ static double calculate_coltable_compress_ratio(Relation onerel)
double numericDataSize = 0;
for (int i = 0; i < attrNum; i++) {
colIdx[i] = attrs[i]->attnum;
colIdx[i] = attrs[i].attnum;
slotIdList[i] = CACHE_BLOCK_INVALID_IDX;
}
@ -346,13 +346,13 @@ static double calculate_coltable_compress_ratio(Relation onerel)
/*sample the first CU of each column, and calculate the compression ratio of this table.*/
for (int col = 0; col < attrNum; col++) {
// skip dropped column
if (attrs[col]->attisdropped) {
if (attrs[col].attisdropped) {
continue;
}
bool found = cstore->GetCUDesc(col, targetblock, &cuDesc, SnapshotNow);
if (found && cuDesc.cu_size != 0) {
cuPtr = cstore->GetCUData(&cuDesc, col, attrs[col]->attlen, slotIdList[col]);
cuPtr = cstore->GetCUData(&cuDesc, col, attrs[col].attlen, slotIdList[col]);
if ((cuPtr->m_infoMode & CU_IntLikeCompressed) && ATT_IS_NUMERIC_TYPE(cuPtr->m_atttypid)) {
numericExpandRatio = 1.5; /* default expand ratio */
numericDataSize = 0;
@ -853,8 +853,8 @@ int64 CalculateCStoreRelationSize(Relation rel, ForkNumber forknum)
} else {
for (int i = 0; i < RelationGetDescr(rel)->natts; i++) {
totalsize += calculate_relation_size(
&rel->rd_node, rel->rd_backend, ColumnId2ColForkNum(rel->rd_att->attrs[i]->attnum));
CFileNode tmpNode(rel->rd_node, rel->rd_att->attrs[i]->attnum, MAIN_FORKNUM);
&rel->rd_node, rel->rd_backend, ColumnId2ColForkNum(rel->rd_att->attrs[i].attnum));
CFileNode tmpNode(rel->rd_node, rel->rd_att->attrs[i].attnum, MAIN_FORKNUM);
CUStorage custore(tmpNode);
for (segcount = 0;; segcount++) {
struct stat fst;
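
In this file (and the json ones below) tupdesc->attrs changes from an array of Form_pg_attribute pointers to a flat FormData_pg_attribute array, so call sites switch from attrs[i]->field to attrs[i].field. A hedged sketch of the new access pattern when walking a descriptor; the loop body is illustrative and not taken from this file:

/* Sketch: iterating a TupleDesc after the attrs[] flattening.
 * Fields are reached with '.', or via a pointer to the element. */
for (int i = 0; i < tupdesc->natts; i++) {
    FormData_pg_attribute *att = &tupdesc->attrs[i];
    if (att->attisdropped)
        continue;
    elog(DEBUG1, "column %s: type %u, typmod %d",
         NameStr(att->attname), att->atttypid, att->atttypmod);
}
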

View File

@ -114,9 +114,6 @@ static void domain_check_input(Datum value, bool isnull, DomainIOData* my_extra)
errmsg("domain %s does not allow null values", format_type_be(my_extra->domain_type))));
break;
case DOM_CONSTRAINT_CHECK: {
Datum conResult;
bool conIsNull = false;
/* Make the econtext if we didn't already */
if (econtext == NULL) {
MemoryContext oldcontext;
@ -136,9 +133,7 @@ static void domain_check_input(Datum value, bool isnull, DomainIOData* my_extra)
econtext->domainValue_datum = value;
econtext->domainValue_isNull = isnull;
conResult = ExecEvalExprSwitchContext(con->check_expr, econtext, &conIsNull, NULL);
if (!conIsNull && !DatumGetBool(conResult))
if (!ExecCheck(con->check_exprstate, econtext))
ereport(ERROR,
(errcode(ERRCODE_CHECK_VIOLATION),
errmsg("value for domain %s violates check constraint \"%s\"",

View File

@ -0,0 +1,16 @@
#include "postgres.h"
#include "utils/expandeddatum.h"
/*
* If the Datum represents a R/W expanded object, change it to R/O.
* Otherwise return the original Datum.
*
* Caller must ensure that the datum is a non-null varlena value. Typically
* this is invoked via MakeExpandedObjectReadOnly(), which checks that.
*/
Datum
MakeExpandedObjectReadOnlyInternal(Datum d)
{
return d;
}
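
The new utils/adt/expandeddatum.cpp is a stub: read/write expanded objects are not carried here, so MakeExpandedObjectReadOnlyInternal() simply hands the Datum back. Per the comment above, callers normally go through a MakeExpandedObjectReadOnly() wrapper from utils/expandeddatum.h; that header is not in this diff, so the three-argument macro signature below is an assumption based on upstream usage, and val/isnull/typlen are hypothetical caller variables:

/* Sketch: force a possibly-writable expanded datum to read-only before
 * handing it to code that must not modify it. With the stub above this
 * is effectively a pass-through. */
Datum ro_val = MakeExpandedObjectReadOnly(val, isnull, typlen);  /* assumed macro signature */

/* Direct use of the internal form, valid only for non-null varlena values: */
Datum ro_val2 = MakeExpandedObjectReadOnlyInternal(val);
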

View File

@ -380,7 +380,7 @@ static void ReadBinaryFileBlocksFirstCall(PG_FUNCTION_ARGS, int32 startBlockNum,
* build tupdesc for result tuples. This must match this function's
* pg_proc entry!
*/
TupleDesc tupdesc = CreateTemplateTupleDesc(4, false, TAM_HEAP);
TupleDesc tupdesc = CreateTemplateTupleDesc(4, false);
TupleDescInitEntry(tupdesc, (AttrNumber)1, "path", TEXTOID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)2, "blocknum", INT4OID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)3, "len", INT4OID, -1, 0);
@ -466,7 +466,7 @@ Datum pg_stat_file(PG_FUNCTION_ARGS)
* This record type had better match the output parameters declared for me
* in pg_proc.h.
*/
tupdesc = CreateTemplateTupleDesc(6, false, TAM_HEAP);
tupdesc = CreateTemplateTupleDesc(6, false);
TupleDescInitEntry(tupdesc, (AttrNumber)1, "size", INT8OID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)2, "access", TIMESTAMPTZOID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)3, "modification", TIMESTAMPTZOID, -1, 0);
@ -635,7 +635,7 @@ Datum pg_stat_file_recursive(PG_FUNCTION_ARGS)
* This record type had better match the output parameters declared for me
* in pg_proc.h.
*/
tupdesc = CreateTemplateTupleDesc(4, false, TAM_HEAP);
tupdesc = CreateTemplateTupleDesc(4, false);
TupleDescInitEntry(tupdesc, (AttrNumber)1, "path", TEXTOID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)2, "filename", TEXTOID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber)3, "size", INT8OID, -1, 0);

View File

@ -1360,7 +1360,7 @@ static void composite_to_json(Datum composite, StringInfo result, bool use_line_
bool typisvarlena = false;
Oid castfunc = InvalidOid;
if (tupdesc->attrs[i]->attisdropped) {
if (tupdesc->attrs[i].attisdropped) {
continue;
}
if (needsep) {
@ -1368,16 +1368,16 @@ static void composite_to_json(Datum composite, StringInfo result, bool use_line_
}
needsep = true;
attname = NameStr(tupdesc->attrs[i]->attname);
attname = NameStr(tupdesc->attrs[i].attname);
escape_json(result, attname);
appendStringInfoChar(result, ':');
val = heap_getattr(tuple, i + 1, tupdesc, &isnull);
getTypeOutputInfo(tupdesc->attrs[i]->atttypid, &typoutput, &typisvarlena);
getTypeOutputInfo(tupdesc->attrs[i].atttypid, &typoutput, &typisvarlena);
if (tupdesc->attrs[i]->atttypid > FirstNormalObjectId) {
if (tupdesc->attrs[i].atttypid > FirstNormalObjectId) {
HeapTuple cast_tuple;
Form_pg_cast castForm;
cast_tuple = SearchSysCache2(CASTSOURCETARGET, ObjectIdGetDatum(tupdesc->attrs[i]->atttypid),
cast_tuple = SearchSysCache2(CASTSOURCETARGET, ObjectIdGetDatum(tupdesc->attrs[i].atttypid),
ObjectIdGetDatum(JSONOID));
if (HeapTupleIsValid(cast_tuple)) {
castForm = (Form_pg_cast) GETSTRUCT(cast_tuple);
@ -1391,14 +1391,14 @@ static void composite_to_json(Datum composite, StringInfo result, bool use_line_
if (castfunc != InvalidOid) {
tcategory = TYPCATEGORY_JSON_CAST;
} else if (tupdesc->attrs[i]->atttypid == RECORDARRAYOID) {
} else if (tupdesc->attrs[i].atttypid == RECORDARRAYOID) {
tcategory = TYPCATEGORY_ARRAY;
} else if (tupdesc->attrs[i]->atttypid == RECORDOID) {
} else if (tupdesc->attrs[i].atttypid == RECORDOID) {
tcategory = TYPCATEGORY_COMPOSITE;
} else if (tupdesc->attrs[i]->atttypid == JSONOID || tupdesc->attrs[i]->atttypid == JSONBOID) {
} else if (tupdesc->attrs[i].atttypid == JSONOID || tupdesc->attrs[i].atttypid == JSONBOID) {
tcategory = TYPCATEGORY_JSON;
} else {
tcategory = TypeCategory(tupdesc->attrs[i]->atttypid);
tcategory = TypeCategory(tupdesc->attrs[i].atttypid);
}
datum_to_json(val, isnull, result, tcategory, typoutput, false);
}

View File

@ -1941,13 +1941,13 @@ static inline Datum populate_record_worker(FunctionCallInfo fcinfo, bool have_re
for (i = 0; i < ncolumns; ++i) {
ColumnIOData *column_info = &my_extra->columns[i];
Oid column_type = tupdesc->attrs[i]->atttypid;
Oid column_type = tupdesc->attrs[i].atttypid;
JsonbValue *v = NULL;
char fname[NAMEDATALEN];
JsonHashEntry *hashentry = NULL;
/* Ignore dropped columns in datatype */
if (tupdesc->attrs[i]->attisdropped) {
if (tupdesc->attrs[i].attisdropped) {
nulls[i] = true;
continue;
}
@ -1955,11 +1955,11 @@ static inline Datum populate_record_worker(FunctionCallInfo fcinfo, bool have_re
if (jtype == JSONOID) {
rc = memset_s(fname, NAMEDATALEN, 0, NAMEDATALEN);
securec_check(rc, "\0", "\0");
rc = strncpy_s(fname, NAMEDATALEN, NameStr(tupdesc->attrs[i]->attname), NAMEDATALEN - 1);
rc = strncpy_s(fname, NAMEDATALEN, NameStr(tupdesc->attrs[i].attname), NAMEDATALEN - 1);
securec_check(rc, "\0", "\0");
hashentry = (JsonHashEntry *)hash_search(json_hash, fname, HASH_FIND, NULL);
} else {
char *key = NameStr(tupdesc->attrs[i]->attname);
char *key = NameStr(tupdesc->attrs[i].attname);
v = findJsonbValueFromSuperHeaderLen(VARDATA(jb), JB_FOBJECT, key, strlen(key));
}
@ -1991,7 +1991,7 @@ static inline Datum populate_record_worker(FunctionCallInfo fcinfo, bool have_re
* checks are done
*/
values[i] = InputFunctionCall(&column_info->proc, NULL, column_info->typioparam,
tupdesc->attrs[i]->atttypmod);
tupdesc->attrs[i].atttypmod);
nulls[i] = true;
} else {
char *s = NULL;
@ -2018,7 +2018,7 @@ static inline Datum populate_record_worker(FunctionCallInfo fcinfo, bool have_re
}
values[i] = InputFunctionCall(&column_info->proc, s,
column_info->typioparam, tupdesc->attrs[i]->atttypmod);
column_info->typioparam, tupdesc->attrs[i].atttypmod);
nulls[i] = false;
}
}
@ -2212,16 +2212,16 @@ static void make_row_from_rec_and_jsonb(Jsonb *element, PopulateRecordsetState *
for (i = 0; i < ncolumns; ++i) {
ColumnIOData *column_info = &my_extra->columns[i];
Oid column_type = tupdesc->attrs[i]->atttypid;
Oid column_type = tupdesc->attrs[i].atttypid;
JsonbValue *v = NULL;
char *key = NULL;
/* Ignore dropped columns in datatype */
if (tupdesc->attrs[i]->attisdropped) {
if (tupdesc->attrs[i].attisdropped) {
nulls[i] = true;
continue;
}
key = NameStr(tupdesc->attrs[i]->attname);
key = NameStr(tupdesc->attrs[i].attname);
v = findJsonbValueFromSuperHeaderLen(VARDATA(element), JB_FOBJECT, key, strlen(key));
/*
@ -2250,7 +2250,7 @@ static void make_row_from_rec_and_jsonb(Jsonb *element, PopulateRecordsetState *
* checks are done
*/
values[i] = InputFunctionCall(&column_info->proc, NULL, column_info->typioparam,
tupdesc->attrs[i]->atttypmod);
tupdesc->attrs[i].atttypmod);
nulls[i] = true;
} else {
char *s = NULL;
@ -2271,7 +2271,7 @@ static void make_row_from_rec_and_jsonb(Jsonb *element, PopulateRecordsetState *
elog(ERROR, "invalid jsonb type");
}
values[i] = InputFunctionCall(&column_info->proc, s, column_info->typioparam, tupdesc->attrs[i]->atttypmod);
values[i] = InputFunctionCall(&column_info->proc, s, column_info->typioparam, tupdesc->attrs[i].atttypmod);
nulls[i] = false;
}
}
@ -2512,18 +2512,18 @@ static void populate_recordset_object_end(void *state)
for (i = 0; i < ncolumns; ++i) {
ColumnIOData *column_info = &my_extra->columns[i];
Oid column_type = tupdesc->attrs[i]->atttypid;
Oid column_type = tupdesc->attrs[i].atttypid;
char *value = NULL;
/* Ignore dropped columns in datatype */
if (tupdesc->attrs[i]->attisdropped) {
if (tupdesc->attrs[i].attisdropped) {
nulls[i] = true;
continue;
}
errno_t rc = memset_s(fname, NAMEDATALEN, 0, NAMEDATALEN);
securec_check(rc, "\0", "\0");
rc = strncpy_s(fname, NAMEDATALEN, NameStr(tupdesc->attrs[i]->attname), NAMEDATALEN - 1);
rc = strncpy_s(fname, NAMEDATALEN, NameStr(tupdesc->attrs[i].attname), NAMEDATALEN - 1);
securec_check(rc, "\0", "\0");
hashentry = (JsonHashEntry *)hash_search(json_hash, fname, HASH_FIND, NULL);
@ -2553,12 +2553,12 @@ static void populate_recordset_object_end(void *state)
* checks are done
*/
values[i] = InputFunctionCall(&column_info->proc, NULL, column_info->typioparam,
tupdesc->attrs[i]->atttypmod);
tupdesc->attrs[i].atttypmod);
nulls[i] = true;
} else {
value = hashentry->val;
values[i] = InputFunctionCall(&column_info->proc, value, column_info->typioparam,
tupdesc->attrs[i]->atttypmod);
tupdesc->attrs[i].atttypmod);
nulls[i] = false;
}
}

Some files were not shown because too many files have changed in this diff.