mirror of https://github.com/ByConity/ByConity

commit 4bb320627c
Merge remote-tracking branch 'upstream/master' into fix27

CHANGELOG.md (+35)
@@ -1,3 +1,38 @@

## ClickHouse release v19.17.6.36, 2019-12-27

### Bug Fix

* Fixed potential buffer overflow in decompress. A malicious user could pass fabricated compressed data that caused a read past the end of the buffer. This issue was found by Eldar Zaitov from the Yandex information security team. [#8404](https://github.com/ClickHouse/ClickHouse/pull/8404) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed possible server crash (`std::terminate`) when the server cannot send or write data in JSON or XML format with values of String data type (that require UTF-8 validation), when compressing result data with the Brotli algorithm, or in some other rare cases. [#8384](https://github.com/ClickHouse/ClickHouse/pull/8384) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed dictionaries with a source from a ClickHouse `VIEW`: reading such dictionaries no longer causes the error `There is no query`. [#8351](https://github.com/ClickHouse/ClickHouse/pull/8351) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed checking whether a client host is allowed by `host_regexp` specified in users.xml. [#8241](https://github.com/ClickHouse/ClickHouse/pull/8241), [#8342](https://github.com/ClickHouse/ClickHouse/pull/8342) ([Vitaly Baranov](https://github.com/vitlibar))
* `RENAME TABLE` for a distributed table now renames the folder containing inserted data before sending it to shards. This fixes an issue with successive renames `tableA->tableB`, `tableC->tableA`. [#8306](https://github.com/ClickHouse/ClickHouse/pull/8306) ([tavplubix](https://github.com/tavplubix))
* `range_hashed` external dictionaries created by DDL queries now allow ranges of arbitrary numeric types. [#8275](https://github.com/ClickHouse/ClickHouse/pull/8275) ([alesapin](https://github.com/alesapin))
* Fixed the `INSERT INTO table SELECT ... FROM mysql(...)` table function. [#8234](https://github.com/ClickHouse/ClickHouse/pull/8234) ([tavplubix](https://github.com/tavplubix))
* Fixed segfault in `INSERT INTO TABLE FUNCTION file()` while inserting into a file that doesn't exist. Now the file is created first and the insert is then processed. [#8177](https://github.com/ClickHouse/ClickHouse/pull/8177) ([Olga Khvostikova](https://github.com/stavrolia))
* Fixed a `bitmapAnd` error when intersecting an aggregated bitmap and a scalar bitmap. [#8082](https://github.com/ClickHouse/ClickHouse/pull/8082) ([Yue Huang](https://github.com/moon03432))
* Fixed segfault when an `EXISTS` query was used without a `TABLE` or `DICTIONARY` qualifier, e.g. `EXISTS t`. [#8213](https://github.com/ClickHouse/ClickHouse/pull/8213) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed the return type of the functions `rand` and `randConstant` for a nullable argument. The functions now always return `UInt32`, never `Nullable(UInt32)`. [#8204](https://github.com/ClickHouse/ClickHouse/pull/8204) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed `DROP DICTIONARY IF EXISTS db.dict`: it no longer throws an exception if `db` doesn't exist. [#8185](https://github.com/ClickHouse/ClickHouse/pull/8185) ([Vitaly Baranov](https://github.com/vitlibar))
* If a table wasn't completely dropped because of a server crash, the server will try to restore and load it. [#8176](https://github.com/ClickHouse/ClickHouse/pull/8176) ([tavplubix](https://github.com/tavplubix))
* Fixed a trivial count query for a distributed table when there are more than two shards of the local table. [#8164](https://github.com/ClickHouse/ClickHouse/pull/8164) ([小路](https://github.com/nicelulu))
* Fixed a bug that led to a data race in DB::BlockStreamProfileInfo::calculateRowsBeforeLimit(). [#8143](https://github.com/ClickHouse/ClickHouse/pull/8143) ([Alexander Kazakov](https://github.com/Akazz))
* Fixed `ALTER table MOVE part` executed immediately after the specified part was merged, which could move the part that the specified part was merged into. Now it correctly moves the specified part. [#8104](https://github.com/ClickHouse/ClickHouse/pull/8104) ([Vladimir Chebotarev](https://github.com/excitoon))
* Expressions for dictionaries can now be specified as strings. This is useful for calculating attributes while extracting data from non-ClickHouse sources, because it allows using non-ClickHouse syntax for those expressions. [#8098](https://github.com/ClickHouse/ClickHouse/pull/8098) ([alesapin](https://github.com/alesapin))
* Fixed a very rare race in `clickhouse-copier` caused by an overflow in ZXid. [#8088](https://github.com/ClickHouse/ClickHouse/pull/8088) ([Ding Xiang Fei](https://github.com/dingxiangfei2009))
* Fixed a bug where, after a query failed (due to "Too many simultaneous queries", for example), the server would not read external tables info, and the next request would interpret this info as the beginning of the next query, causing an error like `Unknown packet from client`. [#8084](https://github.com/ClickHouse/ClickHouse/pull/8084) ([Azat Khuzhin](https://github.com/azat))
* Avoid null dereference after "Unknown packet X from server". [#8071](https://github.com/ClickHouse/ClickHouse/pull/8071) ([Azat Khuzhin](https://github.com/azat))
* Restored support for all ICU locales, added the ability to apply collations to constant expressions, and added the language name to the system.collations table. [#8051](https://github.com/ClickHouse/ClickHouse/pull/8051) ([alesapin](https://github.com/alesapin))
* The number of streams for reading from `StorageFile` and `StorageHDFS` is now limited, to avoid exceeding the memory limit. [#7981](https://github.com/ClickHouse/ClickHouse/pull/7981) ([alesapin](https://github.com/alesapin))
* Fixed the `CHECK TABLE` query for `*MergeTree` tables without a key. [#7979](https://github.com/ClickHouse/ClickHouse/pull/7979) ([alesapin](https://github.com/alesapin))
* Removed the mutation number from a part name in case there were no mutations, improving compatibility with older versions. [#8250](https://github.com/ClickHouse/ClickHouse/pull/8250) ([alesapin](https://github.com/alesapin))
* Fixed a bug where mutations were skipped for some attached parts because their data_version was larger than the table mutation version. [#7812](https://github.com/ClickHouse/ClickHouse/pull/7812) ([Zhichang Yu](https://github.com/yuzhichang))
* Allow starting the server with redundant copies of parts after moving them to another device. [#7810](https://github.com/ClickHouse/ClickHouse/pull/7810) ([Vladimir Chebotarev](https://github.com/excitoon))
* Fixed the error "Sizes of columns doesn't match" that could appear when using aggregate function columns. [#7790](https://github.com/ClickHouse/ClickHouse/pull/7790) ([Boris Granveaud](https://github.com/bgranvea))
* An exception is now thrown when WITH TIES is used alongside LIMIT BY, and TOP can now be used with LIMIT BY. [#7637](https://github.com/ClickHouse/ClickHouse/pull/7637) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Fixed dictionary reload when the dictionary has an `invalidate_query` that stopped updates after an exception on a previous update attempt. [#8029](https://github.com/ClickHouse/ClickHouse/pull/8029) ([alesapin](https://github.com/alesapin))


## ClickHouse release v19.17.4.11, 2019-11-22

### Backward Incompatible Change

@@ -176,7 +176,9 @@ if (ARCH_NATIVE)
    set (COMPILER_FLAGS "${COMPILER_FLAGS} -march=native")
endif ()

set (CMAKE_CXX_STANDARD 17)
# cmake < 3.12 doesn't support 20. We'll set CMAKE_CXX_FLAGS for now
# set (CMAKE_CXX_STANDARD 20)
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++2a")
set (CMAKE_CXX_EXTENSIONS 0) # https://cmake.org/cmake/help/latest/prop_tgt/CXX_EXTENSIONS.html#prop_tgt:CXX_EXTENSIONS
set (CMAKE_CXX_STANDARD_REQUIRED ON)

@@ -248,8 +250,16 @@ endif ()
string (TOUPPER ${CMAKE_BUILD_TYPE} CMAKE_BUILD_TYPE_UC)
set (CMAKE_POSTFIX_VARIABLE "CMAKE_${CMAKE_BUILD_TYPE_UC}_POSTFIX")

if (NOT MAKE_STATIC_LIBRARIES)
    set(CMAKE_POSITION_INDEPENDENT_CODE ON)
if (MAKE_STATIC_LIBRARIES)
    set (CMAKE_POSITION_INDEPENDENT_CODE OFF)
    if (OS_LINUX)
        # Slightly more efficient code can be generated
        set (CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} -fno-pie")
        set (CMAKE_C_FLAGS_RELWITHDEBINFO "${CMAKE_C_FLAGS_RELWITHDEBINFO} -fno-pie")
        set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,-no-pie")
    endif ()
else ()
    set (CMAKE_POSITION_INDEPENDENT_CODE ON)
endif ()

# Using "include-what-you-use" tool.

@@ -15,6 +15,7 @@ set(CMAKE_C_STANDARD_LIBRARIES ${DEFAULT_LIBS})

set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mmacosx-version-min=10.14")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mmacosx-version-min=10.14")
set (CMAKE_ASM_FLAGS "${CMAKE_ASM_FLAGS} -mmacosx-version-min=10.14")

set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -mmacosx-version-min=10.14")
set (CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -mmacosx-version-min=10.14")

@@ -2,6 +2,7 @@ set (CMAKE_SYSTEM_NAME "Darwin")
set (CMAKE_SYSTEM_PROCESSOR "x86_64")
set (CMAKE_C_COMPILER_TARGET "x86_64-apple-darwin")
set (CMAKE_CXX_COMPILER_TARGET "x86_64-apple-darwin")
set (CMAKE_ASM_COMPILER_TARGET "x86_64-apple-darwin")
set (CMAKE_OSX_SYSROOT "${CMAKE_CURRENT_LIST_DIR}/../toolchain/darwin-x86_64")

set (CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY) # disable linkage check - it doesn't work in CMake

@@ -13,7 +13,6 @@ if (CMAKE_CROSSCOMPILING)
    if (OS_DARWIN)
        # FIXME: broken dependencies
        set (USE_SNAPPY OFF CACHE INTERNAL "")
        set (ENABLE_SSL OFF CACHE INTERNAL "")
        set (ENABLE_PROTOBUF OFF CACHE INTERNAL "")
        set (ENABLE_PARQUET OFF CACHE INTERNAL "")
        set (ENABLE_READLINE OFF CACHE INTERNAL "")

@@ -2,10 +2,10 @@

if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
    set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -w")
    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w -std=c++1z")
    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w")
elseif (CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
    set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -w")
    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w -std=c++1z")
    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w")
endif ()

set_property(DIRECTORY PROPERTY EXCLUDE_FROM_ALL 1)

@@ -1,5 +1,7 @@
include(ExternalProject)

set (CMAKE_CXX_STANDARD 17)

# === thrift

set(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/thrift/lib/cpp)

@@ -1,5 +1,7 @@
set (CAPNPROTO_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/capnproto/c++/src)

set (CMAKE_CXX_STANDARD 17)

set (KJ_SRCS
    ${CAPNPROTO_SOURCE_DIR}/kj/array.c++
    ${CAPNPROTO_SOURCE_DIR}/kj/common.c++

@@ -1,6 +1,8 @@
set(ICU_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/icu/icu4c/source)
set(ICUDATA_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/icudata/)

set (CMAKE_CXX_STANDARD 17)

# These lists of sources were generated from build log of the original ICU build system (configure + make).

set(ICUUC_SOURCES

@@ -1,16 +1,6 @@
set(OPENSSL_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/openssl)
set(OPENSSL_BINARY_DIR ${ClickHouse_BINARY_DIR}/contrib/openssl)

#file(READ ${CMAKE_CURRENT_SOURCE_DIR}/${OPENSSL_SOURCE_DIR}/ssl/VERSION SSL_VERSION)
#string(STRIP ${SSL_VERSION} SSL_VERSION)
#string(REPLACE ":" "." SSL_VERSION ${SSL_VERSION})
#string(REGEX REPLACE "\\..*" "" SSL_MAJOR_VERSION ${SSL_VERSION})

#file(READ ${CMAKE_CURRENT_SOURCE_DIR}/${OPENSSL_SOURCE_DIR}/crypto/VERSION CRYPTO_VERSION)
#string(STRIP ${CRYPTO_VERSION} CRYPTO_VERSION)
#string(REPLACE ":" "." CRYPTO_VERSION ${CRYPTO_VERSION})
#string(REGEX REPLACE "\\..*" "" CRYPTO_MAJOR_VERSION ${CRYPTO_VERSION})

set(OPENSSLDIR "/etc/ssl" CACHE PATH "Set the default openssl directory")
set(OPENSSL_ENGINESDIR "/usr/lib/engines-3" CACHE PATH "Set the default openssl directory for engines")
set(OPENSSL_MODULESDIR "/usr/local/lib/ossl-modules" CACHE PATH "Set the default openssl directory for modules")

@@ -27,19 +17,25 @@ elseif(ARCH_AARCH64)
endif()

enable_language(ASM)

if (COMPILER_CLANG)
    add_definitions(-Wno-unused-command-line-argument)
endif ()

if (ARCH_AMD64)
    if (OS_DARWIN)
        set (OPENSSL_SYSTEM "macosx")
    endif ()

    macro(perl_generate_asm FILE_IN FILE_OUT)
        add_custom_command(OUTPUT ${FILE_OUT}
            COMMAND /usr/bin/env perl ${FILE_IN} ${FILE_OUT}
            COMMAND /usr/bin/env perl ${FILE_IN} ${OPENSSL_SYSTEM} ${FILE_OUT}
            # ASM code has broken unwind tables (CFI), strip them.
            # Otherwise asynchronous unwind (that we use for query profiler)
            # will lead to segfault while trying to interpret wrong "CFA expression".
            COMMAND sed -i -e '/^\.cfi_/d' ${FILE_OUT})
    endmacro()

    perl_generate_asm(${OPENSSL_SOURCE_DIR}/crypto/aes/asm/aes-x86_64.pl ${OPENSSL_BINARY_DIR}/crypto/aes/aes-x86_64.s)
    perl_generate_asm(${OPENSSL_SOURCE_DIR}/crypto/aes/asm/aesni-mb-x86_64.pl ${OPENSSL_BINARY_DIR}/crypto/aes/aesni-mb-x86_64.s)
    perl_generate_asm(${OPENSSL_SOURCE_DIR}/crypto/aes/asm/aesni-sha1-x86_64.pl ${OPENSSL_BINARY_DIR}/crypto/aes/aesni-sha1-x86_64.s)

@@ -70,12 +66,15 @@ if (ARCH_AMD64)
    perl_generate_asm(${OPENSSL_SOURCE_DIR}/crypto/sha/asm/sha512-x86_64.pl ${OPENSSL_BINARY_DIR}/crypto/sha/sha256-x86_64.s) # This is not a mistake
    perl_generate_asm(${OPENSSL_SOURCE_DIR}/crypto/sha/asm/sha512-x86_64.pl ${OPENSSL_BINARY_DIR}/crypto/sha/sha512-x86_64.s)
    perl_generate_asm(${OPENSSL_SOURCE_DIR}/crypto/whrlpool/asm/wp-x86_64.pl ${OPENSSL_BINARY_DIR}/crypto/whrlpool/wp-x86_64.s)

elseif (ARCH_AARCH64)

    macro(perl_generate_asm FILE_IN FILE_OUT)
        add_custom_command(OUTPUT ${FILE_OUT}
            COMMAND /usr/bin/env perl ${FILE_IN} "linux64" ${FILE_OUT})
        # Hope that the ASM code for AArch64 doesn't have broken CFI. Otherwise, add the same sed as for x86_64.
    endmacro()

    perl_generate_asm(${OPENSSL_SOURCE_DIR}/crypto/aes/asm/aesv8-armx.pl ${OPENSSL_BINARY_DIR}/crypto/aes/aesv8-armx.S)
    perl_generate_asm(${OPENSSL_SOURCE_DIR}/crypto/aes/asm/vpaes-armv8.pl ${OPENSSL_BINARY_DIR}/crypto/aes/vpaes-armv8.S)
    perl_generate_asm(${OPENSSL_SOURCE_DIR}/crypto/bn/asm/armv8-mont.pl ${OPENSSL_BINARY_DIR}/crypto/bn/armv8-mont.S)

@@ -88,6 +87,7 @@ elseif (ARCH_AARCH64)
    perl_generate_asm(${OPENSSL_SOURCE_DIR}/crypto/sha/asm/sha1-armv8.pl ${OPENSSL_BINARY_DIR}/crypto/sha/sha1-armv8.S)
    perl_generate_asm(${OPENSSL_SOURCE_DIR}/crypto/sha/asm/sha512-armv8.pl ${OPENSSL_BINARY_DIR}/crypto/sha/sha256-armv8.S) # This is not a mistake
    perl_generate_asm(${OPENSSL_SOURCE_DIR}/crypto/sha/asm/sha512-armv8.pl ${OPENSSL_BINARY_DIR}/crypto/sha/sha512-armv8.S)

endif ()

set(CRYPTO_SRCS

@@ -1,11 +1,11 @@
# This strings autochanged from release_lib.sh:
set(VERSION_REVISION 54430)
set(VERSION_MAJOR 19)
set(VERSION_MINOR 19)
set(VERSION_REVISION 54431)
set(VERSION_MAJOR 20)
set(VERSION_MINOR 1)
set(VERSION_PATCH 1)
set(VERSION_GITHASH 8bd9709d1dec3366e35d2efeab213435857f67a9)
set(VERSION_DESCRIBE v19.19.1.1-prestable)
set(VERSION_STRING 19.19.1.1)
set(VERSION_GITHASH 51d4c8a53be94504e3607b2232e12e5ef7a8ec28)
set(VERSION_DESCRIBE v20.1.1.1-prestable)
set(VERSION_STRING 20.1.1.1)
# end of autochange

set(VERSION_EXTRA "" CACHE STRING "")

@@ -76,7 +76,7 @@ void LocalServer::initialize(Poco::Util::Application & self)
    if (config().has("logger") || config().has("logger.level") || config().has("logger.log"))
    {
        // sensitive data rules are not used here
        buildLoggers(config(), logger());
        buildLoggers(config(), logger(), self.commandName());
    }
    else
    {

@@ -124,7 +124,7 @@ void ODBCBridge::initialize(Application & self)

    config().setString("logger", "ODBCBridge");

    buildLoggers(config(), logger());
    buildLoggers(config(), logger(), self.commandName());

    log = &logger();
    hostname = config().getString("listen-host", "localhost");

@@ -17,7 +17,6 @@ namespace DB

namespace
{
const std::regex QUOTE_REGEX{"\""};
std::string getMainMetric(const PerformanceTestInfo & test_info)
{
    std::string main_metric;

@@ -30,10 +29,18 @@ std::string getMainMetric(const PerformanceTestInfo & test_info)
        main_metric = test_info.main_metric;
    return main_metric;
}

bool isASCIIString(const std::string & str)
{
    return std::all_of(str.begin(), str.end(), isASCII);
}

String jsonString(const String & str, FormatSettings & settings)
{
    WriteBufferFromOwnString buffer;
    writeJSONString(str, buffer, settings);
    return std::move(buffer.str());
}
}

ReportBuilder::ReportBuilder(const std::string & server_version_)

@@ -55,6 +62,9 @@ std::string ReportBuilder::buildFullReport(
    std::vector<TestStats> & stats,
    const std::vector<std::size_t> & queries_to_run) const
{
    FormatSettings settings;

    JSONString json_output;

    json_output.set("hostname", hostname);

@@ -67,20 +77,17 @@ std::string ReportBuilder::buildFullReport(
    json_output.set("path", test_info.path);
    json_output.set("main_metric", getMainMetric(test_info));

    if (test_info.substitutions.size())
    if (!test_info.substitutions.empty())
    {
        JSONString json_parameters(2); /// here, 2 is the size of \t padding

        for (auto it = test_info.substitutions.begin(); it != test_info.substitutions.end(); ++it)
        for (auto & [parameter, values] : test_info.substitutions)
        {
            std::string parameter = it->first;
            Strings values = it->second;

            std::ostringstream array_string;
            array_string << "[";
            for (size_t i = 0; i != values.size(); ++i)
            {
                array_string << '"' << std::regex_replace(values[i], QUOTE_REGEX, "\\\"") << '"';
                array_string << jsonString(values[i], settings);
                if (i != values.size() - 1)
                {
                    array_string << ", ";

@@ -110,13 +117,12 @@ std::string ReportBuilder::buildFullReport(

            JSONString runJSON;

            auto query = std::regex_replace(test_info.queries[query_index], QUOTE_REGEX, "\\\"");
            runJSON.set("query", query);
            runJSON.set("query", jsonString(test_info.queries[query_index], settings), false);
            runJSON.set("query_index", query_index);
            if (!statistics.exception.empty())
            {
                if (isASCIIString(statistics.exception))
                    runJSON.set("exception", std::regex_replace(statistics.exception, QUOTE_REGEX, "\\\""));
                    runJSON.set("exception", jsonString(statistics.exception, settings), false);
                else
                    runJSON.set("exception", "Some exception occured with non ASCII message. This may produce invalid JSON. Try reproduce locally.");
            }

@@ -183,7 +189,7 @@ std::string ReportBuilder::buildCompactReport(
    std::vector<TestStats> & stats,
    const std::vector<std::size_t> & queries_to_run) const
{
    FormatSettings settings;
    std::ostringstream output;

    for (size_t query_index = 0; query_index < test_info.queries.size(); ++query_index)

@@ -194,7 +200,7 @@ std::string ReportBuilder::buildCompactReport(
        for (size_t number_of_launch = 0; number_of_launch < test_info.times_to_run; ++number_of_launch)
        {
            if (test_info.queries.size() > 1)
                output << "query \"" << test_info.queries[query_index] << "\", ";
                output << "query " << jsonString(test_info.queries[query_index], settings) << ", ";

            output << "run " << std::to_string(number_of_launch + 1) << ": ";
@@ -947,6 +947,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
    });

    /// try to load dictionaries immediately, throw on error and die
    ext::scope_guard dictionaries_xmls, models_xmls;
    try
    {
        if (!config().getBool("dictionaries_lazy_load", true))

@@ -954,12 +955,10 @@ int Server::main(const std::vector<std::string> & /*args*/)
            global_context->tryCreateEmbeddedDictionaries();
            global_context->getExternalDictionariesLoader().enableAlwaysLoadEverything(true);
        }

        auto dictionaries_repository = std::make_unique<ExternalLoaderXMLConfigRepository>(config(), "dictionaries_config");
        global_context->getExternalDictionariesLoader().addConfigRepository("", std::move(dictionaries_repository));

        auto models_repository = std::make_unique<ExternalLoaderXMLConfigRepository>(config(), "models_config");
        global_context->getExternalModelsLoader().addConfigRepository("", std::move(models_repository));
        dictionaries_xmls = global_context->getExternalDictionariesLoader().addConfigRepository(
            std::make_unique<ExternalLoaderXMLConfigRepository>(config(), "dictionaries_config"));
        models_xmls = global_context->getExternalModelsLoader().addConfigRepository(
            std::make_unique<ExternalLoaderXMLConfigRepository>(config(), "models_config"));
    }
    catch (...)
    {

@@ -212,21 +212,23 @@ public:
    Float64 getFloat64(size_t n) const override;
    Float32 getFloat32(size_t n) const override;

    UInt64 getUInt(size_t n) const override
    /// Out of range conversion is permitted.
    UInt64 NO_SANITIZE_UNDEFINED getUInt(size_t n) const override
    {
        return UInt64(data[n]);
    }

    /// Out of range conversion is permitted.
    Int64 NO_SANITIZE_UNDEFINED getInt(size_t n) const override
    {
        return Int64(data[n]);
    }

    bool getBool(size_t n) const override
    {
        return bool(data[n]);
    }

    Int64 getInt(size_t n) const override
    {
        return Int64(data[n]);
    }

    void insert(const Field & x) override
    {
        data.push_back(DB::get<NearestFieldType<T>>(x));
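
The `NO_SANITIZE_UNDEFINED` annotations above tell UBSan that the out-of-range conversion is intentional. A minimal sketch of the underlying compiler mechanism, assuming clang/gcc (the macro name `MY_NO_SANITIZE_UNDEFINED` here is hypothetical, not the ClickHouse one):

```cpp
#include <cstdint>

// On clang/gcc this expands to the no_sanitize attribute so UBSan does not
// trap the deliberate out-of-range conversion; elsewhere it is a no-op.
#if defined(__clang__) || defined(__GNUC__)
    #define MY_NO_SANITIZE_UNDEFINED __attribute__((__no_sanitize__("undefined")))
#else
    #define MY_NO_SANITIZE_UNDEFINED
#endif

// Casting e.g. -1.0 or 1e300 to uint64_t is undefined behavior for
// out-of-range values; the attribute marks this truncation as intentional.
MY_NO_SANITIZE_UNDEFINED uint64_t toUInt64(double v)
{
    return static_cast<uint64_t>(v);
}
```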
@@ -23,11 +23,10 @@ struct TrivialWeightFunction
};


/// Thread-safe cache that evicts entries which are not used for a long time or are expired.
/// Thread-safe cache that evicts entries which are not used for a long time.
/// WeightFunction is a functor that takes Mapped as a parameter and returns "weight" (approximate size)
/// of that value.
/// Cache starts to evict entries when their total weight exceeds max_size and when expiration time of these
/// entries is due.
/// Cache starts to evict entries when their total weight exceeds max_size.
/// Value weight should not change after insertion.
template <typename TKey, typename TMapped, typename HashFunction = std::hash<TKey>, typename WeightFunction = TrivialWeightFunction<TMapped>>
class LRUCache

@@ -36,15 +35,13 @@ public:
    using Key = TKey;
    using Mapped = TMapped;
    using MappedPtr = std::shared_ptr<Mapped>;
    using Delay = std::chrono::seconds;

private:
    using Clock = std::chrono::steady_clock;
    using Timestamp = Clock::time_point;

public:
    LRUCache(size_t max_size_, const Delay & expiration_delay_ = Delay::zero())
        : max_size(std::max(static_cast<size_t>(1), max_size_)), expiration_delay(expiration_delay_) {}
    LRUCache(size_t max_size_)
        : max_size(std::max(static_cast<size_t>(1), max_size_)) {}

    MappedPtr get(const Key & key)
    {

@@ -167,16 +164,9 @@ protected:

    struct Cell
    {
        bool expired(const Timestamp & last_timestamp, const Delay & delay) const
        {
            return (delay == Delay::zero()) ||
                ((last_timestamp > timestamp) && ((last_timestamp - timestamp) > delay));
        }

        MappedPtr value;
        size_t size;
        LRUQueueIterator queue_iterator;
        Timestamp timestamp;
    };

    using Cells = std::unordered_map<Key, Cell, HashFunction>;

@@ -257,7 +247,6 @@ private:
    /// Total weight of values.
    size_t current_size = 0;
    const size_t max_size;
    const Delay expiration_delay;

    std::atomic<size_t> hits {0};
    std::atomic<size_t> misses {0};

@@ -273,7 +262,6 @@ private:
        }

        Cell & cell = it->second;
        updateCellTimestamp(cell);

        /// Move the key to the end of the queue. The iterator remains valid.
        queue.splice(queue.end(), queue, cell.queue_iterator);

@@ -303,18 +291,11 @@ private:
        cell.value = mapped;
        cell.size = cell.value ? weight_function(*cell.value) : 0;
        current_size += cell.size;
        updateCellTimestamp(cell);

        removeOverflow(cell.timestamp);
        removeOverflow();
    }

    void updateCellTimestamp(Cell & cell)
    {
        if (expiration_delay != Delay::zero())
            cell.timestamp = Clock::now();
    }

    void removeOverflow(const Timestamp & last_timestamp)
    void removeOverflow()
    {
        size_t current_weight_lost = 0;
        size_t queue_size = cells.size();

@@ -330,8 +311,6 @@ private:
            }

            const auto & cell = it->second;
            if (!cell.expired(last_timestamp, expiration_delay))
                break;

            current_size -= cell.size;
            current_weight_lost += cell.size;
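
With expiration gone, `removeOverflow()` reduces to pure weight-based LRU eviction. A minimal sketch of that remaining idea, with hypothetical simplified types rather than the ClickHouse class (it also assumes `set` is only called with new keys):

```cpp
#include <iterator>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

struct MiniLRU
{
    size_t max_size = 10;
    size_t current_size = 0;
    std::list<std::string> queue;  // front = least recently used
    std::unordered_map<std::string, std::pair<size_t, std::list<std::string>::iterator>> cells;

    void set(const std::string & key, size_t weight)
    {
        queue.push_back(key);
        cells[key] = {weight, std::prev(queue.end())};
        current_size += weight;
        removeOverflow();
    }

    // Pop from the LRU end until the total weight fits under max_size.
    void removeOverflow()
    {
        while (current_size > max_size && !queue.empty())
        {
            auto it = cells.find(queue.front());
            current_size -= it->second.first;  // subtract the evicted entry's weight
            cells.erase(it);
            queue.pop_front();
        }
    }
};
```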
@@ -13,9 +13,6 @@ target_link_libraries (sip_hash_perf PRIVATE clickhouse_common_io)
add_executable (auto_array auto_array.cpp)
target_link_libraries (auto_array PRIVATE clickhouse_common_io)

add_executable (lru_cache lru_cache.cpp)
target_link_libraries (lru_cache PRIVATE clickhouse_common_io)

add_executable (hash_table hash_table.cpp)
target_link_libraries (hash_table PRIVATE clickhouse_common_io)

@@ -1,317 +0,0 @@
#include <Common/LRUCache.h>
#include <Common/Exception.h>

#include <iostream>
#include <string>
#include <thread>
#include <chrono>
#include <functional>


namespace
{

void run();
void runTest(unsigned int num, const std::function<bool()> & func);
bool test1();
bool test2();
bool test_concurrent();

#define ASSERT_CHECK(cond, res) \
do \
{ \
    if (!(cond)) \
    { \
        std::cout << __FILE__ << ":" << __LINE__ << ":" \
                  << "Assertion " << #cond << " failed.\n"; \
        if ((res)) { (res) = false; } \
    } \
} \
while (0)

void run()
{
    const std::vector<std::function<bool()>> tests =
    {
        test1,
        test2,
        test_concurrent
    };

    unsigned int num = 0;
    for (const auto & test : tests)
    {
        ++num;
        runTest(num, test);
    }
}

void runTest(unsigned int num, const std::function<bool()> & func)
{
    bool ok;

    try
    {
        ok = func();
    }
    catch (const DB::Exception & ex)
    {
        ok = false;
        std::cout << "Caught exception " << ex.displayText() << "\n";
    }
    catch (const std::exception & ex)
    {
        ok = false;
        std::cout << "Caught exception " << ex.what() << "\n";
    }
    catch (...)
    {
        ok = false;
        std::cout << "Caught unhandled exception\n";
    }

    if (ok)
        std::cout << "Test " << num << " passed\n";
    else
        std::cout << "Test " << num << " failed\n";
}

struct Weight
{
    size_t operator()(const std::string & s) const
    {
        return s.size();
    }
};

bool test1()
{
    using Cache = DB::LRUCache<std::string, std::string, std::hash<std::string>, Weight>;
    using MappedPtr = Cache::MappedPtr;

    auto ptr = [](const std::string & s)
    {
        return MappedPtr(new std::string(s));
    };

    Cache cache(10);

    bool res = true;

    ASSERT_CHECK(!cache.get("asd"), res);

    cache.set("asd", ptr("qwe"));

    ASSERT_CHECK((*cache.get("asd") == "qwe"), res);

    cache.set("zxcv", ptr("12345"));
    cache.set("01234567891234567", ptr("--"));

    ASSERT_CHECK((*cache.get("zxcv") == "12345"), res);
    ASSERT_CHECK((*cache.get("asd") == "qwe"), res);
    ASSERT_CHECK((*cache.get("01234567891234567") == "--"), res);
    ASSERT_CHECK(!cache.get("123x"), res);

    cache.set("321x", ptr("+"));

    ASSERT_CHECK(!cache.get("zxcv"), res);
    ASSERT_CHECK((*cache.get("asd") == "qwe"), res);
    ASSERT_CHECK((*cache.get("01234567891234567") == "--"), res);
    ASSERT_CHECK(!cache.get("123x"), res);
    ASSERT_CHECK((*cache.get("321x") == "+"), res);

    ASSERT_CHECK((cache.weight() == 6), res);
    ASSERT_CHECK((cache.count() == 3), res);

    return res;
}

bool test2()
{
    using namespace std::literals;
    using Cache = DB::LRUCache<std::string, std::string, std::hash<std::string>, Weight>;
    using MappedPtr = Cache::MappedPtr;

    auto ptr = [](const std::string & s)
    {
        return MappedPtr(new std::string(s));
    };

    Cache cache(10, 3s);

    bool res = true;

    ASSERT_CHECK(!cache.get("asd"), res);

    cache.set("asd", ptr("qwe"));

    ASSERT_CHECK((*cache.get("asd") == "qwe"), res);

    cache.set("zxcv", ptr("12345"));
    cache.set("01234567891234567", ptr("--"));

    ASSERT_CHECK((*cache.get("zxcv") == "12345"), res);
    ASSERT_CHECK((*cache.get("asd") == "qwe"), res);
    ASSERT_CHECK((*cache.get("01234567891234567") == "--"), res);
    ASSERT_CHECK(!cache.get("123x"), res);

    cache.set("321x", ptr("+"));

    ASSERT_CHECK((cache.get("zxcv")), res);
    ASSERT_CHECK((*cache.get("asd") == "qwe"), res);
    ASSERT_CHECK((*cache.get("01234567891234567") == "--"), res);
    ASSERT_CHECK(!cache.get("123x"), res);
    ASSERT_CHECK((*cache.get("321x") == "+"), res);

    ASSERT_CHECK((cache.weight() == 11), res);
    ASSERT_CHECK((cache.count() == 4), res);

    std::this_thread::sleep_for(5s);

    cache.set("123x", ptr("2769"));

    ASSERT_CHECK(!cache.get("zxcv"), res);
    ASSERT_CHECK((*cache.get("asd") == "qwe"), res);
    ASSERT_CHECK((*cache.get("01234567891234567") == "--"), res);
    ASSERT_CHECK((*cache.get("321x") == "+"), res);

    ASSERT_CHECK((cache.weight() == 10), res);
    ASSERT_CHECK((cache.count() == 4), res);

    return res;
}

bool test_concurrent()
{
    using namespace std::literals;

    using Cache = DB::LRUCache<std::string, std::string, std::hash<std::string>, Weight>;
    Cache cache(2);

    bool res = true;

    auto load_func = [](const std::string & result, std::chrono::seconds sleep_for, bool throw_exc)
    {
        std::this_thread::sleep_for(sleep_for);
        if (throw_exc)
            throw std::runtime_error("Exception!");
        return std::make_shared<std::string>(result);
    };

    /// Case 1: Both threads are able to load the value.

    std::pair<Cache::MappedPtr, bool> result1;
    std::thread thread1([&]()
    {
        result1 = cache.getOrSet("key", [&]() { return load_func("val1", 1s, false); });
    });

    std::pair<Cache::MappedPtr, bool> result2;
    std::thread thread2([&]()
    {
        result2 = cache.getOrSet("key", [&]() { return load_func("val2", 1s, false); });
    });

    thread1.join();
    thread2.join();

    ASSERT_CHECK((result1.first == result2.first), res);
    ASSERT_CHECK((result1.second != result2.second), res);

    /// Case 2: One thread throws an exception during loading.

    cache.reset();

    bool thrown = false;
    thread1 = std::thread([&]()
    {
        try
        {
            cache.getOrSet("key", [&]() { return load_func("val1", 2s, true); });
        }
        catch (...)
        {
            thrown = true;
        }
    });

    thread2 = std::thread([&]()
    {
        std::this_thread::sleep_for(1s);
        result2 = cache.getOrSet("key", [&]() { return load_func("val2", 1s, false); });
    });

    thread1.join();
    thread2.join();

    ASSERT_CHECK((thrown == true), res);
    ASSERT_CHECK((result2.second == true), res);
    ASSERT_CHECK((result2.first.get() == cache.get("key").get()), res);
    ASSERT_CHECK((*result2.first == "val2"), res);

    /// Case 3: All threads throw an exception.

    cache.reset();

    bool thrown1 = false;
    thread1 = std::thread([&]()
    {
        try
        {
            cache.getOrSet("key", [&]() { return load_func("val1", 1s, true); });
        }
        catch (...)
        {
            thrown1 = true;
        }
    });

    bool thrown2 = false;
    thread2 = std::thread([&]()
    {
        try
        {
            cache.getOrSet("key", [&]() { return load_func("val1", 1s, true); });
        }
        catch (...)
        {
            thrown2 = true;
        }
    });

    thread1.join();
    thread2.join();

    ASSERT_CHECK((thrown1 == true), res);
    ASSERT_CHECK((thrown2 == true), res);
    ASSERT_CHECK((cache.get("key") == nullptr), res);

    /// Case 4: Concurrent reset.

    cache.reset();

    thread1 = std::thread([&]()
    {
        result1 = cache.getOrSet("key", [&]() { return load_func("val1", 2s, false); });
    });

    std::this_thread::sleep_for(1s);
    cache.reset();

    thread1.join();

    ASSERT_CHECK((result1.second == true), res);
    ASSERT_CHECK((*result1.first == "val1"), res);
    ASSERT_CHECK((cache.get("key") == nullptr), res);

    return res;
}

}

int main()
{
    run();
    return 0;
}
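
The deleted test above exercises the `getOrSet` contract: concurrent callers for the same key get the same shared value, and exactly one of them reports having loaded it. A minimal sketch of that single-loader idea using `std::call_once` instead of the cache's internal machinery (the `OnceSlot` type is hypothetical, not the ClickHouse API):

```cpp
#include <memory>
#include <mutex>
#include <string>
#include <utility>

struct OnceSlot
{
    std::once_flag flag;
    std::shared_ptr<std::string> value;

    // Returns {value, true} for the thread that ran the loader and
    // {value, false} for everyone else. If the loader throws, call_once
    // propagates the exception and a later caller may retry — roughly the
    // behavior the exception cases above check.
    template <typename Loader>
    std::pair<std::shared_ptr<std::string>, bool> getOrSet(Loader && load)
    {
        bool loaded = false;
        std::call_once(flag, [&]
        {
            value = load();
            loaded = true;
        });
        return {value, loaded};
    }
};
```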
@@ -131,8 +131,6 @@ struct Settings : public SettingsCollection<Settings>
    M(SettingBool, force_index_by_date, 0, "Throw an exception if there is a partition key in a table, and it is not used.", 0) \
    M(SettingBool, force_primary_key, 0, "Throw an exception if there is primary key in a table, and it is not used.", 0) \
    \
    M(SettingUInt64, mark_cache_min_lifetime, 10000, "If the maximum size of mark_cache is exceeded, delete only records older than mark_cache_min_lifetime seconds.", 0) \
    \
    M(SettingFloat, max_streams_to_max_threads_ratio, 1, "Allows you to use more sources than the number of threads - to more evenly distribute work across threads. It is assumed that this is a temporary solution, since it will be possible in the future to make the number of sources equal to the number of threads, but for each source to dynamically select available work for itself.", 0) \
    M(SettingFloat, max_streams_multiplier_for_merge_tables, 5, "Ask more streams when reading from Merge table. Streams will be spread across tables that Merge table will use. This allows more even distribution of work across threads and especially helpful when merged tables differ in size.", 0) \
    \

@@ -383,6 +381,7 @@ struct Settings : public SettingsCollection<Settings>
    M(SettingBool, enable_scalar_subquery_optimization, true, "If it is set to true, prevent scalar subqueries from (de)serializing large scalar values and possibly avoid running the same subquery more than once.", 0) \
    M(SettingBool, optimize_trivial_count_query, true, "Process trivial 'SELECT count() FROM table' query from metadata.", 0) \
    M(SettingUInt64, mutations_sync, 0, "Wait for synchronous execution of ALTER TABLE UPDATE/DELETE queries (mutations). 0 - execute asynchronously. 1 - wait current server. 2 - wait all replicas if they exist.", 0) \
    M(SettingBool, optimize_if_chain_to_miltiif, false, "Replace if(cond1, then1, if(cond2, ...)) chains to multiIf. Currently it's not beneficial for numeric types.", 0) \
    \
    /** Obsolete settings that do nothing but left for compatibility reasons. Remove each one after half a year of obsolescence. */ \
    \

@@ -393,6 +392,7 @@ struct Settings : public SettingsCollection<Settings>
    M(SettingBool, allow_experimental_cross_to_join_conversion, true, "Obsolete setting, does nothing. Will be removed after 2020-05-31", 0) \
    M(SettingBool, allow_experimental_data_skipping_indices, true, "Obsolete setting, does nothing. Will be removed after 2020-05-31", 0) \
    M(SettingBool, merge_tree_uniform_read_distribution, true, "Obsolete setting, does nothing. Will be removed after 2020-05-20", 0) \
    M(SettingUInt64, mark_cache_min_lifetime, 0, "Obsolete setting, does nothing. Will be removed after 2020-05-31", 0) \

DECLARE_SETTINGS_COLLECTION(LIST_OF_SETTINGS)

@@ -129,7 +129,7 @@ void PushingToViewsBlockOutputStream::write(const Block & block)
    for (size_t view_num = 0; view_num < views.size(); ++view_num)
    {
        auto thread_group = CurrentThread::getGroup();
        pool.scheduleOrThrowOnError([=]
        pool.scheduleOrThrowOnError([=, this]
        {
            setThreadName("PushingToViews");
            if (thread_group)
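
The `[=]` to `[=, this]` change matches the `-std=c++2a` switch earlier in this commit: C++20 deprecates capturing `this` implicitly through `[=]`, and spelling it out keeps the same behavior while silencing the warning. A minimal illustration with a hypothetical `Worker` type:

```cpp
struct Worker
{
    int state = 0;

    auto makeTask(int delta)
    {
        // C++20 spelling: locals are still copied by value and the object
        // pointer is still captured, exactly as [=] did before.
        return [=, this] { return state + delta; };
    }
};
```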
@@ -30,7 +30,7 @@ namespace ErrorCodes
    extern const int LOGICAL_ERROR;
}

static const std::vector<String> supported_functions{"any", "anyLast", "min", "max", "sum"};
static const std::vector<String> supported_functions{"any", "anyLast", "min", "max", "sum", "groupBitAnd", "groupBitOr", "groupBitXor"};


String DataTypeCustomSimpleAggregateFunction::getName() const

@@ -10,8 +10,6 @@
#include <IO/WriteHelpers.h>
#include <Interpreters/Context.h>
#include <Interpreters/InterpreterCreateQuery.h>
#include <Interpreters/ExternalLoaderDatabaseConfigRepository.h>
#include <Interpreters/ExternalDictionariesLoader.h>
#include <Parsers/ASTCreateQuery.h>
#include <Parsers/ParserCreateQuery.h>
#include <Storages/StorageFactory.h>

@@ -181,12 +179,8 @@ void DatabaseOrdinary::loadStoredObjects(
    /// After all tables was basically initialized, startup them.
    startupTables(pool);

    /// Add database as repository
    auto dictionaries_repository = std::make_unique<ExternalLoaderDatabaseConfigRepository>(shared_from_this(), context);
    auto & external_loader = context.getExternalDictionariesLoader();
    external_loader.addConfigRepository(getDatabaseName(), std::move(dictionaries_repository));

    /// Attach dictionaries.
    attachToExternalDictionariesLoader(context);
    for (const auto & name_with_query : file_names)
    {
        auto create_query = name_with_query.second->as<const ASTCreateQuery &>();

@@ -1,6 +1,7 @@
#include <Databases/DatabaseWithDictionaries.h>
#include <Interpreters/ExternalDictionariesLoader.h>
#include <Interpreters/ExternalLoaderPresetConfigRepository.h>
#include <Interpreters/ExternalLoaderTempConfigRepository.h>
#include <Interpreters/ExternalLoaderDatabaseConfigRepository.h>
#include <Dictionaries/getDictionaryConfigurationFromAST.h>
#include <Interpreters/Context.h>
#include <Storages/StorageDictionary.h>

@@ -74,7 +75,7 @@ void DatabaseWithDictionaries::createDictionary(const Context & context, const S

    /// A dictionary with the same full name could be defined in *.xml config files.
    String full_name = getDatabaseName() + "." + dictionary_name;
    auto & external_loader = const_cast<ExternalDictionariesLoader &>(context.getExternalDictionariesLoader());
    const auto & external_loader = context.getExternalDictionariesLoader();
    if (external_loader.getCurrentStatus(full_name) != ExternalLoader::Status::NOT_EXIST)
        throw Exception(
            "Dictionary " + backQuote(getDatabaseName()) + "." + backQuote(dictionary_name) + " already exists.",

@@ -106,15 +107,10 @@ void DatabaseWithDictionaries::createDictionary(const Context & context, const S

    /// Add a temporary repository containing the dictionary.
    /// We need this temp repository to try loading the dictionary before actually attaching it to the database.
    static std::atomic<size_t> counter = 0;
    String temp_repository_name = String(IExternalLoaderConfigRepository::INTERNAL_REPOSITORY_NAME_PREFIX) + " creating " + full_name + " "
        + std::to_string(++counter);
    external_loader.addConfigRepository(
        temp_repository_name,
        std::make_unique<ExternalLoaderPresetConfigRepository>(
            std::vector{std::pair{dictionary_metadata_tmp_path,
                getDictionaryConfigurationFromAST(query->as<const ASTCreateQuery &>(), getDatabaseName())}}));
    SCOPE_EXIT({ external_loader.removeConfigRepository(temp_repository_name); });
    auto temp_repository
        = const_cast<ExternalDictionariesLoader &>(external_loader) /// the change of ExternalDictionariesLoader is temporary
              .addConfigRepository(std::make_unique<ExternalLoaderTempConfigRepository>(
                  getDatabaseName(), dictionary_metadata_tmp_path, getDictionaryConfigurationFromAST(query->as<const ASTCreateQuery &>())));

    bool lazy_load = context.getConfigRef().getBool("dictionaries_lazy_load", true);
    if (!lazy_load)

@@ -253,4 +249,23 @@ ASTPtr DatabaseWithDictionaries::getCreateDictionaryQueryImpl(
    return ast;
}

void DatabaseWithDictionaries::shutdown()
{
    detachFromExternalDictionariesLoader();
    DatabaseOnDisk::shutdown();
}

DatabaseWithDictionaries::~DatabaseWithDictionaries() = default;

void DatabaseWithDictionaries::attachToExternalDictionariesLoader(Context & context)
{
    database_as_config_repo_for_external_loader = context.getExternalDictionariesLoader().addConfigRepository(
        std::make_unique<ExternalLoaderDatabaseConfigRepository>(*this, context));
}

void DatabaseWithDictionaries::detachFromExternalDictionariesLoader()
{
    database_as_config_repo_for_external_loader = {};
}

}
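
Across this commit (here and in Server.cpp), `addConfigRepository` now returns a guard object: keeping the guard keeps the repository registered, and destroying or reassigning it (`database_as_config_repo_for_external_loader = {};`) unregisters it. A minimal sketch of such a guard, an assumed simplification of `ext::scope_guard` rather than its actual implementation:

```cpp
#include <functional>
#include <utility>

class ScopeGuard
{
public:
    ScopeGuard() = default;
    explicit ScopeGuard(std::function<void()> on_exit_) : on_exit(std::move(on_exit_)) {}

    ScopeGuard(ScopeGuard && other) noexcept : on_exit(std::move(other.on_exit)) { other.on_exit = nullptr; }

    ScopeGuard & operator=(ScopeGuard && other) noexcept
    {
        if (this != &other)
        {
            reset();  // fire the old action, as "guard = {};" does
            on_exit = std::move(other.on_exit);
            other.on_exit = nullptr;
        }
        return *this;
    }

    ~ScopeGuard() { reset(); }

private:
    void reset()
    {
        if (on_exit)
        {
            on_exit();  // e.g. unregister the config repository
            on_exit = nullptr;
        }
    }

    std::function<void()> on_exit;
};
```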
@@ -1,4 +1,5 @@
#include <Databases/DatabaseOnDisk.h>
#include <ext/scope_guard.h>

namespace DB
{

@@ -25,15 +26,25 @@ public:

    bool isDictionaryExist(const Context & context, const String & dictionary_name) const override;

    void shutdown() override;

    ~DatabaseWithDictionaries() override;

protected:
    DatabaseWithDictionaries(const String & name, const String & metadata_path_, const String & logger)
        : DatabaseOnDisk(name, metadata_path_, logger) {}

    void attachToExternalDictionariesLoader(Context & context);
    void detachFromExternalDictionariesLoader();

    StoragePtr getDictionaryStorage(const Context & context, const String & table_name) const;

    ASTPtr getCreateDictionaryQueryImpl(const Context & context,
                                        const String & dictionary_name,
                                        bool throw_on_error) const override;

private:
    ext::scope_guard database_as_config_repo_for_external_loader;
};

}

@@ -57,12 +57,15 @@ inline size_t CacheDictionary::getCellIdx(const Key id) const


CacheDictionary::CacheDictionary(
    const std::string & database_,
    const std::string & name_,
    const DictionaryStructure & dict_struct_,
    DictionarySourcePtr source_ptr_,
    const DictionaryLifetime dict_lifetime_,
    const size_t size_)
    : name{name_}
    : database(database_)
    , name(name_)
    , full_name{database_.empty() ? name_ : (database_ + "." + name_)}
    , dict_struct(dict_struct_)
    , source_ptr{std::move(source_ptr_)}
    , dict_lifetime(dict_lifetime_)

@@ -73,7 +76,7 @@ CacheDictionary::CacheDictionary(
    , rnd_engine(randomSeed())
{
    if (!this->source_ptr->supportsSelectiveLoad())
        throw Exception{name + ": source cannot be used with CacheDictionary", ErrorCodes::UNSUPPORTED_METHOD};
        throw Exception{full_name + ": source cannot be used with CacheDictionary", ErrorCodes::UNSUPPORTED_METHOD};

    createAttributes();
}

@@ -204,7 +207,7 @@ void CacheDictionary::isInConstantVector(const Key child_id, const PaddedPODArra
void CacheDictionary::getString(const std::string & attribute_name, const PaddedPODArray<Key> & ids, ColumnString * out) const
{
    auto & attribute = getAttribute(attribute_name);
    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

    const auto null_value = StringRef{std::get<String>(attribute.null_values)};

@@ -215,7 +218,7 @@ void CacheDictionary::getString(
    const std::string & attribute_name, const PaddedPODArray<Key> & ids, const ColumnString * const def, ColumnString * const out) const
{
    auto & attribute = getAttribute(attribute_name);
    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

    getItemsString(attribute, ids, out, [&](const size_t row) { return def->getDataAt(row); });
}

@@ -224,7 +227,7 @@ void CacheDictionary::getString(
    const std::string & attribute_name, const PaddedPODArray<Key> & ids, const String & def, ColumnString * const out) const
{
    auto & attribute = getAttribute(attribute_name);
    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

    getItemsString(attribute, ids, out, [&](const size_t) { return StringRef{def}; });
}

@@ -352,7 +355,7 @@ void CacheDictionary::createAttributes()
            hierarchical_attribute = &attributes.back();

            if (hierarchical_attribute->type != AttributeUnderlyingType::utUInt64)
                throw Exception{name + ": hierarchical attribute must be UInt64.", ErrorCodes::TYPE_MISMATCH};
                throw Exception{full_name + ": hierarchical attribute must be UInt64.", ErrorCodes::TYPE_MISMATCH};
        }
    }
}

@@ -539,7 +542,7 @@ CacheDictionary::Attribute & CacheDictionary::getAttribute(const std::string & a
{
    const auto it = attribute_index_by_name.find(attribute_name);
    if (it == std::end(attribute_index_by_name))
        throw Exception{name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};
        throw Exception{full_name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};

    return attributes[it->second];
}

@@ -580,7 +583,7 @@ std::exception_ptr CacheDictionary::getLastException() const

void registerDictionaryCache(DictionaryFactory & factory)
{
    auto create_layout = [=](const std::string & name,
    auto create_layout = [=](const std::string & full_name,
        const DictionaryStructure & dict_struct,
        const Poco::Util::AbstractConfiguration & config,
        const std::string & config_prefix,

@@ -590,22 +593,24 @@ void registerDictionaryCache(DictionaryFactory & factory)
            throw Exception{"'key' is not supported for dictionary of layout 'cache'", ErrorCodes::UNSUPPORTED_METHOD};

        if (dict_struct.range_min || dict_struct.range_max)
            throw Exception{name
            throw Exception{full_name
                + ": elements .structure.range_min and .structure.range_max should be defined only "
                  "for a dictionary of layout 'range_hashed'",
                ErrorCodes::BAD_ARGUMENTS};
        const auto & layout_prefix = config_prefix + ".layout";
        const auto size = config.getInt(layout_prefix + ".cache.size_in_cells");
        if (size == 0)
            throw Exception{name + ": dictionary of layout 'cache' cannot have 0 cells", ErrorCodes::TOO_SMALL_BUFFER_SIZE};
            throw Exception{full_name + ": dictionary of layout 'cache' cannot have 0 cells", ErrorCodes::TOO_SMALL_BUFFER_SIZE};

        const bool require_nonempty = config.getBool(config_prefix + ".require_nonempty", false);
        if (require_nonempty)
            throw Exception{name + ": dictionary of layout 'cache' cannot have 'require_nonempty' attribute set",
            throw Exception{full_name + ": dictionary of layout 'cache' cannot have 'require_nonempty' attribute set",
                ErrorCodes::BAD_ARGUMENTS};

        const String database = config.getString(config_prefix + ".database", "");
        const String name = config.getString(config_prefix + ".name");
        const DictionaryLifetime dict_lifetime{config, config_prefix + ".lifetime"};
        return std::make_unique<CacheDictionary>(name, dict_struct, std::move(source_ptr), dict_lifetime, size);
        return std::make_unique<CacheDictionary>(database, name, dict_struct, std::move(source_ptr), dict_lifetime, size);
    };
    factory.registerLayout("cache", create_layout, false);
}

@@ -25,13 +25,16 @@ class CacheDictionary final : public IDictionary
{
public:
    CacheDictionary(
        const std::string & database_,
        const std::string & name_,
        const DictionaryStructure & dict_struct_,
        DictionarySourcePtr source_ptr_,
        const DictionaryLifetime dict_lifetime_,
        const size_t size_);

    std::string getName() const override { return name; }
    const std::string & getDatabase() const override { return database; }
    const std::string & getName() const override { return name; }
    const std::string & getFullName() const override { return full_name; }

    std::string getTypeName() const override { return "Cache"; }

@@ -52,7 +55,7 @@ public:

    std::shared_ptr<const IExternalLoadable> clone() const override
    {
        return std::make_shared<CacheDictionary>(name, dict_struct, source_ptr->clone(), dict_lifetime, size);
        return std::make_shared<CacheDictionary>(database, name, dict_struct, source_ptr->clone(), dict_lifetime, size);
    }

    const IDictionarySource * getSource() const override { return source_ptr.get(); }

@@ -254,7 +257,9 @@ private:
    template <typename AncestorType>
    void isInImpl(const PaddedPODArray<Key> & child_ids, const AncestorType & ancestor_ids, PaddedPODArray<UInt8> & out) const;

    const std::string database;
    const std::string name;
    const std::string full_name;
    const DictionaryStructure dict_struct;
    mutable DictionarySourcePtr source_ptr;
    const DictionaryLifetime dict_lifetime;

@@ -333,7 +333,7 @@ void CacheDictionary::update(
            last_exception = std::current_exception();
            backoff_end_time = now + std::chrono::seconds(calculateDurationWithBackoff(rnd_engine, error_count));

            tryLogException(last_exception, log, "Could not update cache dictionary '" + getName() +
            tryLogException(last_exception, log, "Could not update cache dictionary '" + getFullName() +
                "', next update is scheduled at " + ext::to_string(backoff_end_time));
        }
    }

@ -51,12 +51,15 @@ inline UInt64 ComplexKeyCacheDictionary::getCellIdx(const StringRef key) const
|
|||
|
||||
|
||||
ComplexKeyCacheDictionary::ComplexKeyCacheDictionary(
|
||||
+    const std::string & database_,
     const std::string & name_,
     const DictionaryStructure & dict_struct_,
     DictionarySourcePtr source_ptr_,
     const DictionaryLifetime dict_lifetime_,
     const size_t size_)
-    : name{name_}
+    : database(database_)
+    , name(name_)
+    , full_name{database_.empty() ? name_ : (database_ + "." + name_)}
     , dict_struct(dict_struct_)
     , source_ptr{std::move(source_ptr_)}
     , dict_lifetime(dict_lifetime_)
@@ -65,7 +68,7 @@ ComplexKeyCacheDictionary::ComplexKeyCacheDictionary(
     , rnd_engine(randomSeed())
 {
     if (!this->source_ptr->supportsSelectiveLoad())
-        throw Exception{name + ": source cannot be used with ComplexKeyCacheDictionary", ErrorCodes::UNSUPPORTED_METHOD};
+        throw Exception{full_name + ": source cannot be used with ComplexKeyCacheDictionary", ErrorCodes::UNSUPPORTED_METHOD};

     createAttributes();
 }
@@ -77,7 +80,7 @@ void ComplexKeyCacheDictionary::getString(
     dict_struct.validateKeyTypes(key_types);

     auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     const auto null_value = StringRef{std::get<String>(attribute.null_values)};

@@ -94,7 +97,7 @@ void ComplexKeyCacheDictionary::getString(
     dict_struct.validateKeyTypes(key_types);

     auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     getItemsString(attribute, key_columns, out, [&](const size_t row) { return def->getDataAt(row); });
 }
@@ -109,7 +112,7 @@ void ComplexKeyCacheDictionary::getString(
     dict_struct.validateKeyTypes(key_types);

     auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     getItemsString(attribute, key_columns, out, [&](const size_t) { return StringRef{def}; });
 }
@@ -249,7 +252,7 @@ void ComplexKeyCacheDictionary::createAttributes()
         attributes.push_back(createAttributeWithType(attribute.underlying_type, attribute.null_value));

         if (attribute.hierarchical)
-            throw Exception{name + ": hierarchical attributes not supported for dictionary of type " + getTypeName(),
+            throw Exception{full_name + ": hierarchical attributes not supported for dictionary of type " + getTypeName(),
                             ErrorCodes::TYPE_MISMATCH};
     }
 }
@@ -258,7 +261,7 @@ ComplexKeyCacheDictionary::Attribute & ComplexKeyCacheDictionary::getAttribute(c
 {
     const auto it = attribute_index_by_name.find(attribute_name);
     if (it == std::end(attribute_index_by_name))
-        throw Exception{name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};
+        throw Exception{full_name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};

     return attributes[it->second];
 }
@@ -394,7 +397,7 @@ BlockInputStreamPtr ComplexKeyCacheDictionary::getBlockInputStream(const Names &

 void registerDictionaryComplexKeyCache(DictionaryFactory & factory)
 {
-    auto create_layout = [=](const std::string & name,
+    auto create_layout = [=](const std::string & full_name,
                              const DictionaryStructure & dict_struct,
                              const Poco::Util::AbstractConfiguration & config,
                              const std::string & config_prefix,
@@ -405,15 +408,17 @@ void registerDictionaryComplexKeyCache(DictionaryFactory & factory)
         const auto & layout_prefix = config_prefix + ".layout";
         const auto size = config.getInt(layout_prefix + ".complex_key_cache.size_in_cells");
         if (size == 0)
-            throw Exception{name + ": dictionary of layout 'cache' cannot have 0 cells", ErrorCodes::TOO_SMALL_BUFFER_SIZE};
+            throw Exception{full_name + ": dictionary of layout 'cache' cannot have 0 cells", ErrorCodes::TOO_SMALL_BUFFER_SIZE};

         const bool require_nonempty = config.getBool(config_prefix + ".require_nonempty", false);
         if (require_nonempty)
-            throw Exception{name + ": dictionary of layout 'cache' cannot have 'require_nonempty' attribute set",
+            throw Exception{full_name + ": dictionary of layout 'cache' cannot have 'require_nonempty' attribute set",
                             ErrorCodes::BAD_ARGUMENTS};

+        const String database = config.getString(config_prefix + ".database", "");
+        const String name = config.getString(config_prefix + ".name");
         const DictionaryLifetime dict_lifetime{config, config_prefix + ".lifetime"};
-        return std::make_unique<ComplexKeyCacheDictionary>(name, dict_struct, std::move(source_ptr), dict_lifetime, size);
+        return std::make_unique<ComplexKeyCacheDictionary>(database, name, dict_struct, std::move(source_ptr), dict_lifetime, size);
     };
     factory.registerLayout("complex_key_cache", create_layout, true);
 }

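Every dictionary touched by this change carries the same three identifiers: the bare `name`, its `database`, and a precomputed `full_name` used in error messages and as the loader key. A minimal standalone sketch of the qualification rule from the initializer lists above (the helper name is hypothetical, not part of the patch):

```cpp
#include <cassert>
#include <string>

/// Mirrors the initializer `full_name{database_.empty() ? name_ : (database_ + "." + name_)}`:
/// a dictionary declared without a database keeps its bare name.
std::string qualifiedDictionaryName(const std::string & database, const std::string & name)
{
    return database.empty() ? name : database + "." + name;
}

int main()
{
    assert(qualifiedDictionaryName("", "dict1") == "dict1");
    assert(qualifiedDictionaryName("test", "dict1") == "test.dict1");
}
```
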
@@ -42,6 +42,7 @@ class ComplexKeyCacheDictionary final : public IDictionaryBase
 {
 public:
     ComplexKeyCacheDictionary(
+        const std::string & database_,
         const std::string & name_,
         const DictionaryStructure & dict_struct_,
         DictionarySourcePtr source_ptr_,
@@ -50,7 +51,9 @@ public:

     std::string getKeyDescription() const { return key_description; }

-    std::string getName() const override { return name; }
+    const std::string & getDatabase() const override { return database; }
+    const std::string & getName() const override { return name; }
+    const std::string & getFullName() const override { return full_name; }

     std::string getTypeName() const override { return "ComplexKeyCache"; }

@@ -75,7 +78,7 @@ public:

     std::shared_ptr<const IExternalLoadable> clone() const override
     {
-        return std::make_shared<ComplexKeyCacheDictionary>(name, dict_struct, source_ptr->clone(), dict_lifetime, size);
+        return std::make_shared<ComplexKeyCacheDictionary>(database, name, dict_struct, source_ptr->clone(), dict_lifetime, size);
     }

     const IDictionarySource * getSource() const override { return source_ptr.get(); }
@@ -668,7 +671,9 @@ private:

     bool isEmptyCell(const UInt64 idx) const;

+    const std::string database;
     const std::string name;
+    const std::string full_name;
     const DictionaryStructure dict_struct;
     const DictionarySourcePtr source_ptr;
     const DictionaryLifetime dict_lifetime;

@@ -15,13 +15,16 @@ namespace ErrorCodes
 }

 ComplexKeyHashedDictionary::ComplexKeyHashedDictionary(
+    const std::string & database_,
     const std::string & name_,
     const DictionaryStructure & dict_struct_,
     DictionarySourcePtr source_ptr_,
     const DictionaryLifetime dict_lifetime_,
     bool require_nonempty_,
     BlockPtr saved_block_)
-    : name{name_}
+    : database(database_)
+    , name(name_)
+    , full_name{database_.empty() ? name_ : (database_ + "." + name_)}
     , dict_struct(dict_struct_)
     , source_ptr{std::move(source_ptr_)}
     , dict_lifetime(dict_lifetime_)
@@ -40,7 +43,7 @@ ComplexKeyHashedDictionary::ComplexKeyHashedDictionary(
         dict_struct.validateKeyTypes(key_types); \
 \
         const auto & attribute = getAttribute(attribute_name); \
-        checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
+        checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
 \
         const auto null_value = std::get<TYPE>(attribute.null_values); \
 \
@@ -72,7 +75,7 @@ void ComplexKeyHashedDictionary::getString(
     dict_struct.validateKeyTypes(key_types);

     const auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     const auto & null_value = StringRef{std::get<String>(attribute.null_values)};

@@ -94,7 +97,7 @@ void ComplexKeyHashedDictionary::getString(
         dict_struct.validateKeyTypes(key_types); \
 \
         const auto & attribute = getAttribute(attribute_name); \
-        checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
+        checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
 \
         getItemsImpl<TYPE, TYPE>( \
             attribute, \
@@ -128,7 +131,7 @@ void ComplexKeyHashedDictionary::getString(
     dict_struct.validateKeyTypes(key_types);

     const auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     getItemsImpl<StringRef, StringRef>(
         attribute,
@@ -148,7 +151,7 @@ void ComplexKeyHashedDictionary::getString(
         dict_struct.validateKeyTypes(key_types); \
 \
         const auto & attribute = getAttribute(attribute_name); \
-        checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
+        checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
 \
         getItemsImpl<TYPE, TYPE>( \
             attribute, key_columns, [&](const size_t row, const auto value) { out[row] = value; }, [&](const size_t) { return def; }); \
@@ -179,7 +182,7 @@ void ComplexKeyHashedDictionary::getString(
     dict_struct.validateKeyTypes(key_types);

     const auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     getItemsImpl<StringRef, StringRef>(
         attribute,
@@ -256,7 +259,7 @@ void ComplexKeyHashedDictionary::createAttributes()
         attributes.push_back(createAttributeWithType(attribute.underlying_type, attribute.null_value));

         if (attribute.hierarchical)
-            throw Exception{name + ": hierarchical attributes not supported for dictionary of type " + getTypeName(),
+            throw Exception{full_name + ": hierarchical attributes not supported for dictionary of type " + getTypeName(),
                             ErrorCodes::TYPE_MISMATCH};
     }
 }
@@ -397,7 +400,7 @@ void ComplexKeyHashedDictionary::loadData()
         updateData();

     if (require_nonempty && 0 == element_count)
-        throw Exception{name + ": dictionary source is empty and 'require_nonempty' property is set.", ErrorCodes::DICTIONARY_IS_EMPTY};
+        throw Exception{full_name + ": dictionary source is empty and 'require_nonempty' property is set.", ErrorCodes::DICTIONARY_IS_EMPTY};
 }

 template <typename T>
@@ -630,7 +633,7 @@ const ComplexKeyHashedDictionary::Attribute & ComplexKeyHashedDictionary::getAtt
 {
     const auto it = attribute_index_by_name.find(attribute_name);
     if (it == std::end(attribute_index_by_name))
-        throw Exception{name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};
+        throw Exception{full_name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};

     return attributes[it->second];
 }
@@ -742,7 +745,7 @@ BlockInputStreamPtr ComplexKeyHashedDictionary::getBlockInputStream(const Names

 void registerDictionaryComplexKeyHashed(DictionaryFactory & factory)
 {
-    auto create_layout = [=](const std::string & name,
+    auto create_layout = [=](const std::string &,
                              const DictionaryStructure & dict_struct,
                              const Poco::Util::AbstractConfiguration & config,
                              const std::string & config_prefix,
@@ -751,12 +754,13 @@ void registerDictionaryComplexKeyHashed(DictionaryFactory & factory)
         if (!dict_struct.key)
             throw Exception{"'key' is required for dictionary of layout 'complex_key_hashed'", ErrorCodes::BAD_ARGUMENTS};

+        const String database = config.getString(config_prefix + ".database", "");
+        const String name = config.getString(config_prefix + ".name");
         const DictionaryLifetime dict_lifetime{config, config_prefix + ".lifetime"};
         const bool require_nonempty = config.getBool(config_prefix + ".require_nonempty", false);
-        return std::make_unique<ComplexKeyHashedDictionary>(name, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty);
+        return std::make_unique<ComplexKeyHashedDictionary>(database, name, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty);
     };
     factory.registerLayout("complex_key_hashed", create_layout, true);
 }

 }

@@ -23,6 +23,7 @@ class ComplexKeyHashedDictionary final : public IDictionaryBase
 {
 public:
     ComplexKeyHashedDictionary(
+        const std::string & database_,
         const std::string & name_,
         const DictionaryStructure & dict_struct_,
         DictionarySourcePtr source_ptr_,
@@ -32,7 +33,9 @@ public:

     std::string getKeyDescription() const { return key_description; }

-    std::string getName() const override { return name; }
+    const std::string & getDatabase() const override { return database; }
+    const std::string & getName() const override { return name; }
+    const std::string & getFullName() const override { return full_name; }

     std::string getTypeName() const override { return "ComplexKeyHashed"; }

@@ -48,7 +51,7 @@ public:

     std::shared_ptr<const IExternalLoadable> clone() const override
     {
-        return std::make_shared<ComplexKeyHashedDictionary>(name, dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty, saved_block);
+        return std::make_shared<ComplexKeyHashedDictionary>(database, name, dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty, saved_block);
     }

     const IDictionarySource * getSource() const override { return source_ptr.get(); }
@@ -233,7 +236,9 @@ private:
     template <typename T>
     std::vector<StringRef> getKeys(const Attribute & attribute) const;

+    const std::string database;
     const std::string name;
+    const std::string full_name;
     const DictionaryStructure dict_struct;
     const DictionarySourcePtr source_ptr;
     const DictionaryLifetime dict_lifetime;

@@ -21,13 +21,16 @@ static const auto max_array_size = 500000;


 FlatDictionary::FlatDictionary(
+    const std::string & database_,
     const std::string & name_,
     const DictionaryStructure & dict_struct_,
     DictionarySourcePtr source_ptr_,
     const DictionaryLifetime dict_lifetime_,
     bool require_nonempty_,
     BlockPtr saved_block_)
-    : name{name_}
+    : database(database_)
+    , name(name_)
+    , full_name{database_.empty() ? name_ : (database_ + "." + name_)}
     , dict_struct(dict_struct_)
     , source_ptr{std::move(source_ptr_)}
     , dict_lifetime(dict_lifetime_)
@@ -107,7 +110,7 @@ void FlatDictionary::isInConstantVector(const Key child_id, const PaddedPODArray
     void FlatDictionary::get##TYPE(const std::string & attribute_name, const PaddedPODArray<Key> & ids, ResultArrayType<TYPE> & out) const \
     { \
         const auto & attribute = getAttribute(attribute_name); \
-        checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
+        checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
 \
         const auto null_value = std::get<TYPE>(attribute.null_values); \
 \
@@ -133,7 +136,7 @@ DECLARE(Decimal128)
 void FlatDictionary::getString(const std::string & attribute_name, const PaddedPODArray<Key> & ids, ColumnString * out) const
 {
     const auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     const auto & null_value = std::get<StringRef>(attribute.null_values);

@@ -152,7 +155,7 @@ void FlatDictionary::getString(const std::string & attribute_name, const PaddedP
         ResultArrayType<TYPE> & out) const \
     { \
         const auto & attribute = getAttribute(attribute_name); \
-        checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
+        checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
 \
         getItemsImpl<TYPE, TYPE>( \
             attribute, ids, [&](const size_t row, const auto value) { out[row] = value; }, [&](const size_t row) { return def[row]; }); \
@@ -177,7 +180,7 @@ void FlatDictionary::getString(
     const std::string & attribute_name, const PaddedPODArray<Key> & ids, const ColumnString * const def, ColumnString * const out) const
 {
     const auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     getItemsImpl<StringRef, StringRef>(
         attribute,
@@ -191,7 +194,7 @@ void FlatDictionary::getString(
     const std::string & attribute_name, const PaddedPODArray<Key> & ids, const TYPE def, ResultArrayType<TYPE> & out) const \
     { \
         const auto & attribute = getAttribute(attribute_name); \
-        checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
+        checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
 \
         getItemsImpl<TYPE, TYPE>( \
             attribute, ids, [&](const size_t row, const auto value) { out[row] = value; }, [&](const size_t) { return def; }); \
@@ -216,7 +219,7 @@ void FlatDictionary::getString(
     const std::string & attribute_name, const PaddedPODArray<Key> & ids, const String & def, ColumnString * const out) const
 {
     const auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     FlatDictionary::getItemsImpl<StringRef, StringRef>(
         attribute,
@@ -297,7 +300,7 @@ void FlatDictionary::createAttributes()
             hierarchical_attribute = &attributes.back();

             if (hierarchical_attribute->type != AttributeUnderlyingType::utUInt64)
-                throw Exception{name + ": hierarchical attribute must be UInt64.", ErrorCodes::TYPE_MISMATCH};
+                throw Exception{full_name + ": hierarchical attribute must be UInt64.", ErrorCodes::TYPE_MISMATCH};
         }
     }
 }
@@ -404,7 +407,7 @@ void FlatDictionary::loadData()
         updateData();

     if (require_nonempty && 0 == element_count)
-        throw Exception{name + ": dictionary source is empty and 'require_nonempty' property is set.", ErrorCodes::DICTIONARY_IS_EMPTY};
+        throw Exception{full_name + ": dictionary source is empty and 'require_nonempty' property is set.", ErrorCodes::DICTIONARY_IS_EMPTY};
 }


@@ -578,7 +581,7 @@ template <typename T>
 void FlatDictionary::resize(Attribute & attribute, const Key id)
 {
     if (id >= max_array_size)
-        throw Exception{name + ": identifier should be less than " + toString(max_array_size), ErrorCodes::ARGUMENT_OUT_OF_BOUND};
+        throw Exception{full_name + ": identifier should be less than " + toString(max_array_size), ErrorCodes::ARGUMENT_OUT_OF_BOUND};

     auto & array = std::get<ContainerType<T>>(attribute.arrays);
     if (id >= array.size())
@@ -666,7 +669,7 @@ const FlatDictionary::Attribute & FlatDictionary::getAttribute(const std::string
 {
     const auto it = attribute_index_by_name.find(attribute_name);
     if (it == std::end(attribute_index_by_name))
-        throw Exception{name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};
+        throw Exception{full_name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};

     return attributes[it->second];
 }
@@ -706,7 +709,7 @@ BlockInputStreamPtr FlatDictionary::getBlockInputStream(const Names & column_nam

 void registerDictionaryFlat(DictionaryFactory & factory)
 {
-    auto create_layout = [=](const std::string & name,
+    auto create_layout = [=](const std::string & full_name,
                              const DictionaryStructure & dict_struct,
                              const Poco::Util::AbstractConfiguration & config,
                              const std::string & config_prefix,
@@ -716,13 +719,16 @@ void registerDictionaryFlat(DictionaryFactory & factory)
             throw Exception{"'key' is not supported for dictionary of layout 'flat'", ErrorCodes::UNSUPPORTED_METHOD};

         if (dict_struct.range_min || dict_struct.range_max)
-            throw Exception{name
+            throw Exception{full_name
                             + ": elements .structure.range_min and .structure.range_max should be defined only "
                               "for a dictionary of layout 'range_hashed'",
                             ErrorCodes::BAD_ARGUMENTS};

+        const String database = config.getString(config_prefix + ".database", "");
+        const String name = config.getString(config_prefix + ".name");
         const DictionaryLifetime dict_lifetime{config, config_prefix + ".lifetime"};
         const bool require_nonempty = config.getBool(config_prefix + ".require_nonempty", false);
-        return std::make_unique<FlatDictionary>(name, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty);
+        return std::make_unique<FlatDictionary>(database, name, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty);
     };
     factory.registerLayout("flat", create_layout, false);
 }

@@ -22,6 +22,7 @@ class FlatDictionary final : public IDictionary
 {
 public:
     FlatDictionary(
+        const std::string & database_,
         const std::string & name_,
         const DictionaryStructure & dict_struct_,
         DictionarySourcePtr source_ptr_,
@@ -29,7 +30,9 @@ public:
         bool require_nonempty_,
         BlockPtr saved_block_ = nullptr);

-    std::string getName() const override { return name; }
+    const std::string & getDatabase() const override { return database; }
+    const std::string & getName() const override { return name; }
+    const std::string & getFullName() const override { return full_name; }

     std::string getTypeName() const override { return "Flat"; }

@@ -45,7 +48,7 @@ public:

     std::shared_ptr<const IExternalLoadable> clone() const override
     {
-        return std::make_shared<FlatDictionary>(name, dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty, saved_block);
+        return std::make_shared<FlatDictionary>(database, name, dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty, saved_block);
     }

     const IDictionarySource * getSource() const override { return source_ptr.get(); }
@@ -222,7 +225,9 @@ private:

     PaddedPODArray<Key> getIds() const;

+    const std::string database;
     const std::string name;
+    const std::string full_name;
     const DictionaryStructure dict_struct;
     const DictionarySourcePtr source_ptr;
     const DictionaryLifetime dict_lifetime;

@@ -31,6 +31,7 @@ namespace ErrorCodes


 HashedDictionary::HashedDictionary(
+    const std::string & database_,
     const std::string & name_,
     const DictionaryStructure & dict_struct_,
     DictionarySourcePtr source_ptr_,
@@ -38,7 +39,9 @@ HashedDictionary::HashedDictionary(
     bool require_nonempty_,
     bool sparse_,
     BlockPtr saved_block_)
-    : name{name_}
+    : database(database_)
+    , name(name_)
+    , full_name{database_.empty() ? name_ : (database_ + "." + name_)}
     , dict_struct(dict_struct_)
     , source_ptr{std::move(source_ptr_)}
     , dict_lifetime(dict_lifetime_)
@@ -129,7 +132,7 @@ void HashedDictionary::isInConstantVector(const Key child_id, const PaddedPODArr
         const \
     { \
         const auto & attribute = getAttribute(attribute_name); \
-        checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
+        checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
 \
         const auto null_value = std::get<TYPE>(attribute.null_values); \
 \
@@ -155,7 +158,7 @@ DECLARE(Decimal128)
 void HashedDictionary::getString(const std::string & attribute_name, const PaddedPODArray<Key> & ids, ColumnString * out) const
 {
     const auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     const auto & null_value = StringRef{std::get<String>(attribute.null_values)};

@@ -174,7 +177,7 @@ void HashedDictionary::getString(const std::string & attribute_name, const Padde
         ResultArrayType<TYPE> & out) const \
     { \
         const auto & attribute = getAttribute(attribute_name); \
-        checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
+        checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
 \
         getItemsImpl<TYPE, TYPE>( \
             attribute, ids, [&](const size_t row, const auto value) { out[row] = value; }, [&](const size_t row) { return def[row]; }); \
@@ -199,7 +202,7 @@ void HashedDictionary::getString(
     const std::string & attribute_name, const PaddedPODArray<Key> & ids, const ColumnString * const def, ColumnString * const out) const
 {
     const auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     getItemsImpl<StringRef, StringRef>(
         attribute,
@@ -213,7 +216,7 @@ void HashedDictionary::getString(
     const std::string & attribute_name, const PaddedPODArray<Key> & ids, const TYPE & def, ResultArrayType<TYPE> & out) const \
     { \
         const auto & attribute = getAttribute(attribute_name); \
-        checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
+        checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
 \
         getItemsImpl<TYPE, TYPE>( \
             attribute, ids, [&](const size_t row, const auto value) { out[row] = value; }, [&](const size_t) { return def; }); \
@@ -238,7 +241,7 @@ void HashedDictionary::getString(
     const std::string & attribute_name, const PaddedPODArray<Key> & ids, const String & def, ColumnString * const out) const
 {
     const auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     getItemsImpl<StringRef, StringRef>(
         attribute,
@@ -317,7 +320,7 @@ void HashedDictionary::createAttributes()
             hierarchical_attribute = &attributes.back();

             if (hierarchical_attribute->type != AttributeUnderlyingType::utUInt64)
-                throw Exception{name + ": hierarchical attribute must be UInt64.", ErrorCodes::TYPE_MISMATCH};
+                throw Exception{full_name + ": hierarchical attribute must be UInt64.", ErrorCodes::TYPE_MISMATCH};
         }
     }
 }
@@ -424,7 +427,7 @@ void HashedDictionary::loadData()
         updateData();

     if (require_nonempty && 0 == element_count)
-        throw Exception{name + ": dictionary source is empty and 'require_nonempty' property is set.", ErrorCodes::DICTIONARY_IS_EMPTY};
+        throw Exception{full_name + ": dictionary source is empty and 'require_nonempty' property is set.", ErrorCodes::DICTIONARY_IS_EMPTY};
 }

 template <typename T>
@@ -684,7 +687,7 @@ const HashedDictionary::Attribute & HashedDictionary::getAttribute(const std::st
 {
     const auto it = attribute_index_by_name.find(attribute_name);
     if (it == std::end(attribute_index_by_name))
-        throw Exception{name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};
+        throw Exception{full_name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};

     return attributes[it->second];
 }
@@ -768,27 +771,31 @@ BlockInputStreamPtr HashedDictionary::getBlockInputStream(const Names & column_n

 void registerDictionaryHashed(DictionaryFactory & factory)
 {
-    auto create_layout = [=](const std::string & name,
+    auto create_layout = [=](const std::string & full_name,
                              const DictionaryStructure & dict_struct,
                              const Poco::Util::AbstractConfiguration & config,
                              const std::string & config_prefix,
-                             DictionarySourcePtr source_ptr) -> DictionaryPtr
+                             DictionarySourcePtr source_ptr,
+                             bool sparse) -> DictionaryPtr
     {
         if (dict_struct.key)
             throw Exception{"'key' is not supported for dictionary of layout 'hashed'", ErrorCodes::UNSUPPORTED_METHOD};

         if (dict_struct.range_min || dict_struct.range_max)
-            throw Exception{name
+            throw Exception{full_name
                             + ": elements .structure.range_min and .structure.range_max should be defined only "
                               "for a dictionary of layout 'range_hashed'",
                             ErrorCodes::BAD_ARGUMENTS};

+        const String database = config.getString(config_prefix + ".database", "");
+        const String name = config.getString(config_prefix + ".name");
         const DictionaryLifetime dict_lifetime{config, config_prefix + ".lifetime"};
         const bool require_nonempty = config.getBool(config_prefix + ".require_nonempty", false);
-        const bool sparse = name == "sparse_hashed";
-        return std::make_unique<HashedDictionary>(name, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty, sparse);
+        return std::make_unique<HashedDictionary>(database, name, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty, sparse);
     };
-    factory.registerLayout("hashed", create_layout, false);
-    factory.registerLayout("sparse_hashed", create_layout, false);
+    using namespace std::placeholders;
+    factory.registerLayout("hashed", std::bind(create_layout, _1, _2, _3, _4, _5, /* sparse = */ false), false);
+    factory.registerLayout("sparse_hashed", std::bind(create_layout, _1, _2, _3, _4, _5, /* sparse = */ true), false);
 }

 }

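The `hashed` and `sparse_hashed` layouts now share one factory lambda; `std::bind` fixes the trailing `sparse` flag while leaving the five loader-supplied arguments open. A minimal sketch of the same adaptation technique, with simplified types and hypothetical names:

```cpp
#include <functional>
#include <iostream>
#include <string>

// A "factory" registry that expects callbacks taking two arguments.
using Creator = std::function<void(const std::string & /*name*/, int /*size*/)>;

void createDictionary(const std::string & name, int size, bool sparse)
{
    std::cout << name << " size=" << size << " sparse=" << sparse << '\n';
}

int main()
{
    using namespace std::placeholders;
    // Bind the trailing flag; _1 and _2 stay open for the caller.
    Creator hashed = std::bind(createDictionary, _1, _2, /* sparse = */ false);
    Creator sparse_hashed = std::bind(createDictionary, _1, _2, /* sparse = */ true);
    hashed("d1", 10);        // d1 size=10 sparse=0
    sparse_hashed("d2", 20); // d2 size=20 sparse=1
}
```

This keeps a single code path for both layouts instead of re-deriving the flag from the layout name inside the lambda, as the removed `const bool sparse = name == "sparse_hashed";` line did.
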
@@ -26,6 +26,7 @@ class HashedDictionary final : public IDictionary
 {
 public:
     HashedDictionary(
+        const std::string & database_,
         const std::string & name_,
         const DictionaryStructure & dict_struct_,
         DictionarySourcePtr source_ptr_,
@@ -34,7 +35,9 @@ public:
         bool sparse_,
         BlockPtr saved_block_ = nullptr);

-    std::string getName() const override { return name; }
+    const std::string & getDatabase() const override { return database; }
+    const std::string & getName() const override { return name; }
+    const std::string & getFullName() const override { return full_name; }

     std::string getTypeName() const override { return sparse ? "SparseHashed" : "Hashed"; }

@@ -50,7 +53,7 @@ public:

     std::shared_ptr<const IExternalLoadable> clone() const override
     {
-        return std::make_shared<HashedDictionary>(name, dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty, sparse, saved_block);
+        return std::make_shared<HashedDictionary>(database, name, dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty, sparse, saved_block);
     }

     const IDictionarySource * getSource() const override { return source_ptr.get(); }
@@ -262,7 +265,9 @@ private:
     template <typename ChildType, typename AncestorType>
     void isInImpl(const ChildType & child_ids, const AncestorType & ancestor_ids, PaddedPODArray<UInt8> & out) const;

+    const std::string database;
     const std::string name;
+    const std::string full_name;
     const DictionaryStructure dict_struct;
     const DictionarySourcePtr source_ptr;
     const DictionaryLifetime dict_lifetime;

@@ -25,6 +25,11 @@ struct IDictionaryBase : public IExternalLoadable
 {
     using Key = UInt64;

+    virtual const std::string & getDatabase() const = 0;
+    virtual const std::string & getName() const = 0;
+    virtual const std::string & getFullName() const = 0;
+    const std::string & getLoadableName() const override { return getFullName(); }
+
     virtual std::string getTypeName() const = 0;

     virtual size_t getBytesAllocated() const = 0;

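This base-interface hunk is what ties the per-dictionary changes together: the external loader identifies every loadable object by `getLoadableName()`, and dictionaries now route that to their fully qualified name. A trimmed-down sketch of the delegation pattern (types and values hypothetical):

```cpp
#include <iostream>
#include <string>

// The loader tracks objects by one "loadable name"; dictionaries
// define it as the database-qualified name, as the hunk above does.
struct Loadable
{
    virtual const std::string & getLoadableName() const = 0;
    virtual ~Loadable() = default;
};

struct Dictionary : Loadable
{
    std::string full_name = "test.dict1";
    const std::string & getLoadableName() const override { return full_name; }
};

int main()
{
    Dictionary d;
    std::cout << d.getLoadableName() << '\n'; // test.dict1
}
```
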
@@ -68,12 +68,15 @@ static bool operator<(const RangeHashedDictionary::Range & left, const RangeHash


 RangeHashedDictionary::RangeHashedDictionary(
-    const std::string & dictionary_name_,
+    const std::string & database_,
+    const std::string & name_,
     const DictionaryStructure & dict_struct_,
     DictionarySourcePtr source_ptr_,
     const DictionaryLifetime dict_lifetime_,
     bool require_nonempty_)
-    : dictionary_name{dictionary_name_}
+    : database(database_)
+    , name(name_)
+    , full_name{database_.empty() ? name_ : (database_ + "." + name_)}
     , dict_struct(dict_struct_)
     , source_ptr{std::move(source_ptr_)}
     , dict_lifetime(dict_lifetime_)
@@ -156,7 +159,7 @@ void RangeHashedDictionary::createAttributes()
         attributes.push_back(createAttributeWithType(attribute.underlying_type, attribute.null_value));

         if (attribute.hierarchical)
-            throw Exception{dictionary_name + ": hierarchical attributes not supported by " + getName() + " dictionary.",
+            throw Exception{full_name + ": hierarchical attributes not supported by " + getName() + " dictionary.",
                             ErrorCodes::BAD_ARGUMENTS};
     }
 }
@@ -207,7 +210,7 @@ void RangeHashedDictionary::loadData()
     stream->readSuffix();

     if (require_nonempty && 0 == element_count)
-        throw Exception{dictionary_name + ": dictionary source is empty and 'require_nonempty' property is set.",
+        throw Exception{full_name + ": dictionary source is empty and 'require_nonempty' property is set.",
                         ErrorCodes::DICTIONARY_IS_EMPTY};
 }

@@ -520,7 +523,7 @@ const RangeHashedDictionary::Attribute & RangeHashedDictionary::getAttribute(con
 {
     const auto it = attribute_index_by_name.find(attribute_name);
     if (it == std::end(attribute_index_by_name))
-        throw Exception{dictionary_name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};
+        throw Exception{full_name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};

     return attributes[it->second];
 }
@@ -674,7 +677,7 @@ BlockInputStreamPtr RangeHashedDictionary::getBlockInputStream(const Names & col

 void registerDictionaryRangeHashed(DictionaryFactory & factory)
 {
-    auto create_layout = [=](const std::string & name,
+    auto create_layout = [=](const std::string & full_name,
                              const DictionaryStructure & dict_struct,
                              const Poco::Util::AbstractConfiguration & config,
                              const std::string & config_prefix,
@@ -684,12 +687,14 @@ void registerDictionaryRangeHashed(DictionaryFactory & factory)
             throw Exception{"'key' is not supported for dictionary of layout 'range_hashed'", ErrorCodes::UNSUPPORTED_METHOD};

         if (!dict_struct.range_min || !dict_struct.range_max)
-            throw Exception{name + ": dictionary of layout 'range_hashed' requires .structure.range_min and .structure.range_max",
+            throw Exception{full_name + ": dictionary of layout 'range_hashed' requires .structure.range_min and .structure.range_max",
                             ErrorCodes::BAD_ARGUMENTS};

+        const String database = config.getString(config_prefix + ".database", "");
+        const String name = config.getString(config_prefix + ".name");
         const DictionaryLifetime dict_lifetime{config, config_prefix + ".lifetime"};
         const bool require_nonempty = config.getBool(config_prefix + ".require_nonempty", false);
-        return std::make_unique<RangeHashedDictionary>(name, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty);
+        return std::make_unique<RangeHashedDictionary>(database, name, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty);
     };
     factory.registerLayout("range_hashed", create_layout, false);
 }

@@ -18,13 +18,16 @@ class RangeHashedDictionary final : public IDictionaryBase
 {
 public:
     RangeHashedDictionary(
-        const std::string & dictionary_name_,
+        const std::string & database_,
+        const std::string & name_,
         const DictionaryStructure & dict_struct_,
         DictionarySourcePtr source_ptr_,
         const DictionaryLifetime dict_lifetime_,
         bool require_nonempty_);

-    std::string getName() const override { return dictionary_name; }
+    const std::string & getDatabase() const override { return database; }
+    const std::string & getName() const override { return name; }
+    const std::string & getFullName() const override { return full_name; }

     std::string getTypeName() const override { return "RangeHashed"; }

@@ -40,7 +43,7 @@ public:

     std::shared_ptr<const IExternalLoadable> clone() const override
     {
-        return std::make_shared<RangeHashedDictionary>(dictionary_name, dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty);
+        return std::make_shared<RangeHashedDictionary>(database, name, dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty);
     }

     const IDictionarySource * getSource() const override { return source_ptr.get(); }
@@ -208,7 +211,9 @@ private:

     friend struct RangeHashedDIctionaryCallGetBlockInputStreamImpl;

-    const std::string dictionary_name;
+    const std::string database;
+    const std::string name;
+    const std::string full_name;
     const DictionaryStructure dict_struct;
     const DictionarySourcePtr source_ptr;
     const DictionaryLifetime dict_lifetime;

@@ -35,12 +35,15 @@ namespace ErrorCodes
 }

 TrieDictionary::TrieDictionary(
+    const std::string & database_,
     const std::string & name_,
     const DictionaryStructure & dict_struct_,
     DictionarySourcePtr source_ptr_,
     const DictionaryLifetime dict_lifetime_,
     bool require_nonempty_)
-    : name{name_}
+    : database(database_)
+    , name(name_)
+    , full_name{database_.empty() ? name_ : (database_ + "." + name_)}
     , dict_struct(dict_struct_)
     , source_ptr{std::move(source_ptr_)}
     , dict_lifetime(dict_lifetime_)
@@ -75,7 +78,7 @@ TrieDictionary::~TrieDictionary()
         validateKeyTypes(key_types); \
 \
         const auto & attribute = getAttribute(attribute_name); \
-        checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
+        checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
 \
         const auto null_value = std::get<TYPE>(attribute.null_values); \
 \
@@ -107,7 +110,7 @@ void TrieDictionary::getString(
     validateKeyTypes(key_types);

     const auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     const auto & null_value = StringRef{std::get<String>(attribute.null_values)};

@@ -129,7 +132,7 @@ void TrieDictionary::getString(
         validateKeyTypes(key_types); \
 \
         const auto & attribute = getAttribute(attribute_name); \
-        checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
+        checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
 \
         getItemsImpl<TYPE, TYPE>( \
             attribute, \
@@ -163,7 +166,7 @@ void TrieDictionary::getString(
     validateKeyTypes(key_types);

     const auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     getItemsImpl<StringRef, StringRef>(
         attribute,
@@ -183,7 +186,7 @@ void TrieDictionary::getString(
         validateKeyTypes(key_types); \
 \
         const auto & attribute = getAttribute(attribute_name); \
-        checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
+        checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::ut##TYPE); \
 \
         getItemsImpl<TYPE, TYPE>( \
             attribute, key_columns, [&](const size_t row, const auto value) { out[row] = value; }, [&](const size_t) { return def; }); \
@@ -214,7 +217,7 @@ void TrieDictionary::getString(
     validateKeyTypes(key_types);

     const auto & attribute = getAttribute(attribute_name);
-    checkAttributeType(name, attribute_name, attribute.type, AttributeUnderlyingType::utString);
+    checkAttributeType(full_name, attribute_name, attribute.type, AttributeUnderlyingType::utString);

     getItemsImpl<StringRef, StringRef>(
         attribute,
@@ -291,7 +294,7 @@ void TrieDictionary::createAttributes()
         attributes.push_back(createAttributeWithType(attribute.underlying_type, attribute.null_value));

         if (attribute.hierarchical)
-            throw Exception{name + ": hierarchical attributes not supported for dictionary of type " + getTypeName(),
+            throw Exception{full_name + ": hierarchical attributes not supported for dictionary of type " + getTypeName(),
                             ErrorCodes::TYPE_MISMATCH};
     }
 }
@@ -337,7 +340,7 @@ void TrieDictionary::loadData()
     stream->readSuffix();

     if (require_nonempty && 0 == element_count)
-        throw Exception{name + ": dictionary source is empty and 'require_nonempty' property is set.", ErrorCodes::DICTIONARY_IS_EMPTY};
+        throw Exception{full_name + ": dictionary source is empty and 'require_nonempty' property is set.", ErrorCodes::DICTIONARY_IS_EMPTY};
 }

 template <typename T>
@@ -627,7 +630,7 @@ const TrieDictionary::Attribute & TrieDictionary::getAttribute(const std::string
 {
     const auto it = attribute_index_by_name.find(attribute_name);
     if (it == std::end(attribute_index_by_name))
-        throw Exception{name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};
+        throw Exception{full_name + ": no such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};

     return attributes[it->second];
 }
@@ -767,7 +770,7 @@ BlockInputStreamPtr TrieDictionary::getBlockInputStream(const Names & column_nam

 void registerDictionaryTrie(DictionaryFactory & factory)
 {
-    auto create_layout = [=](const std::string & name,
+    auto create_layout = [=](const std::string &,
                              const DictionaryStructure & dict_struct,
                              const Poco::Util::AbstractConfiguration & config,
                              const std::string & config_prefix,
@@ -776,10 +779,12 @@ void registerDictionaryTrie(DictionaryFactory & factory)
         if (!dict_struct.key)
             throw Exception{"'key' is required for dictionary of layout 'ip_trie'", ErrorCodes::BAD_ARGUMENTS};

+        const String database = config.getString(config_prefix + ".database", "");
+        const String name = config.getString(config_prefix + ".name");
         const DictionaryLifetime dict_lifetime{config, config_prefix + ".lifetime"};
         const bool require_nonempty = config.getBool(config_prefix + ".require_nonempty", false);
         // This is specialised trie for storing IPv4 and IPv6 prefixes.
-        return std::make_unique<TrieDictionary>(name, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty);
+        return std::make_unique<TrieDictionary>(database, name, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty);
     };
     factory.registerLayout("ip_trie", create_layout, true);
 }

@@ -23,6 +23,7 @@ class TrieDictionary final : public IDictionaryBase
 {
 public:
     TrieDictionary(
+        const std::string & database_,
         const std::string & name_,
         const DictionaryStructure & dict_struct_,
         DictionarySourcePtr source_ptr_,
@@ -33,7 +34,9 @@ public:

     std::string getKeyDescription() const { return key_description; }

-    std::string getName() const override { return name; }
+    const std::string & getDatabase() const override { return database; }
+    const std::string & getName() const override { return name; }
+    const std::string & getFullName() const override { return full_name; }

     std::string getTypeName() const override { return "Trie"; }

@@ -49,7 +52,7 @@ public:

     std::shared_ptr<const IExternalLoadable> clone() const override
     {
-        return std::make_shared<TrieDictionary>(name, dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty);
+        return std::make_shared<TrieDictionary>(database, name, dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty);
     }

     const IDictionarySource * getSource() const override { return source_ptr.get(); }
@@ -232,7 +235,9 @@ private:

     Columns getKeyColumns() const;

+    const std::string database;
     const std::string name;
+    const std::string full_name;
     const DictionaryStructure dict_struct;
     const DictionarySourcePtr source_ptr;
     const DictionaryLifetime dict_lifetime;

@@ -421,7 +421,7 @@ void checkPrimaryKey(const NamesToTypeNames & all_attrs, const Names & key_attrs
 }


-DictionaryConfigurationPtr getDictionaryConfigurationFromAST(const ASTCreateQuery & query, const String & database_name)
+DictionaryConfigurationPtr getDictionaryConfigurationFromAST(const ASTCreateQuery & query)
 {
     checkAST(query);

@@ -434,10 +434,14 @@ DictionaryConfigurationPtr getDictionaryConfigurationFromAST(const ASTCreateQuer

     AutoPtr<Poco::XML::Element> name_element(xml_document->createElement("name"));
     current_dictionary->appendChild(name_element);
-    String full_name = (!database_name.empty() ? database_name : query.database) + "." + query.table;
-    AutoPtr<Text> name(xml_document->createTextNode(full_name));
+    AutoPtr<Text> name(xml_document->createTextNode(query.table));
     name_element->appendChild(name);

+    AutoPtr<Poco::XML::Element> database_element(xml_document->createElement("database"));
+    current_dictionary->appendChild(database_element);
+    AutoPtr<Text> database(xml_document->createTextNode(query.database));
+    database_element->appendChild(database);
+
     AutoPtr<Element> structure_element(xml_document->createElement("structure"));
     current_dictionary->appendChild(structure_element);
     Names pk_attrs = getPrimaryKeyColumns(query.dictionary->primary_key);

@@ -10,6 +10,6 @@ using DictionaryConfigurationPtr = Poco::AutoPtr<Poco::Util::AbstractConfigurati
 /// Convert dictionary AST to Poco::AbstractConfiguration
 /// This function is necessary because all loadable objects configuration are Poco::AbstractConfiguration
 /// Can throw exception if query is ill-formed
-DictionaryConfigurationPtr getDictionaryConfigurationFromAST(const ASTCreateQuery & query, const String & database_name = {});
+DictionaryConfigurationPtr getDictionaryConfigurationFromAST(const ASTCreateQuery & query);

 }

@@ -57,7 +57,8 @@ TEST(ConvertDictionaryAST, SimpleDictConfiguration)
     DictionaryConfigurationPtr config = getDictionaryConfigurationFromAST(*create);

     /// name
-    EXPECT_EQ(config->getString("dictionary.name"), "test.dict1");
+    EXPECT_EQ(config->getString("dictionary.database"), "test");
+    EXPECT_EQ(config->getString("dictionary.name"), "dict1");

     /// lifetime
     EXPECT_EQ(config->getInt("dictionary.lifetime.min"), 1);

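As the updated test asserts, the AST converter now emits `database` and `name` as separate configuration keys instead of one dotted name. A self-contained sketch of reading such a configuration back with Poco (the input XML is illustrative, not the exact generated document):

```cpp
#include <Poco/AutoPtr.h>
#include <Poco/Util/XMLConfiguration.h>
#include <iostream>
#include <sstream>

int main()
{
    // Hypothetical minimal document with the two separate elements.
    std::istringstream xml(
        "<dictionary>"
        "<database>test</database>"
        "<name>dict1</name>"
        "</dictionary>");
    Poco::AutoPtr<Poco::Util::XMLConfiguration> config(new Poco::Util::XMLConfiguration(xml));
    // Keys are relative to the root element, so consumers compose the
    // qualified name themselves when they need it.
    std::cout << config->getString("database") << "." << config->getString("name") << '\n';
    // prints: test.dict1
}
```
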
@@ -127,10 +127,10 @@ private:
         auto dict = dictionaries_loader.getDictionary(dict_name_col->getValue<String>());
         const auto dict_ptr = dict.get();

-        if (!context.hasDictionaryAccessRights(dict_ptr->getName()))
+        if (!context.hasDictionaryAccessRights(dict_ptr->getFullName()))
         {
             throw Exception{"For function " + getName() + ", cannot access dictionary "
-                + dict->getName() + " on database " + context.getCurrentDatabase(), ErrorCodes::DICTIONARY_ACCESS_DENIED};
+                + dict->getFullName() + " on database " + context.getCurrentDatabase(), ErrorCodes::DICTIONARY_ACCESS_DENIED};
         }

         if (!executeDispatchSimple<FlatDictionary>(block, arguments, result, dict_ptr) &&
@@ -302,10 +302,10 @@ private:
         auto dict = dictionaries_loader.getDictionary(dict_name_col->getValue<String>());
         const auto dict_ptr = dict.get();

-        if (!context.hasDictionaryAccessRights(dict_ptr->getName()))
+        if (!context.hasDictionaryAccessRights(dict_ptr->getFullName()))
         {
             throw Exception{"For function " + getName() + ", cannot access dictionary "
-                + dict->getName() + " on database " + context.getCurrentDatabase(), ErrorCodes::DICTIONARY_ACCESS_DENIED};
+                + dict->getFullName() + " on database " + context.getCurrentDatabase(), ErrorCodes::DICTIONARY_ACCESS_DENIED};
         }

         if (!executeDispatch<FlatDictionary>(block, arguments, result, dict_ptr) &&
@@ -488,10 +488,10 @@ private:
         auto dict = dictionaries_loader.getDictionary(dict_name_col->getValue<String>());
         const auto dict_ptr = dict.get();

-        if (!context.hasDictionaryAccessRights(dict_ptr->getName()))
+        if (!context.hasDictionaryAccessRights(dict_ptr->getFullName()))
         {
             throw Exception{"For function " + getName() + ", cannot access dictionary "
-                + dict->getName() + " on database " + context.getCurrentDatabase(), ErrorCodes::DICTIONARY_ACCESS_DENIED};
+                + dict->getFullName() + " on database " + context.getCurrentDatabase(), ErrorCodes::DICTIONARY_ACCESS_DENIED};
         }

         if (!executeDispatch<FlatDictionary>(block, arguments, result, dict_ptr) &&

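All three `dictGet*` dispatch sites now authorize against the fully qualified name, so an access-control entry for `test.dict1` can no longer be confused with a bare `dict1`. A simplified sketch of the idea (names hypothetical; the real check is `Context::hasDictionaryAccessRights`):

```cpp
#include <set>
#include <stdexcept>
#include <string>

// Access lists are keyed by the fully qualified dictionary name,
// so both the lookup and the error message must use it.
struct AccessControl
{
    std::set<std::string> allowed;

    void check(const std::string & full_name) const
    {
        if (!allowed.count(full_name))
            throw std::runtime_error("cannot access dictionary " + full_name);
    }
};

int main()
{
    AccessControl acl{{"test.dict1"}};
    acl.check("test.dict1"); // ok
    // acl.check("dict1");   // would throw: entries are fully qualified
}
```
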
@@ -12,6 +12,7 @@
 #include "IFunctionImpl.h"
 #include <Common/intExp.h>
 #include <Common/assert_cast.h>
+#include <Core/Defines.h>
 #include <cmath>
 #include <type_traits>
 #include <array>
@@ -702,7 +703,7 @@ private:
     }

     template <typename Container>
-    void executeImplNumToNum(const Container & src, Container & dst, const Array & boundaries)
+    void NO_INLINE executeImplNumToNum(const Container & src, Container & dst, const Array & boundaries)
     {
         using ValueType = typename Container::value_type;
         std::vector<ValueType> boundary_values(boundaries.size());
@@ -714,20 +715,53 @@ private:

         size_t size = src.size();
         dst.resize(size);
-        for (size_t i = 0; i < size; ++i)
-        {
-            auto it = std::upper_bound(boundary_values.begin(), boundary_values.end(), src[i]);
-            if (it == boundary_values.end())
-            {
-                dst[i] = boundary_values.back();
-            }
-            else if (it == boundary_values.begin())
-            {
-                dst[i] = boundary_values.front();
-            }
-            else
-            {
-                dst[i] = *(it - 1);
-            }
-        }
+
+        if (boundary_values.size() < 32) /// Just a guess
+        {
+            /// Linear search with value on previous iteration as a hint.
+            /// Not optimal if the size of list is large and distribution of values is uniform random.
+
+            auto begin = boundary_values.begin();
+            auto end = boundary_values.end();
+            auto it = begin + (end - begin) / 2;
+
+            for (size_t i = 0; i < size; ++i)
+            {
+                auto value = src[i];
+
+                if (*it < value)
+                {
+                    while (it != end && *it <= value)
+                        ++it;
+                    if (it != begin)
+                        --it;
+                }
+                else
+                {
+                    while (*it > value && it != begin)
+                        --it;
+                }
+
+                dst[i] = *it;
+            }
+        }
+        else
+        {
+            for (size_t i = 0; i < size; ++i)
+            {
+                auto it = std::upper_bound(boundary_values.begin(), boundary_values.end(), src[i]);
+                if (it == boundary_values.end())
+                {
+                    dst[i] = boundary_values.back();
+                }
+                else if (it == boundary_values.begin())
+                {
+                    dst[i] = boundary_values.front();
+                }
+                else
+                {
+                    dst[i] = *(it - 1);
+                }
+            }
+        }
     }

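The rewritten loop exploits locality: for short boundary lists (fewer than 32 entries, "just a guess" per the in-code comment) it keeps the iterator from the previous row as a hint and walks linearly, which beats a fresh binary search per row when consecutive inputs are close; long lists still use `std::upper_bound`. A standalone sketch of the small-list branch under simplified types (function name hypothetical):

```cpp
#include <cassert>
#include <vector>

// Round each input down to the nearest boundary (boundaries sorted
// ascending; the first boundary acts as the floor). The iterator
// position carries over between rows, mirroring the hunk above.
std::vector<int> roundDownWithHint(const std::vector<int> & src, const std::vector<int> & bounds)
{
    std::vector<int> dst(src.size());
    auto begin = bounds.begin();
    auto end = bounds.end();
    auto it = begin + (end - begin) / 2;

    for (size_t i = 0; i < src.size(); ++i)
    {
        int value = src[i];
        if (*it < value)
        {
            while (it != end && *it <= value)
                ++it;
            if (it != begin)
                --it;
        }
        else
        {
            while (*it > value && it != begin)
                --it;
        }
        dst[i] = *it;
    }
    return dst;
}

int main()
{
    assert((roundDownWithHint({5, 6, 11, 2}, {0, 4, 8, 12}) == std::vector<int>{4, 4, 8, 0}));
}
```
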
@@ -12,6 +12,7 @@
 #include <utility>
 #include <vector>
+

 namespace DB
 {
 namespace ErrorCodes
@@ -60,7 +61,7 @@ struct FormatImpl
         UInt64 * index_positions_ptr,
         std::vector<String> & substrings)
     {
-        /// Is current position is after open curly brace.
+        /// Is current position after open curly brace.
         bool is_open_curly = false;
         /// The position of last open token.
         size_t last_open = -1;

@@ -60,7 +60,7 @@ public:

     const ExternalLoadableLifetime & getLifetime() const override;

-    std::string getName() const override { return name; }
+    const std::string & getLoadableName() const override { return name; }

     bool supportUpdates() const override { return true; }

@@ -69,7 +69,7 @@ public:
     std::shared_ptr<const IExternalLoadable> clone() const override;

 private:
-    std::string name;
+    const std::string name;
     std::string model_path;
     std::string lib_path;
     ExternalLoadableLifetime lifetime;

@@ -33,8 +33,6 @@
 #include <Interpreters/UsersManager.h>
 #include <Dictionaries/Embedded/GeoDictionariesLoader.h>
 #include <Interpreters/EmbeddedDictionaries.h>
-#include <Interpreters/ExternalLoaderXMLConfigRepository.h>
-#include <Interpreters/ExternalLoaderDatabaseConfigRepository.h>
 #include <Interpreters/ExternalDictionariesLoader.h>
 #include <Interpreters/ExternalModelsLoader.h>
 #include <Interpreters/ExpressionActions.h>
@@ -1088,7 +1086,6 @@ DatabasePtr Context::detachDatabase(const String & database_name)
 {
     auto lock = getLock();
     auto res = getDatabase(database_name);
-    getExternalDictionariesLoader().removeConfigRepository(database_name);
     shared->databases.erase(database_name);

     return res;
@@ -1436,7 +1433,7 @@ void Context::setMarkCache(size_t cache_size_in_bytes)
     if (shared->mark_cache)
         throw Exception("Mark cache has been already created.", ErrorCodes::LOGICAL_ERROR);

-    shared->mark_cache = std::make_shared<MarkCache>(cache_size_in_bytes, std::chrono::seconds(settings.mark_cache_min_lifetime));
+    shared->mark_cache = std::make_shared<MarkCache>(cache_size_in_bytes);
 }

@@ -72,7 +72,7 @@ bool EmbeddedDictionaries::reloadImpl(const bool throw_on_error, const bool forc

     bool was_exception = false;

-    DictionaryReloader<RegionsHierarchies> reload_regions_hierarchies = [=] (const Poco::Util::AbstractConfiguration & config)
+    DictionaryReloader<RegionsHierarchies> reload_regions_hierarchies = [=, this] (const Poco::Util::AbstractConfiguration & config)
     {
         return geo_dictionaries_loader->reloadRegionsHierarchies(config);
     };
@@ -80,7 +80,7 @@ bool EmbeddedDictionaries::reloadImpl(const bool throw_on_error, const bool forc
     if (!reloadDictionary<RegionsHierarchies>(regions_hierarchies, std::move(reload_regions_hierarchies), throw_on_error, force_reload))
         was_exception = true;

-    DictionaryReloader<RegionsNames> reload_regions_names = [=] (const Poco::Util::AbstractConfiguration & config)
+    DictionaryReloader<RegionsNames> reload_regions_names = [=, this] (const Poco::Util::AbstractConfiguration & config)
     {
         return geo_dictionaries_loader->reloadRegionsNames(config);
     };

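Both lambdas read `geo_dictionaries_loader`, a member, so a bare `[=]` captured `this` implicitly; that implicit capture is deprecated in C++20, and `[=, this]` (legal since C++20, accepted earlier by some compilers as an extension) keeps the by-copy default while naming `this` explicitly. A minimal sketch of the pattern, assuming nothing beyond standard C++:

```cpp
#include <functional>
#include <iostream>

struct Reloader
{
    int generation = 1;

    std::function<int(int)> makeCallback()
    {
        int base = 100;
        // '[=]' alone would capture 'this' implicitly (deprecated in C++20);
        // '[=, this]' copies 'base' and names the 'this' capture explicitly.
        return [=, this](int delta) { return base + generation + delta; };
    }
};

int main()
{
    Reloader r;
    auto cb = r.makeCallback();
    std::cout << cb(5) << '\n'; // 106
}
```
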
@@ -9,6 +9,7 @@ ExternalDictionariesLoader::ExternalDictionariesLoader(Context & context_)
     : ExternalLoader("external dictionary", &Logger::get("ExternalDictionariesLoader"))
     , context(context_)
 {
+    setConfigSettings({"dictionary", "name", "database"});
     enableAsyncLoading(true);
     enablePeriodicUpdates(true);
 }
@@ -23,11 +24,4 @@ ExternalLoader::LoadablePtr ExternalDictionariesLoader::create(
     bool dictionary_from_database = !repository_name.empty();
     return DictionaryFactory::instance().create(name, config, key_in_config, context, dictionary_from_database);
 }

-void ExternalDictionariesLoader::addConfigRepository(
-    const std::string & repository_name, std::unique_ptr<IExternalLoaderConfigRepository> config_repository)
-{
-    ExternalLoader::addConfigRepository(repository_name, std::move(config_repository), {"dictionary", "name"});
-}
-
 }

@@ -29,10 +29,6 @@ public:
         return std::static_pointer_cast<const IDictionaryBase>(tryLoad(name));
     }

-    void addConfigRepository(
-        const std::string & repository_name,
-        std::unique_ptr<IExternalLoaderConfigRepository> config_repository);
-
 protected:
     LoadablePtr create(const std::string & name, const Poco::Util::AbstractConfiguration & config,
         const std::string & key_in_config, const std::string & repository_name) const override;

@@ -92,6 +92,7 @@ struct ExternalLoader::ObjectConfig
     Poco::AutoPtr<Poco::Util::AbstractConfiguration> config;
     String key_in_config;
     String repository_name;
+    bool from_temp_repository = false;
     String path;
 };

@@ -107,26 +108,30 @@ public:
     }
     ~LoadablesConfigReader() = default;

-    using RepositoryPtr = std::unique_ptr<IExternalLoaderConfigRepository>;
+    using Repository = IExternalLoaderConfigRepository;

-    void addConfigRepository(const String & repository_name, RepositoryPtr repository, const ExternalLoaderConfigSettings & settings)
+    void addConfigRepository(std::unique_ptr<Repository> repository)
     {
         std::lock_guard lock{mutex};
-        RepositoryInfo repository_info{std::move(repository), settings, {}};
-        repositories.emplace(repository_name, std::move(repository_info));
+        auto * ptr = repository.get();
+        repositories.emplace(ptr, RepositoryInfo{std::move(repository), {}});
         need_collect_object_configs = true;
     }

-    RepositoryPtr removeConfigRepository(const String & repository_name)
+    void removeConfigRepository(Repository * repository)
     {
         std::lock_guard lock{mutex};
-        auto it = repositories.find(repository_name);
+        auto it = repositories.find(repository);
         if (it == repositories.end())
-            return nullptr;
-        auto repository = std::move(it->second.repository);
+            return;
         repositories.erase(it);
         need_collect_object_configs = true;
-        return repository;
     }

+    void setConfigSettings(const ExternalLoaderConfigSettings & settings_)
+    {
+        std::lock_guard lock{mutex};
+        settings = settings_;
+    }
+
     using ObjectConfigsPtr = std::shared_ptr<const std::unordered_map<String /* object's name */, ObjectConfig>>;
@@ -170,8 +175,7 @@ private:

     struct RepositoryInfo
     {
-        RepositoryPtr repository;
-        ExternalLoaderConfigSettings settings;
+        std::unique_ptr<Repository> repository;
         std::unordered_map<String /* path */, FileInfo> files;
     };

@@ -179,18 +183,10 @@ private:
     /// Checks last modification times of files and read those files which are new or changed.
     void readRepositories(const std::optional<String> & only_repository_name = {}, const std::optional<String> & only_path = {})
     {
-        Strings repository_names;
-        if (only_repository_name)
-        {
-            if (repositories.count(*only_repository_name))
-                repository_names.push_back(*only_repository_name);
-        }
-        else
-            boost::copy(repositories | boost::adaptors::map_keys, std::back_inserter(repository_names));
-
-        for (const auto & repository_name : repository_names)
-        {
-            auto & repository_info = repositories[repository_name];
+        for (auto & [repository, repository_info] : repositories)
+        {
+            if (only_repository_name && (repository->getName() != *only_repository_name))
+                continue;

             for (auto & file_info : repository_info.files | boost::adaptors::map_values)
                 file_info.in_use = false;

@@ -198,11 +194,11 @@ private:
             Strings existing_paths;
             if (only_path)
             {
-                if (repository_info.repository->exists(*only_path))
+                if (repository->exists(*only_path))
                     existing_paths.push_back(*only_path);
             }
             else
-                boost::copy(repository_info.repository->getAllLoadablesDefinitionNames(), std::back_inserter(existing_paths));
+                boost::copy(repository->getAllLoadablesDefinitionNames(), std::back_inserter(existing_paths));

             for (const auto & path : existing_paths)
             {

@@ -210,13 +206,13 @@ private:
                 if (it != repository_info.files.end())
                 {
                     FileInfo & file_info = it->second;
-                    if (readFileInfo(file_info, *repository_info.repository, path, repository_info.settings))
+                    if (readFileInfo(file_info, *repository, path))
                         need_collect_object_configs = true;
                 }
                 else
                 {
                     FileInfo file_info;
-                    if (readFileInfo(file_info, *repository_info.repository, path, repository_info.settings))
+                    if (readFileInfo(file_info, *repository, path))
                     {
                         repository_info.files.emplace(path, std::move(file_info));
                         need_collect_object_configs = true;
@@ -249,8 +245,7 @@ private:
     bool readFileInfo(
         FileInfo & file_info,
         IExternalLoaderConfigRepository & repository,
-        const String & path,
-        const ExternalLoaderConfigSettings & settings) const
+        const String & path) const
     {
         try
         {

@@ -293,7 +288,13 @@ private:
                     continue;
                 }

-                object_configs_from_file.emplace_back(object_name, ObjectConfig{file_contents, key, {}, {}});
+                String database;
+                if (!settings.external_database.empty())
+                    database = file_contents->getString(key + "." + settings.external_database, "");
+                if (!database.empty())
+                    object_name = database + "." + object_name;
+
+                object_configs_from_file.emplace_back(object_name, ObjectConfig{file_contents, key, {}, {}, {}});
             }

             file_info.objects = std::move(object_configs_from_file);
@@ -318,7 +319,7 @@ private:
         // Generate new result.
         auto new_configs = std::make_shared<std::unordered_map<String /* object's name */, ObjectConfig>>();

-        for (const auto & [repository_name, repository_info] : repositories)
+        for (const auto & [repository, repository_info] : repositories)
         {
             for (const auto & [path, file_info] : repository_info.files)
             {

@@ -328,19 +329,19 @@ private:
                 if (already_added_it == new_configs->end())
                 {
                     auto & new_config = new_configs->emplace(object_name, object_config).first->second;
-                    new_config.repository_name = repository_name;
+                    new_config.from_temp_repository = repository->isTemporary();
+                    new_config.repository_name = repository->getName();
                     new_config.path = path;
                 }
                 else
                 {
                     const auto & already_added = already_added_it->second;
-                    if (!startsWith(repository_name, IExternalLoaderConfigRepository::INTERNAL_REPOSITORY_NAME_PREFIX) &&
-                        !startsWith(already_added.repository_name, IExternalLoaderConfigRepository::INTERNAL_REPOSITORY_NAME_PREFIX))
+                    if (!already_added.from_temp_repository && !repository->isTemporary())
                     {
                         LOG_WARNING(
                             log,
                             type_name << " '" << object_name << "' is found "
-                                << (((path == already_added.path) && repository_name == already_added.repository_name)
+                                << (((path == already_added.path) && (repository->getName() == already_added.repository_name))
                                     ? ("twice in the same file '" + path + "'")
                                     : ("both in file '" + already_added.path + "' and '" + path + "'")));
                     }

@@ -356,7 +357,8 @@ private:
     Logger * log;

     std::mutex mutex;
-    std::unordered_map<String, RepositoryInfo> repositories;
+    ExternalLoaderConfigSettings settings;
+    std::unordered_map<Repository *, RepositoryInfo> repositories;
     ObjectConfigsPtr object_configs;
     bool need_collect_object_configs = false;
 };
@@ -613,7 +615,7 @@ public:
             }
             catch (...)
             {
-                tryLogCurrentException(log, "Could not check if " + type_name + " '" + object->getName() + "' was modified");
+                tryLogCurrentException(log, "Could not check if " + type_name + " '" + object->getLoadableName() + "' was modified");
                 /// Cannot check isModified, so update
                 should_update_flag = true;
             }

@@ -1151,20 +1153,23 @@ ExternalLoader::ExternalLoader(const String & type_name_, Logger * log_)

 ExternalLoader::~ExternalLoader() = default;

-void ExternalLoader::addConfigRepository(
-    const std::string & repository_name,
-    std::unique_ptr<IExternalLoaderConfigRepository> config_repository,
-    const ExternalLoaderConfigSettings & config_settings)
+ext::scope_guard ExternalLoader::addConfigRepository(std::unique_ptr<IExternalLoaderConfigRepository> repository)
 {
-    config_files_reader->addConfigRepository(repository_name, std::move(config_repository), config_settings);
-    reloadConfig(repository_name);
+    auto * ptr = repository.get();
+    String name = ptr->getName();
+    config_files_reader->addConfigRepository(std::move(repository));
+    reloadConfig(name);
+
+    return [this, ptr, name]()
+    {
+        config_files_reader->removeConfigRepository(ptr);
+        reloadConfig(name);
+    };
 }

-std::unique_ptr<IExternalLoaderConfigRepository> ExternalLoader::removeConfigRepository(const std::string & repository_name)
+void ExternalLoader::setConfigSettings(const ExternalLoaderConfigSettings & settings)
 {
-    auto repository = config_files_reader->removeConfigRepository(repository_name);
-    reloadConfig(repository_name);
-    return repository;
+    config_files_reader->setConfigSettings(settings);
 }

 void ExternalLoader::enableAlwaysLoadEverything(bool enable)
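The new registration API above returns an `ext::scope_guard`, so deregistration is tied to the guard's lifetime instead of a second call that callers could forget. A standalone sketch of the pattern with a simplified guard type (illustrative only, not the real ext::scope_guard):

    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>

    // Simplified stand-in for ext::scope_guard: runs a callback on destruction.
    class ScopeGuard
    {
    public:
        explicit ScopeGuard(std::function<void()> fn_) : fn(std::move(fn_)) {}
        ScopeGuard(ScopeGuard && other) : fn(std::exchange(other.fn, nullptr)) {}
        ~ScopeGuard() { if (fn) fn(); }
    private:
        std::function<void()> fn;
    };

    // Registration hands back the deregistration as a guard, so callers cannot
    // forget the removeConfigRepository() half of the old paired API.
    ScopeGuard addRepository(const std::string & name)
    {
        std::cout << "registered " << name << '\n';
        return ScopeGuard{[name] { std::cout << "removed " << name << '\n'; }};
    }

    int main()
    {
        auto guard = addRepository("db.dict"); // "removed db.dict" printed at scope exit
    }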
@@ -7,6 +7,7 @@
 #include <Interpreters/IExternalLoadable.h>
 #include <Interpreters/IExternalLoaderConfigRepository.h>
 #include <common/logger_useful.h>
+#include <ext/scope_guard.h>

 namespace DB

@@ -24,9 +25,9 @@ struct ExternalLoaderConfigSettings
 {
     std::string external_config;
     std::string external_name;
+    std::string external_database;
 };

 /** Interface for manage user-defined objects.
  * Monitors configuration file and automatically reloads objects in separate threads.
  * The monitoring thread wakes up every 'check_period_sec' seconds and checks

@@ -87,13 +88,9 @@ public:
     virtual ~ExternalLoader();

     /// Adds a repository which will be used to read configurations from.
-    void addConfigRepository(
-        const std::string & repository_name,
-        std::unique_ptr<IExternalLoaderConfigRepository> config_repository,
-        const ExternalLoaderConfigSettings & config_settings);
+    ext::scope_guard addConfigRepository(std::unique_ptr<IExternalLoaderConfigRepository> config_repository);

-    /// Removes a repository which were used to read configurations.
-    std::unique_ptr<IExternalLoaderConfigRepository> removeConfigRepository(const std::string & repository_name);
+    void setConfigSettings(const ExternalLoaderConfigSettings & settings_);

     /// Sets whether all the objects from the configuration should be always loaded (even those which are never used).
     void enableAlwaysLoadEverything(bool enable);
@@ -1,4 +1,5 @@
 #include <Interpreters/ExternalLoaderDatabaseConfigRepository.h>
+#include <Interpreters/ExternalDictionariesLoader.h>
 #include <Dictionaries/getDictionaryConfigurationFromAST.h>

 namespace DB
@@ -11,40 +12,46 @@ namespace ErrorCodes

 namespace
 {
-String trimDatabaseName(const std::string & loadable_definition_name, const DatabasePtr database)
+String trimDatabaseName(const std::string & loadable_definition_name, const IDatabase & database)
 {
-    const auto & dbname = database->getDatabaseName();
+    const auto & dbname = database.getDatabaseName();
     if (!startsWith(loadable_definition_name, dbname))
         throw Exception(
-            "Loadable '" + loadable_definition_name + "' is not from database '" + database->getDatabaseName(), ErrorCodes::UNKNOWN_DICTIONARY);
+            "Loadable '" + loadable_definition_name + "' is not from database '" + database.getDatabaseName(), ErrorCodes::UNKNOWN_DICTIONARY);
     ///    dbname.loadable_name
     ///--> remove <---
     return loadable_definition_name.substr(dbname.length() + 1);
 }
 }

-LoadablesConfigurationPtr ExternalLoaderDatabaseConfigRepository::load(const std::string & loadable_definition_name) const
+ExternalLoaderDatabaseConfigRepository::ExternalLoaderDatabaseConfigRepository(IDatabase & database_, const Context & context_)
+    : name(database_.getDatabaseName())
+    , database(database_)
+    , context(context_)
+{
+}
+
+LoadablesConfigurationPtr ExternalLoaderDatabaseConfigRepository::load(const std::string & loadable_definition_name)
 {
-    String dictname = trimDatabaseName(loadable_definition_name, database);
-    return getDictionaryConfigurationFromAST(database->getCreateDictionaryQuery(context, dictname)->as<const ASTCreateQuery &>());
-}
-
-bool ExternalLoaderDatabaseConfigRepository::exists(const std::string & loadable_definition_name) const
-{
-    return database->isDictionaryExist(
-        context, trimDatabaseName(loadable_definition_name, database));
+    String dictname = trimDatabaseName(loadable_definition_name, database);
+    return getDictionaryConfigurationFromAST(database.getCreateDictionaryQuery(context, dictname)->as<const ASTCreateQuery &>());
+}
+
+bool ExternalLoaderDatabaseConfigRepository::exists(const std::string & loadable_definition_name)
+{
+    return database.isDictionaryExist(context, trimDatabaseName(loadable_definition_name, database));
 }

 Poco::Timestamp ExternalLoaderDatabaseConfigRepository::getUpdateTime(const std::string & loadable_definition_name)
 {
-    return database->getObjectMetadataModificationTime(trimDatabaseName(loadable_definition_name, database));
+    return database.getObjectMetadataModificationTime(trimDatabaseName(loadable_definition_name, database));
 }

-std::set<std::string> ExternalLoaderDatabaseConfigRepository::getAllLoadablesDefinitionNames() const
+std::set<std::string> ExternalLoaderDatabaseConfigRepository::getAllLoadablesDefinitionNames()
 {
     std::set<std::string> result;
-    const auto & dbname = database->getDatabaseName();
-    auto itr = database->getDictionariesIterator(context);
+    const auto & dbname = database.getDatabaseName();
+    auto itr = database.getDictionariesIterator(context);
     while (itr && itr->isValid())
     {
         result.insert(dbname + "." + itr->name());
@@ -12,22 +12,21 @@ namespace DB
 class ExternalLoaderDatabaseConfigRepository : public IExternalLoaderConfigRepository
 {
 public:
-    ExternalLoaderDatabaseConfigRepository(const DatabasePtr & database_, const Context & context_)
-        : database(database_)
-        , context(context_)
-    {
-    }
+    ExternalLoaderDatabaseConfigRepository(IDatabase & database_, const Context & context_);

-    std::set<std::string> getAllLoadablesDefinitionNames() const override;
+    const String & getName() const override { return name; }

-    bool exists(const std::string & loadable_definition_name) const override;
+    std::set<std::string> getAllLoadablesDefinitionNames() override;
+
+    bool exists(const std::string & loadable_definition_name) override;

     Poco::Timestamp getUpdateTime(const std::string & loadable_definition_name) override;

-    LoadablesConfigurationPtr load(const std::string & loadable_definition_name) const override;
+    LoadablesConfigurationPtr load(const std::string & loadable_definition_name) override;

 private:
-    DatabasePtr database;
+    const String name;
+    IDatabase & database;
     Context context;
 };
@@ -1,49 +0,0 @@
-#include <Interpreters/ExternalLoaderPresetConfigRepository.h>
-#include <Common/Exception.h>
-#include <boost/range/algorithm/copy.hpp>
-#include <boost/range/adaptor/map.hpp>
-
-
-namespace DB
-{
-namespace ErrorCodes
-{
-    extern const int BAD_ARGUMENTS;
-}
-
-ExternalLoaderPresetConfigRepository::ExternalLoaderPresetConfigRepository(const std::vector<std::pair<String, LoadablesConfigurationPtr>> & preset_)
-{
-    boost::range::copy(preset_, std::inserter(preset, preset.end()));
-}
-
-ExternalLoaderPresetConfigRepository::~ExternalLoaderPresetConfigRepository() = default;
-
-std::set<String> ExternalLoaderPresetConfigRepository::getAllLoadablesDefinitionNames() const
-{
-    std::set<String> paths;
-    boost::range::copy(preset | boost::adaptors::map_keys, std::inserter(paths, paths.end()));
-    return paths;
-}
-
-bool ExternalLoaderPresetConfigRepository::exists(const String& path) const
-{
-    return preset.count(path);
-}
-
-Poco::Timestamp ExternalLoaderPresetConfigRepository::getUpdateTime(const String & path)
-{
-    if (!exists(path))
-        throw Exception("Loadable " + path + " not found", ErrorCodes::BAD_ARGUMENTS);
-    return creation_time;
-}
-
-/// May contain definition about several entities (several dictionaries in one .xml file)
-LoadablesConfigurationPtr ExternalLoaderPresetConfigRepository::load(const String & path) const
-{
-    auto it = preset.find(path);
-    if (it == preset.end())
-        throw Exception("Loadable " + path + " not found", ErrorCodes::BAD_ARGUMENTS);
-    return it->second;
-}
-
-}
@@ -1,28 +0,0 @@
-#pragma once
-
-#include <Core/Types.h>
-#include <unordered_map>
-#include <Interpreters/IExternalLoaderConfigRepository.h>
-#include <Poco/Timestamp.h>
-
-
-namespace DB
-{
-/// A config repository filled with preset loadables used by ExternalLoader.
-class ExternalLoaderPresetConfigRepository : public IExternalLoaderConfigRepository
-{
-public:
-    ExternalLoaderPresetConfigRepository(const std::vector<std::pair<String, LoadablesConfigurationPtr>> & preset_);
-    ~ExternalLoaderPresetConfigRepository() override;
-
-    std::set<String> getAllLoadablesDefinitionNames() const override;
-    bool exists(const String & path) const override;
-    Poco::Timestamp getUpdateTime(const String & path) override;
-    LoadablesConfigurationPtr load(const String & path) const override;
-
-private:
-    std::unordered_map<String, LoadablesConfigurationPtr> preset;
-    Poco::Timestamp creation_time;
-};
-
-}
@@ -0,0 +1,46 @@
+#include <Interpreters/ExternalLoaderTempConfigRepository.h>
+#include <Common/Exception.h>
+
+
+namespace DB
+{
+namespace ErrorCodes
+{
+    extern const int BAD_ARGUMENTS;
+}
+
+
+ExternalLoaderTempConfigRepository::ExternalLoaderTempConfigRepository(const String & repository_name_, const String & path_, const LoadablesConfigurationPtr & config_)
+    : name(repository_name_), path(path_), config(config_) {}
+
+
+std::set<String> ExternalLoaderTempConfigRepository::getAllLoadablesDefinitionNames()
+{
+    std::set<String> paths;
+    paths.insert(path);
+    return paths;
+}
+
+
+bool ExternalLoaderTempConfigRepository::exists(const String & path_)
+{
+    return path == path_;
+}
+
+
+Poco::Timestamp ExternalLoaderTempConfigRepository::getUpdateTime(const String & path_)
+{
+    if (!exists(path_))
+        throw Exception("Loadable " + path_ + " not found", ErrorCodes::BAD_ARGUMENTS);
+    return creation_time;
+}
+
+
+LoadablesConfigurationPtr ExternalLoaderTempConfigRepository::load(const String & path_)
+{
+    if (!exists(path_))
+        throw Exception("Loadable " + path_ + " not found", ErrorCodes::BAD_ARGUMENTS);
+    return config;
+}
+
+}
@@ -0,0 +1,31 @@
+#pragma once
+
+#include <Core/Types.h>
+#include <Interpreters/IExternalLoaderConfigRepository.h>
+#include <Poco/Timestamp.h>
+
+
+namespace DB
+{
+/// A config repository filled with preset loadables used by ExternalLoader.
+class ExternalLoaderTempConfigRepository : public IExternalLoaderConfigRepository
+{
+public:
+    ExternalLoaderTempConfigRepository(const String & repository_name_, const String & path_, const LoadablesConfigurationPtr & config_);
+
+    const String & getName() const override { return name; }
+    bool isTemporary() const override { return true; }
+
+    std::set<String> getAllLoadablesDefinitionNames() override;
+    bool exists(const String & path) override;
+    Poco::Timestamp getUpdateTime(const String & path) override;
+    LoadablesConfigurationPtr load(const String & path) override;
+
+private:
+    String name;
+    String path;
+    LoadablesConfigurationPtr config;
+    Poco::Timestamp creation_time;
+};
+
+}
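A temporary repository serves exactly one path from a preset config; combined with the scope_guard returned by addConfigRepository, it lives only as long as the query that registered it, and its isTemporary() flag suppresses the "found twice" warning during config collection. A runnable toy analogue of the single-path repository (illustrative names; the real class serves a Poco configuration, not a string):

    #include <iostream>
    #include <set>
    #include <stdexcept>
    #include <string>

    // Toy analogue of ExternalLoaderTempConfigRepository: exactly one loadable.
    class TempRepository
    {
    public:
        TempRepository(std::string path_, std::string config_)
            : path(std::move(path_)), config(std::move(config_)) {}

        std::set<std::string> getAllLoadablesDefinitionNames() const { return {path}; }
        bool exists(const std::string & path_) const { return path == path_; }

        std::string load(const std::string & path_) const
        {
            if (!exists(path_))
                throw std::runtime_error("Loadable " + path_ + " not found");
            return config;
        }

    private:
        std::string path;
        std::string config;
    };

    int main()
    {
        TempRepository repo("db.dict1", "<dictionary>...</dictionary>");
        std::cout << repo.load("db.dict1") << '\n';
    }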
@@ -11,13 +11,18 @@

 namespace DB
 {
+ExternalLoaderXMLConfigRepository::ExternalLoaderXMLConfigRepository(
+    const Poco::Util::AbstractConfiguration & main_config_, const std::string & config_key_)
+    : main_config(main_config_), config_key(config_key_)
+{
+}
+
 Poco::Timestamp ExternalLoaderXMLConfigRepository::getUpdateTime(const std::string & definition_entity_name)
 {
     return Poco::File(definition_entity_name).getLastModified();
 }

-std::set<std::string> ExternalLoaderXMLConfigRepository::getAllLoadablesDefinitionNames() const
+std::set<std::string> ExternalLoaderXMLConfigRepository::getAllLoadablesDefinitionNames()
 {
     std::set<std::string> files;

@@ -52,13 +57,13 @@ std::set<std::string> ExternalLoaderXMLConfigRepository::getAllLoadablesDefiniti
     return files;
 }

-bool ExternalLoaderXMLConfigRepository::exists(const std::string & definition_entity_name) const
+bool ExternalLoaderXMLConfigRepository::exists(const std::string & definition_entity_name)
 {
     return Poco::File(definition_entity_name).exists();
 }

 Poco::AutoPtr<Poco::Util::AbstractConfiguration> ExternalLoaderXMLConfigRepository::load(
-    const std::string & config_file) const
+    const std::string & config_file)
 {
     ConfigProcessor config_processor{config_file};
     ConfigProcessor::LoadedConfig preprocessed = config_processor.loadConfig();
@@ -13,26 +13,25 @@ namespace DB
 class ExternalLoaderXMLConfigRepository : public IExternalLoaderConfigRepository
 {
 public:
-    ExternalLoaderXMLConfigRepository(const Poco::Util::AbstractConfiguration & main_config_, const std::string & config_key_)
-        : main_config(main_config_)
-        , config_key(config_key_)
-    {
-    }
+    ExternalLoaderXMLConfigRepository(const Poco::Util::AbstractConfiguration & main_config_, const std::string & config_key_);
+
+    const String & getName() const override { return name; }

     /// Return set of .xml files from path in main_config (config_key)
-    std::set<std::string> getAllLoadablesDefinitionNames() const override;
+    std::set<std::string> getAllLoadablesDefinitionNames() override;

     /// Checks that file with name exists on filesystem
-    bool exists(const std::string & definition_entity_name) const override;
+    bool exists(const std::string & definition_entity_name) override;

     /// Return xml-file modification time via stat call
     Poco::Timestamp getUpdateTime(const std::string & definition_entity_name) override;

     /// May contain definition about several entities (several dictionaries in one .xml file)
-    LoadablesConfigurationPtr load(const std::string & definition_entity_name) const override;
+    LoadablesConfigurationPtr load(const std::string & definition_entity_name) override;

 private:
+    const String name;
+
     /// Main server config (config.xml).
     const Poco::Util::AbstractConfiguration & main_config;
@@ -14,6 +14,7 @@ ExternalModelsLoader::ExternalModelsLoader(Context & context_)
     : ExternalLoader("external model", &Logger::get("ExternalModelsLoader"))
     , context(context_)
 {
+    setConfigSettings({"models", "name", {}});
     enablePeriodicUpdates(true);
 }

@@ -38,9 +39,4 @@ std::shared_ptr<const IExternalLoadable> ExternalModelsLoader::create(
         throw Exception("Unknown model type: " + type, ErrorCodes::INVALID_CONFIG_PARAMETER);
     }
 }
-
-void ExternalModelsLoader::addConfigRepository(const String & name, std::unique_ptr<IExternalLoaderConfigRepository> config_repository)
-{
-    ExternalLoader::addConfigRepository(name, std::move(config_repository), {"models", "name"});
-}
 }
@@ -25,10 +25,6 @@ public:
         return std::static_pointer_cast<const IModel>(load(name));
     }

-    void addConfigRepository(const String & name,
-        std::unique_ptr<IExternalLoaderConfigRepository> config_repository);
-
-
 protected:
     LoadablePtr create(const std::string & name, const Poco::Util::AbstractConfiguration & config,
         const std::string & key_in_config, const std::string & repository_name) const override;
@@ -36,7 +36,7 @@ public:

     virtual const ExternalLoadableLifetime & getLifetime() const = 0;

-    virtual std::string getName() const = 0;
+    virtual const std::string & getLoadableName() const = 0;
     /// True if object can be updated when lifetime exceeded.
     virtual bool supportUpdates() const = 0;
     /// If lifetime exceeded and isModified(), ExternalLoader replace current object with the result of clone().
@@ -1,7 +0,0 @@
-#include <Interpreters/IExternalLoaderConfigRepository.h>
-
-
-namespace DB
-{
-const char * IExternalLoaderConfigRepository::INTERNAL_REPOSITORY_NAME_PREFIX = "\xFF internal repo ";
-}
@@ -4,13 +4,13 @@
 #include <Poco/Util/AbstractConfiguration.h>
 #include <Poco/Timestamp.h>

-#include <atomic>
+#include <memory>
 #include <string>
 #include <set>

 namespace DB
 {

 using LoadablesConfigurationPtr = Poco::AutoPtr<Poco::Util::AbstractConfiguration>;

 /// Base interface for configurations source for Loadble objects, which can be

@@ -22,24 +22,27 @@ using LoadablesConfigurationPtr = Poco::AutoPtr<Poco::Util::AbstractConfiguratio
 class IExternalLoaderConfigRepository
 {
 public:
+    /// Returns the name of the repository.
+    virtual const std::string & getName() const = 0;
+
+    /// Whether this repository is temporary:
+    /// it's created and destroyed while executing the same query.
+    virtual bool isTemporary() const { return false; }
+
     /// Return all available sources of loadables structure
     /// (all .xml files from fs, all entities from database, etc)
-    virtual std::set<std::string> getAllLoadablesDefinitionNames() const = 0;
+    virtual std::set<std::string> getAllLoadablesDefinitionNames() = 0;

     /// Checks that source of loadables configuration exist.
-    virtual bool exists(const std::string & loadable_definition_name) const = 0;
+    virtual bool exists(const std::string & path) = 0;

     /// Returns entity last update time
-    virtual Poco::Timestamp getUpdateTime(const std::string & loadable_definition_name) = 0;
+    virtual Poco::Timestamp getUpdateTime(const std::string & path) = 0;

     /// Load configuration from some concrete source to AbstractConfiguration
-    virtual LoadablesConfigurationPtr load(const std::string & loadable_definition_name) const = 0;
+    virtual LoadablesConfigurationPtr load(const std::string & path) = 0;

-    static const char * INTERNAL_REPOSITORY_NAME_PREFIX;
-
-    virtual ~IExternalLoaderConfigRepository() {}
+    virtual ~IExternalLoaderConfigRepository() = default;
 };

+using ExternalLoaderConfigRepositoryPtr = std::unique_ptr<IExternalLoaderConfigRepository>;
+
 }
@@ -106,6 +106,7 @@ BlockIO InterpreterAlterQuery::execute()
         StorageInMemoryMetadata metadata = table->getInMemoryMetadata();
         alter_commands.validate(metadata, context);
+        alter_commands.prepare(metadata, context);
         table->checkAlterIsPossible(alter_commands, context.getSettingsRef());
         table->alter(alter_commands, context, table_lock_holder);
     }
@@ -691,7 +691,9 @@ BlockIO InterpreterCreateQuery::createDictionary(ASTCreateQuery & create)

     String dictionary_name = create.table;

-    String database_name = !create.database.empty() ? create.database : context.getCurrentDatabase();
+    if (create.database.empty())
+        create.database = context.getCurrentDatabase();
+    const String & database_name = create.database;

     auto guard = context.getDDLGuard(database_name, dictionary_name);
     DatabasePtr database = context.getDatabase(database_name);
@@ -405,9 +405,19 @@ InterpreterSelectQuery::InterpreterSelectQuery(
             query.setExpression(ASTSelectQuery::Expression::WHERE, std::make_shared<ASTLiteral>(0u));
             need_analyze_again = true;
         }
+        if (query.prewhere() && query.where())
+        {
+            /// Filter block in WHERE instead to get better performance
+            query.setExpression(ASTSelectQuery::Expression::WHERE, makeASTFunction("and", query.prewhere()->clone(), query.where()->clone()));
+            need_analyze_again = true;
+        }
         if (need_analyze_again)
             analyze();

+        /// If there is no WHERE, filter blocks as usual
+        if (query.prewhere() && !query.where())
+            analysis_result.prewhere_info->need_filter = true;
+
         /// Blocks used in expression analysis contains size 1 const columns for constant folding and
         /// null non-const columns to avoid useless memory allocations. However, a valid block sample
         /// requires all columns to be of size 0, thus we need to sanitize the block here.
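The first added block above keeps PREWHERE for IO pruning but re-checks its condition in WHERE, i.e. the effective WHERE becomes `and(prewhere_condition, where_condition)`; the second block forces full filtering inside PREWHERE only when no WHERE clause exists. A tiny standalone illustration of the merge (plain strings stand in for AST nodes; not the interpreter's code):

    #include <iostream>
    #include <string>

    // Stand-in for makeASTFunction("and", prewhere, where): WHERE must re-check
    // the PREWHERE condition because, with a WHERE present, the PREWHERE stage
    // may only shrink blocks instead of filtering them exactly.
    std::string mergeIntoWhere(const std::string & prewhere, const std::string & where)
    {
        return "and(" + prewhere + ", " + where + ")";
    }

    int main()
    {
        std::cout << mergeIntoWhere("x > 0", "y = 1") << '\n'; // and(x > 0, y = 1)
    }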
@@ -0,0 +1,92 @@
+#include <Common/typeid_cast.h>
+#include <Parsers/ASTLiteral.h>
+#include <Parsers/ASTFunction.h>
+#include <Parsers/ASTExpressionList.h>
+#include <Interpreters/OptimizeIfChains.h>
+#include <IO/WriteHelpers.h>
+
+namespace DB
+{
+
+namespace ErrorCodes
+{
+    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
+    extern const int UNEXPECTED_AST_STRUCTURE;
+}
+
+void OptimizeIfChainsVisitor::visit(ASTPtr & current_ast)
+{
+    if (!current_ast)
+        return;
+
+    for (ASTPtr & child : current_ast->children)
+    {
+        /// Fallthrough cases
+
+        const auto * function_node = child->as<ASTFunction>();
+        if (!function_node || function_node->name != "if" || !function_node->arguments)
+        {
+            visit(child);
+            continue;
+        }
+
+        const auto * function_args = function_node->arguments->as<ASTExpressionList>();
+        if (!function_args || function_args->children.size() != 3 || !function_args->children[2])
+        {
+            visit(child);
+            continue;
+        }
+
+        const auto * else_arg = function_args->children[2]->as<ASTFunction>();
+        if (!else_arg || else_arg->name != "if")
+        {
+            visit(child);
+            continue;
+        }
+
+        /// The case of:
+        /// if(cond, a, if(...))
+
+        auto chain = ifChain(child);
+        std::reverse(chain.begin(), chain.end());
+        child->as<ASTFunction>()->name = "multiIf";
+        child->as<ASTFunction>()->arguments->children = std::move(chain);
+    }
+}
+
+ASTs OptimizeIfChainsVisitor::ifChain(const ASTPtr & child)
+{
+    const auto * function_node = child->as<ASTFunction>();
+    if (!function_node || !function_node->arguments)
+        throw Exception("Unexpected AST for function 'if'", ErrorCodes::UNEXPECTED_AST_STRUCTURE);
+
+    const auto * function_args = function_node->arguments->as<ASTExpressionList>();
+
+    if (!function_args || function_args->children.size() != 3)
+        throw Exception("Wrong number of arguments for function 'if' (" + toString(function_args->children.size()) + " instead of 3)",
+                        ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
+
+    const auto * else_arg = function_args->children[2]->as<ASTFunction>();
+
+    /// Recursively collect arguments from the innermost if ("head-recursion").
+    /// Arguments will be returned in reverse order.
+
+    if (else_arg && else_arg->name == "if")
+    {
+        auto cur = ifChain(function_node->arguments->children[2]);
+        cur.push_back(function_node->arguments->children[1]);
+        cur.push_back(function_node->arguments->children[0]);
+        return cur;
+    }
+    else
+    {
+        ASTs end;
+        end.reserve(3);
+        end.push_back(function_node->arguments->children[2]);
+        end.push_back(function_node->arguments->children[1]);
+        end.push_back(function_node->arguments->children[0]);
+        return end;
+    }
+}
+
+}
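The visitor above turns `if(c1, a, if(c2, b, c))` into `multiIf(c1, a, c2, b, c)`: ifChain recurses into the else branch first, so arguments come back innermost-first, and the std::reverse in visit restores source order. A standalone toy of the same collect-and-reverse trick (simplified expression type, not the real AST classes):

    #include <algorithm>
    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // Toy expression: either a leaf value or if(cond, then, else).
    struct Expr
    {
        std::string leaf;                          // set when this is a leaf
        std::shared_ptr<Expr> cond, then_, else_;  // set when this is an "if"
        bool isIf() const { return cond != nullptr; }
    };

    std::shared_ptr<Expr> leaf(std::string v) { return std::make_shared<Expr>(Expr{std::move(v), {}, {}, {}}); }

    // Mirrors OptimizeIfChainsVisitor::ifChain: collect arguments innermost-first.
    std::vector<std::string> ifChain(const Expr & e)
    {
        std::vector<std::string> args;
        if (e.else_->isIf())
            args = ifChain(*e.else_);
        else
            args.push_back(e.else_->leaf);
        args.push_back(e.then_->leaf);
        args.push_back(e.cond->leaf);
        return args;
    }

    int main()
    {
        // if(c1, a, if(c2, b, c))
        Expr inner{"", leaf("c2"), leaf("b"), leaf("c")};
        Expr outer{"", leaf("c1"), leaf("a"), std::make_shared<Expr>(inner)};

        auto chain = ifChain(outer);
        std::reverse(chain.begin(), chain.end());

        std::cout << "multiIf(";
        for (size_t i = 0; i < chain.size(); ++i)
            std::cout << (i ? ", " : "") << chain[i];
        std::cout << ")\n"; // multiIf(c1, a, c2, b, c)
    }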
@@ -0,0 +1,19 @@
+#pragma once
+
+#include <Parsers/IAST.h>
+
+namespace DB
+{
+
+/// It converts if-chain to multiIf.
+class OptimizeIfChainsVisitor
+{
+public:
+    OptimizeIfChainsVisitor() = default;
+    void visit(ASTPtr & ast);
+
+private:
+    ASTs ifChain(const ASTPtr & child);
+};
+
+}
@@ -21,6 +21,7 @@
 #include <Interpreters/AnalyzedJoin.h>
 #include <Interpreters/ExpressionActions.h> /// getSmallestColumn()
 #include <Interpreters/getTableExpressions.h>
+#include <Interpreters/OptimizeIfChains.h>

 #include <Parsers/ASTExpressionList.h>
 #include <Parsers/ASTFunction.h>

@@ -914,6 +915,9 @@ SyntaxAnalyzerResultPtr SyntaxAnalyzer::analyze(
     /// Optimize if with constant condition after constants was substituted instead of scalar subqueries.
     OptimizeIfWithConstantConditionVisitor(result.aliases).visit(query);

+    if (settings.optimize_if_chain_to_miltiif)
+        OptimizeIfChainsVisitor().visit(query);
+
     if (select_query)
     {
         /// GROUP BY injective function elimination.
@@ -392,8 +392,9 @@ void AlterCommand::apply(StorageInMemoryMetadata & metadata) const
         for (const auto & change : settings_changes)
         {
             auto finder = [&change](const SettingChange & c) { return c.name == change.name; };
-            if (auto it = std::find_if(settings_from_storage.begin(), settings_from_storage.end(), finder);
-                it != settings_from_storage.end())
+            auto it = std::find_if(settings_from_storage.begin(), settings_from_storage.end(), finder);
+
+            if (it != settings_from_storage.end())
                 it->value = change.value;
             else
                 settings_from_storage.push_back(change);

@@ -644,11 +645,6 @@ void AlterCommands::prepare(const StorageInMemoryMetadata & metadata, const Cont

 void AlterCommands::validate(const StorageInMemoryMetadata & metadata, const Context & context) const
 {
-    /// We will save ALTER ADD/MODIFY command indices (only the last for each column) for possible modification
-    /// (we might need to add deduced types or modify default expressions).
-    /// Saving indices because we can add new commands later and thus cause vector resize.
-    std::unordered_map<String, size_t> column_to_command_idx;
-
     for (size_t i = 0; i < size(); ++i)
     {
         auto & command = (*this)[i];
@@ -96,32 +96,45 @@ struct AlterCommand
     /// in each part on disk (it's not lightweight alter).
     bool isModifyingData() const;

-    /// checks that only settings changed by alter
+    /// Checks that only settings changed by alter
     bool isSettingsAlter() const;

     /// Checks that only comment changed by alter
     bool isCommentAlter() const;
 };

 /// Return string representation of AlterCommand::Type
 String alterTypeToString(const AlterCommand::Type type);

 class Context;

 /// Vector of AlterCommand with several additional functions
 class AlterCommands : public std::vector<AlterCommand>
 {
+private:
+    bool prepared = false;
+
 public:
-    void apply(StorageInMemoryMetadata & metadata) const;
-
-
-    void prepare(const StorageInMemoryMetadata & metadata, const Context & context);
-
+    /// Validate that commands can be applied to metadata.
+    /// Checks that all columns exist and dependencies between them.
+    /// This check is lightweight and base only on metadata.
+    /// More accurate check have to be performed with storage->checkAlterIsPossible.
     void validate(const StorageInMemoryMetadata & metadata, const Context & context) const;

+    /// Prepare alter commands. Set ignore flag to some of them
+    /// and additional commands for dependent columns.
+    void prepare(const StorageInMemoryMetadata & metadata, const Context & context);
+
+    /// Apply all alter command in sequential order to storage metadata.
+    /// Commands have to be prepared before apply.
+    void apply(StorageInMemoryMetadata & metadata) const;
+
+    /// At least one command modify data on disk.
     bool isModifyingData() const;

+    /// At least one command modify settings.
     bool isSettingsAlter() const;

+    /// At least one command modify comments.
     bool isCommentAlter() const;
 };
@@ -381,12 +381,13 @@ StorageInMemoryMetadata IStorage::getInMemoryMetadata() const
 void IStorage::alter(
     const AlterCommands & params,
     const Context & context,
-    TableStructureWriteLockHolder & /*table_lock_holder*/)
+    TableStructureWriteLockHolder & table_lock_holder)
 {
+    checkAlterIsPossible(params, context.getSettingsRef());
+
     const String database_name = getDatabaseName();
     const String table_name = getTableName();

+    lockStructureExclusively(table_lock_holder, context.getCurrentQueryId());
+
     StorageInMemoryMetadata metadata = getInMemoryMetadata();
     params.apply(metadata);
     context.getDatabase(database_name)->alterTable(context, table_name, metadata);
@@ -38,8 +38,8 @@ private:
     using Base = LRUCache<UInt128, MarksInCompressedFile, UInt128TrivialHash, MarksWeightFunction>;

 public:
-    MarkCache(size_t max_size_in_bytes, const Delay & expiration_delay_)
-        : Base(max_size_in_bytes, expiration_delay_) {}
+    MarkCache(size_t max_size_in_bytes)
+        : Base(max_size_in_bytes) {}

     /// Calculate key from path to file and offset.
     static UInt128 hash(const String & path_to_file)
@@ -76,35 +76,23 @@ void MergeTreeBaseSelectProcessor::initializeRangeReaders(MergeTreeReadTask & cu
    {
        if (reader->getColumns().empty())
        {
-            current_task.range_reader = MergeTreeRangeReader(
-                pre_reader.get(), nullptr,
-                prewhere_info->alias_actions, prewhere_info->prewhere_actions,
-                &prewhere_info->prewhere_column_name,
-                current_task.remove_prewhere_column, true);
+            current_task.range_reader = MergeTreeRangeReader(pre_reader.get(), nullptr, prewhere_info, true);
        }
        else
        {
            MergeTreeRangeReader * pre_reader_ptr = nullptr;
            if (pre_reader != nullptr)
            {
-                current_task.pre_range_reader = MergeTreeRangeReader(
-                    pre_reader.get(), nullptr,
-                    prewhere_info->alias_actions, prewhere_info->prewhere_actions,
-                    &prewhere_info->prewhere_column_name,
-                    current_task.remove_prewhere_column, false);
+                current_task.pre_range_reader = MergeTreeRangeReader(pre_reader.get(), nullptr, prewhere_info, false);
                pre_reader_ptr = &current_task.pre_range_reader;
            }

-            current_task.range_reader = MergeTreeRangeReader(
-                reader.get(), pre_reader_ptr, nullptr, nullptr,
-                nullptr, false, true);
+            current_task.range_reader = MergeTreeRangeReader(reader.get(), pre_reader_ptr, nullptr, true);
        }
    }
    else
    {
-        current_task.range_reader = MergeTreeRangeReader(
-            reader.get(), nullptr, nullptr, nullptr,
-            nullptr, false, true);
+        current_task.range_reader = MergeTreeRangeReader(reader.get(), nullptr, nullptr, true);
    }
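With PREWHERE now passed as a single PrewhereInfoPtr, building the chain reduces to: give the PREWHERE-stage reader the pointer, give the main reader nullptr plus a link to its predecessor. A standalone toy of such a two-stage reader chain (illustrative types, not the MergeTree API):

    #include <iostream>
    #include <memory>
    #include <vector>

    struct Prewhere { int threshold = 0; };          // stands in for PrewhereInfo
    using PrewherePtr = std::shared_ptr<Prewhere>;

    class Reader
    {
    public:
        Reader(std::vector<int> data_, Reader * prev_, PrewherePtr prewhere_)
            : data(std::move(data_)), prev(prev_), prewhere(std::move(prewhere_)) {}

        // Returns a row mask: the first reader in the chain applies PREWHERE,
        // the last reader just reads whatever rows are still selected.
        std::vector<bool> read() const
        {
            std::vector<bool> mask = prev ? prev->read() : std::vector<bool>(data.size(), true);
            if (prewhere)
                for (size_t i = 0; i < data.size(); ++i)
                    mask[i] = mask[i] && data[i] > prewhere->threshold;
            return mask;
        }

    private:
        std::vector<int> data;
        Reader * prev = nullptr;
        PrewherePtr prewhere;
    };

    int main()
    {
        auto prewhere = std::make_shared<Prewhere>(Prewhere{2});
        Reader pre_reader({1, 2, 3, 4}, nullptr, prewhere);      // PREWHERE stage
        Reader main_reader({1, 2, 3, 4}, &pre_reader, nullptr);  // main stage
        for (bool keep : main_reader.read())
            std::cout << keep;                                   // prints 0011
        std::cout << '\n';
    }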
@@ -333,6 +321,12 @@ void MergeTreeBaseSelectProcessor::executePrewhereActions(Block & block, const P
         prewhere_info->prewhere_actions->execute(block);
         if (prewhere_info->remove_prewhere_column)
             block.erase(prewhere_info->prewhere_column_name);
+        else
+        {
+            auto & ctn = block.getByName(prewhere_info->prewhere_column_name);
+            ctn.type = std::make_shared<DataTypeUInt8>();
+            ctn.column = ctn.type->createColumnConst(block.rows(), 1u)->convertToFullColumnIfConst();
+        }

         if (!block)
             block.insert({nullptr, std::make_shared<DataTypeNothing>(), "_nothing"});
@@ -255,6 +255,36 @@ void MergeTreeRangeReader::ReadResult::clear()
     filter = nullptr;
 }

+void MergeTreeRangeReader::ReadResult::shrink(Columns & old_columns)
+{
+    for (size_t i = 0; i < old_columns.size(); ++i)
+    {
+        if (!old_columns[i])
+            continue;
+        auto new_column = old_columns[i]->cloneEmpty();
+        new_column->reserve(total_rows_per_granule);
+        for (size_t j = 0, pos = 0; j < rows_per_granule_original.size(); pos += rows_per_granule_original[j], ++j)
+        {
+            if (rows_per_granule[j])
+                new_column->insertRangeFrom(*old_columns[i], pos, rows_per_granule[j]);
+        }
+        old_columns[i] = std::move(new_column);
+    }
+}
+
+void MergeTreeRangeReader::ReadResult::setFilterConstTrue()
+{
+    clearFilter();
+    filter_holder = DataTypeUInt8().createColumnConst(num_rows, 1u);
+}
+
+void MergeTreeRangeReader::ReadResult::setFilterConstFalse()
+{
+    clearFilter();
+    columns.clear();
+    num_rows = 0;
+}
+
 void MergeTreeRangeReader::ReadResult::optimize()
 {
     if (total_rows_per_granule == 0 || filter == nullptr)
@@ -268,30 +298,47 @@ void MergeTreeRangeReader::ReadResult::optimize()
         clear();
         return;
     }
-    else if (total_zero_rows_in_tails == 0 && countBytesInFilter(filter->getData()) == filter->size())
+    else if (total_zero_rows_in_tails == 0 && countBytesInResultFilter(filter->getData()) == filter->size())
     {
-        filter_holder = nullptr;
-        filter = nullptr;
+        setFilterConstTrue();
         return;
     }
-
     /// Just a guess. If only a few rows may be skipped, it's better not to skip at all.
-    if (2 * total_zero_rows_in_tails > filter->size())
+    else if (2 * total_zero_rows_in_tails > filter->size())
     {
         for (auto i : ext::range(0, rows_per_granule.size()))
         {
+            rows_per_granule_original.push_back(rows_per_granule[i]);
             rows_per_granule[i] -= zero_tails[i];
         }
+        num_rows_to_skip_in_last_granule += rows_per_granule_original.back() - rows_per_granule.back();

-        auto new_filter = ColumnUInt8::create(filter->size() - total_zero_rows_in_tails);
-        IColumn::Filter & new_data = new_filter->getData();
-
-        size_t rows_in_last_granule = rows_per_granule.back();
-
-        collapseZeroTails(filter->getData(), new_data, zero_tails);
-
-        total_rows_per_granule = new_filter->size();
-        num_rows_to_skip_in_last_granule += rows_in_last_granule - rows_per_granule.back();
-
-        filter = new_filter.get();
-        filter_holder = std::move(new_filter);
+        /// Check if const 1 after shrink
+        if (countBytesInResultFilter(filter->getData()) + total_zero_rows_in_tails == total_rows_per_granule)
+        {
+            total_rows_per_granule = total_rows_per_granule - total_zero_rows_in_tails;
+            num_rows = total_rows_per_granule;
+            setFilterConstTrue();
+            shrink(columns); /// shrink acts as filtering in such case
+        }
+        else
+        {
+            auto new_filter = ColumnUInt8::create(filter->size() - total_zero_rows_in_tails);
+            IColumn::Filter & new_data = new_filter->getData();
+
+            collapseZeroTails(filter->getData(), new_data);
+            total_rows_per_granule = new_filter->size();
+            num_rows = total_rows_per_granule;
+            filter_original = filter;
+            filter_holder_original = std::move(filter_holder);
+            filter = new_filter.get();
+            filter_holder = std::move(new_filter);
+        }
+        need_filter = true;
     }
+    /// Another guess, if it's worth filtering at PREWHERE
+    else if (countBytesInResultFilter(filter->getData()) < 0.6 * filter->size())
+        need_filter = true;
 }

 size_t MergeTreeRangeReader::ReadResult::countZeroTails(const IColumn::Filter & filter_vec, NumRows & zero_tails) const
@@ -314,24 +361,16 @@ size_t MergeTreeRangeReader::ReadResult::countZeroTails(const IColumn::Filter &
     return total_zero_rows_in_tails;
 }

-void MergeTreeRangeReader::ReadResult::collapseZeroTails(const IColumn::Filter & filter_vec, IColumn::Filter & new_filter_vec,
-    const NumRows & zero_tails)
+void MergeTreeRangeReader::ReadResult::collapseZeroTails(const IColumn::Filter & filter_vec, IColumn::Filter & new_filter_vec)
 {
     auto filter_data = filter_vec.data();
     auto new_filter_data = new_filter_vec.data();

     for (auto i : ext::range(0, rows_per_granule.size()))
     {
-        auto & rows_to_read = rows_per_granule[i];
-        auto filtered_rows_num_at_granule_end = zero_tails[i];
-
-        rows_to_read -= filtered_rows_num_at_granule_end;
-
-        memcpySmallAllowReadWriteOverflow15(new_filter_data, filter_data, rows_to_read);
-        filter_data += rows_to_read;
-        new_filter_data += rows_to_read;
-
-        filter_data += filtered_rows_num_at_granule_end;
+        memcpySmallAllowReadWriteOverflow15(new_filter_data, filter_data, rows_per_granule[i]);
+        filter_data += rows_per_granule_original[i];
+        new_filter_data += rows_per_granule[i];
     }

     new_filter_vec.resize(new_filter_data - new_filter_vec.data());
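What optimize() and collapseZeroTails do, in miniature: per granule, trailing zeros of the filter are dropped, the per-granule row counts shrink, and the reader later skips those rows instead of reading and discarding them; rows_per_granule_original keeps the old sizes so the stride through the uncompacted filter stays correct. A runnable toy of that arithmetic (assumed simplification, not the real classes):

    #include <iostream>
    #include <vector>

    // Per-granule rows and a flat 0/1 filter over them. Mirrors countZeroTails
    // plus collapseZeroTails: drop each granule's zero tail, compact the filter.
    int main()
    {
        std::vector<size_t> rows_per_granule{4, 4};
        std::vector<unsigned char> filter{1, 1, 0, 0,   1, 0, 0, 0};

        std::vector<unsigned char> new_filter;
        size_t pos = 0;
        for (size_t & rows : rows_per_granule)
        {
            size_t tail = 0;
            while (tail < rows && !filter[pos + rows - 1 - tail])
                ++tail;                           // zero tail of this granule
            size_t keep = rows - tail;
            new_filter.insert(new_filter.end(), filter.begin() + pos, filter.begin() + pos + keep);
            pos += rows;                          // advance by the *original* size
            rows = keep;                          // granule is now shorter
        }

        for (unsigned char f : new_filter)
            std::cout << int(f);
        std::cout << '\n';                        // 111: only rows before each zero tail remain
    }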
@@ -405,15 +444,27 @@ void MergeTreeRangeReader::ReadResult::setFilter(const ColumnPtr & new_filter)
 }


+size_t MergeTreeRangeReader::ReadResult::countBytesInResultFilter(const IColumn::Filter & filter_)
+{
+    auto it = filter_bytes_map.find(&filter_);
+    if (it == filter_bytes_map.end())
+    {
+        auto bytes = countBytesInFilter(filter_);
+        filter_bytes_map[&filter_] = bytes;
+        return bytes;
+    }
+    else
+        return it->second;
+}
+
 MergeTreeRangeReader::MergeTreeRangeReader(
-    MergeTreeReader * merge_tree_reader_, MergeTreeRangeReader * prev_reader_,
-    ExpressionActionsPtr alias_actions_, ExpressionActionsPtr prewhere_actions_,
-    const String * prewhere_column_name_, bool remove_prewhere_column_, bool last_reader_in_chain_)
-    : merge_tree_reader(merge_tree_reader_), index_granularity(&(merge_tree_reader->data_part->index_granularity))
-    , prev_reader(prev_reader_), prewhere_column_name(prewhere_column_name_)
-    , alias_actions(std::move(alias_actions_)), prewhere_actions(std::move(prewhere_actions_))
-    , remove_prewhere_column(remove_prewhere_column_)
-    , last_reader_in_chain(last_reader_in_chain_), is_initialized(true)
+    MergeTreeReader * merge_tree_reader_,
+    MergeTreeRangeReader * prev_reader_,
+    const PrewhereInfoPtr & prewhere_,
+    bool last_reader_in_chain_)
+    : merge_tree_reader(merge_tree_reader_)
+    , index_granularity(&(merge_tree_reader->data_part->index_granularity)), prev_reader(prev_reader_)
+    , prewhere(prewhere_), last_reader_in_chain(last_reader_in_chain_), is_initialized(true)
 {
     if (prev_reader)
         sample_block = prev_reader->getSampleBlock();
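countBytesInResultFilter memoizes popcounts keyed by the filter's address; that is sound here because the ReadResult keeps the filter column alive and treats it as immutable after setFilter, and the same filter is counted repeatedly (const-true check in optimize(), const-0/const-1 checks, num_rows bookkeeping). The same idea standalone (assumed simplification):

    #include <iostream>
    #include <numeric>
    #include <unordered_map>
    #include <vector>

    using Filter = std::vector<unsigned char>;

    // Keyed by address: safe here because the owner holds the filter alive and
    // never mutates it after registration, so the cached count cannot go stale.
    size_t countBytesMemoized(const Filter & filter, std::unordered_map<const Filter *, size_t> & cache)
    {
        auto it = cache.find(&filter);
        if (it != cache.end())
            return it->second;
        size_t bytes = std::accumulate(filter.begin(), filter.end(), size_t{0});
        cache[&filter] = bytes;
        return bytes;
    }

    int main()
    {
        Filter filter{1, 0, 1, 1};
        std::unordered_map<const Filter *, size_t> cache;
        std::cout << countBytesMemoized(filter, cache) << '\n'; // 3 (computed)
        std::cout << countBytesMemoized(filter, cache) << '\n'; // 3 (cached)
    }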
@@ -421,14 +472,18 @@ MergeTreeRangeReader::MergeTreeRangeReader(
     for (auto & name_and_type : merge_tree_reader->getColumns())
         sample_block.insert({name_and_type.type->createColumn(), name_and_type.type, name_and_type.name});

-    if (alias_actions)
-        alias_actions->execute(sample_block, true);
+    if (prewhere)
+    {
+        if (prewhere->alias_actions)
+            prewhere->alias_actions->execute(sample_block, true);

-    if (prewhere_actions)
-        prewhere_actions->execute(sample_block, true);
+        sample_block_before_prewhere = sample_block;
+        if (prewhere->prewhere_actions)
+            prewhere->prewhere_actions->execute(sample_block, true);

-    if (remove_prewhere_column)
-        sample_block.erase(*prewhere_column_name);
+        if (prewhere->remove_prewhere_column)
+            sample_block.erase(prewhere->prewhere_column_name);
+    }
 }

 bool MergeTreeRangeReader::isReadingFinished() const
@@ -488,12 +543,10 @@ MergeTreeRangeReader::ReadResult MergeTreeRangeReader::read(size_t max_rows, Mar
         throw Exception("Expected at least 1 row to read, got 0.", ErrorCodes::LOGICAL_ERROR);

     ReadResult read_result;
-    size_t prev_bytes = 0;

     if (prev_reader)
     {
         read_result = prev_reader->read(max_rows, ranges);
-        prev_bytes = read_result.numBytesRead();

         size_t num_read_rows;
         Columns columns = continueReadingChain(read_result, num_read_rows);

@@ -509,6 +562,15 @@ MergeTreeRangeReader::ReadResult MergeTreeRangeReader::read(size_t max_rows, Mar
             has_columns = true;
         }

+        size_t total_bytes = 0;
+        for (auto & column : columns)
+        {
+            if (column)
+                total_bytes += column->byteSize();
+        }
+
+        read_result.addNumBytesRead(total_bytes);
+
         bool should_evaluate_missing_defaults = false;

         if (has_columns)
@@ -533,8 +595,30 @@ MergeTreeRangeReader::ReadResult MergeTreeRangeReader::read(size_t max_rows, Mar
         }

         if (!columns.empty() && should_evaluate_missing_defaults)
-            merge_tree_reader->evaluateMissingDefaults(
-                prev_reader->getSampleBlock().cloneWithColumns(read_result.columns), columns);
+        {
+            auto block = prev_reader->sample_block.cloneWithColumns(read_result.columns);
+            auto block_before_prewhere = read_result.block_before_prewhere;
+            for (auto & ctn : block)
+            {
+                if (block_before_prewhere.has(ctn.name))
+                    block_before_prewhere.erase(ctn.name);
+            }
+
+            if (block_before_prewhere)
+            {
+                if (read_result.need_filter)
+                {
+                    auto old_columns = block_before_prewhere.getColumns();
+                    filterColumns(old_columns, read_result.getFilter()->getData());
+                    block_before_prewhere.setColumns(std::move(old_columns));
+                }
+
+                for (auto && ctn : block_before_prewhere)
+                    block.insert(std::move(ctn));
+            }
+
+            merge_tree_reader->evaluateMissingDefaults(block, columns);
+        }

         read_result.columns.reserve(read_result.columns.size() + columns.size());
         for (auto & column : columns)
@@ -556,17 +640,17 @@ MergeTreeRangeReader::ReadResult MergeTreeRangeReader::read(size_t max_rows, Mar
         }
         else
             read_result.columns.clear();
+
+        size_t total_bytes = 0;
+        for (auto & column : read_result.columns)
+            total_bytes += column->byteSize();
+
+        read_result.addNumBytesRead(total_bytes);
+    }

     if (read_result.num_rows == 0)
         return read_result;

-    size_t total_bytes = 0;
-    for (auto & column : read_result.columns)
-        total_bytes += column->byteSize();
-
-    read_result.addNumBytesRead(total_bytes - prev_bytes);
-
     executePrewhereActionsAndFilterColumns(read_result);

     return read_result;
@@ -674,7 +758,7 @@ Columns MergeTreeRangeReader::continueReadingChain(ReadResult & result, size_t &

 void MergeTreeRangeReader::executePrewhereActionsAndFilterColumns(ReadResult & result)
 {
-    if (!prewhere_actions)
+    if (!prewhere)
         return;

     auto & header = merge_tree_reader->getColumns();

@@ -705,12 +789,14 @@ void MergeTreeRangeReader::executePrewhereActionsAndFilterColumns(ReadResult & r
     for (auto name_and_type = header.begin(); pos < num_columns; ++pos, ++name_and_type)
         block.insert({result.columns[pos], name_and_type->type, name_and_type->name});

-    if (alias_actions)
-        alias_actions->execute(block);
+    if (prewhere && prewhere->alias_actions)
+        prewhere->alias_actions->execute(block);

-    prewhere_actions->execute(block);
+    /// Columns might be projected out. We need to store them here so that default columns can be evaluated later.
+    result.block_before_prewhere = block;
+    prewhere->prewhere_actions->execute(block);

-    prewhere_column_pos = block.getPositionByName(*prewhere_column_name);
+    prewhere_column_pos = block.getPositionByName(prewhere->prewhere_column_name);

     result.columns.clear();
     result.columns.reserve(block.columns());
@ -729,51 +815,38 @@ void MergeTreeRangeReader::executePrewhereActionsAndFilterColumns(ReadResult & r
|
|||
}
|
||||
|
||||
result.setFilter(filter);
|
||||
|
||||
/// If there is a WHERE, we filter in there, and only optimize IO and shrink columns here
|
||||
if (!last_reader_in_chain)
|
||||
result.optimize();
|
||||
|
||||
bool filter_always_true = !result.getFilter() && result.totalRowsPerGranule() == filter->size();
|
||||
|
||||
/// If we read nothing or filter gets optimized to nothing
|
||||
if (result.totalRowsPerGranule() == 0)
|
||||
result.setFilterConstFalse();
|
||||
/// If we need to filter in PREWHERE
|
||||
else if (prewhere->need_filter || result.need_filter)
|
||||
{
|
||||
result.columns.clear();
|
||||
result.num_rows = 0;
|
||||
}
|
||||
else if (!filter_always_true)
|
||||
{
|
||||
FilterDescription filter_description(*filter);
|
||||
|
||||
size_t num_bytes_in_filter = 0;
|
||||
bool calculated_num_bytes_in_filter = false;
|
||||
|
||||
auto getNumBytesInFilter = [&]()
|
||||
/// If there is a filter and without optimized
|
||||
if (result.getFilter() && last_reader_in_chain)
|
||||
{
|
||||
if (!calculated_num_bytes_in_filter)
|
||||
num_bytes_in_filter = countBytesInFilter(*filter_description.data);
|
||||
|
||||
calculated_num_bytes_in_filter = true;
|
||||
return num_bytes_in_filter;
|
||||
};
|
||||
|
||||
if (last_reader_in_chain)
|
||||
{
|
||||
size_t bytes_in_filter = getNumBytesInFilter();
|
||||
auto result_filter = result.getFilter();
|
||||
/// optimize is not called, need to check const 1 and const 0
|
||||
size_t bytes_in_filter = result.countBytesInResultFilter(result_filter->getData());
|
||||
if (bytes_in_filter == 0)
|
||||
{
|
||||
result.columns.clear();
|
||||
result.num_rows = 0;
|
||||
}
|
||||
else if (bytes_in_filter == filter->size())
|
||||
filter_always_true = true;
|
||||
result.setFilterConstFalse();
|
||||
else if (bytes_in_filter == result.num_rows)
|
||||
result.setFilterConstTrue();
|
||||
}
|
||||
|
||||
if (!filter_always_true)
|
||||
/// If there is still a filter, do the filtering now
|
||||
if (result.getFilter())
|
||||
{
|
||||
filterColumns(result.columns, *filter_description.data);
|
||||
/// filter might be shrinked while columns not
|
||||
auto result_filter = result.getFilterOriginal() ? result.getFilterOriginal() : result.getFilter();
|
||||
filterColumns(result.columns, result_filter->getData());
|
||||
result.need_filter = true;
|
||||
|
||||
/// Get num rows after filtration.
|
||||
bool has_column = false;
|
||||
|
||||
for (auto & column : result.columns)
|
||||
{
|
||||
if (column)
|
||||
|
@ -784,19 +857,26 @@ void MergeTreeRangeReader::executePrewhereActionsAndFilterColumns(ReadResult & r
|
|||
}
|
||||
}
|
||||
|
||||
/// There is only one filter column. Record the actual number
|
||||
if (!has_column)
|
||||
result.num_rows = getNumBytesInFilter();
|
||||
result.num_rows = result.countBytesInResultFilter(result_filter->getData());
|
||||
}
|
||||
|
||||
/// Check if the PREWHERE column is needed
|
||||
if (result.columns.size())
|
||||
{
|
||||
if (prewhere->remove_prewhere_column)
|
||||
result.columns.erase(result.columns.begin() + prewhere_column_pos);
|
||||
else
|
||||
result.columns[prewhere_column_pos] = DataTypeUInt8().createColumnConst(result.num_rows, 1u)->convertToFullColumnIfConst();
|
||||
}
|
||||
}
|
||||
|
||||
if (result.num_rows == 0)
|
||||
return;
|
||||
|
||||
if (remove_prewhere_column)
|
||||
result.columns.erase(result.columns.begin() + prewhere_column_pos);
|
||||
/// Filter in WHERE instead
|
||||
else
|
||||
result.columns[prewhere_column_pos] =
|
||||
DataTypeUInt8().createColumnConst(result.num_rows, 1u)->convertToFullColumnIfConst();
|
||||
{
|
||||
result.columns[prewhere_column_pos] = result.getFilterHolder()->convertToFullColumnIfConst();
|
||||
result.clearFilter(); // Acting as a flag to not filter in PREWHERE
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
|
|
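The hunks above replace the ad-hoc byte-counting lambda with ReadResult::countBytesInResultFilter and collapse degenerate filters into constants, so the last reader in the chain can skip filtering entirely. A minimal standalone sketch of that shortcut, using a plain byte vector rather than ClickHouse's IColumn machinery (all names below are illustrative):

#include <cstddef>
#include <cstdint>
#include <numeric>
#include <vector>

using Filter = std::vector<uint8_t>;   // 1 = keep row, 0 = drop row

enum class FilterKind { ConstFalse, ConstTrue, Mixed };

// Mirrors the idea behind setFilterConstFalse()/setFilterConstTrue():
// an all-zero filter means "return nothing", an all-one filter means
// "no filtering needed"; only the mixed case pays for filterColumns().
FilterKind classifyFilter(const Filter & filter)
{
    size_t ones = std::accumulate(filter.begin(), filter.end(), size_t(0));
    if (ones == 0)
        return FilterKind::ConstFalse;
    if (ones == filter.size())
        return FilterKind::ConstTrue;
    return FilterKind::Mixed;
}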
@@ -13,6 +13,8 @@ using ColumnUInt8 = ColumnVector<UInt8>;

class MergeTreeReader;
class MergeTreeIndexGranularity;
struct PrewhereInfo;
using PrewhereInfoPtr = std::shared_ptr<PrewhereInfo>;

/// MergeTreeReader iterator which allows sequential reading for arbitrary number of rows between pairs of marks in the same part.
/// Stores reading state, which can be inside granule. Can skip rows in current granule and start reading from next mark.
@@ -20,9 +22,11 @@ class MergeTreeIndexGranularity;

class MergeTreeRangeReader
{
public:
    MergeTreeRangeReader(MergeTreeReader * merge_tree_reader_, MergeTreeRangeReader * prev_reader_,
        ExpressionActionsPtr alias_actions_, ExpressionActionsPtr prewhere_actions_,
        const String * prewhere_column_name_, bool remove_prewhere_column_, bool last_reader_in_chain_);
    MergeTreeRangeReader(
        MergeTreeReader * merge_tree_reader_,
        MergeTreeRangeReader * prev_reader_,
        const PrewhereInfoPtr & prewhere_,
        bool last_reader_in_chain_);

    MergeTreeRangeReader() = default;
@@ -140,7 +144,9 @@ public:
    /// The number of bytes read from disk.
    size_t numBytesRead() const { return num_bytes_read; }
    /// Filter you need to apply to newly-read columns in order to add them to block.
    const ColumnUInt8 * getFilterOriginal() const { return filter_original; }
    const ColumnUInt8 * getFilter() const { return filter; }
    ColumnPtr & getFilterHolder() { return filter_holder; }

    void addGranule(size_t num_rows_);
    void adjustLastGranule();
@@ -154,10 +160,21 @@ public:
    /// Remove all rows from granules.
    void clear();

    void clearFilter() { filter = nullptr; }
    void setFilterConstTrue();
    void setFilterConstFalse();

    void addNumBytesRead(size_t count) { num_bytes_read += count; }

    void shrink(Columns & old_columns);

    size_t countBytesInResultFilter(const IColumn::Filter & filter);

    Columns columns;
    size_t num_rows = 0;
    bool need_filter = false;

    Block block_before_prewhere;

private:
    RangesInfo started_ranges;
@@ -165,6 +182,7 @@ public:
    /// Granule here is not the number of rows between two marks;
    /// it's the amount of rows per single reading act.
    NumRows rows_per_granule;
    NumRows rows_per_granule_original;
    /// Sum(rows_per_granule)
    size_t total_rows_per_granule = 0;
    /// The number of rows read at the first step. May be zero if no read columns are present in the part.
@@ -175,11 +193,15 @@ public:
    size_t num_bytes_read = 0;
    /// nullptr if the previous reader has no prewhere_actions. Otherwise filter.size() >= total_rows_per_granule.
    ColumnPtr filter_holder;
    ColumnPtr filter_holder_original;
    const ColumnUInt8 * filter = nullptr;
    const ColumnUInt8 * filter_original = nullptr;

    void collapseZeroTails(const IColumn::Filter & filter, IColumn::Filter & new_filter, const NumRows & zero_tails);
    void collapseZeroTails(const IColumn::Filter & filter, IColumn::Filter & new_filter);
    size_t countZeroTails(const IColumn::Filter & filter, NumRows & zero_tails) const;
    static size_t numZerosInTail(const UInt8 * begin, const UInt8 * end);

    std::map<const IColumn::Filter *, size_t> filter_bytes_map;
};

ReadResult read(size_t max_rows, MarkRanges & ranges);
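The filter_bytes_map member above suggests the number of set bytes is computed once per filter and then reused, since countBytesInResultFilter can be hit from several places for the same filter. A small sketch of such a memoized counter, assuming a plain std::vector<uint8_t> filter (FilterByteCache is a hypothetical stand-in, not the ClickHouse class):

#include <cstddef>
#include <cstdint>
#include <map>
#include <numeric>
#include <vector>

using Filter = std::vector<uint8_t>;

struct FilterByteCache
{
    // Keyed by filter identity (its address), like filter_bytes_map above.
    std::map<const Filter *, size_t> filter_bytes_map;

    size_t countBytesInResultFilter(const Filter & filter)
    {
        auto it = filter_bytes_map.find(&filter);
        if (it == filter_bytes_map.end())
        {
            size_t ones = std::accumulate(filter.begin(), filter.end(), size_t(0));
            it = filter_bytes_map.emplace(&filter, ones).first;
        }
        return it->second;
    }
};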
@@ -196,16 +218,13 @@ private:
    MergeTreeReader * merge_tree_reader = nullptr;
    const MergeTreeIndexGranularity * index_granularity = nullptr;
    MergeTreeRangeReader * prev_reader = nullptr; /// If not nullptr, read from prev_reader first.

    const String * prewhere_column_name = nullptr;
    ExpressionActionsPtr alias_actions = nullptr; /// If not nullptr, calculate aliases.
    ExpressionActionsPtr prewhere_actions = nullptr; /// If not nullptr, calculate filter.
    PrewhereInfoPtr prewhere;

    Stream stream;

    Block sample_block;
    Block sample_block_before_prewhere;

    bool remove_prewhere_column = false;
    bool last_reader_in_chain = false;
    bool is_initialized = false;
};
@@ -20,6 +20,7 @@ struct PrewhereInfo
    ExpressionActionsPtr remove_columns_actions;
    String prewhere_column_name;
    bool remove_prewhere_column = false;
    bool need_filter = false;

    PrewhereInfo() = default;
    explicit PrewhereInfo(ExpressionActionsPtr prewhere_actions_, String prewhere_column_name_)
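Taken together with the constructor change earlier in this diff, the struct above shows the refactoring: the range reader no longer receives the prewhere column name, removal flag and actions as separate arguments, but one shared PrewhereInfo. A simplified sketch of that consolidation (types stripped down to the bare minimum; not the full ClickHouse definitions):

#include <memory>
#include <string>

struct ExpressionActions;                          // opaque in this sketch
using ExpressionActionsPtr = std::shared_ptr<ExpressionActions>;

struct PrewhereInfo
{
    ExpressionActionsPtr alias_actions;            // previously a ctor argument
    ExpressionActionsPtr prewhere_actions;         // previously a ctor argument
    std::string prewhere_column_name;              // previously passed by pointer
    bool remove_prewhere_column = false;
    bool need_filter = false;
};

using PrewhereInfoPtr = std::shared_ptr<PrewhereInfo>;

class RangeReader
{
public:
    // Every reader in the chain shares the same prewhere description.
    RangeReader(const PrewhereInfoPtr & prewhere_, bool last_reader_in_chain_)
        : prewhere(prewhere_), last_reader_in_chain(last_reader_in_chain_)
    {
    }

private:
    PrewhereInfoPtr prewhere;
    bool last_reader_in_chain = false;
};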
@@ -716,8 +716,6 @@ void StorageBuffer::alter(const AlterCommands & params, const Context & context,
{
    lockStructureExclusively(table_lock_holder, context.getCurrentQueryId());

    checkAlterIsPossible(params, context.getSettingsRef());

    const String database_name_ = getDatabaseName();
    const String table_name_ = getTableName();
@@ -411,8 +411,6 @@ void StorageDistributed::alter(const AlterCommands & params, const Context & con
{
    lockStructureExclusively(table_lock_holder, context.getCurrentQueryId());

    checkAlterIsPossible(params, context.getSettingsRef());

    const String current_database_name = getDatabaseName();
    const String current_table_name = getTableName();
@@ -10,6 +10,7 @@
#include <Interpreters/joinDispatch.h>
#include <Interpreters/AnalyzedJoin.h>
#include <Common/assert_cast.h>
#include <Common/quoteString.h>

#include <Poco/String.h> /// toLower
#include <Poco/File.h>
@@ -71,7 +72,12 @@ void StorageJoin::truncate(const ASTPtr &, const Context &, TableStructureWriteL
HashJoinPtr StorageJoin::getJoin(std::shared_ptr<AnalyzedJoin> analyzed_join) const
{
    if (kind != analyzed_join->kind() || strictness != analyzed_join->strictness())
        throw Exception("Table " + table_name + " has incompatible type of JOIN.", ErrorCodes::INCOMPATIBLE_TYPE_OF_JOIN);
        throw Exception("Table " + backQuote(table_name) + " has incompatible type of JOIN.", ErrorCodes::INCOMPATIBLE_TYPE_OF_JOIN);

    if ((analyzed_join->forceNullableRight() && !use_nulls) ||
        (!analyzed_join->forceNullableRight() && isLeftOrFull(analyzed_join->kind()) && use_nulls))
        throw Exception("Table " + backQuote(table_name) + " needs the same join_use_nulls setting as present in LEFT or FULL JOIN.",
            ErrorCodes::INCOMPATIBLE_TYPE_OF_JOIN);

    /// TODO: check key columns
@@ -252,7 +252,7 @@ BlockInputStreams StorageMerge::read(
    else
    {
        source_streams.emplace_back(std::make_shared<LazyBlockInputStream>(
            header, [=]() mutable -> BlockInputStreamPtr
            header, [=, this]() mutable -> BlockInputStreamPtr
        {
            BlockInputStreams streams = createSourceStreams(query_info, processed_stage, max_block_size,
                header, storage, struct_lock, real_column_names,
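The `[=]` to `[=, this]` change is a forward-compatibility fix: C++20 deprecates capturing `this` implicitly through `[=]`, while the explicit `[=, this]` spelling keeps the identical by-reference capture of the enclosing object. A tiny example (Widget and makeGetter are illustrative), compiled with -std=c++20:

#include <functional>
#include <iostream>

struct Widget
{
    int value = 42;

    std::function<int()> makeGetter()
    {
        // [=] alone would still capture `this`, but with a deprecation
        // warning in C++20; [=, this] states the intent explicitly.
        return [=, this]() -> int { return value; };
    }
};

int main()
{
    Widget w;
    std::cout << w.makeGetter()() << '\n';   // prints 42
}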
@@ -252,8 +252,6 @@ void StorageMergeTree::alter(

    lockNewDataStructureExclusively(table_lock_holder, context.getCurrentQueryId());

    checkAlterIsPossible(params, context.getSettingsRef());

    StorageInMemoryMetadata metadata = getInMemoryMetadata();

    params.apply(metadata);
@@ -3198,9 +3198,6 @@ void StorageReplicatedMergeTree::alter(
    const String current_database_name = getDatabaseName();
    const String current_table_name = getTableName();

    checkAlterIsPossible(params, query_context.getSettingsRef());

    /// We cannot check these alter commands with the isModifyingData() method
    /// because ReplicatedMergeTree stores both columns and metadata for
    /// each replica. So we have to wait for AlterThread even with lightweight
@@ -50,19 +50,25 @@ void StorageSystemDictionaries::fillData(MutableColumns & res_columns, const Con
    const auto & external_dictionaries = context.getExternalDictionariesLoader();
    for (const auto & load_result : external_dictionaries.getCurrentLoadResults())
    {
        if (startsWith(load_result.repository_name, IExternalLoaderConfigRepository::INTERNAL_REPOSITORY_NAME_PREFIX))
            continue;
        const auto dict_ptr = std::dynamic_pointer_cast<const IDictionaryBase>(load_result.object);

        size_t i = 0;
        String database;
        String short_name = load_result.name;

        if (!load_result.repository_name.empty() && startsWith(load_result.name, load_result.repository_name + "."))
        String database, short_name;
        if (dict_ptr)
        {
            database = load_result.repository_name;
            short_name = load_result.name.substr(load_result.repository_name.length() + 1);
            database = dict_ptr->getDatabase();
            short_name = dict_ptr->getName();
        }
        else
        {
            short_name = load_result.name;
            if (!load_result.repository_name.empty() && startsWith(short_name, load_result.repository_name + "."))
            {
                database = load_result.repository_name;
                short_name = short_name.substr(database.length() + 1);
            }
        }

        size_t i = 0;
        res_columns[i++]->insert(database);
        res_columns[i++]->insert(short_name);
        res_columns[i++]->insert(static_cast<Int8>(load_result.status));

@@ -70,7 +76,6 @@ void StorageSystemDictionaries::fillData(MutableColumns & res_columns, const Con

    std::exception_ptr last_exception = load_result.exception;

    const auto dict_ptr = std::dynamic_pointer_cast<const IDictionaryBase>(load_result.object);
    if (dict_ptr)
    {
        res_columns[i++]->insert(dict_ptr->getTypeName());
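The new branch prefers the database and name reported by the loaded dictionary object itself and only falls back to splitting `repository_name.name` when the object is not available. A self-contained sketch of that fallback split (LoadResult here is a hypothetical stand-in for the loader's result type):

#include <iostream>
#include <string>
#include <utility>

struct LoadResult
{
    std::string repository_name;
    std::string name;
};

// Split "db.dict" into {database, short_name}; if the name does not start
// with "<repository_name>.", the database stays empty, as in the else branch.
std::pair<std::string, std::string> splitDictionaryName(const LoadResult & load_result)
{
    std::string database;
    std::string short_name = load_result.name;
    const std::string prefix = load_result.repository_name + ".";
    if (!load_result.repository_name.empty() && short_name.compare(0, prefix.size(), prefix) == 0)
    {
        database = load_result.repository_name;
        short_name = short_name.substr(prefix.size());
    }
    return {database, short_name};
}

int main()
{
    auto [database, short_name] = splitDictionaryName({"db1", "db1.dict"});
    std::cout << database << " / " << short_name << '\n';   // db1 / dict
}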
@@ -0,0 +1,22 @@
<test>
    <type>once</type>

    <stop_conditions>
        <any_of>
            <average_speed_not_changing_for_ms>1000</average_speed_not_changing_for_ms>
            <total_time_ms>10000</total_time_ms>
        </any_of>
    </stop_conditions>

    <main_metric>
        <max_rows_per_second />
    </main_metric>

    <query><![CDATA[ WITH number AS x SELECT sum(x < 1 ? 1 : (x < 5 ? 2 : 3)) FROM system.numbers ]]></query>
    <query><![CDATA[ WITH number AS x SELECT sum(x < 1 ? '1' : (x < 5 ? '2' : '3')) FROM system.numbers ]]></query>
    <query><![CDATA[ WITH number AS x SELECT sum(x < 1 ? 1 : (x < 5 ? 2 : (x < 10 ? 3 : (x % 2 ? 4 : 5)))) FROM system.numbers ]]></query>
    <query><![CDATA[ WITH number AS x SELECT sum(x < 1 ? '1' : (x < 5 ? '2' : (x < 10 ? '3' : (x % 2 ? '4' : '5')))) FROM system.numbers ]]></query>
    <query><![CDATA[
        WITH number AS x, x = 1 ? 1 : (x = 2 ? 2 : (x = 3 ? 3 : (x = 4 ? 4 : (x = 5 ? 5 : (x = 6 ? 6 : (x = 7 ? 7 : (x = 8 ? 8 : (x = 9 ? 9 : (x = 10 ? 10 : (x = 11 ? 11 : (x = 12 ? 12 : (x = 13 ? 13 : (x = 14 ? 14 : (x = 15 ? 15 : (x = 16 ? 16 : (x = 17 ? 17 : (x = 18 ? 18 : (x = 19 ? 19 : 20)))))))))))))))))) AS res SELECT sum(res) FROM system.numbers
    ]]></query>
</test>
@@ -20,6 +20,7 @@

    <settings>
        <output_format_pretty_max_rows>1000000</output_format_pretty_max_rows>
        <max_threads>1</max_threads>
    </settings>

    <substitutions>
@@ -39,6 +39,6 @@ SimpleAggregateFunction(sum, Double)
7 14
8 16
9 18
1 1 2 2.2.2.2
10 2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222 20 20.20.20.20
SimpleAggregateFunction(anyLast, Nullable(String)) SimpleAggregateFunction(anyLast, LowCardinality(Nullable(String))) SimpleAggregateFunction(anyLast, IPv4)
1 1 2 2.2.2.2 3
10 2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222 20 20.20.20.20 5
SimpleAggregateFunction(anyLast, Nullable(String)) SimpleAggregateFunction(anyLast, LowCardinality(Nullable(String))) SimpleAggregateFunction(anyLast, IPv4) SimpleAggregateFunction(groupBitOr, UInt32)
@@ -19,14 +19,20 @@ select * from simple;
-- complex types
drop table if exists simple;

create table simple (id UInt64,nullable_str SimpleAggregateFunction(anyLast,Nullable(String)),low_str SimpleAggregateFunction(anyLast,LowCardinality(Nullable(String))),ip SimpleAggregateFunction(anyLast,IPv4)) engine=AggregatingMergeTree order by id;
insert into simple values(1,'1','1','1.1.1.1');
insert into simple values(1,null,'2','2.2.2.2');
create table simple (
    id UInt64,
    nullable_str SimpleAggregateFunction(anyLast,Nullable(String)),
    low_str SimpleAggregateFunction(anyLast,LowCardinality(Nullable(String))),
    ip SimpleAggregateFunction(anyLast,IPv4),
    status SimpleAggregateFunction(groupBitOr, UInt32)
) engine=AggregatingMergeTree order by id;
insert into simple values(1,'1','1','1.1.1.1', 1);
insert into simple values(1,null,'2','2.2.2.2', 2);
-- String longer than MAX_SMALL_STRING_SIZE (actual string length is 100)
insert into simple values(10,'10','10','10.10.10.10');
insert into simple values(10,'2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222','20','20.20.20.20');
insert into simple values(10,'10','10','10.10.10.10', 4);
insert into simple values(10,'2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222','20','20.20.20.20', 1);

select * from simple final;
select toTypeName(nullable_str),toTypeName(low_str),toTypeName(ip) from simple limit 1;
select toTypeName(nullable_str),toTypeName(low_str),toTypeName(ip),toTypeName(status) from simple limit 1;

drop table simple;
@@ -30,3 +30,61 @@ full
4 a5 b4
4 a5 b5
5 b6
inner (join_use_nulls mix)
2 a3 b1
2 a3 b2
4 a5 b3
4 a5 b4
4 a5 b5
right (join_use_nulls mix)
2 a3 b1
2 a3 b2
4 a5 b3
4 a5 b4
4 a5 b5
5 \N b6
left (join_use_nulls)
0 a1 \N
1 a2 \N
2 a3 b1
2 a3 b2
3 a4 \N
4 a5 b3
4 a5 b4
4 a5 b5
inner (join_use_nulls)
2 a3 b1
2 a3 b2
4 a5 b3
4 a5 b4
4 a5 b5
right (join_use_nulls)
2 a3 b1
2 a3 b2
4 a5 b3
4 a5 b4
4 a5 b5
5 \N b6
full (join_use_nulls)
0 a1 \N
1 a2 \N
2 a3 b1
2 a3 b2
3 a4 \N
4 a5 b3
4 a5 b4
4 a5 b5
5 \N b6
inner (join_use_nulls mix2)
2 a3 b1
2 a3 b2
4 a5 b3
4 a5 b4
4 a5 b5
right (join_use_nulls mix2)
2 a3 b1
2 a3 b2
4 a5 b3
4 a5 b4
4 a5 b5
5 b6
@@ -33,20 +33,54 @@ SELECT * FROM t1 RIGHT JOIN right_join j USING(x) ORDER BY x, str, s;
SELECT 'full';
SELECT * FROM t1 FULL JOIN full_join j USING(x) ORDER BY x, str, s;

-- TODO
-- SET join_use_nulls = 1;
--
-- SELECT 'left (join_use_nulls)';
-- SELECT * FROM t1 LEFT JOIN left_join j USING(x) ORDER BY x, str, s;
--
-- SELECT 'inner (join_use_nulls)';
-- SELECT * FROM t1 INNER JOIN inner_join j USING(x) ORDER BY x, str, s;
--
-- SELECT 'right (join_use_nulls)';
-- SELECT * FROM t1 RIGHT JOIN right_join j USING(x) ORDER BY x, str, s;
--
-- SELECT 'full (join_use_nulls)';
-- SELECT * FROM t1 FULL JOIN full_join j USING(x) ORDER BY x, str, s;
SET join_use_nulls = 1;

SELECT * FROM t1 LEFT JOIN left_join j USING(x) ORDER BY x, str, s; -- { serverError 264 }
SELECT * FROM t1 FULL JOIN full_join j USING(x) ORDER BY x, str, s; -- { serverError 264 }

SELECT 'inner (join_use_nulls mix)';
SELECT * FROM t1 INNER JOIN inner_join j USING(x) ORDER BY x, str, s;

SELECT 'right (join_use_nulls mix)';
SELECT * FROM t1 RIGHT JOIN right_join j USING(x) ORDER BY x, str, s;

DROP TABLE left_join;
DROP TABLE inner_join;
DROP TABLE right_join;
DROP TABLE full_join;

CREATE TABLE left_join (x UInt32, s String) engine = Join(ALL, LEFT, x) SETTINGS join_use_nulls = 1;
CREATE TABLE inner_join (x UInt32, s String) engine = Join(ALL, INNER, x) SETTINGS join_use_nulls = 1;
CREATE TABLE right_join (x UInt32, s String) engine = Join(ALL, RIGHT, x) SETTINGS join_use_nulls = 1;
CREATE TABLE full_join (x UInt32, s String) engine = Join(ALL, FULL, x) SETTINGS join_use_nulls = 1;

INSERT INTO left_join (x, s) VALUES (2, 'b1'), (2, 'b2'), (4, 'b3'), (4, 'b4'), (4, 'b5'), (5, 'b6');
INSERT INTO inner_join (x, s) VALUES (2, 'b1'), (2, 'b2'), (4, 'b3'), (4, 'b4'), (4, 'b5'), (5, 'b6');
INSERT INTO right_join (x, s) VALUES (2, 'b1'), (2, 'b2'), (4, 'b3'), (4, 'b4'), (4, 'b5'), (5, 'b6');
INSERT INTO full_join (x, s) VALUES (2, 'b1'), (2, 'b2'), (4, 'b3'), (4, 'b4'), (4, 'b5'), (5, 'b6');

SELECT 'left (join_use_nulls)';
SELECT * FROM t1 LEFT JOIN left_join j USING(x) ORDER BY x, str, s;

SELECT 'inner (join_use_nulls)';
SELECT * FROM t1 INNER JOIN inner_join j USING(x) ORDER BY x, str, s;

SELECT 'right (join_use_nulls)';
SELECT * FROM t1 RIGHT JOIN right_join j USING(x) ORDER BY x, str, s;

SELECT 'full (join_use_nulls)';
SELECT * FROM t1 FULL JOIN full_join j USING(x) ORDER BY x, str, s;

SET join_use_nulls = 0;

SELECT * FROM t1 LEFT JOIN left_join j USING(x) ORDER BY x, str, s; -- { serverError 264 }
SELECT * FROM t1 FULL JOIN full_join j USING(x) ORDER BY x, str, s; -- { serverError 264 }

SELECT 'inner (join_use_nulls mix2)';
SELECT * FROM t1 INNER JOIN inner_join j USING(x) ORDER BY x, str, s;

SELECT 'right (join_use_nulls mix2)';
SELECT * FROM t1 RIGHT JOIN right_join j USING(x) ORDER BY x, str, s;

DROP TABLE t1;
File diff suppressed because it is too large
@@ -0,0 +1,3 @@
SELECT x FROM (SELECT number % 16 = 0 ? nan : (number % 24 = 0 ? NULL : (number % 37 = 0 ? nan : (number % 34 = 0 ? nan : (number % 3 = 0 ? NULL : (number % 68 = 0 ? 42 : (number % 28 = 0 ? nan : (number % 46 = 0 ? nan : (number % 13 = 0 ? nan : (number % 27 = 0 ? NULL : (number % 39 = 0 ? NULL : (number % 27 = 0 ? NULL : (number % 30 = 0 ? NULL : (number % 72 = 0 ? NULL : (number % 36 = 0 ? NULL : (number % 51 = 0 ? NULL : (number % 58 = 0 ? nan : (number % 26 = 0 ? 42 : (number % 13 = 0 ? nan : (number % 12 = 0 ? NULL : (number % 22 = 0 ? nan : (number % 36 = 0 ? NULL : (number % 63 = 0 ? NULL : (number % 27 = 0 ? NULL : (number % 18 = 0 ? NULL : (number % 69 = 0 ? NULL : (number % 76 = 0 ? nan : (number % 42 = 0 ? NULL : (number % 9 = 0 ? NULL : (toFloat64(number)))))))))))))))))))))))))))))) AS x FROM system.numbers LIMIT 1001) ORDER BY x ASC NULLS FIRST;

SELECT x FROM (SELECT number % 22 = 0 ? nan : (number % 56 = 0 ? 42 : (number % 45 = 0 ? NULL : (number % 47 = 0 ? 42 : (number % 39 = 0 ? NULL : (number % 1 = 0 ? nan : (number % 43 = 0 ? nan : (number % 40 = 0 ? nan : (number % 42 = 0 ? NULL : (number % 26 = 0 ? 42 : (number % 41 = 0 ? 42 : (number % 6 = 0 ? NULL : (number % 39 = 0 ? NULL : (number % 34 = 0 ? nan : (number % 74 = 0 ? 42 : (number % 40 = 0 ? nan : (number % 37 = 0 ? nan : (number % 51 = 0 ? NULL : (number % 46 = 0 ? nan : (toFloat64(number)))))))))))))))))))) AS x FROM system.numbers LIMIT 1001) ORDER BY x ASC NULLS FIRST;
@@ -0,0 +1 @@
Ok
@@ -0,0 +1,14 @@
#!/usr/bin/env bash

CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
. $CURDIR/../shell_config.sh

# Implementation-specific behaviour on overflow: we may return an error or produce an empty string.
${CLICKHOUSE_CLIENT} --query="SELECT randomPrintableASCII(nan);" >/dev/null 2>&1 ||:
${CLICKHOUSE_CLIENT} --query="SELECT randomPrintableASCII(inf);" >/dev/null 2>&1 ||:
${CLICKHOUSE_CLIENT} --query="SELECT randomPrintableASCII(-inf);" >/dev/null 2>&1 ||:
${CLICKHOUSE_CLIENT} --query="SELECT randomPrintableASCII(1e300);" >/dev/null 2>&1 ||:
${CLICKHOUSE_CLIENT} --query="SELECT randomPrintableASCII(-123.456);" >/dev/null 2>&1 ||:
${CLICKHOUSE_CLIENT} --query="SELECT randomPrintableASCII(-1);" >/dev/null 2>&1 ||:

${CLICKHOUSE_CLIENT} --query="SELECT randomPrintableASCII(0), 'Ok';"
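The comment in the test states the contract loosely: on out-of-range arguments the function may either raise an error or return an empty string, it just must not crash. One reasonable validation policy for such a float-typed length argument, sketched in C++ (lengthFromDouble and the max_len cap are illustrative, not ClickHouse's implementation):

#include <cmath>
#include <cstddef>
#include <optional>

// Reject nan, +/-inf, negative and absurdly large lengths up front instead
// of relying on implementation-defined float-to-integer conversion.
std::optional<size_t> lengthFromDouble(double x, size_t max_len = 1 << 20)
{
    if (!std::isfinite(x) || x < 0.0)
        return std::nullopt;                 // nan, inf, -inf, -1, -123.456
    if (x > static_cast<double>(max_len))
        return std::nullopt;                 // e.g. 1e300
    return static_cast<size_t>(x);
}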
@@ -0,0 +1,2 @@
43
1
@@ -0,0 +1,14 @@
DROP TABLE IF EXISTS test_prewhere_default_column;
DROP TABLE IF EXISTS test_prewhere_column_type;

CREATE TABLE test_prewhere_default_column (APIKey UInt8, SessionType UInt8) ENGINE = MergeTree() PARTITION BY APIKey ORDER BY tuple();
INSERT INTO test_prewhere_default_column VALUES( 42, 42 );
ALTER TABLE test_prewhere_default_column ADD COLUMN OperatingSystem UInt64 DEFAULT SessionType+1;

SELECT OperatingSystem FROM test_prewhere_default_column PREWHERE SessionType = 42;


CREATE TABLE test_prewhere_column_type (`a` LowCardinality(String), `x` Nullable(Int32)) ENGINE = MergeTree ORDER BY tuple();
INSERT INTO test_prewhere_column_type VALUES ('', 2);

SELECT a, y FROM test_prewhere_column_type prewhere (x = 2) AS y;
Some files were not shown because too many files have changed in this diff