2012-12-11 09:32:48 +08:00
|
|
|
# Copyright 2012 The Rust Project Developers. See the COPYRIGHT
|
|
|
|
# file at the top-level directory of this distribution and at
|
|
|
|
# http://rust-lang.org/COPYRIGHT.
|
|
|
|
#
|
|
|
|
# Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
|
|
|
|
# http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
|
|
|
|
# <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
|
|
|
|
# option. This file may not be copied, modified, or distributed
|
|
|
|
# except according to those terms.
|
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
|
2011-05-02 04:18:52 +08:00
|
|
|
######################################################################
|
2013-02-06 06:14:58 +08:00
|
|
|
# Test variables
|
2011-05-02 04:18:52 +08:00
|
|
|
######################################################################
|
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
# The names of crates that must be tested
|
2014-01-25 11:27:22 +08:00
|
|
|
TEST_TARGET_CRATES = $(TARGET_CRATES)
|
|
|
|
TEST_DOC_CRATES = $(DOC_CRATES)
|
|
|
|
TEST_HOST_CRATES = $(HOST_CRATES)
|
2013-02-22 08:15:01 +08:00
|
|
|
TEST_CRATES = $(TEST_TARGET_CRATES) $(TEST_HOST_CRATES)
|
2011-05-02 04:18:52 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
######################################################################
|
|
|
|
# Environment configuration
|
|
|
|
######################################################################
|
2011-06-30 06:12:02 +08:00
|
|
|
|
The Big Test Suite Overhaul
This replaces the make-based test runner with a set of Rust-based test
runners. I believe that all existing functionality has been
preserved. The primary objective is to dogfood the Rust test
framework.
A few main things happen here:
1) The run-pass/lib-* tests are all moved into src/test/stdtest. This
is a standalone test crate intended for all standard library tests. It
compiles to build/test/stdtest.stageN.
2) rustc now compiles into yet another build artifact, this one a test
runner that runs any tests contained directly in the rustc crate. This
allows much more fine-grained unit testing of the compiler. It
compiles to build/test/rustctest.stageN.
3) There is a new custom test runner crate at src/test/compiletest
that reproduces all the functionality for running the compile-fail,
run-fail, run-pass and bench tests while integrating with Rust's test
framework. It compiles to build/test/compiletest.stageN.
4) The build rules have been completely changed to use the new test
runners, while also being less redundant, following the example of the
recent stageN.mk rewrite.
It adds two new features to the cfail/rfail/rpass/bench tests:
1) Tests can specify multiple 'error-pattern' directives which must be
satisfied in order.
2) Tests can specify a 'compile-flags' directive which will make the
test runner provide additional command line arguments to rustc.
There are some downsides, the primary being that Rust has to be
functioning pretty well just to run _any_ tests, which I imagine will
be the source of some frustration when the entire test suite
breaks. Will also cause some headaches during porting.
Not having individual make rules, each rpass, etc test no longer
remembers between runs whether it completed successfully. As a result,
it's not possible to incrementally fix multiple tests by just running
'make check', fixing a test, and repeating without re-running all the
tests contained in the test runner. Instead you can filter just the
tests you want to run by using the TESTNAME environment variable.
This also dispenses with the ability to run stage0 tests, but they
tended to be broken more often than not anyway.
2011-07-13 10:01:09 +08:00
|
|
|
# The arguments to all test runners
|
|
|
|
ifdef TESTNAME
|
|
|
|
TESTARGS += $(TESTNAME)
|
|
|
|
endif
|
|
|
|
|
2014-02-08 03:08:32 +08:00
|
|
|
ifdef CHECK_IGNORED
|
The Big Test Suite Overhaul
This replaces the make-based test runner with a set of Rust-based test
runners. I believe that all existing functionality has been
preserved. The primary objective is to dogfood the Rust test
framework.
A few main things happen here:
1) The run-pass/lib-* tests are all moved into src/test/stdtest. This
is a standalone test crate intended for all standard library tests. It
compiles to build/test/stdtest.stageN.
2) rustc now compiles into yet another build artifact, this one a test
runner that runs any tests contained directly in the rustc crate. This
allows much more fine-grained unit testing of the compiler. It
compiles to build/test/rustctest.stageN.
3) There is a new custom test runner crate at src/test/compiletest
that reproduces all the functionality for running the compile-fail,
run-fail, run-pass and bench tests while integrating with Rust's test
framework. It compiles to build/test/compiletest.stageN.
4) The build rules have been completely changed to use the new test
runners, while also being less redundant, following the example of the
recent stageN.mk rewrite.
It adds two new features to the cfail/rfail/rpass/bench tests:
1) Tests can specify multiple 'error-pattern' directives which must be
satisfied in order.
2) Tests can specify a 'compile-flags' directive which will make the
test runner provide additional command line arguments to rustc.
There are some downsides, the primary being that Rust has to be
functioning pretty well just to run _any_ tests, which I imagine will
be the source of some frustration when the entire test suite
breaks. Will also cause some headaches during porting.
Not having individual make rules, each rpass, etc test no longer
remembers between runs whether it completed successfully. As a result,
it's not possible to incrementally fix multiple tests by just running
'make check', fixing a test, and repeating without re-running all the
tests contained in the test runner. Instead you can filter just the
tests you want to run by using the TESTNAME environment variable.
This also dispenses with the ability to run stage0 tests, but they
tended to be broken more often than not anyway.
2011-07-13 10:01:09 +08:00
|
|
|
TESTARGS += --ignored
|
|
|
|
endif
|
|
|
|
|
2013-07-30 03:38:12 +08:00
|
|
|
TEST_BENCH = --bench
|
2013-07-18 02:52:21 +08:00
|
|
|
|
The Big Test Suite Overhaul
This replaces the make-based test runner with a set of Rust-based test
runners. I believe that all existing functionality has been
preserved. The primary objective is to dogfood the Rust test
framework.
A few main things happen here:
1) The run-pass/lib-* tests are all moved into src/test/stdtest. This
is a standalone test crate intended for all standard library tests. It
compiles to build/test/stdtest.stageN.
2) rustc now compiles into yet another build artifact, this one a test
runner that runs any tests contained directly in the rustc crate. This
allows much more fine-grained unit testing of the compiler. It
compiles to build/test/rustctest.stageN.
3) There is a new custom test runner crate at src/test/compiletest
that reproduces all the functionality for running the compile-fail,
run-fail, run-pass and bench tests while integrating with Rust's test
framework. It compiles to build/test/compiletest.stageN.
4) The build rules have been completely changed to use the new test
runners, while also being less redundant, following the example of the
recent stageN.mk rewrite.
It adds two new features to the cfail/rfail/rpass/bench tests:
1) Tests can specify multiple 'error-pattern' directives which must be
satisfied in order.
2) Tests can specify a 'compile-flags' directive which will make the
test runner provide additional command line arguments to rustc.
There are some downsides, the primary being that Rust has to be
functioning pretty well just to run _any_ tests, which I imagine will
be the source of some frustration when the entire test suite
breaks. Will also cause some headaches during porting.
Not having individual make rules, each rpass, etc test no longer
remembers between runs whether it completed successfully. As a result,
it's not possible to incrementally fix multiple tests by just running
'make check', fixing a test, and repeating without re-running all the
tests contained in the test runner. Instead you can filter just the
tests you want to run by using the TESTNAME environment variable.
This also dispenses with the ability to run stage0 tests, but they
tended to be broken more often than not anyway.
2011-07-13 10:01:09 +08:00
|
|
|
# Arguments to the cfail/rfail/rpass/bench tests
|
|
|
|
ifdef CFG_VALGRIND
|
|
|
|
CTEST_RUNTOOL = --runtool "$(CFG_VALGRIND)"
|
2013-07-30 03:38:12 +08:00
|
|
|
TEST_BENCH =
|
The Big Test Suite Overhaul
This replaces the make-based test runner with a set of Rust-based test
runners. I believe that all existing functionality has been
preserved. The primary objective is to dogfood the Rust test
framework.
A few main things happen here:
1) The run-pass/lib-* tests are all moved into src/test/stdtest. This
is a standalone test crate intended for all standard library tests. It
compiles to build/test/stdtest.stageN.
2) rustc now compiles into yet another build artifact, this one a test
runner that runs any tests contained directly in the rustc crate. This
allows much more fine-grained unit testing of the compiler. It
compiles to build/test/rustctest.stageN.
3) There is a new custom test runner crate at src/test/compiletest
that reproduces all the functionality for running the compile-fail,
run-fail, run-pass and bench tests while integrating with Rust's test
framework. It compiles to build/test/compiletest.stageN.
4) The build rules have been completely changed to use the new test
runners, while also being less redundant, following the example of the
recent stageN.mk rewrite.
It adds two new features to the cfail/rfail/rpass/bench tests:
1) Tests can specify multiple 'error-pattern' directives which must be
satisfied in order.
2) Tests can specify a 'compile-flags' directive which will make the
test runner provide additional command line arguments to rustc.
There are some downsides, the primary being that Rust has to be
functioning pretty well just to run _any_ tests, which I imagine will
be the source of some frustration when the entire test suite
breaks. Will also cause some headaches during porting.
Not having individual make rules, each rpass, etc test no longer
remembers between runs whether it completed successfully. As a result,
it's not possible to incrementally fix multiple tests by just running
'make check', fixing a test, and repeating without re-running all the
tests contained in the test runner. Instead you can filter just the
tests you want to run by using the TESTNAME environment variable.
This also dispenses with the ability to run stage0 tests, but they
tended to be broken more often than not anyway.
2011-07-13 10:01:09 +08:00
|
|
|
endif
|
2011-06-30 06:12:02 +08:00
|
|
|
|
2013-07-30 03:43:45 +08:00
|
|
|
ifdef NO_BENCH
|
|
|
|
TEST_BENCH =
|
The Big Test Suite Overhaul
This replaces the make-based test runner with a set of Rust-based test
runners. I believe that all existing functionality has been
preserved. The primary objective is to dogfood the Rust test
framework.
A few main things happen here:
1) The run-pass/lib-* tests are all moved into src/test/stdtest. This
is a standalone test crate intended for all standard library tests. It
compiles to build/test/stdtest.stageN.
2) rustc now compiles into yet another build artifact, this one a test
runner that runs any tests contained directly in the rustc crate. This
allows much more fine-grained unit testing of the compiler. It
compiles to build/test/rustctest.stageN.
3) There is a new custom test runner crate at src/test/compiletest
that reproduces all the functionality for running the compile-fail,
run-fail, run-pass and bench tests while integrating with Rust's test
framework. It compiles to build/test/compiletest.stageN.
4) The build rules have been completely changed to use the new test
runners, while also being less redundant, following the example of the
recent stageN.mk rewrite.
It adds two new features to the cfail/rfail/rpass/bench tests:
1) Tests can specify multiple 'error-pattern' directives which must be
satisfied in order.
2) Tests can specify a 'compile-flags' directive which will make the
test runner provide additional command line arguments to rustc.
There are some downsides, the primary being that Rust has to be
functioning pretty well just to run _any_ tests, which I imagine will
be the source of some frustration when the entire test suite
breaks. Will also cause some headaches during porting.
Not having individual make rules, each rpass, etc test no longer
remembers between runs whether it completed successfully. As a result,
it's not possible to incrementally fix multiple tests by just running
'make check', fixing a test, and repeating without re-running all the
tests contained in the test runner. Instead you can filter just the
tests you want to run by using the TESTNAME environment variable.
This also dispenses with the ability to run stage0 tests, but they
tended to be broken more often than not anyway.
2011-07-13 10:01:09 +08:00
|
|
|
endif
|
2011-06-30 06:12:02 +08:00
|
|
|
|
2011-09-14 06:06:21 +08:00
|
|
|
# Arguments to the perf tests
|
|
|
|
ifdef CFG_PERF_TOOL
|
|
|
|
CTEST_PERF_RUNTOOL = --runtool "$(CFG_PERF_TOOL)"
|
|
|
|
endif
|
|
|
|
|
The Big Test Suite Overhaul
This replaces the make-based test runner with a set of Rust-based test
runners. I believe that all existing functionality has been
preserved. The primary objective is to dogfood the Rust test
framework.
A few main things happen here:
1) The run-pass/lib-* tests are all moved into src/test/stdtest. This
is a standalone test crate intended for all standard library tests. It
compiles to build/test/stdtest.stageN.
2) rustc now compiles into yet another build artifact, this one a test
runner that runs any tests contained directly in the rustc crate. This
allows much more fine-grained unit testing of the compiler. It
compiles to build/test/rustctest.stageN.
3) There is a new custom test runner crate at src/test/compiletest
that reproduces all the functionality for running the compile-fail,
run-fail, run-pass and bench tests while integrating with Rust's test
framework. It compiles to build/test/compiletest.stageN.
4) The build rules have been completely changed to use the new test
runners, while also being less redundant, following the example of the
recent stageN.mk rewrite.
It adds two new features to the cfail/rfail/rpass/bench tests:
1) Tests can specify multiple 'error-pattern' directives which must be
satisfied in order.
2) Tests can specify a 'compile-flags' directive which will make the
test runner provide additional command line arguments to rustc.
There are some downsides, the primary being that Rust has to be
functioning pretty well just to run _any_ tests, which I imagine will
be the source of some frustration when the entire test suite
breaks. Will also cause some headaches during porting.
Not having individual make rules, each rpass, etc test no longer
remembers between runs whether it completed successfully. As a result,
it's not possible to incrementally fix multiple tests by just running
'make check', fixing a test, and repeating without re-running all the
tests contained in the test runner. Instead you can filter just the
tests you want to run by using the TESTNAME environment variable.
This also dispenses with the ability to run stage0 tests, but they
tended to be broken more often than not anyway.
2011-07-13 10:01:09 +08:00
|
|
|
CTEST_TESTARGS := $(TESTARGS)
|
2011-05-02 04:18:52 +08:00
|
|
|
|
The Big Test Suite Overhaul
This replaces the make-based test runner with a set of Rust-based test
runners. I believe that all existing functionality has been
preserved. The primary objective is to dogfood the Rust test
framework.
A few main things happen here:
1) The run-pass/lib-* tests are all moved into src/test/stdtest. This
is a standalone test crate intended for all standard library tests. It
compiles to build/test/stdtest.stageN.
2) rustc now compiles into yet another build artifact, this one a test
runner that runs any tests contained directly in the rustc crate. This
allows much more fine-grained unit testing of the compiler. It
compiles to build/test/rustctest.stageN.
3) There is a new custom test runner crate at src/test/compiletest
that reproduces all the functionality for running the compile-fail,
run-fail, run-pass and bench tests while integrating with Rust's test
framework. It compiles to build/test/compiletest.stageN.
4) The build rules have been completely changed to use the new test
runners, while also being less redundant, following the example of the
recent stageN.mk rewrite.
It adds two new features to the cfail/rfail/rpass/bench tests:
1) Tests can specify multiple 'error-pattern' directives which must be
satisfied in order.
2) Tests can specify a 'compile-flags' directive which will make the
test runner provide additional command line arguments to rustc.
There are some downsides, the primary being that Rust has to be
functioning pretty well just to run _any_ tests, which I imagine will
be the source of some frustration when the entire test suite
breaks. Will also cause some headaches during porting.
Not having individual make rules, each rpass, etc test no longer
remembers between runs whether it completed successfully. As a result,
it's not possible to incrementally fix multiple tests by just running
'make check', fixing a test, and repeating without re-running all the
tests contained in the test runner. Instead you can filter just the
tests you want to run by using the TESTNAME environment variable.
This also dispenses with the ability to run stage0 tests, but they
tended to be broken more often than not anyway.
2011-07-13 10:01:09 +08:00
|
|
|
ifdef VERBOSE
|
|
|
|
CTEST_TESTARGS += --verbose
|
|
|
|
endif
|
2011-05-05 08:32:05 +08:00
|
|
|
|
2012-05-24 13:53:50 +08:00
|
|
|
# If we're running perf then set this environment variable
|
|
|
|
# to put the benchmarks into 'hard mode'
|
|
|
|
ifeq ($(MAKECMDGOALS),perf)
|
|
|
|
RUST_BENCH=1
|
|
|
|
export RUST_BENCH
|
|
|
|
endif
|
|
|
|
|
2013-02-06 12:05:00 +08:00
|
|
|
TEST_LOG_FILE=tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).log
|
|
|
|
TEST_OK_FILE=tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).ok
|
2012-05-24 13:53:50 +08:00
|
|
|
|
2013-07-16 09:52:08 +08:00
|
|
|
TEST_RATCHET_FILE=tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4)-metrics.json
|
|
|
|
TEST_RATCHET_NOISE_PERCENT=10.0
|
|
|
|
|
|
|
|
# Whether to ratchet or merely save benchmarks
|
|
|
|
ifdef CFG_RATCHET_BENCH
|
2013-08-24 06:30:23 +08:00
|
|
|
CRATE_TEST_EXTRA_ARGS=\
|
2013-07-30 03:38:12 +08:00
|
|
|
--test $(TEST_BENCH) \
|
2013-07-16 09:52:08 +08:00
|
|
|
--ratchet-metrics $(call TEST_RATCHET_FILE,$(1),$(2),$(3),$(4)) \
|
|
|
|
--ratchet-noise-percent $(TEST_RATCHET_NOISE_PERCENT)
|
|
|
|
else
|
2013-08-24 06:30:23 +08:00
|
|
|
CRATE_TEST_EXTRA_ARGS=\
|
2013-07-30 03:38:12 +08:00
|
|
|
--test $(TEST_BENCH) \
|
2013-07-16 09:52:08 +08:00
|
|
|
--save-metrics $(call TEST_RATCHET_FILE,$(1),$(2),$(3),$(4))
|
|
|
|
endif
|
|
|
|
|
2013-08-24 06:30:23 +08:00
|
|
|
# If we're sharding the testsuite between parallel testers,
|
|
|
|
# pass this argument along to the compiletest and crate test
|
|
|
|
# invocations.
|
|
|
|
ifdef TEST_SHARD
|
|
|
|
CTEST_TESTARGS += --test-shard=$(TEST_SHARD)
|
|
|
|
CRATE_TEST_EXTRA_ARGS += --test-shard=$(TEST_SHARD)
|
|
|
|
endif
|
|
|
|
|
2013-03-02 20:25:12 +08:00
|
|
|
define DEF_TARGET_COMMANDS
|
|
|
|
|
|
|
|
ifdef CFG_UNIXY_$(1)
|
|
|
|
CFG_RUN_TEST_$(1)=$$(call CFG_RUN_$(1),,$$(CFG_VALGRIND) $$(1))
|
|
|
|
endif
|
|
|
|
|
|
|
|
ifdef CFG_WINDOWSY_$(1)
|
2013-11-30 09:09:10 +08:00
|
|
|
CFG_TESTLIB_$(1)=$$(CFG_BUILD_DIR)$$(2)/$$(strip \
|
2013-03-02 20:25:12 +08:00
|
|
|
$$(if $$(findstring stage0,$$(1)), \
|
2014-01-14 08:45:33 +08:00
|
|
|
stage0/$$(CFG_LIBDIR_RELATIVE), \
|
2013-03-02 20:25:12 +08:00
|
|
|
$$(if $$(findstring stage1,$$(1)), \
|
2014-01-14 08:45:33 +08:00
|
|
|
stage1/$$(CFG_LIBDIR_RELATIVE), \
|
2013-03-02 20:25:12 +08:00
|
|
|
$$(if $$(findstring stage2,$$(1)), \
|
2014-01-14 08:45:33 +08:00
|
|
|
stage2/$$(CFG_LIBDIR_RELATIVE), \
|
2013-03-02 20:25:12 +08:00
|
|
|
$$(if $$(findstring stage3,$$(1)), \
|
2014-01-14 08:45:33 +08:00
|
|
|
stage3/$$(CFG_LIBDIR_RELATIVE), \
|
2014-03-26 09:18:57 +08:00
|
|
|
)))))/rustlib/$$(CFG_BUILD)/lib
|
2013-03-02 20:25:12 +08:00
|
|
|
CFG_RUN_TEST_$(1)=$$(call CFG_RUN_$(1),$$(call CFG_TESTLIB_$(1),$$(1),$$(3)),$$(1))
|
|
|
|
endif
|
|
|
|
|
|
|
|
# Run the compiletest runner itself under valgrind
|
|
|
|
ifdef CTEST_VALGRIND
|
2014-02-08 08:04:57 +08:00
|
|
|
CFG_RUN_CTEST_$(1)=$$(RPATH_VAR$$(1)_T_$$(3)_H_$$(3)) \
|
|
|
|
$$(call CFG_RUN_TEST_$$(CFG_BUILD),$$(2),$$(3))
|
2013-03-02 20:25:12 +08:00
|
|
|
else
|
2014-02-08 08:04:57 +08:00
|
|
|
CFG_RUN_CTEST_$(1)=$$(RPATH_VAR$$(1)_T_$$(3)_H_$$(3)) \
|
|
|
|
$$(call CFG_RUN_$$(CFG_BUILD),$$(TLIB$$(1)_T_$$(3)_H_$$(3)),$$(2))
|
2013-03-02 20:25:12 +08:00
|
|
|
endif
|
|
|
|
|
|
|
|
endef
|
|
|
|
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach target,$(CFG_TARGET), \
|
2013-03-02 20:25:12 +08:00
|
|
|
$(eval $(call DEF_TARGET_COMMANDS,$(target))))
|
|
|
|
|
2013-05-07 09:09:34 +08:00
|
|
|
# Target platform specific variables
|
2013-05-01 17:50:23 +08:00
|
|
|
# for arm-linux-androidabi
|
2013-05-03 23:49:18 +08:00
|
|
|
define DEF_ADB_DEVICE_STATUS
|
|
|
|
CFG_ADB_DEVICE_STATUS=$(1)
|
2013-05-01 17:50:23 +08:00
|
|
|
endef
|
|
|
|
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach target,$(CFG_TARGET), \
|
2013-05-03 23:49:18 +08:00
|
|
|
$(if $(findstring $(target),"arm-linux-androideabi"), \
|
|
|
|
$(if $(findstring adb,$(CFG_ADB)), \
|
2014-01-04 16:36:02 +08:00
|
|
|
$(if $(findstring device,$(shell $(CFG_ADB) devices 2>/dev/null | grep -E '^[:_A-Za-z0-9-]+[[:blank:]]+device')), \
|
2013-12-15 14:42:01 +08:00
|
|
|
$(info check: android device attached) \
|
|
|
|
$(eval $(call DEF_ADB_DEVICE_STATUS, true)), \
|
|
|
|
$(info check: android device not attached) \
|
|
|
|
$(eval $(call DEF_ADB_DEVICE_STATUS, false)) \
|
2013-05-01 17:50:23 +08:00
|
|
|
), \
|
2013-12-15 14:42:01 +08:00
|
|
|
$(info check: adb not found) \
|
|
|
|
$(eval $(call DEF_ADB_DEVICE_STATUS, false)) \
|
2013-05-03 23:49:18 +08:00
|
|
|
), \
|
2013-05-01 17:50:23 +08:00
|
|
|
) \
|
|
|
|
)
|
|
|
|
|
2013-05-03 23:49:18 +08:00
|
|
|
ifeq ($(CFG_ADB_DEVICE_STATUS),true)
|
|
|
|
CFG_ADB_TEST_DIR=/data/tmp
|
2013-05-01 17:50:23 +08:00
|
|
|
|
2013-05-03 23:49:18 +08:00
|
|
|
$(info check: android device test dir $(CFG_ADB_TEST_DIR) ready \
|
2014-01-04 16:36:02 +08:00
|
|
|
$(shell $(CFG_ADB) remount 1>/dev/null) \
|
|
|
|
$(shell $(CFG_ADB) shell rm -r $(CFG_ADB_TEST_DIR) >/dev/null) \
|
|
|
|
$(shell $(CFG_ADB) shell mkdir $(CFG_ADB_TEST_DIR)) \
|
|
|
|
$(shell $(CFG_ADB) shell mkdir $(CFG_ADB_TEST_DIR)/tmp) \
|
|
|
|
$(shell $(CFG_ADB) push $(S)src/etc/adb_run_wrapper.sh $(CFG_ADB_TEST_DIR) 1>/dev/null) \
|
2014-01-29 15:29:38 +08:00
|
|
|
$(foreach crate,$(TARGET_CRATES),\
|
|
|
|
$(shell $(CFG_ADB) push $(TLIB2_T_arm-linux-androideabi_H_$(CFG_BUILD))/$(call CFG_LIB_GLOB_arm-linux-androideabi,$(crate)) \
|
|
|
|
$(CFG_ADB_TEST_DIR)))\
|
2013-05-01 17:50:23 +08:00
|
|
|
)
|
2013-05-03 23:49:18 +08:00
|
|
|
else
|
|
|
|
CFG_ADB_TEST_DIR=
|
2013-05-01 17:50:23 +08:00
|
|
|
endif
|
2013-03-02 20:25:12 +08:00
|
|
|
|
|
|
|
|
The Big Test Suite Overhaul
This replaces the make-based test runner with a set of Rust-based test
runners. I believe that all existing functionality has been
preserved. The primary objective is to dogfood the Rust test
framework.
A few main things happen here:
1) The run-pass/lib-* tests are all moved into src/test/stdtest. This
is a standalone test crate intended for all standard library tests. It
compiles to build/test/stdtest.stageN.
2) rustc now compiles into yet another build artifact, this one a test
runner that runs any tests contained directly in the rustc crate. This
allows much more fine-grained unit testing of the compiler. It
compiles to build/test/rustctest.stageN.
3) There is a new custom test runner crate at src/test/compiletest
that reproduces all the functionality for running the compile-fail,
run-fail, run-pass and bench tests while integrating with Rust's test
framework. It compiles to build/test/compiletest.stageN.
4) The build rules have been completely changed to use the new test
runners, while also being less redundant, following the example of the
recent stageN.mk rewrite.
It adds two new features to the cfail/rfail/rpass/bench tests:
1) Tests can specify multiple 'error-pattern' directives which must be
satisfied in order.
2) Tests can specify a 'compile-flags' directive which will make the
test runner provide additional command line arguments to rustc.
There are some downsides, the primary being that Rust has to be
functioning pretty well just to run _any_ tests, which I imagine will
be the source of some frustration when the entire test suite
breaks. Will also cause some headaches during porting.
Not having individual make rules, each rpass, etc test no longer
remembers between runs whether it completed successfully. As a result,
it's not possible to incrementally fix multiple tests by just running
'make check', fixing a test, and repeating without re-running all the
tests contained in the test runner. Instead you can filter just the
tests you want to run by using the TESTNAME environment variable.
This also dispenses with the ability to run stage0 tests, but they
tended to be broken more often than not anyway.
2011-07-13 10:01:09 +08:00
|
|
|
######################################################################
|
|
|
|
# Main test targets
|
|
|
|
######################################################################
|
|
|
|
|
2014-02-22 17:09:59 +08:00
|
|
|
check: cleantmptestlogs cleantestlibs tidy check-notidy
|
|
|
|
|
|
|
|
check-notidy: cleantmptestlogs cleantestlibs all check-stage2
|
2013-02-06 06:14:58 +08:00
|
|
|
$(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log
|
|
|
|
|
|
|
|
check-lite: cleantestlibs cleantmptestlogs \
|
2014-02-04 07:53:43 +08:00
|
|
|
$(foreach crate,$(TARGET_CRATES),check-stage2-$(crate)) \
|
|
|
|
check-stage2-rpass \
|
2013-11-17 09:07:32 +08:00
|
|
|
check-stage2-rfail check-stage2-cfail check-stage2-rmake
|
2013-02-06 06:14:58 +08:00
|
|
|
$(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log
|
|
|
|
|
2014-02-14 19:34:18 +08:00
|
|
|
check-ref: cleantestlibs cleantmptestlogs check-stage2-rpass \
|
|
|
|
check-stage2-rfail check-stage2-cfail check-stage2-rmake
|
|
|
|
$(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log
|
|
|
|
|
2014-02-15 11:17:50 +08:00
|
|
|
check-docs: cleantestlibs cleantmptestlogs check-stage2-docs
|
|
|
|
$(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log
|
|
|
|
|
2014-02-15 11:21:17 +08:00
|
|
|
# NOTE: Remove after reprogramming windows bots
|
|
|
|
check-fast: check-lite
|
|
|
|
|
2012-04-05 08:57:36 +08:00
|
|
|
.PHONY: cleantmptestlogs cleantestlibs
|
|
|
|
|
2012-04-05 04:40:50 +08:00
|
|
|
cleantmptestlogs:
|
|
|
|
$(Q)rm -f tmp/*.log
|
2011-05-05 08:32:05 +08:00
|
|
|
|
2012-04-05 08:57:36 +08:00
|
|
|
cleantestlibs:
|
2013-10-21 17:18:21 +08:00
|
|
|
$(Q)find $(CFG_BUILD)/test \
|
2012-04-05 08:57:36 +08:00
|
|
|
-name '*.[odasS]' -o \
|
|
|
|
-name '*.so' -o \
|
|
|
|
-name '*.dylib' -o \
|
|
|
|
-name '*.dll' -o \
|
|
|
|
-name '*.def' -o \
|
|
|
|
-name '*.bc' -o \
|
|
|
|
-name '*.dSYM' -o \
|
2012-04-10 13:41:27 +08:00
|
|
|
-name '*.libaux' -o \
|
2012-04-05 08:57:36 +08:00
|
|
|
-name '*.out' -o \
|
2013-02-10 02:09:19 +08:00
|
|
|
-name '*.err' -o \
|
|
|
|
-name '*.debugger.script' \
|
2012-04-05 10:21:09 +08:00
|
|
|
| xargs rm -rf
|
2012-04-05 08:57:36 +08:00
|
|
|
|
2011-05-12 00:37:23 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
######################################################################
|
|
|
|
# Tidy
|
|
|
|
######################################################################
|
2012-06-02 03:28:01 +08:00
|
|
|
|
2011-11-19 06:03:11 +08:00
|
|
|
ifdef CFG_NOTIDY
|
The Big Test Suite Overhaul
This replaces the make-based test runner with a set of Rust-based test
runners. I believe that all existing functionality has been
preserved. The primary objective is to dogfood the Rust test
framework.
A few main things happen here:
1) The run-pass/lib-* tests are all moved into src/test/stdtest. This
is a standalone test crate intended for all standard library tests. It
compiles to build/test/stdtest.stageN.
2) rustc now compiles into yet another build artifact, this one a test
runner that runs any tests contained directly in the rustc crate. This
allows much more fine-grained unit testing of the compiler. It
compiles to build/test/rustctest.stageN.
3) There is a new custom test runner crate at src/test/compiletest
that reproduces all the functionality for running the compile-fail,
run-fail, run-pass and bench tests while integrating with Rust's test
framework. It compiles to build/test/compiletest.stageN.
4) The build rules have been completely changed to use the new test
runners, while also being less redundant, following the example of the
recent stageN.mk rewrite.
It adds two new features to the cfail/rfail/rpass/bench tests:
1) Tests can specify multiple 'error-pattern' directives which must be
satisfied in order.
2) Tests can specify a 'compile-flags' directive which will make the
test runner provide additional command line arguments to rustc.
There are some downsides, the primary being that Rust has to be
functioning pretty well just to run _any_ tests, which I imagine will
be the source of some frustration when the entire test suite
breaks. Will also cause some headaches during porting.
Not having individual make rules, each rpass, etc test no longer
remembers between runs whether it completed successfully. As a result,
it's not possible to incrementally fix multiple tests by just running
'make check', fixing a test, and repeating without re-running all the
tests contained in the test runner. Instead you can filter just the
tests you want to run by using the TESTNAME environment variable.
This also dispenses with the ability to run stage0 tests, but they
tended to be broken more often than not anyway.
2011-07-13 10:01:09 +08:00
|
|
|
tidy:
|
2011-11-19 06:03:11 +08:00
|
|
|
else
|
2012-04-01 13:49:11 +08:00
|
|
|
|
|
|
|
ALL_CS := $(wildcard $(S)src/rt/*.cpp \
|
|
|
|
$(S)src/rt/*/*.cpp \
|
|
|
|
$(S)src/rt/*/*/*.cpp \
|
2013-07-30 20:40:52 +08:00
|
|
|
$(S)src/rustllvm/*.cpp)
|
2013-07-30 20:44:34 +08:00
|
|
|
ALL_CS := $(filter-out $(S)src/rt/miniz.cpp \
|
2014-05-03 10:56:19 +08:00
|
|
|
$(wildcard $(S)src/rt/hoedown/src/*.c) \
|
|
|
|
$(wildcard $(S)src/rt/hoedown/bin/*.c) \
|
2012-04-01 13:49:11 +08:00
|
|
|
,$(ALL_CS))
|
|
|
|
ALL_HS := $(wildcard $(S)src/rt/*.h \
|
|
|
|
$(S)src/rt/*/*.h \
|
|
|
|
$(S)src/rt/*/*/*.h \
|
2013-07-30 20:40:52 +08:00
|
|
|
$(S)src/rustllvm/*.h)
|
2012-04-01 13:49:11 +08:00
|
|
|
ALL_HS := $(filter-out $(S)src/rt/vg/valgrind.h \
|
|
|
|
$(S)src/rt/vg/memcheck.h \
|
|
|
|
$(S)src/rt/msvc/typeof.h \
|
|
|
|
$(S)src/rt/msvc/stdint.h \
|
|
|
|
$(S)src/rt/msvc/inttypes.h \
|
2014-05-03 10:56:19 +08:00
|
|
|
$(wildcard $(S)src/rt/hoedown/src/*.h) \
|
|
|
|
$(wildcard $(S)src/rt/hoedown/bin/*.h) \
|
2012-04-01 13:49:11 +08:00
|
|
|
,$(ALL_HS))
|
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
# Run the tidy script in multiple parts to avoid huge 'echo' commands
|
2011-11-19 06:03:11 +08:00
|
|
|
tidy:
|
|
|
|
@$(call E, check: formatting)
|
2012-04-14 02:59:17 +08:00
|
|
|
$(Q)find $(S)src -name '*.r[sc]' \
|
2013-08-23 05:58:06 +08:00
|
|
|
| grep '^$(S)src/libuv' -v \
|
|
|
|
| grep '^$(S)src/llvm' -v \
|
2013-09-06 10:20:30 +08:00
|
|
|
| grep '^$(S)src/gyp' -v \
|
2014-02-06 07:19:40 +08:00
|
|
|
| grep '^$(S)src/libbacktrace' -v \
|
2012-10-30 04:10:54 +08:00
|
|
|
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
|
2012-04-14 02:59:17 +08:00
|
|
|
$(Q)find $(S)src/etc -name '*.py' \
|
2012-10-30 04:10:54 +08:00
|
|
|
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
|
2014-02-14 00:42:52 +08:00
|
|
|
$(Q)find $(S)src/doc -name '*.js' \
|
|
|
|
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
|
|
|
|
$(Q)find $(S)src/etc -name '*.sh' \
|
|
|
|
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
|
|
|
|
$(Q)find $(S)src/etc -name '*.pl' \
|
|
|
|
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
|
|
|
|
$(Q)find $(S)src/etc -name '*.c' \
|
|
|
|
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
|
|
|
|
$(Q)find $(S)src/etc -name '*.h' \
|
|
|
|
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
|
2012-04-14 02:59:17 +08:00
|
|
|
$(Q)echo $(ALL_CS) \
|
2013-05-04 07:25:04 +08:00
|
|
|
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
|
2012-04-14 02:59:17 +08:00
|
|
|
$(Q)echo $(ALL_HS) \
|
2013-05-04 07:25:04 +08:00
|
|
|
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
|
2013-10-01 01:14:40 +08:00
|
|
|
$(Q)find $(S)src -type f -perm +111 \
|
|
|
|
-not -name '*.rs' -and -not -name '*.py' \
|
|
|
|
-and -not -name '*.sh' \
|
|
|
|
| grep '^$(S)src/llvm' -v \
|
|
|
|
| grep '^$(S)src/libuv' -v \
|
2014-05-03 10:56:19 +08:00
|
|
|
| grep '^$(S)src/rt/hoedown' -v \
|
2013-10-01 01:14:40 +08:00
|
|
|
| grep '^$(S)src/gyp' -v \
|
|
|
|
| grep '^$(S)src/etc' -v \
|
2014-01-29 06:15:29 +08:00
|
|
|
| grep '^$(S)src/doc' -v \
|
2014-02-04 14:54:09 +08:00
|
|
|
| grep '^$(S)src/compiler-rt' -v \
|
2014-02-06 07:19:40 +08:00
|
|
|
| grep '^$(S)src/libbacktrace' -v \
|
2013-10-01 01:14:40 +08:00
|
|
|
| xargs $(CFG_PYTHON) $(S)src/etc/check-binaries.py
|
2012-04-10 09:04:37 +08:00
|
|
|
|
2011-11-19 06:03:11 +08:00
|
|
|
endif
|
2011-06-30 06:12:02 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
|
2012-01-21 10:05:07 +08:00
|
|
|
######################################################################
|
2013-02-06 06:14:58 +08:00
|
|
|
# Sets of tests
|
2012-01-21 10:05:07 +08:00
|
|
|
######################################################################
|
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
define DEF_TEST_SETS
|
|
|
|
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-exec: \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-rpass-exec \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-rfail-exec \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-cfail-exec \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-rpass-full-exec \
|
2014-04-29 08:31:43 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-cfail-full-exec \
|
2013-11-17 09:07:32 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-rmake-exec \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-crates-exec \
|
2013-12-23 05:46:06 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-doc-crates-exec \
|
2013-02-06 06:14:58 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-bench-exec \
|
2013-02-11 06:17:28 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-debuginfo-exec \
|
2013-07-06 16:03:03 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-codegen-exec \
|
2013-02-06 06:14:58 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-doc-exec \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-pretty-exec
|
|
|
|
|
2013-02-22 08:15:01 +08:00
|
|
|
# Only test the compiler-dependent crates when the target is
|
|
|
|
# able to build a compiler (when the target triple is in the set of host triples)
|
2013-10-21 17:18:21 +08:00
|
|
|
ifneq ($$(findstring $(2),$$(CFG_HOST)),)
|
2013-02-22 08:15:01 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-crates-exec: \
|
|
|
|
$$(foreach crate,$$(TEST_CRATES), \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-$$(crate)-exec)
|
|
|
|
|
2013-02-22 08:15:01 +08:00
|
|
|
else
|
|
|
|
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-crates-exec: \
|
|
|
|
$$(foreach crate,$$(TEST_TARGET_CRATES), \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-$$(crate)-exec)
|
|
|
|
|
|
|
|
endif
|
|
|
|
|
2013-12-23 05:46:06 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-doc-crates-exec: \
|
|
|
|
$$(foreach crate,$$(TEST_DOC_CRATES), \
|
2014-03-09 11:55:20 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-doc-crate-$$(crate)-exec)
|
2013-12-23 05:46:06 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-doc-exec: \
|
2014-03-08 22:41:31 +08:00
|
|
|
$$(foreach docname,$$(DOCS), \
|
2013-02-06 06:14:58 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-doc-$$(docname)-exec)
|
|
|
|
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-pretty-exec: \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-pretty-rpass-exec \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-pretty-rpass-full-exec \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-pretty-rfail-exec \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-pretty-bench-exec \
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-pretty-pretty-exec
|
2012-03-21 07:49:12 +08:00
|
|
|
|
2012-01-21 10:05:07 +08:00
|
|
|
endef
|
|
|
|
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach host,$(CFG_HOST), \
|
|
|
|
$(foreach target,$(CFG_TARGET), \
|
2013-02-06 06:14:58 +08:00
|
|
|
$(foreach stage,$(STAGES), \
|
|
|
|
$(eval $(call DEF_TEST_SETS,$(stage),$(target),$(host))))))
|
|
|
|
|
2012-01-21 10:05:07 +08:00
|
|
|
|
2011-06-30 06:12:02 +08:00
|
|
|
######################################################################
|
2013-02-06 06:14:58 +08:00
|
|
|
# Crate testing
|
2011-06-30 06:12:02 +08:00
|
|
|
######################################################################
|
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
define TEST_RUNNER
|
2012-01-18 08:45:22 +08:00
|
|
|
|
2014-02-13 16:35:43 +08:00
|
|
|
# If NO_REBUILD is set then break the dependencies on everything but
|
|
|
|
# the source files so we can test crates without rebuilding any of the
|
|
|
|
# parent crates.
|
2013-03-15 06:37:38 +08:00
|
|
|
ifeq ($(NO_REBUILD),)
|
2014-02-13 16:35:43 +08:00
|
|
|
TESTDEP_$(1)_$(2)_$(3)_$(4) = $$(SREQ$(1)_T_$(2)_H_$(3)) \
|
2014-01-29 15:29:38 +08:00
|
|
|
$$(foreach crate,$$(TARGET_CRATES),\
|
2014-02-13 16:35:43 +08:00
|
|
|
$$(TLIB$(1)_T_$(2)_H_$(3))/stamp.$$(crate)) \
|
|
|
|
$$(CRATE_FULLDEPS_$(1)_T_$(2)_H_$(3)_$(4))
|
2014-04-29 23:55:40 +08:00
|
|
|
|
|
|
|
# The regex crate depends on the regex_macros crate during testing, but it
|
|
|
|
# notably depend on the *host* regex_macros crate, not the target version.
|
|
|
|
# Additionally, this is not a dependency in stage1, only in stage2.
|
|
|
|
ifeq ($(4),regex)
|
|
|
|
ifneq ($(1),1)
|
|
|
|
TESTDEP_$(1)_$(2)_$(3)_$(4) += $$(TLIB$(1)_T_$(3)_H_$(3))/stamp.regex_macros
|
|
|
|
endif
|
|
|
|
endif
|
|
|
|
|
2013-03-15 06:37:38 +08:00
|
|
|
else
|
2014-02-13 16:35:43 +08:00
|
|
|
TESTDEP_$(1)_$(2)_$(3)_$(4) = $$(RSINPUTS_$(4))
|
2013-03-15 06:37:38 +08:00
|
|
|
endif
|
|
|
|
|
2014-03-26 10:23:45 +08:00
|
|
|
$(3)/stage$(1)/test/$(4)test-$(2)$$(X_$(2)): CFG_COMPILER_HOST_TRIPLE = $(2)
|
2014-01-25 11:27:22 +08:00
|
|
|
$(3)/stage$(1)/test/$(4)test-$(2)$$(X_$(2)): \
|
2014-02-13 16:35:43 +08:00
|
|
|
$$(CRATEFILE_$(4)) \
|
|
|
|
$$(TESTDEP_$(1)_$(2)_$(3)_$(4))
|
2014-02-09 18:42:28 +08:00
|
|
|
@$$(call E, oxidize: $$@)
|
2014-01-25 11:27:22 +08:00
|
|
|
$$(STAGE$(1)_T_$(2)_H_$(3)) -o $$@ $$< --test \
|
|
|
|
-L "$$(RT_OUTPUT_DIR_$(2))" \
|
|
|
|
-L "$$(LLVM_LIBDIR_$(2))"
|
2012-01-16 09:34:18 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
endef
|
|
|
|
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach host,$(CFG_HOST), \
|
|
|
|
$(eval $(foreach target,$(CFG_TARGET), \
|
2013-02-06 06:14:58 +08:00
|
|
|
$(eval $(foreach stage,$(STAGES), \
|
2014-01-25 11:27:22 +08:00
|
|
|
$(eval $(foreach crate,$(TEST_CRATES), \
|
|
|
|
$(eval $(call TEST_RUNNER,$(stage),$(target),$(host),$(crate))))))))))
|
2013-02-06 06:14:58 +08:00
|
|
|
|
|
|
|
define DEF_TEST_CRATE_RULES
|
2013-02-06 12:05:00 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec: $$(call TEST_OK_FILE,$(1),$(2),$(3),$(4))
|
|
|
|
|
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)): \
|
2013-05-14 06:19:48 +08:00
|
|
|
$(3)/stage$(1)/test/$(4)test-$(2)$$(X_$(2))
|
2012-01-16 09:34:18 +08:00
|
|
|
@$$(call E, run: $$<)
|
2013-12-13 10:07:23 +08:00
|
|
|
$$(Q)$$(call CFG_RUN_TEST_$(2),$$<,$(2),$(3)) $$(TESTARGS) \
|
|
|
|
--logfile $$(call TEST_LOG_FILE,$(1),$(2),$(3),$(4)) \
|
|
|
|
$$(call CRATE_TEST_EXTRA_ARGS,$(1),$(2),$(3),$(4)) \
|
|
|
|
&& touch $$@
|
2013-02-06 06:14:58 +08:00
|
|
|
endef
|
2012-01-16 09:34:18 +08:00
|
|
|
|
2013-05-01 17:50:23 +08:00
|
|
|
define DEF_TEST_CRATE_RULES_arm-linux-androideabi
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec: $$(call TEST_OK_FILE,$(1),$(2),$(3),$(4))
|
|
|
|
|
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)): \
|
2013-05-14 06:19:48 +08:00
|
|
|
$(3)/stage$(1)/test/$(4)test-$(2)$$(X_$(2))
|
2013-05-01 17:50:23 +08:00
|
|
|
@$$(call E, run: $$< via adb)
|
2014-01-14 11:44:18 +08:00
|
|
|
$$(Q)$(CFG_ADB) push $$< $(CFG_ADB_TEST_DIR)
|
|
|
|
$$(Q)$(CFG_ADB) shell '(cd $(CFG_ADB_TEST_DIR); LD_LIBRARY_PATH=. \
|
2013-08-20 15:50:45 +08:00
|
|
|
./$$(notdir $$<) \
|
2013-08-20 18:07:36 +08:00
|
|
|
--logfile $(CFG_ADB_TEST_DIR)/check-stage$(1)-T-$(2)-H-$(3)-$(4).log \
|
2014-01-14 11:44:18 +08:00
|
|
|
$$(call CRATE_TEST_EXTRA_ARGS,$(1),$(2),$(3),$(4)) $(TESTARGS))' \
|
2013-08-20 15:50:45 +08:00
|
|
|
> tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).tmp
|
2014-01-14 11:44:18 +08:00
|
|
|
$$(Q)cat tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).tmp
|
|
|
|
$$(Q)touch tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).log
|
|
|
|
$$(Q)$(CFG_ADB) pull $(CFG_ADB_TEST_DIR)/check-stage$(1)-T-$(2)-H-$(3)-$(4).log tmp/
|
|
|
|
$$(Q)$(CFG_ADB) shell rm $(CFG_ADB_TEST_DIR)/check-stage$(1)-T-$(2)-H-$(3)-$(4).log
|
|
|
|
$$(Q)$(CFG_ADB) pull $(CFG_ADB_TEST_DIR)/$$(call TEST_RATCHET_FILE,$(1),$(2),$(3),$(4)) tmp/
|
2013-05-01 17:50:23 +08:00
|
|
|
@if grep -q "result: ok" tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).tmp; \
|
|
|
|
then \
|
|
|
|
rm tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).tmp; \
|
|
|
|
touch $$@; \
|
|
|
|
else \
|
|
|
|
rm tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).tmp; \
|
|
|
|
exit 101; \
|
|
|
|
fi
|
|
|
|
endef
|
|
|
|
|
|
|
|
define DEF_TEST_CRATE_RULES_null
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec: $$(call TEST_OK_FILE,$(1),$(2),$(3),$(4))
|
|
|
|
|
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)): \
|
2013-05-14 06:19:48 +08:00
|
|
|
$(3)/stage$(1)/test/$(4)test-$(2)$$(X_$(2))
|
2013-12-15 14:42:01 +08:00
|
|
|
@$$(call E, failing: no device for $$< )
|
|
|
|
false
|
2013-05-01 17:50:23 +08:00
|
|
|
endef
|
|
|
|
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach host,$(CFG_HOST), \
|
|
|
|
$(foreach target,$(CFG_TARGET), \
|
2013-02-06 06:14:58 +08:00
|
|
|
$(foreach stage,$(STAGES), \
|
|
|
|
$(foreach crate, $(TEST_CRATES), \
|
2013-10-21 17:18:21 +08:00
|
|
|
$(if $(findstring $(target),$(CFG_BUILD)), \
|
2013-05-01 17:50:23 +08:00
|
|
|
$(eval $(call DEF_TEST_CRATE_RULES,$(stage),$(target),$(host),$(crate))), \
|
|
|
|
$(if $(findstring $(target),"arm-linux-androideabi"), \
|
2013-05-03 23:49:18 +08:00
|
|
|
$(if $(findstring $(CFG_ADB_DEVICE_STATUS),"true"), \
|
2013-05-01 17:50:23 +08:00
|
|
|
$(eval $(call DEF_TEST_CRATE_RULES_arm-linux-androideabi,$(stage),$(target),$(host),$(crate))), \
|
|
|
|
$(eval $(call DEF_TEST_CRATE_RULES_null,$(stage),$(target),$(host),$(crate))) \
|
|
|
|
), \
|
2013-05-03 23:49:18 +08:00
|
|
|
$(eval $(call DEF_TEST_CRATE_RULES,$(stage),$(target),$(host),$(crate))) \
|
2013-05-07 09:09:34 +08:00
|
|
|
))))))
|
2012-10-03 09:15:02 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
######################################################################
|
|
|
|
# Rules for the compiletest tests (rpass, rfail, etc.)
|
|
|
|
######################################################################
|
|
|
|
|
|
|
|
RPASS_RC := $(wildcard $(S)src/test/run-pass/*.rc)
|
|
|
|
RPASS_RS := $(wildcard $(S)src/test/run-pass/*.rs)
|
|
|
|
RPASS_FULL_RC := $(wildcard $(S)src/test/run-pass-fulldeps/*.rc)
|
|
|
|
RPASS_FULL_RS := $(wildcard $(S)src/test/run-pass-fulldeps/*.rs)
|
2014-04-29 08:31:43 +08:00
|
|
|
CFAIL_FULL_RC := $(wildcard $(S)src/test/compile-fail-fulldeps/*.rc)
|
|
|
|
CFAIL_FULL_RS := $(wildcard $(S)src/test/compile-fail-fulldeps/*.rs)
|
2013-02-06 06:14:58 +08:00
|
|
|
RFAIL_RC := $(wildcard $(S)src/test/run-fail/*.rc)
|
|
|
|
RFAIL_RS := $(wildcard $(S)src/test/run-fail/*.rs)
|
|
|
|
CFAIL_RC := $(wildcard $(S)src/test/compile-fail/*.rc)
|
|
|
|
CFAIL_RS := $(wildcard $(S)src/test/compile-fail/*.rs)
|
2013-08-31 04:17:53 +08:00
|
|
|
BENCH_RS := $(wildcard $(S)src/test/bench/*.rs)
|
2013-02-06 06:14:58 +08:00
|
|
|
PRETTY_RS := $(wildcard $(S)src/test/pretty/*.rs)
|
2013-02-11 05:35:20 +08:00
|
|
|
DEBUGINFO_RS := $(wildcard $(S)src/test/debug-info/*.rs)
|
2013-07-06 16:03:03 +08:00
|
|
|
CODEGEN_RS := $(wildcard $(S)src/test/codegen/*.rs)
|
|
|
|
CODEGEN_CC := $(wildcard $(S)src/test/codegen/*.cc)
|
2012-10-03 09:15:02 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
# perf tests are the same as bench tests only they run under
|
|
|
|
# a performance monitor.
|
2013-08-31 04:17:53 +08:00
|
|
|
PERF_RS := $(wildcard $(S)src/test/bench/*.rs)
|
2012-10-30 09:08:36 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
RPASS_TESTS := $(RPASS_RC) $(RPASS_RS)
|
|
|
|
RPASS_FULL_TESTS := $(RPASS_FULL_RC) $(RPASS_FULL_RS)
|
2014-04-29 08:31:43 +08:00
|
|
|
CFAIL_FULL_TESTS := $(CFAIL_FULL_RC) $(CFAIL_FULL_RS)
|
2013-02-06 06:14:58 +08:00
|
|
|
RFAIL_TESTS := $(RFAIL_RC) $(RFAIL_RS)
|
|
|
|
CFAIL_TESTS := $(CFAIL_RC) $(CFAIL_RS)
|
|
|
|
BENCH_TESTS := $(BENCH_RS)
|
|
|
|
PERF_TESTS := $(PERF_RS)
|
|
|
|
PRETTY_TESTS := $(PRETTY_RS)
|
2013-02-10 02:09:19 +08:00
|
|
|
DEBUGINFO_TESTS := $(DEBUGINFO_RS)
|
2013-07-06 16:03:03 +08:00
|
|
|
CODEGEN_TESTS := $(CODEGEN_RS) $(CODEGEN_CC)
|
2012-06-03 12:30:26 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
CTEST_SRC_BASE_rpass = run-pass
|
|
|
|
CTEST_BUILD_BASE_rpass = run-pass
|
|
|
|
CTEST_MODE_rpass = run-pass
|
|
|
|
CTEST_RUNTOOL_rpass = $(CTEST_RUNTOOL)
|
2012-06-03 12:30:26 +08:00
|
|
|
|
2013-10-26 08:04:37 +08:00
|
|
|
CTEST_SRC_BASE_rpass-full = run-pass-fulldeps
|
|
|
|
CTEST_BUILD_BASE_rpass-full = run-pass-fulldeps
|
2013-02-06 06:14:58 +08:00
|
|
|
CTEST_MODE_rpass-full = run-pass
|
|
|
|
CTEST_RUNTOOL_rpass-full = $(CTEST_RUNTOOL)
|
|
|
|
|
2014-04-29 08:31:43 +08:00
|
|
|
CTEST_SRC_BASE_cfail-full = compile-fail-fulldeps
|
|
|
|
CTEST_BUILD_BASE_cfail-full = compile-fail-fulldeps
|
|
|
|
CTEST_MODE_cfail-full = compile-fail
|
|
|
|
CTEST_RUNTOOL_cfail-full = $(CTEST_RUNTOOL)
|
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
CTEST_SRC_BASE_rfail = run-fail
|
|
|
|
CTEST_BUILD_BASE_rfail = run-fail
|
|
|
|
CTEST_MODE_rfail = run-fail
|
|
|
|
CTEST_RUNTOOL_rfail = $(CTEST_RUNTOOL)
|
|
|
|
|
|
|
|
CTEST_SRC_BASE_cfail = compile-fail
|
|
|
|
CTEST_BUILD_BASE_cfail = compile-fail
|
|
|
|
CTEST_MODE_cfail = compile-fail
|
|
|
|
CTEST_RUNTOOL_cfail = $(CTEST_RUNTOOL)
|
|
|
|
|
2013-08-31 04:17:29 +08:00
|
|
|
CTEST_SRC_BASE_bench = bench
|
2013-02-06 06:14:58 +08:00
|
|
|
CTEST_BUILD_BASE_bench = bench
|
|
|
|
CTEST_MODE_bench = run-pass
|
|
|
|
CTEST_RUNTOOL_bench = $(CTEST_RUNTOOL)
|
|
|
|
|
|
|
|
CTEST_SRC_BASE_perf = bench
|
|
|
|
CTEST_BUILD_BASE_perf = perf
|
|
|
|
CTEST_MODE_perf = run-pass
|
|
|
|
CTEST_RUNTOOL_perf = $(CTEST_PERF_RUNTOOL)
|
|
|
|
|
2013-02-10 02:09:19 +08:00
|
|
|
CTEST_SRC_BASE_debuginfo = debug-info
|
|
|
|
CTEST_BUILD_BASE_debuginfo = debug-info
|
|
|
|
CTEST_MODE_debuginfo = debug-info
|
|
|
|
CTEST_RUNTOOL_debuginfo = $(CTEST_RUNTOOL)
|
|
|
|
|
2013-07-06 16:03:03 +08:00
|
|
|
CTEST_SRC_BASE_codegen = codegen
|
|
|
|
CTEST_BUILD_BASE_codegen = codegen
|
|
|
|
CTEST_MODE_codegen = codegen
|
|
|
|
CTEST_RUNTOOL_codegen = $(CTEST_RUNTOOL)
|
|
|
|
|
2014-04-03 01:25:29 +08:00
|
|
|
# CTEST_DISABLE_$(TEST_GROUP), if set, will cause the test group to be
|
|
|
|
# disabled and the associated message to be printed as a warning
|
|
|
|
# during attempts to run those tests.
|
|
|
|
|
2013-02-11 06:17:28 +08:00
|
|
|
ifeq ($(CFG_GDB),)
|
|
|
|
CTEST_DISABLE_debuginfo = "no gdb found"
|
|
|
|
endif
|
|
|
|
|
2013-07-06 16:03:03 +08:00
|
|
|
ifeq ($(CFG_CLANG),)
|
|
|
|
CTEST_DISABLE_codegen = "no clang found"
|
|
|
|
endif
|
|
|
|
|
2013-03-13 02:58:50 +08:00
|
|
|
ifeq ($(CFG_OSTYPE),apple-darwin)
|
|
|
|
CTEST_DISABLE_debuginfo = "gdb on darwing needs root"
|
|
|
|
endif
|
|
|
|
|
2014-04-03 01:25:29 +08:00
|
|
|
# CTEST_DISABLE_NONSELFHOST_$(TEST_GROUP), if set, will cause that
|
|
|
|
# test group to be disabled *unless* the target is able to build a
|
|
|
|
# compiler (i.e. when the target triple is in the set of of host
|
|
|
|
# triples). The associated message will be printed as a warning
|
|
|
|
# during attempts to run those tests.
|
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
define DEF_CTEST_VARS
|
|
|
|
|
|
|
|
# All the per-stage build rules you might want to call from the
|
|
|
|
# command line.
|
|
|
|
#
|
|
|
|
# $(1) is the stage number
|
|
|
|
# $(2) is the target triple to test
|
|
|
|
# $(3) is the host triple to test
|
|
|
|
|
|
|
|
# Prerequisites for compiletest tests
|
|
|
|
TEST_SREQ$(1)_T_$(2)_H_$(3) = \
|
2013-03-02 20:25:12 +08:00
|
|
|
$$(HBIN$(1)_H_$(3))/compiletest$$(X_$(3)) \
|
2013-02-06 06:14:58 +08:00
|
|
|
$$(SREQ$(1)_T_$(2)_H_$(3))
|
2012-06-03 12:30:26 +08:00
|
|
|
|
2011-10-03 08:37:50 +08:00
|
|
|
# Rules for the cfail/rfail/rpass/bench/perf test runner
|
2011-08-02 05:10:59 +08:00
|
|
|
|
2013-07-24 17:26:52 +08:00
|
|
|
# The tests select when to use debug configuration on their own;
|
|
|
|
# remove directive, if present, from CFG_RUSTC_FLAGS (issue #7898).
|
2013-09-18 12:02:11 +08:00
|
|
|
CTEST_RUSTC_FLAGS := $$(subst --cfg ndebug,,$$(CFG_RUSTC_FLAGS))
|
2013-08-11 15:29:45 +08:00
|
|
|
|
|
|
|
# The tests can not be optimized while the rest of the compiler is optimized, so
|
|
|
|
# filter out the optimization (if any) from rustc and then figure out if we need
|
|
|
|
# to be optimized
|
|
|
|
CTEST_RUSTC_FLAGS := $$(subst -O,,$$(CTEST_RUSTC_FLAGS))
|
|
|
|
ifndef CFG_DISABLE_OPTIMIZE_TESTS
|
|
|
|
CTEST_RUSTC_FLAGS += -O
|
|
|
|
endif
|
2013-07-24 17:26:52 +08:00
|
|
|
|
2011-11-22 14:45:14 +08:00
|
|
|
CTEST_COMMON_ARGS$(1)-T-$(2)-H-$(3) := \
|
2011-11-22 05:11:40 +08:00
|
|
|
--compile-lib-path $$(HLIB$(1)_H_$(3)) \
|
|
|
|
--run-lib-path $$(TLIB$(1)_T_$(2)_H_$(3)) \
|
2013-03-02 20:25:12 +08:00
|
|
|
--rustc-path $$(HBIN$(1)_H_$(3))/rustc$$(X_$(3)) \
|
2013-07-06 16:03:03 +08:00
|
|
|
--clang-path $(if $(CFG_CLANG),$(CFG_CLANG),clang) \
|
2013-10-21 17:18:21 +08:00
|
|
|
--llvm-bin-path $(CFG_LLVM_INST_DIR_$(CFG_BUILD))/bin \
|
2012-02-28 22:45:33 +08:00
|
|
|
--aux-base $$(S)src/test/auxiliary/ \
|
2011-11-22 05:11:40 +08:00
|
|
|
--stage-id stage$(1)-$(2) \
|
2013-05-01 17:50:23 +08:00
|
|
|
--target $(2) \
|
2014-01-18 02:18:02 +08:00
|
|
|
--host $(3) \
|
2013-05-03 23:49:18 +08:00
|
|
|
--adb-path=$(CFG_ADB) \
|
2013-05-02 12:16:01 +08:00
|
|
|
--adb-test-dir=$(CFG_ADB_TEST_DIR) \
|
2014-04-18 01:32:44 +08:00
|
|
|
--host-rustcflags "$(RUSTC_FLAGS_$(3)) $$(CTEST_RUSTC_FLAGS) -L $$(RT_OUTPUT_DIR_$(3))" \
|
2014-02-12 05:51:08 +08:00
|
|
|
--target-rustcflags "$(RUSTC_FLAGS_$(2)) $$(CTEST_RUSTC_FLAGS) -L $$(RT_OUTPUT_DIR_$(2))" \
|
2011-11-22 05:11:40 +08:00
|
|
|
$$(CTEST_TESTARGS)
|
|
|
|
|
2013-02-06 12:05:00 +08:00
|
|
|
CTEST_DEPS_rpass_$(1)-T-$(2)-H-$(3) = $$(RPASS_TESTS)
|
2014-04-17 23:52:25 +08:00
|
|
|
CTEST_DEPS_rpass-full_$(1)-T-$(2)-H-$(3) = $$(RPASS_FULL_TESTS) $$(CSREQ$(1)_T_$(3)_H_$(3)) $$(SREQ$(1)_T_$(2)_H_$(3))
|
2014-04-29 08:31:43 +08:00
|
|
|
CTEST_DEPS_cfail-full_$(1)-T-$(2)-H-$(3) = $$(CFAIL_FULL_TESTS) $$(CSREQ$(1)_T_$(3)_H_$(3)) $$(SREQ$(1)_T_$(2)_H_$(3))
|
2013-02-06 12:05:00 +08:00
|
|
|
CTEST_DEPS_rfail_$(1)-T-$(2)-H-$(3) = $$(RFAIL_TESTS)
|
|
|
|
CTEST_DEPS_cfail_$(1)-T-$(2)-H-$(3) = $$(CFAIL_TESTS)
|
|
|
|
CTEST_DEPS_bench_$(1)-T-$(2)-H-$(3) = $$(BENCH_TESTS)
|
|
|
|
CTEST_DEPS_perf_$(1)-T-$(2)-H-$(3) = $$(PERF_TESTS)
|
2013-02-10 02:09:19 +08:00
|
|
|
CTEST_DEPS_debuginfo_$(1)-T-$(2)-H-$(3) = $$(DEBUGINFO_TESTS)
|
2013-07-06 16:03:03 +08:00
|
|
|
CTEST_DEPS_codegen_$(1)-T-$(2)-H-$(3) = $$(CODEGEN_TESTS)
|
2011-11-22 05:11:40 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
endef
|
2011-11-22 05:11:40 +08:00
|
|
|
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach host,$(CFG_HOST), \
|
|
|
|
$(eval $(foreach target,$(CFG_TARGET), \
|
2013-02-06 06:14:58 +08:00
|
|
|
$(eval $(foreach stage,$(STAGES), \
|
|
|
|
$(eval $(call DEF_CTEST_VARS,$(stage),$(target),$(host))))))))
|
2011-11-22 05:11:40 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
define DEF_RUN_COMPILETEST
|
2012-06-02 07:01:53 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
CTEST_ARGS$(1)-T-$(2)-H-$(3)-$(4) := \
|
|
|
|
$$(CTEST_COMMON_ARGS$(1)-T-$(2)-H-$(3)) \
|
2013-08-31 04:17:29 +08:00
|
|
|
--src-base $$(S)src/test/$$(CTEST_SRC_BASE_$(4))/ \
|
2013-02-06 06:14:58 +08:00
|
|
|
--build-base $(3)/test/$$(CTEST_BUILD_BASE_$(4))/ \
|
2013-07-16 09:52:08 +08:00
|
|
|
--ratchet-metrics $(call TEST_RATCHET_FILE,$(1),$(2),$(3),$(4)) \
|
2013-02-06 06:14:58 +08:00
|
|
|
--mode $$(CTEST_MODE_$(4)) \
|
|
|
|
$$(CTEST_RUNTOOL_$(4))
|
2011-11-22 05:11:40 +08:00
|
|
|
|
2013-02-06 12:05:00 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec: $$(call TEST_OK_FILE,$(1),$(2),$(3),$(4))
|
|
|
|
|
2014-04-03 01:25:29 +08:00
|
|
|
# CTEST_DONT_RUN_$(1)-T-$(2)-H-$(3)-$(4)
|
|
|
|
# Goal: leave this variable as empty string if we should run the test.
|
|
|
|
# Otherwise, set it to the reason we are not running the test.
|
|
|
|
# (Encoded as a separate variable because GNU make does not have a
|
|
|
|
# good way to express OR on ifeq commands)
|
2013-02-11 06:17:28 +08:00
|
|
|
|
2014-04-03 01:25:29 +08:00
|
|
|
ifneq ($$(CTEST_DISABLE_$(4)),)
|
|
|
|
# Test suite is disabled for all configured targets.
|
|
|
|
CTEST_DONT_RUN_$(1)-T-$(2)-H-$(3)-$(4) := $$(CTEST_DISABLE_$(4))
|
|
|
|
else
|
|
|
|
# else, check if non-self-hosted target (i.e. target not-in hosts) ...
|
|
|
|
ifeq ($$(findstring $(2),$$(CFG_HOST)),)
|
|
|
|
# ... if so, then check if this test suite is disabled for non-selfhosts.
|
|
|
|
ifneq ($$(CTEST_DISABLE_NONSELFHOST_$(4)),)
|
|
|
|
# Test suite is disabled for this target.
|
|
|
|
CTEST_DONT_RUN_$(1)-T-$(2)-H-$(3)-$(4) := $$(CTEST_DISABLE_NONSELFHOST_$(4))
|
|
|
|
endif
|
|
|
|
endif
|
|
|
|
# Neither DISABLE nor DISABLE_NONSELFHOST is set ==> okay, run the test.
|
|
|
|
endif
|
|
|
|
|
|
|
|
ifeq ($$(CTEST_DONT_RUN_$(1)-T-$(2)-H-$(3)-$(4)),)
|
2013-02-06 12:05:00 +08:00
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)): \
|
2013-02-06 06:14:58 +08:00
|
|
|
$$(TEST_SREQ$(1)_T_$(2)_H_$(3)) \
|
|
|
|
$$(CTEST_DEPS_$(4)_$(1)-T-$(2)-H-$(3))
|
2013-05-01 17:50:23 +08:00
|
|
|
@$$(call E, run $(4) [$(2)]: $$<)
|
2013-03-02 20:25:12 +08:00
|
|
|
$$(Q)$$(call CFG_RUN_CTEST_$(2),$(1),$$<,$(3)) \
|
2013-02-06 06:14:58 +08:00
|
|
|
$$(CTEST_ARGS$(1)-T-$(2)-H-$(3)-$(4)) \
|
2013-02-06 12:05:00 +08:00
|
|
|
--logfile $$(call TEST_LOG_FILE,$(1),$(2),$(3),$(4)) \
|
|
|
|
&& touch $$@
|
2011-11-22 05:11:40 +08:00
|
|
|
|
2013-02-11 06:17:28 +08:00
|
|
|
else
|
|
|
|
|
2014-04-03 01:25:29 +08:00
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)):
|
2013-05-01 17:50:23 +08:00
|
|
|
@$$(call E, run $(4) [$(2)]: $$<)
|
2014-04-03 01:25:29 +08:00
|
|
|
@$$(call E, warning: tests disabled: $$(CTEST_DONT_RUN_$(1)-T-$(2)-H-$(3)-$(4)))
|
2013-02-11 06:17:28 +08:00
|
|
|
touch $$@
|
|
|
|
|
|
|
|
endif
|
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
endef
|
2011-11-22 14:45:14 +08:00
|
|
|
|
2014-04-29 08:31:43 +08:00
|
|
|
CTEST_NAMES = rpass rpass-full cfail-full rfail cfail bench perf debuginfo codegen
|
2012-06-02 07:01:53 +08:00
|
|
|
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach host,$(CFG_HOST), \
|
|
|
|
$(eval $(foreach target,$(CFG_TARGET), \
|
2013-02-06 06:14:58 +08:00
|
|
|
$(eval $(foreach stage,$(STAGES), \
|
|
|
|
$(eval $(foreach name,$(CTEST_NAMES), \
|
|
|
|
$(eval $(call DEF_RUN_COMPILETEST,$(stage),$(target),$(host),$(name))))))))))
|
|
|
|
|
|
|
|
PRETTY_NAMES = pretty-rpass pretty-rpass-full pretty-rfail pretty-bench pretty-pretty
|
|
|
|
PRETTY_DEPS_pretty-rpass = $(RPASS_TESTS)
|
|
|
|
PRETTY_DEPS_pretty-rpass-full = $(RPASS_FULL_TESTS)
|
|
|
|
PRETTY_DEPS_pretty-rfail = $(RFAIL_TESTS)
|
|
|
|
PRETTY_DEPS_pretty-bench = $(BENCH_TESTS)
|
|
|
|
PRETTY_DEPS_pretty-pretty = $(PRETTY_TESTS)
|
|
|
|
PRETTY_DIRNAME_pretty-rpass = run-pass
|
2013-10-26 08:04:37 +08:00
|
|
|
PRETTY_DIRNAME_pretty-rpass-full = run-pass-fulldeps
|
2013-02-06 06:14:58 +08:00
|
|
|
PRETTY_DIRNAME_pretty-rfail = run-fail
|
|
|
|
PRETTY_DIRNAME_pretty-bench = bench
|
|
|
|
PRETTY_DIRNAME_pretty-pretty = pretty
|
|
|
|
|
|
|
|
define DEF_RUN_PRETTY_TEST
|
|
|
|
|
|
|
|
PRETTY_ARGS$(1)-T-$(2)-H-$(3)-$(4) := \
|
2011-11-22 14:45:14 +08:00
|
|
|
$$(CTEST_COMMON_ARGS$(1)-T-$(2)-H-$(3)) \
|
2013-02-06 06:14:58 +08:00
|
|
|
--src-base $$(S)src/test/$$(PRETTY_DIRNAME_$(4))/ \
|
|
|
|
--build-base $(3)/test/$$(PRETTY_DIRNAME_$(4))/ \
|
2011-11-22 14:45:14 +08:00
|
|
|
--mode pretty
|
|
|
|
|
2013-02-06 12:05:00 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec: $$(call TEST_OK_FILE,$(1),$(2),$(3),$(4))
|
|
|
|
|
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)): \
|
2013-02-06 06:14:58 +08:00
|
|
|
$$(TEST_SREQ$(1)_T_$(2)_H_$(3)) \
|
|
|
|
$$(PRETTY_DEPS_$(4))
|
2013-05-01 17:50:23 +08:00
|
|
|
@$$(call E, run pretty-rpass [$(2)]: $$<)
|
2013-03-02 20:25:12 +08:00
|
|
|
$$(Q)$$(call CFG_RUN_CTEST_$(2),$(1),$$<,$(3)) \
|
2013-02-06 06:14:58 +08:00
|
|
|
$$(PRETTY_ARGS$(1)-T-$(2)-H-$(3)-$(4)) \
|
2013-02-06 12:05:00 +08:00
|
|
|
--logfile $$(call TEST_LOG_FILE,$(1),$(2),$(3),$(4)) \
|
|
|
|
&& touch $$@
|
2011-11-22 14:45:14 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
endef
|
2011-11-22 14:45:14 +08:00
|
|
|
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach host,$(CFG_HOST), \
|
|
|
|
$(foreach target,$(CFG_TARGET), \
|
2013-02-06 06:14:58 +08:00
|
|
|
$(foreach stage,$(STAGES), \
|
|
|
|
$(foreach pretty-name,$(PRETTY_NAMES), \
|
|
|
|
$(eval $(call DEF_RUN_PRETTY_TEST,$(stage),$(target),$(host),$(pretty-name)))))))
|
2012-01-21 10:05:07 +08:00
|
|
|
|
2012-09-16 09:06:20 +08:00
|
|
|
|
2014-03-08 22:41:31 +08:00
|
|
|
######################################################################
|
|
|
|
# Crate & freestanding documentation tests
|
|
|
|
######################################################################
|
|
|
|
|
|
|
|
define DEF_RUSTDOC
|
|
|
|
RUSTDOC_EXE_$(1)_T_$(2)_H_$(3) := $$(HBIN$(1)_H_$(3))/rustdoc$$(X_$(3))
|
|
|
|
RUSTDOC_$(1)_T_$(2)_H_$(3) := $$(RPATH_VAR$(1)_T_$(2)_H_$(3)) $$(RUSTDOC_EXE_$(1)_T_$(2)_H_$(3))
|
|
|
|
endef
|
|
|
|
|
|
|
|
$(foreach host,$(CFG_HOST), \
|
|
|
|
$(foreach target,$(CFG_TARGET), \
|
|
|
|
$(foreach stage,$(STAGES), \
|
|
|
|
$(eval $(call DEF_RUSTDOC,$(stage),$(target),$(host))))))
|
|
|
|
|
|
|
|
# Freestanding
|
|
|
|
|
|
|
|
define DEF_DOC_TEST
|
2012-09-16 09:06:20 +08:00
|
|
|
|
2013-02-19 09:08:03 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-doc-$(4)-exec: $$(call TEST_OK_FILE,$(1),$(2),$(3),doc-$(4))
|
2013-02-06 12:05:00 +08:00
|
|
|
|
2014-03-08 22:41:31 +08:00
|
|
|
# If NO_REBUILD is set then break the dependencies on everything but
|
|
|
|
# the source files so we can test documentation without rebuilding
|
|
|
|
# rustdoc etc.
|
|
|
|
ifeq ($(NO_REBUILD),)
|
|
|
|
DOCTESTDEP_$(1)_$(2)_$(3)_$(4) = \
|
|
|
|
$$(D)/$(4).md \
|
|
|
|
$$(TEST_SREQ$(1)_T_$(2)_H_$(3)) \
|
|
|
|
$$(RUSTDOC_EXE_$(1)_T_$(2)_H_$(3))
|
|
|
|
else
|
|
|
|
DOCTESTDEP_$(1)_$(2)_$(3)_$(4) = $$(D)/$(4).md
|
|
|
|
endif
|
2012-09-16 09:06:20 +08:00
|
|
|
|
2014-03-08 22:41:31 +08:00
|
|
|
ifeq ($(2),$$(CFG_BUILD))
|
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),doc-$(4)): $$(DOCTESTDEP_$(1)_$(2)_$(3)_$(4))
|
|
|
|
@$$(call E, run doc-$(4) [$(2)])
|
2014-04-02 06:04:42 +08:00
|
|
|
$$(Q)$$(RUSTDOC_$(1)_T_$(2)_H_$(3)) --cfg dox --test $$< --test-args "$$(TESTARGS)" && touch $$@
|
2014-03-08 22:41:31 +08:00
|
|
|
else
|
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),doc-$(4)):
|
|
|
|
touch $$@
|
|
|
|
endif
|
2013-02-06 06:14:58 +08:00
|
|
|
endef
|
2012-09-23 06:33:50 +08:00
|
|
|
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach host,$(CFG_HOST), \
|
|
|
|
$(foreach target,$(CFG_TARGET), \
|
2013-02-06 06:14:58 +08:00
|
|
|
$(foreach stage,$(STAGES), \
|
2014-03-08 22:41:31 +08:00
|
|
|
$(foreach docname,$(DOCS), \
|
|
|
|
$(eval $(call DEF_DOC_TEST,$(stage),$(target),$(host),$(docname)))))))
|
|
|
|
|
|
|
|
# Crates
|
2012-03-21 07:49:12 +08:00
|
|
|
|
2013-12-23 05:46:06 +08:00
|
|
|
define DEF_CRATE_DOC_TEST
|
|
|
|
|
2014-02-13 16:35:43 +08:00
|
|
|
# If NO_REBUILD is set then break the dependencies on everything but
|
|
|
|
# the source files so we can test crate documentation without
|
|
|
|
# rebuilding any of the parent crates.
|
|
|
|
ifeq ($(NO_REBUILD),)
|
2014-03-08 22:41:31 +08:00
|
|
|
CRATEDOCTESTDEP_$(1)_$(2)_$(3)_$(4) = \
|
2014-02-13 16:35:43 +08:00
|
|
|
$$(TEST_SREQ$(1)_T_$(2)_H_$(3)) \
|
|
|
|
$$(CRATE_FULLDEPS_$(1)_T_$(2)_H_$(3)_$(4)) \
|
2014-03-08 22:41:31 +08:00
|
|
|
$$(RUSTDOC_EXE_$(1)_T_$(2)_H_$(3))
|
2014-02-13 16:35:43 +08:00
|
|
|
else
|
2014-03-08 22:41:31 +08:00
|
|
|
CRATEDOCTESTDEP_$(1)_$(2)_$(3)_$(4) = $$(RSINPUTS_$(4))
|
2014-02-13 16:35:43 +08:00
|
|
|
endif
|
|
|
|
|
2014-03-09 11:55:20 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-doc-crate-$(4)-exec: \
|
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),doc-crate-$(4))
|
2013-12-23 05:46:06 +08:00
|
|
|
|
2013-12-25 14:59:38 +08:00
|
|
|
ifeq ($(2),$$(CFG_BUILD))
|
2014-03-09 11:55:20 +08:00
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),doc-crate-$(4)): $$(CRATEDOCTESTDEP_$(1)_$(2)_$(3)_$(4))
|
|
|
|
@$$(call E, run doc-crate-$(4) [$(2)])
|
2014-03-08 22:41:31 +08:00
|
|
|
$$(Q)$$(RUSTDOC_$(1)_T_$(2)_H_$(3)) --test \
|
2014-02-22 08:32:49 +08:00
|
|
|
$$(CRATEFILE_$(4)) --test-args "$$(TESTARGS)" && touch $$@
|
2013-12-25 14:59:38 +08:00
|
|
|
else
|
2014-03-09 11:55:20 +08:00
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),doc-crate-$(4)):
|
2013-12-25 14:59:38 +08:00
|
|
|
touch $$@
|
|
|
|
endif
|
2013-12-23 05:46:06 +08:00
|
|
|
|
|
|
|
endef
|
|
|
|
|
|
|
|
$(foreach host,$(CFG_HOST), \
|
2013-12-25 14:59:38 +08:00
|
|
|
$(foreach target,$(CFG_TARGET), \
|
|
|
|
$(foreach stage,$(STAGES), \
|
|
|
|
$(foreach crate,$(TEST_DOC_CRATES), \
|
|
|
|
$(eval $(call DEF_CRATE_DOC_TEST,$(stage),$(target),$(host),$(crate)))))))
|
2011-07-13 06:27:00 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
######################################################################
|
|
|
|
# Shortcut rules
|
|
|
|
######################################################################
|
2011-08-02 02:57:05 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
TEST_GROUPS = \
|
|
|
|
crates \
|
|
|
|
$(foreach crate,$(TEST_CRATES),$(crate)) \
|
2014-03-09 11:55:20 +08:00
|
|
|
$(foreach crate,$(TEST_DOC_CRATES),doc-crate-$(crate)) \
|
2013-02-06 06:14:58 +08:00
|
|
|
rpass \
|
|
|
|
rpass-full \
|
2014-04-29 08:31:43 +08:00
|
|
|
cfail-full \
|
2013-02-06 06:14:58 +08:00
|
|
|
rfail \
|
|
|
|
cfail \
|
|
|
|
bench \
|
|
|
|
perf \
|
2013-11-17 09:07:32 +08:00
|
|
|
rmake \
|
2013-02-10 02:09:19 +08:00
|
|
|
debuginfo \
|
2013-07-06 16:03:03 +08:00
|
|
|
codegen \
|
2013-02-06 06:14:58 +08:00
|
|
|
doc \
|
2014-03-08 22:41:31 +08:00
|
|
|
$(foreach docname,$(DOCS),doc-$(docname)) \
|
2013-02-06 06:14:58 +08:00
|
|
|
pretty \
|
|
|
|
pretty-rpass \
|
|
|
|
pretty-rpass-full \
|
|
|
|
pretty-rfail \
|
|
|
|
pretty-bench \
|
|
|
|
pretty-pretty \
|
|
|
|
$(NULL)
|
|
|
|
|
|
|
|
define DEF_CHECK_FOR_STAGE_AND_TARGET_AND_HOST
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3): check-stage$(1)-T-$(2)-H-$(3)-exec
|
|
|
|
endef
|
2011-08-03 05:37:03 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
$(foreach stage,$(STAGES), \
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach target,$(CFG_TARGET), \
|
|
|
|
$(foreach host,$(CFG_HOST), \
|
2013-02-06 06:14:58 +08:00
|
|
|
$(eval $(call DEF_CHECK_FOR_STAGE_AND_TARGET_AND_HOST,$(stage),$(target),$(host))))))
|
2011-07-31 12:44:30 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
define DEF_CHECK_FOR_STAGE_AND_TARGET_AND_HOST_AND_GROUP
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-$(4): check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec
|
|
|
|
endef
|
2012-01-21 10:05:07 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
$(foreach stage,$(STAGES), \
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach target,$(CFG_TARGET), \
|
|
|
|
$(foreach host,$(CFG_HOST), \
|
2013-02-06 06:14:58 +08:00
|
|
|
$(foreach group,$(TEST_GROUPS), \
|
|
|
|
$(eval $(call DEF_CHECK_FOR_STAGE_AND_TARGET_AND_HOST_AND_GROUP,$(stage),$(target),$(host),$(group)))))))
|
2012-09-16 09:06:20 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
define DEF_CHECK_FOR_STAGE
|
2013-10-21 17:18:21 +08:00
|
|
|
check-stage$(1): check-stage$(1)-H-$$(CFG_BUILD)
|
|
|
|
check-stage$(1)-H-all: $$(foreach target,$$(CFG_TARGET), \
|
2013-02-06 06:14:58 +08:00
|
|
|
check-stage$(1)-H-$$(target))
|
|
|
|
endef
|
2012-09-16 09:06:20 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
$(foreach stage,$(STAGES), \
|
|
|
|
$(eval $(call DEF_CHECK_FOR_STAGE,$(stage))))
|
2012-09-16 09:06:20 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
define DEF_CHECK_FOR_STAGE_AND_GROUP
|
2013-10-21 17:18:21 +08:00
|
|
|
check-stage$(1)-$(2): check-stage$(1)-H-$$(CFG_BUILD)-$(2)
|
|
|
|
check-stage$(1)-H-all-$(2): $$(foreach target,$$(CFG_TARGET), \
|
2013-02-06 06:14:58 +08:00
|
|
|
check-stage$(1)-H-$$(target)-$(2))
|
|
|
|
endef
|
2012-09-23 06:33:50 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
$(foreach stage,$(STAGES), \
|
|
|
|
$(foreach group,$(TEST_GROUPS), \
|
|
|
|
$(eval $(call DEF_CHECK_FOR_STAGE_AND_GROUP,$(stage),$(group)))))
|
2012-03-21 07:49:12 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
|
|
|
|
define DEF_CHECK_FOR_STAGE_AND_HOSTS
|
2013-10-21 17:18:21 +08:00
|
|
|
check-stage$(1)-H-$(2): $$(foreach target,$$(CFG_TARGET), \
|
2013-02-06 06:14:58 +08:00
|
|
|
check-stage$(1)-T-$$(target)-H-$(2))
|
The Big Test Suite Overhaul
This replaces the make-based test runner with a set of Rust-based test
runners. I believe that all existing functionality has been
preserved. The primary objective is to dogfood the Rust test
framework.
A few main things happen here:
1) The run-pass/lib-* tests are all moved into src/test/stdtest. This
is a standalone test crate intended for all standard library tests. It
compiles to build/test/stdtest.stageN.
2) rustc now compiles into yet another build artifact, this one a test
runner that runs any tests contained directly in the rustc crate. This
allows much more fine-grained unit testing of the compiler. It
compiles to build/test/rustctest.stageN.
3) There is a new custom test runner crate at src/test/compiletest
that reproduces all the functionality for running the compile-fail,
run-fail, run-pass and bench tests while integrating with Rust's test
framework. It compiles to build/test/compiletest.stageN.
4) The build rules have been completely changed to use the new test
runners, while also being less redundant, following the example of the
recent stageN.mk rewrite.
It adds two new features to the cfail/rfail/rpass/bench tests:
1) Tests can specify multiple 'error-pattern' directives which must be
satisfied in order.
2) Tests can specify a 'compile-flags' directive which will make the
test runner provide additional command line arguments to rustc.
There are some downsides, the primary being that Rust has to be
functioning pretty well just to run _any_ tests, which I imagine will
be the source of some frustration when the entire test suite
breaks. Will also cause some headaches during porting.
Not having individual make rules, each rpass, etc test no longer
remembers between runs whether it completed successfully. As a result,
it's not possible to incrementally fix multiple tests by just running
'make check', fixing a test, and repeating without re-running all the
tests contained in the test runner. Instead you can filter just the
tests you want to run by using the TESTNAME environment variable.
This also dispenses with the ability to run stage0 tests, but they
tended to be broken more often than not anyway.
2011-07-13 10:01:09 +08:00
|
|
|
endef
|
2011-07-13 06:27:00 +08:00
|
|
|
|
2013-02-06 06:14:58 +08:00
|
|
|
$(foreach stage,$(STAGES), \
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach host,$(CFG_HOST), \
|
2013-02-06 06:14:58 +08:00
|
|
|
$(eval $(call DEF_CHECK_FOR_STAGE_AND_HOSTS,$(stage),$(host)))))
|
|
|
|
|
|
|
|
define DEF_CHECK_FOR_STAGE_AND_HOSTS_AND_GROUP
|
2013-10-21 17:18:21 +08:00
|
|
|
check-stage$(1)-H-$(2)-$(3): $$(foreach target,$$(CFG_TARGET), \
|
2013-02-06 06:14:58 +08:00
|
|
|
check-stage$(1)-T-$$(target)-H-$(2)-$(3))
|
|
|
|
endef
|
|
|
|
|
|
|
|
$(foreach stage,$(STAGES), \
|
2013-10-21 17:18:21 +08:00
|
|
|
$(foreach host,$(CFG_HOST), \
|
2013-02-06 06:14:58 +08:00
|
|
|
$(foreach group,$(TEST_GROUPS), \
|
|
|
|
$(eval $(call DEF_CHECK_FOR_STAGE_AND_HOSTS_AND_GROUP,$(stage),$(host),$(group))))))
|
2011-07-13 06:27:00 +08:00
|
|
|
|
2014-02-15 11:17:50 +08:00
|
|
|
define DEF_CHECK_DOC_FOR_STAGE
|
2014-03-08 22:41:31 +08:00
|
|
|
check-stage$(1)-docs: $$(foreach docname,$$(DOCS),\
|
2014-02-15 11:17:50 +08:00
|
|
|
check-stage$(1)-T-$$(CFG_BUILD)-H-$$(CFG_BUILD)-doc-$$(docname)) \
|
2014-03-08 22:41:31 +08:00
|
|
|
$$(foreach crate,$$(TEST_DOC_CRATES),\
|
2014-03-09 11:55:20 +08:00
|
|
|
check-stage$(1)-T-$$(CFG_BUILD)-H-$$(CFG_BUILD)-doc-crate-$$(crate))
|
2014-02-15 11:17:50 +08:00
|
|
|
endef
|
|
|
|
|
|
|
|
$(foreach stage,$(STAGES), \
|
|
|
|
$(eval $(call DEF_CHECK_DOC_FOR_STAGE,$(stage))))
|
|
|
|
|
|
|
|
define DEF_CHECK_CRATE
|
|
|
|
check-$(1): check-stage2-T-$$(CFG_BUILD)-H-$$(CFG_BUILD)-$(1)-exec
|
|
|
|
endef
|
|
|
|
|
|
|
|
$(foreach crate,$(TEST_CRATES), \
|
|
|
|
$(eval $(call DEF_CHECK_CRATE,$(crate))))
|
|
|
|
|
2011-11-29 23:07:25 +08:00
|
|
|
######################################################################
|
2014-02-15 11:21:17 +08:00
|
|
|
# RMAKE rules
|
2011-11-29 23:07:25 +08:00
|
|
|
######################################################################
|
2011-11-18 04:04:37 +08:00
|
|
|
|
2013-11-17 09:07:32 +08:00
|
|
|
RMAKE_TESTS := $(shell ls -d $(S)src/test/run-make/*/)
|
|
|
|
RMAKE_TESTS := $(RMAKE_TESTS:$(S)src/test/run-make/%/=%)
|
|
|
|
|
|
|
|
define DEF_RMAKE_FOR_T_H
|
|
|
|
# $(1) the stage
|
|
|
|
# $(2) target triple
|
|
|
|
# $(3) host triple
|
|
|
|
|
2013-12-21 10:06:12 +08:00
|
|
|
|
|
|
|
ifeq ($(2)$(3),$$(CFG_BUILD)$$(CFG_BUILD))
|
2013-11-17 09:07:32 +08:00
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-rmake-exec: \
|
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),rmake)
|
|
|
|
|
|
|
|
$$(call TEST_OK_FILE,$(1),$(2),$(3),rmake): \
|
|
|
|
$$(RMAKE_TESTS:%=$(3)/test/run-make/%-$(1)-T-$(2)-H-$(3).ok)
|
|
|
|
@touch $$@
|
|
|
|
|
|
|
|
$(3)/test/run-make/%-$(1)-T-$(2)-H-$(3).ok: \
|
|
|
|
$(S)src/test/run-make/%/Makefile \
|
2013-12-18 01:05:36 +08:00
|
|
|
$$(CSREQ$(1)_T_$(2)_H_$(3))
|
2013-11-17 09:07:32 +08:00
|
|
|
@rm -rf $(3)/test/run-make/$$*
|
|
|
|
@mkdir -p $(3)/test/run-make/$$*
|
2013-12-03 06:51:47 +08:00
|
|
|
$$(Q)$$(CFG_PYTHON) $(S)src/etc/maketest.py $$(dir $$<) \
|
2014-02-27 12:13:08 +08:00
|
|
|
$$(MAKE) \
|
2013-11-17 09:07:32 +08:00
|
|
|
$$(HBIN$(1)_H_$(3))/rustc$$(X_$(3)) \
|
2013-11-29 10:03:38 +08:00
|
|
|
$(3)/test/run-make/$$* \
|
2013-12-18 01:05:36 +08:00
|
|
|
"$$(CC_$(3)) $$(CFG_GCCISH_CFLAGS_$(3))" \
|
2014-01-04 03:16:52 +08:00
|
|
|
$$(HBIN$(1)_H_$(3))/rustdoc$$(X_$(3)) \
|
2014-02-22 08:32:49 +08:00
|
|
|
"$$(TESTNAME)" \
|
|
|
|
"$$(RPATH_VAR$(1)_T_$(2)_H_$(3))"
|
2013-11-17 09:07:32 +08:00
|
|
|
@touch $$@
|
2013-12-21 10:06:12 +08:00
|
|
|
else
|
|
|
|
# FIXME #11094 - The above rule doesn't work right for multiple targets
|
|
|
|
check-stage$(1)-T-$(2)-H-$(3)-rmake-exec:
|
|
|
|
@true
|
|
|
|
|
|
|
|
endif
|
|
|
|
|
2013-11-17 09:07:32 +08:00
|
|
|
|
|
|
|
endef
|
|
|
|
|
|
|
|
$(foreach stage,$(STAGES), \
|
|
|
|
$(foreach target,$(CFG_TARGET), \
|
|
|
|
$(foreach host,$(CFG_HOST), \
|
|
|
|
$(eval $(call DEF_RMAKE_FOR_T_H,$(stage),$(target),$(host))))))
|