selftests: vm: Try harder to allocate huge pages

If we need to increase the number of huge pages, drop caches first
to reduce fragmentation and then check that we actually allocated
as many as we wanted.  Retry once if that doesn't work.

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Author:    Ben Hutchings <ben@decadent.org.uk>
Date:      2015-11-02 12:22:22 +00:00
Committer: Shuah Khan
Commit:    ee00479d67
Parent:    3b4d3819ec

1 changed file with 14 additions and 1 deletion

@@ -20,13 +20,26 @@ done < /proc/meminfo
 if [ -n "$freepgs" ] && [ -n "$pgsize" ]; then
 	nr_hugepgs=`cat /proc/sys/vm/nr_hugepages`
 	needpgs=`expr $needmem / $pgsize`
-	if [ $freepgs -lt $needpgs ]; then
+	tries=2
+	while [ $tries -gt 0 ] && [ $freepgs -lt $needpgs ]; do
 		lackpgs=$(( $needpgs - $freepgs ))
+		echo 3 > /proc/sys/vm/drop_caches
 		echo $(( $lackpgs + $nr_hugepgs )) > /proc/sys/vm/nr_hugepages
 		if [ $? -ne 0 ]; then
 			echo "Please run this test as root"
 			exit 1
 		fi
+		while read name size unit; do
+			if [ "$name" = "HugePages_Free:" ]; then
+				freepgs=$size
+			fi
+		done < /proc/meminfo
+		tries=$((tries - 1))
+	done
+	if [ $freepgs -lt $needpgs ]; then
+		printf "Not enough huge pages available (%d < %d)\n" \
+		       $freepgs $needpgs
+		exit 1
 	fi
 else
 	echo "no hugetlbfs support in kernel?"