Commit Graph

66 Commits

Jiapeng Chong b5dd034f8f UBI: Fastmap: Fix kernel-doc
drivers/mtd/ubi/fastmap.c:104: warning: expecting prototype for new_fm_vhdr(). Prototype was for new_fm_vbuf() instead.

Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=2289
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2023-02-02 21:13:51 +01:00
Christophe JAILLET e7f35da21f ubi: fastmap: Use the bitmap API to allocate bitmaps
Use bitmap_zalloc()/bitmap_free() instead of hand-writing them.

It is less verbose and it improves the semantics.
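
As an illustration of the pattern (a minimal before/after sketch, not the actual diff; 'nbits' is a placeholder for the real bitmap size):

/* Before: hand-written bitmap allocation and free */
unsigned long *map = kcalloc(BITS_TO_LONGS(nbits), sizeof(unsigned long),
			     GFP_KERNEL);
/* ... */
kfree(map);

/* After: the bitmap API hides the size computation */
unsigned long *map = bitmap_zalloc(nbits, GFP_KERNEL);
/* ... */
bitmap_free(map);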

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2022-09-21 11:32:58 +02:00
Zhihao Cheng d09e9a2bdd ubi: fastmap: Fix high cpu usage of ubi_bgt by making sure wl_pool not empty
There are at least 6 PEBs reserved on a UBI device:
1. EBA_RESERVED_PEBS[1]
2. WL_RESERVED_PEBS[1]
3. UBI_LAYOUT_VOLUME_EBS[2]
4. MIN_FASTMAP_RESERVED_PEBS[2]

When all UBI volumes take all their PEBs, there are 3 (EBA_RESERVED_PEBS +
WL_RESERVED_PEBS + MIN_FASTMAP_RESERVED_PEBS - MIN_FASTMAP_TAKEN_PEBS[1])
free PEBs. Since commit f9c34bb529 ("ubi: Fix producing anchor PEBs")
and commit 4b68bf9a69 ("ubi: Select fastmap anchor PEBs considering
wear level rules") were applied, there is only 1 (3 - FASTMAP_ANCHOR_PEBS[1] -
FASTMAP_NEXT_ANCHOR_PEBS[1]) free PEB left to fill pool and wl_pool; after
filling the pool, wl_pool is always empty. So UBI can get stuck in an
infinite loop:

	ubi_thread	   system_wq
wear_leveling_worker <--------------------------------------------------
  get_peb_for_wl							|
    // fm_wl_pool, used = size = 0					|
    schedule_work(&ubi->fm_work)					|
									|
		    update_fastmap_work_fn				|
		      ubi_update_fastmap				|
			ubi_refill_pools				|
			// ubi->free_count - ubi->beb_rsvd_pebs < 5	|
			// wl_pool is not filled with any PEBs		|
			schedule_erase(old_fm_anchor)			|
			ubi_ensure_anchor_pebs				|
			  __schedule_ubi_work(wear_leveling_worker)	|
									|
__erase_worker								|
  ensure_wear_leveling							|
    __schedule_ubi_work(wear_leveling_worker) --------------------------

, which causes high CPU usage of ubi_bgt:
top - 12:10:42 up 5 min,  2 users,  load average: 1.76, 0.68, 0.27
Tasks: 123 total,   3 running,  54 sleeping,   0 stopped,   0 zombie

  PID USER PR   NI VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1589 root 20   0   0      0      0 R  45.0  0.0   0:38.86 ubi_bgt0d
  319 root 20   0   0      0      0 I  15.2  0.0   0:15.29 kworker/0:3-eve
  371 root 20   0   0      0      0 I  14.9  0.0   0:12.85 kworker/3:3-eve
   20 root 20   0   0      0      0 I  11.3  0.0   0:05.33 kworker/1:0-eve
  202 root 20   0   0      0      0 I  11.3  0.0   0:04.93 kworker/2:3-eve

In commit 4b68bf9a69 ("ubi: Select fastmap anchor PEBs considering
wear level rules"), there are three key changes:
  1) Choose the fastmap anchor when the most free PEBs are available.
  2) Enable anchor move within the anchor area again as it is useful
     for distributing wear.
  3) Introduce a candidate fm anchor and check this PEB's erase count during
     wear leveling. If the wear leveling limit is exceeded, use the used
     anchor area PEB with the lowest erase count to replace it.

The anchor candidate can be removed; we can check the fm_anchor PEB's erase
count during wear leveling. Fix it by:
  1) Removing 'fm_next_anchor' and checking 'fm_anchor' during wear leveling.
  2) Preferentially filling one free PEB into fm_wl_pool when
     ubi->free_count > ubi->beb_rsvd_pebs, then trying to reserve enough
     free PEBs for fastmap non-anchor PEBs once the above prerequisite
     is met.
Then there is at least 1 PEB in pool and 1 PEB in wl_pool after calling
ubi_refill_pools() once all erase works are done.
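
Sketched in C, the refill priority described in 2) looks roughly like this (heavily simplified, with hypothetical helper names; not the actual ubi_refill_pools()):

static void refill_pools_sketch(struct ubi_device *ubi)
{
	/* The anchor PEB for the next fastmap is reserved first. */
	reserve_fm_anchor(ubi);				/* hypothetical */

	/* Preferentially give fm_wl_pool one free PEB as long as the
	 * bad-block reserve is untouched, so the wear-leveling worker
	 * never spins on an empty pool. */
	if (ubi->free_count > ubi->beb_rsvd_pebs)
		add_to_wl_pool(ubi, take_free_peb(ubi));	/* hypothetical */

	/* Only afterwards reserve free PEBs for fastmap non-anchor data. */
	reserve_fm_data_pebs(ubi);			/* hypothetical */
}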

A reproducer is available at [Link].

Fixes: 4b68bf9a69 ("ubi: Select fastmap anchor PEBs ... rules")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=215407
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2022-05-27 16:46:02 +02:00
Zhihao Cheng c3c07fc25f ubi: fastmap: Return error code if memory allocation fails in add_aeb()
Abort fastmap scanning and return an error code if memory allocation fails
in add_aeb(). Otherwise UBI will end up with wrong PEB statistics
after scanning.
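
The fix amounts to propagating the allocation failure to the caller instead of ignoring it. A simplified sketch of the pattern (not the exact diff; the ubi_alloc_aeb() helper comes from the "provide helpers to allocate and free aeb elements" commit further down this log):

static int add_aeb_sketch(struct ubi_attach_info *ai, struct list_head *list,
			  int pnum, int ec, int scrub)
{
	struct ubi_ainf_peb *aeb;

	aeb = ubi_alloc_aeb(ai, pnum, ec);
	if (!aeb)
		return -ENOMEM;		/* was silently ignored before */

	aeb->lnum = -1;
	aeb->scrub = scrub;
	list_add_tail(&aeb->u.list, list);
	return 0;
}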

Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2022-01-10 23:09:08 +01:00
Arne Edholm 4b68bf9a69 ubi: Select fastmap anchor PEBs considering wear level rules
There is a risk that the fastmap anchor PEB alternates between
just two PEBs: the current anchor and the previous anchor that was just
deleted. As the fastmap pools get the first take on free PEBs, the
pools may leave no free PEBs to be selected as the new anchor,
resulting in the two PEBs alternating. If the anchor PEBs get
a high erase count they will not be used by the pools but remain in
ubi->free, further increasing the likelihood they will be used as
anchors.

Getting stuck using only a couple of PEBs continuously results in
uneven wear, eventually leading to failure.

To fix this:

- Choose the fastmap anchor when the most free PEBs are available. This is
  during rebuilding of the fastmap pools, after the unused pool PEBs are
  added to ubi->free but before the pools are populated again from the
  free PEBs. Also reserve an additional second best PEB as a candidate
  for the next time the fastmap anchor is updated. If a better PEB is
  found the next time the fastmap anchor is updated, the candidate is
  made available for building the pools.

- Enable anchor move within the anchor area again as it is useful for
  distributing wear.

- The anchor candidate for the next fastmap update is the most suited free
  PEB. Check this PEB's erase count during wear leveling. If the wear
  leveling limit is exceeded, the PEB is considered unsuitable for now. As
  all other unused anchor area PEBs should be even worse, free up the
  used anchor area PEB with the lowest erase count.

Signed-off-by: Arne Edholm <arne.edholm@axis.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2020-06-02 22:53:05 +02:00
Dan Carpenter 5d3805af27 ubi: Fix an error pointer dereference in error handling code
If "seen_pebs = init_seen(ubi);" fails then "seen_pebs" is an error pointer
and we try to kfree() it which results in an Oops.

This patch re-arranges the error handling so now it only frees things
which have been allocated successfully.
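
The general pattern behind the fix (a minimal sketch; the label is a placeholder, not the real one):

seen_pebs = init_seen(ubi);
if (IS_ERR(seen_pebs)) {
	ret = PTR_ERR(seen_pebs);
	goto out_free_earlier;	/* unwind only earlier, successful allocations */
}
/* ... */
out_free_earlier:
	/* never kfree() an ERR_PTR value */
	return ret;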

Fixes: daef3dd1f0 ("UBI: Fastmap: Add self check to detect absent PEBs")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2020-01-19 23:23:28 +01:00
Sascha Hauer ef5aafb6e4 ubi: fastmap: Fix inverted logic in seen selfcheck
set_seen() sets the bit corresponding to the PEB number in the bitmap,
so when self_check_seen() wants to find PEBs that haven't been seen we
have to print the PEBs that have their bit cleared, not the ones which
have it set.
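
In other words, the corrected check has to warn about bits that are still clear (a simplified sketch of the idea):

/* set_seen() marks a PEB that the fastmap code visited: */
set_bit(pnum, seen);

/* self_check_seen() must complain about PEBs that were never visited: */
for (pnum = 0; pnum < ubi->peb_count; pnum++) {
	if (ubi->lookuptbl[pnum] && !test_bit(pnum, seen)) {
		ubi_err(ubi, "self-check failed for PEB %d, fastmap didn't see it", pnum);
		ret = -EINVAL;
	}
}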

Fixes: 5d71afb008 ("ubi: Use bitmaps in Fastmap self-check code")
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
2020-01-16 23:34:50 +01:00
Sascha Hauer f9c34bb529 ubi: Fix producing anchor PEBs
When a new fastmap is about to be written UBI must make sure it has a
free block available for a fastmap anchor. For this ubi_update_fastmap()
calls ubi_ensure_anchor_pebs(). This stopped working with 2e8f08deab
("ubi: Fix races around ubi_refill_pools()"): since that commit the wear
leveling code is blocked and can no longer produce free PEBs. UBI then
more often than not falls back to writing the new fastmap anchor to the
same block it was already on, which means the same erase block gets
erased during each fastmap write and wears out quite fast.

As the locking prevents us from producing the anchor PEB when we
actually need it, this patch changes the strategy for creating the
anchor PEB. We no longer create it on demand right before we want to
write a fastmap, but instead we create an anchor PEB right after we have
written a fastmap. This gives us enough time to produce a new anchor PEB
before it is needed. To make sure we have an anchor PEB for the very
first fastmap write we call ubi_ensure_anchor_pebs() during
initialisation as well.

Fixes: 2e8f08deab ("ubi: Fix races around ubi_refill_pools()")
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
2019-11-17 22:45:57 +01:00
Thomas Gleixner 50acfb2b76 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 286
Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license as published by
  the free software foundation version 2 this program is distributed
  in the hope that it will be useful but without any warranty without
  even the implied warranty of merchantability or fitness for a
  particular purpose see the gnu general public license for more
  details

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 97 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190529141901.025053186@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-05 17:36:37 +02:00
Richard Weinberger 34653fd8c4 ubi: fastmap: Check each mapping only once
Maintain a bitmap to keep track of which LEB->PEB mapping
was checked already.
That way we have to read back VID headers only once.
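
Conceptually (a minimal sketch; the bitmap name is a placeholder):

/* One bit per PEB, set once its LEB->PEB mapping has been verified. */
checked = kcalloc(BITS_TO_LONGS(ubi->peb_count), sizeof(unsigned long),
		  GFP_KERNEL);

if (!test_bit(pnum, checked)) {
	/* read back the VID header and verify the mapping ... */
	set_bit(pnum, checked);
}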

Signed-off-by: Richard Weinberger <richard@nod.at>
2018-06-07 15:53:16 +02:00
Colin Ian King f50629df49 ubi: fastmap: Clean up the initialization of pointer p
The pointer p is being initialized with one value and a few lines
later being set to a newer replacement value. Clean up the code by
using the latter assignment to p as the initial value. Cleans up
clang warning:

drivers/mtd/ubi/fastmap.c:217:19: warning: Value stored to 'p'
during its initialization is never read

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2018-01-17 21:48:02 +01:00
Pan Bian af7bcee276 ubi: fastmap: Use kmem_cache_free to deallocate memory
Memory allocated by kmem_cache_alloc() should not be deallocated with
kfree(). Use kmem_cache_free() instead.
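
The rule being applied, in short (sketch):

struct ubi_ainf_peb *aeb;

aeb = kmem_cache_alloc(ai->aeb_slab_cache, GFP_KERNEL);
if (!aeb)
	return -ENOMEM;
/* ... */
kmem_cache_free(ai->aeb_slab_cache, aeb);	/* not kfree(aeb) */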

Signed-off-by: Pan Bian <bianpan2016@163.com>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2018-01-17 21:48:02 +01:00
Colin Ian King d2e43d192b ubi: fastmap: fix spelling mistake: "invalidiate" -> "invalidate"
Trivial fix to spelling mistake in ubi_err error message

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2017-09-13 22:05:30 +02:00
Rabin Vincent 8a1435880f ubi: fastmap: Fix slab corruption
Booting with UBI fastmap and SLUB debugging enabled results in the
following splats.  The problem is that ubi_scan_fastmap() moves the
fastmap blocks from the scan_ai (allocated in scan_fast()) to the ai
allocated in ubi_attach().  This results in two problems:

 - When the scan_ai is freed, aebs which were allocated from its slab
   cache are still in use.

 - When the other ai is being destroyed in destroy_ai(), the
   arguments to the kmem_cache_free() call are incorrect since aebs on its
   ->fastmap list were allocated with a slab cache from a different ai.

Fix this by making a copy of the aebs in ubi_scan_fastmap() instead of
moving them.
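
A sketch of the copy approach (simplified; variable names are illustrative):

/* Duplicate each fastmap aeb into the destination ai's own slab cache
 * instead of moving the original objects between the two ai's. */
list_for_each_entry(old, &scan_ai->fastmap, u.list) {
	new = kmem_cache_alloc(ai->aeb_slab_cache, GFP_KERNEL);
	if (!new)
		return -ENOMEM;

	*new = *old;
	list_add(&new->u.list, &ai->fastmap);
}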

 =============================================================================
 BUG ubi_aeb_slab_cache (Not tainted): Objects remaining in ubi_aeb_slab_cache on __kmem_cache_shutdown()
 -----------------------------------------------------------------------------

 INFO: Slab 0xbfd2da3c objects=17 used=1 fp=0xb33d7748 flags=0x40000080
 CPU: 1 PID: 118 Comm: ubiattach Tainted: G    B           4.9.15 #3
 [<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
 [<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
 [<804a3274>] (dump_stack) from [<8026c47c>] (slab_err+0x78/0x88)
 [<8026c47c>] (slab_err) from [<802735bc>] (__kmem_cache_shutdown+0x180/0x3e0)
 [<802735bc>] (__kmem_cache_shutdown) from [<8024e13c>] (shutdown_cache+0x1c/0x60)
 [<8024e13c>] (shutdown_cache) from [<8024ed64>] (kmem_cache_destroy+0x19c/0x20c)
 [<8024ed64>] (kmem_cache_destroy) from [<8057cc14>] (destroy_ai+0x1dc/0x1e8)
 [<8057cc14>] (destroy_ai) from [<8057f04c>] (ubi_attach+0x3f4/0x450)
 [<8057f04c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
 [<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
 [<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
 [<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
 [<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
 INFO: Object 0xb33d7e88 @offset=3720
 INFO: Allocated in scan_peb+0x608/0x81c age=72 cpu=1 pid=118
 	kmem_cache_alloc+0x3b0/0x43c
 	scan_peb+0x608/0x81c
 	ubi_attach+0x124/0x450
 	ubi_attach_mtd_dev+0x60c/0xff8
 	ctrl_cdev_ioctl+0x110/0x2b8
 	do_vfs_ioctl+0xac/0xa00
 	SyS_ioctl+0x3c/0x64
 	ret_fast_syscall+0x0/0x1c
 kmem_cache_destroy ubi_aeb_slab_cache: Slab cache still has objects
 CPU: 1 PID: 118 Comm: ubiattach Tainted: G    B           4.9.15 #3
 [<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
 [<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
 [<804a3274>] (dump_stack) from [<8024ed80>] (kmem_cache_destroy+0x1b8/0x20c)
 [<8024ed80>] (kmem_cache_destroy) from [<8057cc14>] (destroy_ai+0x1dc/0x1e8)
 [<8057cc14>] (destroy_ai) from [<8057f04c>] (ubi_attach+0x3f4/0x450)
 [<8057f04c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
 [<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
 [<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
 [<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
 [<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
 cache_from_obj: Wrong slab cache. ubi_aeb_slab_cache but object is from ubi_aeb_slab_cache
 ------------[ cut here ]------------
 WARNING: CPU: 1 PID: 118 at mm/slab.h:354 kmem_cache_free+0x39c/0x450
 Modules linked in:
 CPU: 1 PID: 118 Comm: ubiattach Tainted: G    B           4.9.15 #3
 [<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
 [<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
 [<804a3274>] (dump_stack) from [<80120e40>] (__warn+0xf4/0x10c)
 [<80120e40>] (__warn) from [<80120f20>] (warn_slowpath_null+0x28/0x30)
 [<80120f20>] (warn_slowpath_null) from [<80271fe0>] (kmem_cache_free+0x39c/0x450)
 [<80271fe0>] (kmem_cache_free) from [<8057cb88>] (destroy_ai+0x150/0x1e8)
 [<8057cb88>] (destroy_ai) from [<8057ef1c>] (ubi_attach+0x2c4/0x450)
 [<8057ef1c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
 [<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
 [<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
 [<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
 [<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
 ---[ end trace 2bd8396277fd0a0b ]---
 =============================================================================
 BUG ubi_aeb_slab_cache (Tainted: G    B   W      ): page slab pointer corrupt.
 -----------------------------------------------------------------------------

 INFO: Allocated in scan_peb+0x608/0x81c age=104 cpu=1 pid=118
 	kmem_cache_alloc+0x3b0/0x43c
 	scan_peb+0x608/0x81c
 	ubi_attach+0x124/0x450
 	ubi_attach_mtd_dev+0x60c/0xff8
 	ctrl_cdev_ioctl+0x110/0x2b8
 	do_vfs_ioctl+0xac/0xa00
 	SyS_ioctl+0x3c/0x64
 	ret_fast_syscall+0x0/0x1c
 INFO: Slab 0xbfd2da3c objects=17 used=1 fp=0xb33d7748 flags=0x40000081
 INFO: Object 0xb33d7e88 @offset=3720 fp=0xb33d7da0

 Redzone b33d7e80: cc cc cc cc cc cc cc cc                          ........
 Object b33d7e88: 02 00 00 00 01 00 00 00 00 f0 ff 7f ff ff ff ff  ................
 Object b33d7e98: 00 00 00 00 00 00 00 00 bd 16 00 00 00 00 00 00  ................
 Object b33d7ea8: 00 01 00 00 00 02 00 00 00 00 00 00 00 00 00 00  ................
 Redzone b33d7eb8: cc cc cc cc                                      ....
 Padding b33d7f60: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
 CPU: 1 PID: 118 Comm: ubiattach Tainted: G    B   W       4.9.15 #3
 [<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
 [<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
 [<804a3274>] (dump_stack) from [<80271770>] (free_debug_processing+0x320/0x3c4)
 [<80271770>] (free_debug_processing) from [<80271ad0>] (__slab_free+0x2bc/0x430)
 [<80271ad0>] (__slab_free) from [<80272024>] (kmem_cache_free+0x3e0/0x450)
 [<80272024>] (kmem_cache_free) from [<8057cb88>] (destroy_ai+0x150/0x1e8)
 [<8057cb88>] (destroy_ai) from [<8057ef1c>] (ubi_attach+0x2c4/0x450)
 [<8057ef1c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
 [<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
 [<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
 [<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
 [<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
 FIX ubi_aeb_slab_cache: Object at 0xb33d7e88 not freed

Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2017-05-08 20:48:33 +02:00
Boris Brezillon 40b6e61ac7 ubi: fastmap: Fix add_vol() return value test in ubi_attach_fastmap()
Commit e96a8a3bb6 ("UBI: Fastmap: Do not add vol if it already
exists") introduced a bug by changing the possible error codes returned
by add_vol():
- the function no longer returns NULL in case of allocation failure
  but returns ERR_PTR(-ENOMEM)
- when a duplicate entry in the volume RB tree is found it returns
  ERR_PTR(-EEXIST) instead of ERR_PTR(-EINVAL)

Fix the tests done on add_vol()'s return value to match this new behavior.
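
The corrected test follows the usual ERR_PTR convention (a minimal sketch; the argument list shown is indicative):

av = add_vol(ai, vol_id, used_ebs, data_pad, vol_type, last_eb_bytes);
if (IS_ERR(av)) {
	ret = PTR_ERR(av);	/* -ENOMEM or -EEXIST, never NULL anymore */
	goto fail;
}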

Fixes: e96a8a3bb6 ("UBI: Fastmap: Do not add vol if it already exists")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Acked-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-28 14:48:18 +02:00
Colin Ian King a15cf34fed ubi: fix swapped arguments to call to ubi_alloc_aeb
Static analysis by CoverityScan detected that the ec and pnum
arguments are in the wrong order in a call to ubi_alloc_aeb().
Swap the order to fix this.

Fixes: 91f4285fe3 ("UBI: provide helpers to allocate and free aeb elements")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-20 00:06:00 +02:00
Richard Weinberger f7d11b33d4 ubi: Fix Fastmap's update_vol()
Usually Fastmap is free to consider every PEB in one of the pools
as newer than the existing PEB, since PEBs in a pool are by definition
newer than everything else.
But update_vol() missed the case where a pool can contain more than
one candidate.

Cc: <stable@vger.kernel.org>
Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:55:02 +02:00
Richard Weinberger 2e8f08deab ubi: Fix races around ubi_refill_pools()
When writing a new Fastmap the first thing that happens
is refilling the pools in memory.
At this stage it is possible that new PEBs from the new pools
are already claimed and written with data.
If this happens before the new Fastmap data structure hits the
flash and a power cut occurs, the freshly written PEB will not be
scanned and goes unnoticed.

Solve the issue by locking the pools until Fastmap is written.

Cc: <stable@vger.kernel.org>
Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:54:01 +02:00
Boris Brezillon 3291b52f9f UBI: introduce the VID buffer concept
Currently, all VID headers are allocated and freed using the
ubi_zalloc_vid_hdr() and ubi_free_vid_hdr() functions. These functions
make sure to align the allocation on ubi->vid_hdr_alsize and adjust the
vid_hdr pointer to match the ubi->vid_hdr_shift requirements.
This works fine, but is a bit convoluted.
Moreover, the future introduction of LEB consolidation (needed to support
MLC/TLC NANDs) will allow a VID buffer to contain more than one VID
header.

Hence the creation of a ubi_vid_io_buf struct to attach extra information
to the VID header.

We currently only store the actual pointer of the underlying buffer, but
will soon add the number of VID headers contained in the buffer.
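
The wrapper's shape is roughly the following (a sketch; treat the comments as assumptions drawn from the description above):

struct ubi_vid_io_buf {
	struct ubi_vid_hdr *hdr;	/* points into 'buffer', honouring vid_hdr_shift */
	void *buffer;			/* vid_hdr_alsize-aligned I/O buffer */
};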

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon 1f81a5ccab UBI: provide a helper to query LEB information
This is part of our attempt to hide EBA internals from other parts of the
implementation in order to easily adapt it to the MLC needs.

Here we are creating a ubi_eba_leb_desc struct to hide the way we keep
track of the LEB to PEB mapping.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon 91f4285fe3 UBI: provide helpers to allocate and free aeb elements
This not only hides the aeb allocation internals (which is always good in
case we ever want to change the allocation system), but also helps us
factorize the initialization of some common fields (ec and pnum).

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon fcbb6af17b UBI: fastmap: use ubi_io_{read, write}_data() instead of ubi_io_{read, write}()
ubi_io_{read,write}_data() are wrappers around ubi_io_{read/write}() that
are used to read/write eraseblock payload data, which is exactly what
fastmap does when calling ubi_io_{read,write}().
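
For reference, the *_data variants are thin wrappers that address the LEB payload area rather than the raw PEB (a simplified sketch of ubi_io_read_data()):

int ubi_io_read_data(const struct ubi_device *ubi, void *buf, int pnum,
		     int offset, int len)
{
	/* 'offset' is relative to the LEB payload, not the raw PEB. */
	return ubi_io_read(ubi, buf, pnum, ubi->leb_start + offset, len);
}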

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon f2fb1346b3 UBI: fastmap: use ubi_rb_for_each_entry() in unmap_peb()
Use the ubi_rb_for_each_entry() macro instead of open-coding it.
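
Usage of the macro looks like this (sketch):

struct ubi_ainf_peb *aeb;
struct rb_node *rb;

ubi_rb_for_each_entry(rb, aeb, &av->root, u.rb) {
	/* visit every attach eraseblock attached to the volume */
}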

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon de4c455b3e UBI: factorize code used to manipulate volumes at attach time
Volume creation/search code is duplicated in a few places (fastmap and
non fastmap code). Create some helpers to factorize the code.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon ecbfa8eaba UBI: fastmap: scrub PEB when bitflips are detected in a free PEB EC header
scan_pool() does not mark the PEB for scrubbing when bitflips are
detected in the EC header of a free PEB (VID header region still all
0xFF).
Make sure we scrub the PEB in this case.
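
The fix boils down to remembering the bitflip reported by the EC header read (sketch):

err = ubi_io_read_ec_hdr(ubi, pnum, ech, 0);
/* ... */
if (err == UBI_IO_BITFLIPS)
	scrub = 1;	/* have the PEB scrubbed instead of silently reused */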

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon 4c368421bc UBI: fastmap: avoid multiple be32_to_cpu() when unnecessary
process_pool_aeb() performs the be32_to_cpu(new_vh->vol_id) conversion
several times. Create a temporary variable and do it once.
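
I.e. (a minimal sketch):

u32 vol_id = be32_to_cpu(new_vh->vol_id);	/* convert once, reuse below */

if (vol_id == UBI_FM_SB_VOLUME_ID || vol_id == UBI_FM_DATA_VOLUME_ID) {
	/* ... fastmap's own volumes are handled separately ... */
}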

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon f5f9d43f0a UBI: fastmap: use ubi_find_volume() instead of open coding it
process_pool_aeb() re-implements the logic found in ubi_find_volume().
Call ubi_find_volume() to avoid this duplication.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Richard Weinberger 5d71afb008 ubi: Use bitmaps in Fastmap self-check code
...don't waste memory by allocating one sizeof(int) per
PEB.

Signed-off-by: Richard Weinberger <richard@nod.at>
2016-07-29 23:32:58 +02:00
Richard Weinberger 5283ec72b0 ubi: Check whether the Fastmap anchor matches the super block
This helps to detect cases where a user copies a UBI image to
another target with different bad blocks.

Signed-off-by: Richard Weinberger <richard@nod.at>
2016-07-29 23:32:43 +02:00
Richard Weinberger fdf10ed710 ubi: Rework Fastmap attach base code
Introduce a new list to the UBI attach information
object to be able to deal better with old and corrupted
Fastmap eraseblocks.
Also move more Fastmap specific code into fastmap.c.

Signed-off-by: Richard Weinberger <richard@nod.at>
2016-07-29 23:32:42 +02:00
Richard Weinberger be8011053f ubi: Fix whitespace issue in count_fastmap_pebs()
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-07-29 23:32:40 +02:00
Richard Weinberger 1900149c83 UBI: Fix static volume checks when Fastmap is used
Ezequiel reported that he's facing UBI going into read-only
mode after a power cut. It turned out that this behavior happens
only when updating a static volume is interrupted and Fastmap is
used.

A possible trace can look like:
ubi0 warning: ubi_io_read_vid_hdr [ubi]: no VID header found at PEB 2323, only 0xFF bytes
ubi0 warning: ubi_eba_read_leb [ubi]: switch to read-only mode
CPU: 0 PID: 833 Comm: ubiupdatevol Not tainted 4.6.0-rc2-ARCH #4
Hardware name: SAMSUNG ELECTRONICS CO., LTD. 300E4C/300E5C/300E7C/NP300E5C-AD8AR, BIOS P04RAP 10/15/2012
0000000000000286 00000000eba949bd ffff8800c45a7b38 ffffffff8140d841
ffff8801964be000 ffff88018eaa4800 ffff8800c45a7bb8 ffffffffa003abf6
ffffffff850e2ac0 8000000000000163 ffff8801850e2ac0 ffff8801850e2ac0
Call Trace:
[<ffffffff8140d841>] dump_stack+0x63/0x82
[<ffffffffa003abf6>] ubi_eba_read_leb+0x486/0x4a0 [ubi]
[<ffffffffa00453b3>] ubi_check_volume+0x83/0xf0 [ubi]
[<ffffffffa0039d97>] ubi_open_volume+0x177/0x350 [ubi]
[<ffffffffa00375d8>] vol_cdev_open+0x58/0xb0 [ubi]
[<ffffffff8124b08e>] chrdev_open+0xae/0x1d0
[<ffffffff81243bcf>] do_dentry_open+0x1ff/0x300
[<ffffffff8124afe0>] ? cdev_put+0x30/0x30
[<ffffffff81244d36>] vfs_open+0x56/0x60
[<ffffffff812545f4>] path_openat+0x4f4/0x1190
[<ffffffff81256621>] do_filp_open+0x91/0x100
[<ffffffff81263547>] ? __alloc_fd+0xc7/0x190
[<ffffffff812450df>] do_sys_open+0x13f/0x210
[<ffffffff812451ce>] SyS_open+0x1e/0x20
[<ffffffff81a99e32>] entry_SYSCALL_64_fastpath+0x1a/0xa4

UBI checks static volumes for data consistency and reads the
whole volume upon first open. If the volume is found to be erroneous,
users of UBI cannot read from it, but another volume update can
fix it. The check is performed by running
ubi_eba_read_leb() on every allocated LEB of the volume.
For static volumes ubi_eba_read_leb() computes the checksum of all
data stored in a LEB. To verify the computed checksum it has to read
the LEB's volume header, which stores the original checksum.
If the volume header is not found UBI treats this as a fatal internal
error and switches to RO mode. If the UBI device was attached via a
full scan the assumption is correct: the volume header has to be
present, as it had to be there while scanning for the LEB to become
known as mapped.
If the attach operation happened via Fastmap the assumption is no
longer correct. When attaching via Fastmap UBI learns the mapping
table from Fastmap's snapshot of the system state and not via a full
scan. It can happen that a LEB got unmapped after a Fastmap was
written to the flash. Then UBI can still see the LEB as mapped, and
accessing it returns only 0xFF bytes. As UBI is not an FTL it is
allowed to have mappings to empty PEBs; it assumes that the layer
above takes care of LEB accounting and referencing.
UBIFS does so using the LEB property tree (LPT).
For static volumes UBI blindly assumes that all LEBs are present and
therefore special actions have to be taken.

The described situation can happen when updating a static volume is
interrupted, either by a user or a power cut.
The volume update code first unmaps all LEBs of a volume and then
writes LEB by LEB. If the sequence of operations is interrupted UBI
detects this either by the absence of LEBs (no volume header present
at scan time) or by corrupted payload (detected via checksum).
In the Fastmap case the former method won't trigger as no scan
happened and UBI automatically thinks all LEBs are present.
Only by reading data from a LEB does it detect that the volume header
is missing, and it incorrectly treats this as a fatal error.
To deal with the situation ubi_eba_read_leb() from now on checks
whether we attached via Fastmap and handles the absence of a
volume header like a data corruption error.
This way interrupted static volume updates will correctly get detected
also when Fastmap is used.
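
The resulting check in ubi_eba_read_leb() looks roughly like this (a simplified sketch, not the verbatim diff; the label is a placeholder):

if (err == UBI_IO_FF && ubi->fast_attach) {
	/* Fastmap claims the LEB is mapped, but there is no VID header:
	 * an interrupted static volume update. Report data corruption
	 * instead of switching the whole device to read-only mode. */
	err = -EBADMSG;
	goto out_unlock;	/* placeholder label */
}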

Cc: <stable@vger.kernel.org>
Reported-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Tested-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-05-24 15:24:37 +02:00
Ezequiel García 2a130f1aa5 UBI: Fastmap: Fix PEB array type
The PEB array is an array of __be32, so let's fix the
scan_pool() prototype accordingly.

Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-11-06 23:26:50 +01:00
Richard Weinberger 876f9a3487 UBI: Fastmap: Simplify expression
There is no need to compute pnum again.

Signed-off-by: Richard Weinberger <richard@nod.at>
Acked-by: Brian Norris <computersforpeace@gmail.com>
2015-10-03 20:08:46 +02:00
shengyong 669d3d1233 UBI: Remove unnecessary `\'
Signed-off-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-06-03 09:41:45 +02:00
shengyong e96a8a3bb6 UBI: Fastmap: Do not add vol if it already exists
During fastmap attach, check whether a volume already exists when adding
the volume to the volume tree. NOTE that this issue can only happen if
the on-flash fastmap data has been modified.

Signed-off-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-06-02 11:46:14 +02:00
shengyong f6e951af34 UBI: Fastmap: Rename variables to make them meaningful
s/fmpl1/fmpl
s/fmpl2/fmpl_wl

Add "WL" to the error message when wrong WL pool magic number is detected.

Signed-off-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-06-02 11:44:20 +02:00
shengyong a18fd67267 UBI: Fastmap: Remove unnecessary `\'
Signed-off-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-06-02 11:43:52 +02:00
Richard Weinberger c5c3f3cf96 UBI: Fastmap: Wire up WL accessor functions
Use the new WL accessor functions in fastmap.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:47:35 +01:00
Richard Weinberger daef3dd1f0 UBI: Fastmap: Add self check to detect absent PEBs
This self check allows Fastmap to detect absent PEBs while
writing a new fastmap to the MTD device.
It will help to find implementation issues in Fastmap.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:46:05 +01:00
Richard Weinberger 5ca97ad838 UBI: Fastmap: Rework fastmap error paths
If UBI is unable to write the fastmap to the device
we have to make sure that upon next attach UBI will fall
back to scanning mode.
In case we cannot ensure that, the only thing we can do
is fall back to read-only mode.

The current error handling code is not powercut proof.
It could happen that a powercut while invalidating would
lead to a state where a too old fastmap could be used upon
attach.
This patch addresses the issue by writing a fake fastmap
super block to a fresh PEB instead of re-erasing the existing one.
The fake fastmap super block will cause UBI to do a full scan.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:46:03 +01:00
Richard Weinberger 61de74ce2f UBI: Fastmap: Prepare for variable sized fastmaps
The current code assumes that each fastmap has the same amount of PEBs.
So far this is true but will change soon.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:46:02 +01:00
Richard Weinberger 111ab0b26f UBI: Fastmap: Locking updates
a) Rename ubi->fm_sem to ubi->fm_eba_sem as this semaphore
protects EBA changes.
b) Turn ubi->fm_mutex into a rw semaphore. It will still serialize
fastmap writes but also ensures that ubi_wl_put_peb() is not
interrupted by a fastmap write. We use a rw semaphore to still allow
ubi_wl_put_peb() to be executed in parallel if no fastmap
write is happening.
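
The resulting usage pattern (a minimal sketch; 'fm_rwsem' is a placeholder name for the reworked semaphore):

/* Fastmap write path: exclusive. */
down_write(&ubi->fm_rwsem);		/* placeholder field name */
/* ... refill pools and write the new fastmap ... */
up_write(&ubi->fm_rwsem);

/* ubi_wl_put_peb(): shared, may run concurrently with other readers
 * but never in the middle of a fastmap write. */
down_read(&ubi->fm_rwsem);
/* ... return the PEB to the wear-leveling subsystem ... */
up_read(&ubi->fm_rwsem);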

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:46:02 +01:00
Richard Weinberger 42dd3cdcd6 UBI: Fastmap: Set used_ebs only for static volumes
If we set it for dynamic ones we might confuse various self checks.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:46:01 +01:00
Richard Weinberger ad3d6a05ee UBI: Fastmap: Fix leb_count unbalance
If a LEB is unmapped we have to decrement leb_count as well.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:46:00 +01:00
Richard Weinberger 2d93fb3632 UBI: Fastmap: Switch to ro mode if invalidate_fastmap() fails
We have to switch to ro mode to guarantee that upon next UBI attach
all data is consistent.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:45:59 +01:00
Richard Weinberger d141a8ef21 UBI: Fastmap: Remove eba_orphans logic
This logic is in vain, as we also treat protected PEBs as used, so this
case must not happen.
If a PEB is found which is in the EBA table but not known as used,
this has to be treated as a fatal error.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:45:59 +01:00
Richard Weinberger a83832a7c8 UBI: Fastmap: Remove bogus ubi_assert()
It is legal to have PEBs left in the used list.
This can happen if UBI copies a PEB and a powercut happens
between writing a new fastmap and adding this PEB into the EBA table.
In this case the old PEB will be used.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:45:58 +01:00
Richard Weinberger 98105d0819 UBI: Fastmap: Fix memory leak while attaching
Currently we leak a few ubi_ainf_pebs while attaching.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:45:57 +01:00
Richard Weinberger c4ca6be9d6 UBI: Fastmap: Don't allocate new ubi_wl_entry objects
There is no need to allocate new ones every time, we can reuse
the existing ones.
This makes the code cleaner and easier to follow.

Signed-off-by: Richard Weinberger <richard@nod.at>
Reviewed-by: Tanya Brokhman <tlinder@codeaurora.org>
Reviewed-by: Guido Martínez <guido@vanguardiasur.com.ar>
2015-03-26 22:17:47 +01:00