batman-adv: Place kref_get for bla_backbone_gw near use

It is hard to understand why the refcnt is increased when the increase doesn't
happen near the place where the new reference is actually used. Taking the
reference with kref_get right before the code that requires it, and in the same
function, helps to avoid accidental problems caused by incorrect reference
counting.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Author: Sven Eckelmann, 2016-07-15 17:39:25 +02:00 (committed by Simon Wunderlich)
parent 7282ac396e
commit 4e8389e17a
1 changed file with 1 addition and 3 deletions


@@ -526,11 +526,9 @@ batadv_bla_get_backbone_gw(struct batadv_priv *bat_priv, u8 *orig,
 	atomic_set(&entry->wait_periods, 0);
 	ether_addr_copy(entry->orig, orig);
 	INIT_WORK(&entry->report_work, batadv_bla_loopdetect_report);
-
-	/* one for the hash, one for returning */
 	kref_init(&entry->refcount);
-	kref_get(&entry->refcount);
 
+	kref_get(&entry->refcount);
 	hash_added = batadv_hash_add(bat_priv->bla.backbone_hash,
 				     batadv_compare_backbone_gw,
 				     batadv_choose_backbone_gw, entry,
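
For readers less familiar with the pattern, the sketch below shows the same idea
outside of batman-adv: the reference created by kref_init() is the one returned
to the caller, and the reference needed by the data structure is taken with
kref_get() immediately before the insertion that consumes it. All names
(demo_entry, demo_list, demo_entry_new, demo_entry_release) are hypothetical and
only illustrate the technique; this is not the batman-adv code.

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/slab.h>

/* Hypothetical refcounted object, for illustration only. */
struct demo_entry {
	struct kref refcount;
	struct hlist_node node;
};

static HLIST_HEAD(demo_list);

static void demo_entry_release(struct kref *ref)
{
	struct demo_entry *entry;

	entry = container_of(ref, struct demo_entry, refcount);
	kfree(entry);
}

/* Allocate an entry, add it to demo_list and return it to the caller. */
static struct demo_entry *demo_entry_new(gfp_t flags)
{
	struct demo_entry *entry;

	entry = kzalloc(sizeof(*entry), flags);
	if (!entry)
		return NULL;

	/* the reference from kref_init() is the one returned below */
	kref_init(&entry->refcount);

	/* take the list's reference right next to the statement which
	 * hands the pointer to the list, so it is obvious what the
	 * extra reference is for
	 */
	kref_get(&entry->refcount);
	hlist_add_head(&entry->node, &demo_list);

	return entry;
}

A caller that is done with the returned pointer would drop its reference with
kref_put(&entry->refcount, demo_entry_release); the list's reference is dropped
the same way when the entry is removed from demo_list.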