[CIFS] Incorrect hardlink count when original file is cached (oplocked)

Fixes Samba bug 2823

In this case the hardlink count is stale for one of the two inodes (i.e. the
original file) until it is closed, since revalidate does not go to the
server while the file is cached locally.

Signed-off-by: Steve French <sfrench@us.ibm.com>
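For illustration, a minimal userspace sketch of the symptom (not part of the
commit; the CIFS mount point /mnt/cifs and the file names are hypothetical).
While the source file is held open the client keeps an oplock, revalidation
stays local, and before this fix fstat() kept reporting the pre-link count:

/* hypothetical reproduction against a CIFS mount at /mnt/cifs */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat st;
	int fd = open("/mnt/cifs/orig", O_RDWR | O_CREAT, 0644);

	if (fd < 0)
		return 1;
	/* server-side link count becomes 2 immediately */
	if (link("/mnt/cifs/orig", "/mnt/cifs/hardlink") < 0)
		return 1;
	if (fstat(fd, &st) < 0)
		return 1;
	/* before this fix: prints 1 while the file stays open (cached);
	   after: the client updates nlink locally and prints 2 */
	printf("st_nlink = %ld\n", (long)st.st_nlink);
	close(fd);
	return 0;
}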
Steve French 2006-11-16 20:54:20 +00:00
parent 237ee312e1
commit 31ec35d6c8
1 changed file with 23 additions and 10 deletions


@@ -69,17 +69,30 @@ cifs_hardlink(struct dentry *old_file, struct inode *inode,
 			rc = -EOPNOTSUPP;
 	}
 
-/* if (!rc) */
-	{
-	/*	renew_parental_timestamps(old_file);
-		inode->i_nlink++;
-		mark_inode_dirty(inode);
-		d_instantiate(direntry, inode); */
-	} /* BB add call to either mark inode dirty or refresh its data and timestamp to current time */
+	d_drop(direntry);	/* force new lookup from server of target */
 
-	d_drop(direntry);	/* force new lookup from server */
+	/* if source file is cached (oplocked) revalidate will not go to server
+	   until the file is closed or oplock broken so update nlinks locally */
+	if(old_file->d_inode) {
+		cifsInode = CIFS_I(old_file->d_inode);
+		if(rc == 0) {
+			old_file->d_inode->i_nlink++;
+			old_file->d_inode->i_ctime = CURRENT_TIME;
+			/* parent dir timestamps will update from srv
+			within a second, would it really be worth it
+			to set the parent dir cifs inode time to zero
+			to force revalidate (faster) for it too? */
+		}
+		/* if not oplocked will force revalidate to get info
+		   on source file from srv */
+		cifsInode->time = 0;
 
-	cifsInode = CIFS_I(old_file->d_inode);
-	cifsInode->time = 0;	/* will force revalidate to go get info when needed */
+		/* Will update parent dir timestamps from srv within a second.
+		   Would it really be worth it to set the parent dir (cifs
+		   inode) time field to zero to force revalidate on parent
+		   directory faster ie
+			CIFS_I(inode)->time = 0; */
+	}
+
 cifs_hl_exit:
 	kfree(fromName);