1081230b74
Pull core block updates from Jens Axboe:
 "This first core part of the block IO changes contains:

   - Cleanup of the bio IO error signaling from Christoph. We used to
     rely on the uptodate bit and passing around of an error, now we
     store the error in the bio itself.

   - Improvement of the above from myself, by shrinking the bio size
     down again to fit in two cachelines on x86-64.

   - Revert of the max_hw_sectors cap removal from a revision again,
     from Jeff Moyer. This caused performance regressions in various
     tests. Reinstate the limit, bump it to a more reasonable size
     instead.

   - Make /sys/block/<dev>/queue/discard_max_bytes writeable, by me.
     Most devices have huge trim limits, which can cause nasty latencies
     when deleting files. Enable the admin to configure the size down.
     We will look into having a more sane default instead of UINT_MAX
     sectors.

   - Improvement of the SGP gaps logic from Keith Busch.

   - Enable the block core to handle arbitrarily sized bios, which
     enables a nice simplification of bio_add_page() (which is an IO
     hot path). From Kent.

   - Improvements to the partition io stats accounting, making it
     faster. From Ming Lei.

   - Also from Ming Lei, a basic fixup for overflow of the sysfs pending
     file in blk-mq, as well as a fix for a blk-mq timeout race
     condition.

   - Ming Lin has been carrying Kent's above mentioned patches forward
     for a while, and testing them. Ming also did a few fixes around
     that.

   - Sasha Levin found and fixed a use-after-free problem introduced by
     the bio->bi_error changes from Christoph.

   - Small blk cgroup cleanup from Viresh Kumar"

* 'for-4.3/core' of git://git.kernel.dk/linux-block: (26 commits)
  blk: Fix bio_io_vec index when checking bvec gaps
  block: Replace SG_GAPS with new queue limits mask
  block: bump BLK_DEF_MAX_SECTORS to 2560
  Revert "block: remove artifical max_hw_sectors cap"
  blk-mq: fix race between timeout and freeing request
  blk-mq: fix buffer overflow when reading sysfs file of 'pending'
  Documentation: update notes in biovecs about arbitrarily sized bios
  block: remove bio_get_nr_vecs()
  fs: use helper bio_add_page() instead of open coding on bi_io_vec
  block: kill merge_bvec_fn() completely
  md/raid5: get rid of bio_fits_rdev()
  md/raid5: split bio for chunk_aligned_read
  block: remove split code in blkdev_issue_{discard,write_same}
  btrfs: remove bio splitting and merge_bvec_fn() calls
  bcache: remove driver private bio splitting code
  block: simplify bio_add_page()
  block: make generic_make_request handle arbitrarily sized bios
  blk-cgroup: Drop unlikely before IS_ERR(_OR_NULL)
  block: don't access bio->bi_error after bio_put()
  block: shrink struct bio down to 2 cache lines again
  ...
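The discard_max_bytes item above makes the queue's discard limit tunable
from userspace. A minimal sketch of how an administrator might use it,
assuming a device named sda; the 64 MiB cap is purely illustrative, not a
recommended default:

  # read the current discard limit, in bytes
  cat /sys/block/sda/queue/discard_max_bytes
  # cap individual discards at 64 MiB to bound worst-case trim latency
  echo 67108864 > /sys/block/sda/queue/discard_max_bytes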
include/linux
lnet
lustre
Kconfig
Makefile
README.txt
TODO
sysfs-fs-lustre
README.txt
Lustre Parallel Filesystem Client
=================================

The Lustre file system is an open-source, parallel file system that
supports many requirements of leadership class HPC simulation
environments. Born from a research project at Carnegie Mellon
University, the Lustre file system is a widely-used option in HPC. The
Lustre file system provides a POSIX compliant file system interface, can
scale to thousands of clients, petabytes of storage and hundreds of
gigabytes per second of I/O bandwidth.

Unlike shared disk storage cluster filesystems (e.g. OCFS2, GFS, GPFS),
Lustre has independent Metadata and Data servers that clients can access
in parallel to maximize performance.

In order to use the Lustre client you will need to download the Lustre
client tools from
https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/
(the package name is lustre-client). You will need to install and
configure your Lustre servers separately.

Mount Syntax
============
After you have installed the lustre-client tools, including the
mount.lustre binary, you can mount your Lustre filesystem with:

mount -t lustre mgs:/fsname mnt

where mgs is the host name or IP address of your Lustre MGS (management
service) and fsname is the name of the filesystem you would like to
mount.

Mount Options
=============

  noflock
        Disable POSIX file locking (applications trying to use the
        functionality will get ENOSYS)

  localflock
        Enable local flock support, using only client-local flock
        (faster, for applications that require flock but do not run
        on multiple nodes).

  flock
        Enable cluster-global POSIX file locking coherent across all
        client nodes.

  user_xattr, nouser_xattr
        Support "user." extended attributes (or not)

  user_fid2path, nouser_fid2path
        Enable FID to path translation by regular users (or not)

  checksum, nochecksum
        Verify data consistency on the wire and in memory as it passes
        between the layers (or not).

  lruresize, nolruresize
        Allow the lock LRU to be controlled by memory pressure on the
        server (or keep only 100 locks per CPU per server on this
        client; 100 is the default, controlled by the lru_size proc
        parameter).

  lazystatfs, nolazystatfs
        Do not block in statfs() if some of the servers are down (or do).

  32bitapi
        Shrink inode numbers to fit into 32 bits. This is necessary if
        you plan to re-export the Lustre filesystem from this client via
        NFSv4.

  verbose, noverbose
        Enable mount/umount console messages (or not)

More Information
================
You can get more information at the
OpenSFS website: http://lustre.opensfs.org/about/
Intel HPDD wiki: https://wiki.hpdd.intel.com

Out of tree Lustre client and server code is available at:
http://git.whamcloud.com/fs/lustre-release.git

Latest binary packages:
http://lustre.opensfs.org/download-lustre/
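Example
=======
A minimal sketch combining the mount syntax and options above. The MGS
host name, filesystem name and mount point below are placeholders, not
defaults:

mount -t lustre -o flock,user_xattr mgs.example.com:/lustrefs /mnt/lustre

This mounts the filesystem with cluster-global POSIX file locking and
"user." extended attribute support enabled on this client.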