slackbuilds/network/lizardfs
README
doinst.sh
iostat.h.patch
lizardfs.SlackBuild
lizardfs.info
rc.lizardfs-cgiserv.new
rc.lizardfs-chunkserver.new
rc.lizardfs-master.new
rc.lizardfs-metalogger.new
rc.lizardfs.new
setup.lizardfs-services.new
slack-desc

README

LizardFS is a highly scalable, fault-tolerant, POSIX-compatible,
FUSE-based, high-performance distributed filesystem, licensed under
the GNU General Public License version 3.

LizardFS is an implementation of the GoogleFS design and a fork of
the earlier MooseFS project. It supports writable snapshots (instant
copies), undeleting files, automatic data rebalancing, self-healing,
data tiering, periodic data patrols, and more.

A LizardFS system consists of a master server, one or more metadata
logging servers (meta loggers), and any number of chunk servers,
which store the data on their locally-attached drives.  Both meta
loggers and chunk servers can be added and removed without restarting
the master server.
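
Each chunk server lists the directories it should use (typically one
per local drive) in its mfshdd.cfg, one path per line.  The location
of that file depends on how LizardFS was configured (often
/etc/mfs/mfshdd.cfg), and the paths below are only examples:
  /mnt/lizardfs-chunk1
  /mnt/lizardfs-chunk2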

Filesystem metadata is stored on the master server (and constantly
replicated to the meta loggers), whereas filesystem data is divided
into chunks and spread as files over the chunk servers, according to
pre-defined 'goals', which can be set at the file, directory, or
filesystem level. A goal can be an n-way mirroring goal, an n+1 XOR
goal (each chunk is divided into n parts which are XORed to produce
one part of redundancy), or a more sophisticated, erasure-code-based
n+k redundancy goal, where the n data parts of each chunk are backed
by k parts of redundancy data.
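
For example, goals can be queried and changed per file or directory
with the lizardfs tool on a mounted filesystem.  The mount point and
the goal name below are only placeholders; goal names have to match
the goals defined in mfsgoals.cfg on the master:
  lizardfs getgoal /mnt/lizardfs/projects
  lizardfs setgoal -r ec32 /mnt/lizardfs/projects
('-r' applies the goal recursively to a directory tree.)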

A set of administrative commands exists for querying and setting
redundancy goals and trash preservation time.  LizardFS is
admin-friendly: missing chunks can be restored from any kind of
backup onto any running chunk server.
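
Likewise, trash preservation time (in seconds) can be inspected and
changed per file or directory; the path is again just an example:
  lizardfs gettrashtime /mnt/lizardfs/projects
  lizardfs settrashtime -r 86400 /mnt/lizardfs/projects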

This package contains all the binaries needed to run a LizardFS
system: mfsmaster, mfsmetalogger, and mfschunkserver, as well as
lizardfs-cgiserver (a web-based monitoring console).
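
If the mfsmount client is installed as well (an assumption, since it
is not listed above), a LizardFS filesystem can be mounted roughly
like this, given the 'mfsmaster' name described below:
  mkdir -p /mnt/lizardfs
  mfsmount /mnt/lizardfs -H mfsmaster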

You need an "mfs" user and group prior to building lizardfs.
Something like this will suffice for most systems:
  groupadd -g 353 mfs
  useradd -u 353 -g 353 -d /var/lib/mfs mfs
Feel free to use a different uid and gid if desired, but 353 is
recommended to avoid conflicts with other stuff from SlackBuilds.org.

It is also advisable to make the name 'mfsmaster' resolve to your
master server across your network. This is not strictly required,
but it makes things much easier. If you are unable to configure your
DNS server, adding this line to /etc/hosts on each master, metalogger,
chunkserver, and client machine will do:

a.b.c.d    mfsmaster    mfsmaster.my-domain.ext

where a.b.c.d is the IP address of your master server.

Then on each node add '/etc/rc.d/rc.lizardfs start' to
/etc/rc.d/rc.local (or wherever you find appropriate), and use
'/etc/rc.d/rc.lizardfs setup' to configure which services should
run on the server. Since most installations consist mostly of
chunkservers, rc.lizardfs-chunkserver is marked executable by default
(but it will not run until rc.lizardfs-chunkserver or rc.lizardfs is
added to rc.local, so there is no need to worry).
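
As a sketch, the rc.local addition could look like this (guarding on
the executable bit, as is usual for Slackware rc.local entries):
  if [ -x /etc/rc.d/rc.lizardfs ]; then
    /etc/rc.d/rc.lizardfs start
  fi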