From 62a9f3e6147632560d40784c06fa19315250f8e2 Mon Sep 17 00:00:00 2001
From: "B. Watson"
Date: Sun, 13 Mar 2022 18:08:25 -0400
Subject: network/lizardfs: Wrap README at 72 columns.

Signed-off-by: B. Watson
---
 network/lizardfs/README | 94 +++++++++++++++++++++++++------------------------
 1 file changed, 48 insertions(+), 46 deletions(-)
(limited to 'network/lizardfs')

diff --git a/network/lizardfs/README b/network/lizardfs/README
index 6b6c8d0b12..cf6c3367f3 100644
--- a/network/lizardfs/README
+++ b/network/lizardfs/README
@@ -1,55 +1,57 @@
-LizardFS is a highly scalable, fault-tolerant, POSIX-compatible, FUSE-based,
-high performance distributed filesystem, licensed under GNU General Public
-License version 3.
-
-LizardFS is an implementation of GoogleFS, and a fork of the earlier project,
-MooseFS. LizardFS supports writable snapshots (instant copies), undeleting
-files, automatic data rebalancing, self-healing, data tiering, periodic data
-patrols and many more.
-
-LizardFS system consists of a master server, one or more metadata logging
-servers (meta loggers), and many chunk servers, that store the data on their
-locally-attached drives. Both meta loggers and chunk servers can be added and
-removed without restarting the master server.
-
-Filesystem metadata is stored on the master server (and constantly replicated
-to meta loggers), whereas filesystem data is divided into chunks and spread as
-files over chunk servers, according to pre-defined 'goals', which can be set
-on file-, directory-, or filesystem level. A goal can be an n-way mirroring
-goal, n+1 xor-ed goal (each chunk divided into n parts and xor-ed to calculate
-one part of redundancy), or more sophisticated, erasure code based n+k
-redundancy, where n parts of each chunk are backed by k parts of redundancy
-data.
-
-A set of administrative commands exists to support querying and setting
-redundancy goals and trash preservation time. LizardFS is admin-friendly since
-any missing chunks can be provided from any sort of backup to any running
-chunk server.
-
-This package contains all binaries needed to run LizardFS system: mfsmaster,
-mfsmetalogger, mfschunkserver, as well as lizardfs-cgiserver (web-based
-monitoring console).
-
-You need an "mfs" user and group prior to building lizardfs. Something like
-this will suffice for most systems:
+LizardFS is a highly scalable, fault-tolerant, POSIX-compatible,
+FUSE-based, high performance distributed filesystem, licensed under
+GNU General Public License version 3.
+
+LizardFS is an implementation of GoogleFS, and a fork of the earlier
+project, MooseFS. LizardFS supports writable snapshots (instant
+copies), undeleting files, automatic data rebalancing, self-healing,
+data tiering, periodic data patrols and many more.
+
+LizardFS system consists of a master server, one or more metadata
+logging servers (meta loggers), and many chunk servers, that store the
+data on their locally-attached drives. Both meta loggers and chunk
+servers can be added and removed without restarting the master server.
+
+Filesystem metadata is stored on the master server (and constantly
+replicated to meta loggers), whereas filesystem data is divided
+into chunks and spread as files over chunk servers, according
+to pre-defined 'goals', which can be set on file-, directory-, or
+filesystem level. A goal can be an n-way mirroring goal, n+1 xor-ed
+goal (each chunk divided into n parts and xor-ed to calculate one
+part of redundancy), or more sophisticated, erasure code based n+k
+redundancy, where n parts of each chunk are backed by k parts of
+redundancy data.
+
+A set of administrative commands exists to support querying and
+setting redundancy goals and trash preservation time. LizardFS is
+admin-friendly since any missing chunks can be provided from any sort
+of backup to any running chunk server.
+
+This package contains all binaries needed to run LizardFS
+system: mfsmaster, mfsmetalogger, mfschunkserver, as well as
+lizardfs-cgiserver (web-based monitoring console).
+
+You need an "mfs" user and group prior to building lizardfs.
+Something like this will suffice for most systems:
 
 groupadd -g 353 mfs
 useradd -u 353 -g 353 -d /var/lib/mfs mfs
 
-Feel free to use a different uid and gid if desired, but 353 is recommended to
-avoid conflicts with other stuff from SlackBuilds.org.
+Feel free to use a different uid and gid if desired, but 353 is
+recommended to avoid conflicts with other stuff from SlackBuilds.org.
 
-It is also advisable to make name 'mfsmaster' pointing at your Master server
-across your network. It is not strictly required, but it will make things much
-easier. If you are unable to configure your DNS server, adding this line to
-/etc/hosts on each master, metalogger, chunkserver, and client machines will
-do:
+It is also advisable to make name 'mfsmaster' pointing at your Master
+server across your network. It is not strictly required, but it will
+make things much easier. If you are unable to configure your DNS
+server, adding this line to /etc/hosts on each master, metalogger,
+chunkserver, and client machines will do:
 a.b.c.d mfsmaster mfsmaster.my-domain.ext
 where a.b.c.d is an IP address of your master server.
 
-Then on each node add '/etc/rc.d/rc.lizardfs start' to /etc/rc.d/rc.local (or
-wherever you find appropriate), and use '/etc/rc.d/rc.lizardfs setup' to
-configure which services should run on the server. Since most installations
-consists mostly of chunkservers, rc.lizardfs-chunkserver is marked executable
-by default (but will not run until rc.lizardfs-chunkserver or rc.lizardfs is
+Then on each node add '/etc/rc.d/rc.lizardfs start' to
+/etc/rc.d/rc.local (or wherever you find appropriate), and use
+'/etc/rc.d/rc.lizardfs setup' to configure which services should
+run on the server. Since most installations consists mostly of
+chunkservers, rc.lizardfs-chunkserver is marked executable by default
+(but will not run until rc.lizardfs-chunkserver or rc.lizardfs is
 added to rc.local, so no need to worry).
--
cgit v1.2.3
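
The administrative commands the README refers to for querying and
setting goals and trash preservation time could look roughly like the
sketch below, assuming the lizardfs client tool is installed and a
LizardFS filesystem is mounted. The mount point /mnt/lizardfs and the
goal name 'ec21' are hypothetical examples (named goals other than the
numeric defaults are defined in mfsgoals.cfg on the master), not
defaults of this package:

lizardfs getgoal /mnt/lizardfs/data               # show the current goal
lizardfs setgoal -r 3 /mnt/lizardfs/data          # 3-way mirroring, recursive
lizardfs setgoal -r ec21 /mnt/lizardfs/archive    # hypothetical erasure-code goal
lizardfs gettrashtime /mnt/lizardfs/data          # show trash preservation time
lizardfs settrashtime -r 86400 /mnt/lizardfs/data # keep deleted files one day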
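
The rc.local hookup described in the README's last paragraph might
look like this on each node; it is only a sketch, using the usual
Slackware convention of checking that the script is executable, and it
assumes the 'setup' action has been used (as the README says) to
choose which LizardFS services the node should run:

# run once per node to configure which services to enable:
/etc/rc.d/rc.lizardfs setup

# then append to /etc/rc.d/rc.local:
if [ -x /etc/rc.d/rc.lizardfs ]; then
  /etc/rc.d/rc.lizardfs start
fi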