Preface

The complete instruction set is listed below. Executing all of the commands is enough to get a working LFS system. You may consult the original LFS book to compare build procedures, but that is not required to complete the task.

The release of any itself, available at https://opendistro.org/any/, is a universal tool suitable for many projects and environments. Each project takes it as a base and adds its own customisation, additional build code, rules, policies and so on.

The release of any-lfs is an example of such a project. It contains all the tunings and additions needed to start work, including a ports/ directory with the scripts required for the packages.

Tutorial

To learn the essentials of LFS with the any build system, read the following chapter: Tutorial for LFS with any. It describes the general scheme of package building with simple examples, and covers the majority of code constructions used for LFS.

A more detailed description of the build system architecture and usage can be found at https://opendistro.org/any/.

Preparation

Download the any-lfs build environment and cd into it:

any-lfs-9.1-cur
tar xf any-lfs-9.1-cur.tgz
cd any-lfs-9.1

Download LFS package sources:

lfs-packs-9.1

Extract them inside the working directory:

tar xf lfs-packs-9.1.tar

You will need to set up the permissions for the sudo chroot command, which is used during the build process to enter the isolated environment. It can be done like this (executed as the root superuser):

USER=user
CHROOT=`which chroot`
BUILDDIR='/home/user/build/*'
echo "${USER} ALL=(ALL) NOPASSWD:${CHROOT} ${BUILDDIR}" >> /etc/sudoers
-
The first line contains the name of the regular user who will do the build. Change user to match your actual user name.
-
The second line automatically determines the full path to the chroot command.
-
The third line defines the directory where chroots are allowed. Change it to match your home directory, where your user is allowed to write files. The asterisk at the end means that chroots are also allowed in nested subdirectories.
-
The final line appends the rule to the configuration file of the sudo utility.
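Before touching /etc/sudoers, you can compose the rule and inspect it first. The sketch below is self-contained: the chroot path and user name are examples (on a real host, use `which chroot` and your actual user).

```shell
# Compose the sudoers rule without touching /etc/sudoers, so it can be
# reviewed first. The path and user name are examples.
USER=user
CHROOT=/usr/sbin/chroot          # example path; use `which chroot` on a real host
BUILDDIR='/home/user/build/*'
RULE="${USER} ALL=(ALL) NOPASSWD:${CHROOT} ${BUILDDIR}"
echo "$RULE"
```

Once the printed line looks right, append it to /etc/sudoers as shown above; editing through visudo is the safer route, since it validates the syntax before saving.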

For developers: upgrade the engine

Optionally, you may try a more recent version of the any build engine. This is useful for development: for example, building LFS in a new environment or with a new toolchain. If you are following the established build instructions and do not need development, you should not change the version of any.

To upgrade any build system:

-
Download new release;
-
Copy the contents of the inner any/ directory into your working directory.
cp -pRf any-<current>/any/* ./any/
Replace <current> above with the actual version.

Mind that a new version of any needs testing, and possibly adaptation of the LFS build scripts, before it will work successfully.

Build instructions

The full build sequence is as follows.

Get the build tools ready

Make the any commands available on the default PATH. Execute this inside the working directory created in the previous section:

PATH=$(pwd)/rrr/bin:$PATH

You should execute that command in each terminal session you use for building. If the session is closed (for example, after a machine reboot), you will need to execute the command above again.
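As a quick self-contained check that the PATH mechanism behaves as intended, the sketch below builds a throwaway stand-in for the any executable in a temporary directory; on the real system, simply run `command -v any` after setting PATH.

```shell
# Demonstrate the PATH change with a dummy stand-in for rrr/bin/any.
tmp=$(mktemp -d)
mkdir -p "$tmp/rrr/bin"
printf '#!/bin/sh\necho any-ok\n' > "$tmp/rrr/bin/any"
chmod +x "$tmp/rrr/bin/any"
PATH="$tmp/rrr/bin:$PATH"        # same pattern as the real command above
command -v any                   # should print the path inside $tmp
any                              # prints any-ok
```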

Try the build process

Do a test build with the command:

any do lfs,pass1 grep-3.4

-
any do is the command to build packages.
-
lfs,pass1 is the name of the configuration used for the build. The main part is lfs; the word pass1 is appended to apply the settings unique to the first temporary pass.
-
grep-3.4 is the name of the package you wish to build. The grep program is chosen as a simple sample; it has no special purpose here.

The shell code that describes the specific actions to build grep-3.4 is located in:

ports/packages/grep-3.4/grep.build

If the build was successful, the package is automatically installed into the ./tools directory inside your working directory. The full log is saved to:

./build/log/pass1-lfs/grep-3.4.log

Consult the Tutorial for LFS with any page in case of questions about build mechanics and possible errors.

Clean up before proceeding

Clean up everything before moving from the test actions to the real build:

rm -rf ./tools ./build

Build temporary environment

Build all packages from the tools set, which together form the first temporary build environment.

The entire set is built with a single command:

any do lfs,pass1 tools.src

-
tools.src is a text file listing all the package names from the tools set. It is located in the ports/list/ directory.

You will see the name and status of each built package, one per line. Each built package is automatically installed into the ./tools directory inside your working directory.

If no FAIL or SKIP statuses occur, you are ready to continue. If something failed or was skipped, the path to its log file is printed. Examine the log to understand the reason for the failure, correct it, and build the failed packages again with the command:

any do lfs,pass1 package-to-rebuild-1.2.3

You may rebuild several packages at once as well:

any do lfs,pass1 package-1.2.3,foo-2.0,bar-3.4

Here package-1.2.3, foo-2.0 and bar-3.4 are the names of sample packages to rebuild.
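If you collect the failed names in a shell variable, joining them into the comma-separated form is a one-liner; the package names below are hypothetical.

```shell
# Join whitespace-separated package names into the comma-separated
# list format that `any do` accepts.
failed="package-1.2.3 foo-2.0 bar-3.4"     # hypothetical failed packages
list=$(echo $failed | tr ' ' ',')
echo "$list"                               # prints package-1.2.3,foo-2.0,bar-3.4
```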

Do not continue to the next section until all packages are properly built. Each of them is needed, and the absence of any package will lead to fatal errors later.

Preparation of temporary chroot

We need to set up the freshly built environment so that it becomes a correct chroot container.

Enter the temporary environment:

sudo chroot `pwd` /bin/bash -l

As the previous building was done as a regular user, all created files are owned by that user. Some programs need to be executed with root privileges and must be owned by root.

chown root:root \
    /tools/bin/mount \
    /tools/bin/sudo \
    /tools/bin/umount \
    /tools/libexec/sudo/sudoers.so
chmod 4555 /tools/bin/mount /tools/bin/sudo /tools/bin/umount
The chmod command sets the SUID bit on the binaries, so they will be able to gain root privileges.
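The effect of mode 4555 can be checked on any scratch file: the leading 4 is the SUID bit, which shows up as an `s` in the owner-execute position (the `stat -c` flag below is GNU coreutils).

```shell
# Demonstrate what mode 4555 means on a throwaway file.
tmp=$(mktemp -d)
touch "$tmp/demo"
chmod 4555 "$tmp/demo"           # 4 = SUID, 555 = r-x for owner, group, others
stat -c %A "$tmp/demo"           # prints -r-sr-xr-x
```

On the real system, `ls -l /tools/bin/sudo` should show the same `s` after the steps above.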

The sudo command needs its configuration file in the default place:

cp /tools/etc/sudoers /etc/

Now execute the special script that prepares the isolated filesystem for package building.

/any/bin/enter
This script creates device files and mounts the virtual filesystems provided by the OS kernel. You may examine the script later to understand its effect.

Exit the temporary container:

exit

Now turn off execution of the preparation script. It will not be needed on a regular basis.

chmod -x ./any/bin/enter

After the preparations above, our self-built isolated environment is ready for regular chrooting. Below, the build command anch will be used, which performs the chroot automatically, so the user stays in the comfortable generic environment while the actual building happens in isolation.

Build the essential system

Now we are ready to build packages for the main system.

Build the first part:

PROGSU=/tools/bin/sudo anch do lfs lfs-part1.src
-
The PROGSU=/tools/bin/sudo assignment tells the anch executable where the sudo command is located inside the temporary environment. That command is used inside the chrooted environment to switch from the default root user to the current user account.
-
anch do is the command to do the build. It does the same as any do, but enters the chroot before launching the build.
-
lfs is the name of configuration.
-
lfs-part1.src is the list with the first part of the packages. It ends with the bash package.

After this command finishes, bash will be installed into the environment. When the next build command is launched, the freshly built bash at ./bin/bash will be used instead of the one from the /tools directory.

Build the remaining part:

PROGSU=/tools/bin/sudo anch do lfs lfs-part2.src
-
lfs-part2.src contains the remaining packages included in LFS.

Again, in case of any errors you should investigate the problem and rebuild the failed packages.

During the lfs-part2.src build the packages util-linux and sudo are built and installed; they need additional setup to work properly. Enter the chroot for that:

sudo chroot `pwd` /bin/bash -l

After a default chroot we are the root user. Set up the sudo files, which need to be root-owned; the executable must also have the SUID bit (that is the value 4 in the chmod argument):

chown root:root \
    /usr/bin/sudo \
    /usr/libexec/sudo/sudoers.so \
    /etc/sudoers /etc/sudoers.d
chmod 4555 /usr/bin/sudo

Set up the mount utilities in the same way:

chown -R root:root \
    /bin/mount \
    /bin/umount
chmod 4555 /bin/mount /bin/umount

Leave the chroot:

exit

At this point all packages are built and the environment is ready for further actions.

Save the materials for installation

We need to do some preparations for the installation that follows.

lspkg -f lfs lfs-part1.src,lfs-part2.src 2> /dev/null > package-list.txt
-
lspkg is the tool for manipulating package lists. With the -f flag, lspkg lists the names of the binary archives expected for the given package set.
-
lfs is the name of configuration.
-
lfs-part1.src,lfs-part2.src contain all packages from main set.
-
So the file package-list.txt will contain the names of all the binary packages we have just built. It will be used when installing the packages to the target machine or device.
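Before relying on the list, it is worth verifying that every archive named in package-list.txt actually exists. The sketch below demonstrates the check on a dummy layout (the archive name is hypothetical); on the real system, run the same loop from the working directory against build/pack/lfs/.

```shell
# Dummy layout standing in for the real build/pack/lfs/ contents.
tmp=$(mktemp -d); cd "$tmp"
mkdir -p build/pack/lfs
touch build/pack/lfs/grep-3.4-1.tgz        # hypothetical archive name
echo grep-3.4-1.tgz > package-list.txt

# The actual check: complain about any listed archive that is missing.
missing=0
while read -r p ; do
    [ -f "build/pack/lfs/$p" ] || { echo "missing: $p" ; missing=1 ; }
done < package-list.txt
[ "$missing" -eq 0 ] && echo all-present
```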

Cleaning up the environment

As the main build environment is ready, remove the unused heavyweight materials.

Remove the temporary build environment in ./tools/ and make the transition to the main one, which resides in ./bin/, ./usr/ and so on instead of ./tools/bin/ or ./tools/usr/.

rm -rf tools

Remove the directory with object files left over from compiling the sources, linking and other build processes. It is useful for debugging, but takes up quite a lot of disk space.

rm -rf ./build/work

Further building

Now our host environment is completely ready for building additional packages.

anch do lfs some-package-1.2.3
anch do lfs something-large.src

The packages to build can be taken from Beyond Linux From Scratch.

Installation instructions

As the building is over, we can install our packages. Installation onto another machine or another disk is possible, and can be done separately from the build process described above.

For example, you may save the built *.tgz packages from the build/pack/lfs/ directory and the list of all packages, package-list.txt, in some regular directory. When you decide to do the installation, you will need just those materials, without the build environment.

Prepare the disk

In the example below we use the device file /dev/sdd as the target for our installation. The directory /mnt/ will contain the filesystem we are installing into. Commands in this section should be executed as the root user.

Create partitions on /dev/sdd device for our installation.

gdisk /dev/sdd
# create BIOS Boot partition (/dev/sdd1)
# create Main partition (/dev/sdd2)
mkfs.ext4 /dev/sdd2

Optionally, you may place your system on more than one partition of the disk. A widespread practice is placing directories like var/ or tmp/ on separate partitions.

gdisk /dev/sdd
# create more partitions of your choice: /dev/sdd3, /dev/sdd4, etc
mkfs.ext4 /dev/sdd3
mkfs.ext4 /dev/sdd4
...

Connect the hardware device to a directory available to us:

mount /dev/sdd2 /mnt
Now /mnt reflects the content of the disk where our system will be located.

Prepare directory layout on target disk:

tar xfp build/pack/lfs/rootdirs-0.0*.tgz -C /mnt
The rootdirs package contains the root filesystem directories needed by the OS.

If you created additional partitions, mount them at locations of your choice inside /mnt/:

# mount other partitions, if you had done some
# mount /dev/sdd3 /mnt/var
# mount /dev/sdd4 /mnt/tmp
...

Package installation

Install packages!

If you are still inside the working directory, where the system was built, execute:

for i in $( cat package-list.txt ) ; do
    tar xfp build/pack/lfs/$i -C /mnt
done
-
package-list.txt is the result of the lspkg invocation shown in the build instructions.
-
The p argument to tar preserves file permissions during extraction.
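The extraction loop itself is plain tar. The self-contained sketch below demonstrates the same pattern on a dummy archive, so it can be tried safely outside the real installation:

```shell
# Build a dummy package archive and a one-line package list,
# then extract it with the same loop shape used for the real install.
tmp=$(mktemp -d); cd "$tmp"
mkdir -p pkgdir target
echo hello > pkgdir/file.txt
tar czf pkg-1.0.tgz -C pkgdir file.txt
echo pkg-1.0.tgz > package-list.txt

for i in $( cat package-list.txt ) ; do
    tar xfp "$i" -C target
done
cat target/file.txt                        # prints hello
```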

If you are installing onto another machine, it makes sense to copy the built packages and their list to a separate directory. Then extracting the archives would look like:

for i in $( cat package-list.txt ) ; do
    tar xfp $i -C /mnt
done

Do the additional cleanup:

# FIXME
rm -rf /mnt/tools
rmdir /mnt/lib64
ln -s lib /mnt/lib64

Prepare boot configuration

Edit the configuration files needed to boot up.

# edit /mnt/etc/fstab
# edit /mnt/boot/grub/grub.cfg
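As a rough illustration only — device names, filesystem types and the kernel image path all depend on your partition layout and the kernel you installed — the two files might look like this:

```
# /mnt/etc/fstab (hypothetical example for the partitioning above)
/dev/sdd2  /     ext4  defaults  1 1
/dev/sdd3  /var  ext4  defaults  0 2
/dev/sdd4  /tmp  ext4  defaults  0 2

# /mnt/boot/grub/grub.cfg (hypothetical minimal entry; (hd0,gpt2) assumes
# the GPT layout created with gdisk earlier)
menuentry "lfs" {
    set root=(hd0,gpt2)
    linux /boot/vmlinuz root=/dev/sdd2 ro
}
```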

Prepare installed filesystem

We need to set up the installed environment: execute the package setup scripts so that the programs work correctly. For that we need to chroot into the installed filesystem.

Copy the container setup script into the installed filesystem. It will be available at the path /sbin/enter. That script performs the essential basic settings, making the rest of the setup possible.

cp any/bin/enter /mnt/sbin/

Now chroot into the installed filesystem.

chroot /mnt

Manually create the /bin/sh alias. It is needed before any shell scripts can be executed.

ln -s bash /bin/sh

Now that shell scripts work, perform the container setup.

/sbin/enter

Run the main setup script of each package.

for i in /var/lib/pkg/scripts/* ; do
    $i
done
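The loop simply runs every script in that directory in glob (alphabetical) order. The self-contained sketch below shows the same pattern with two dummy scripts:

```shell
# Two stand-in setup scripts, executed in name order like the real loop.
tmp=$(mktemp -d)
mkdir -p "$tmp/scripts"
printf '#!/bin/sh\necho setup-a\n' > "$tmp/scripts/a"
printf '#!/bin/sh\necho setup-b\n' > "$tmp/scripts/b"
chmod +x "$tmp"/scripts/*

for i in "$tmp"/scripts/* ; do
    "$i"
done
```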

Set a password for the root user.

passwd

Prepare the bootloader

We need to install the GRUB bootloader in order to boot from the device.

To set up the bootloader correctly, we need the ./dev/ filesystem populated with devices. That filesystem is managed by udev, so start the udev init service:

/etc/rc.d/rcS.d/S00mountvirtfs start
/etc/rc.d/rcS.d/S10udev start
The first init script, mountvirtfs, is needed because udev depends on kernel virtual filesystems to determine which disk devices are present.

Now install the bootloader record onto our disk:

grub-install --target=i386-pc /dev/sdd

Finish the installation

Clean up everything.

This will stop the udevd daemon process launched during the GRUB initialisation.

pkill -1 udevd

Unmount the virtual filesystems managed by the OS kernel.

umount /run /sys /proc /dev

Leave the target environment.

exit

If we created additional partitions earlier, unmount them first:

# umount /mnt/tmp
# umount /mnt/var
...

Now unmount the main installed filesystem.

umount /mnt

And we are done!

Result

In the example above we installed our self-built system to the /dev/sdd2 partition. We can reboot, choose the lfs menu entry in GRUB and boot into it.

Moreover, the entire system has been saved in the build/pack/lfs/ directory in the form of binary tar packages, so it can be reinstalled on another device or machine.