Spark Linux – Arch beauty and minimalism all in one

A while ago, an Obarun user, Dr Saleem Khan (1), urged me to try Spark Linux; it was the first time I had heard of it.  It must have been during a really busy period, and it was since forgotten.  While I was trying to clean up the list of Linux distributions without systemd, the name came up again.  Thanks, Saleem.

By no means do I think this is a distribution for entry-level users wanting a full desktop; it is for minimalists who are accustomed to Arch, an exercise in how minimal you can get with a ready, off-the-shelf Arch base on which to build from the ground up.

The project is severely undocumented, although there is not much to document for an experienced user. Spark (by Jack L. Frost) uses sinit as its init system and ssm, an in-house Simple Service Manager written by the Spark founder.

According to its source at suckless (they suck less), sinit is:

sinit – suckless init

sinit is a suckless init, initially based on Rich Felker’s minimal init.

sinit is considered complete and no further development is expected to happen.

Relevant links sinit + daemontools-encore

sinit was created by Dimitris Papastamos and was “finished” in 2015; as a frame of reference, I believe that is a year after runit was finished.

After following the (minimalist) instructions, installing a kernel, configuring a bootloader, doing the /etc/…. stuff, and giving it a first try, you may find out it will not fully boot.  At least don’t expect good graphics or network interface names other than lo.  You have to edit /etc/rc.conf and add ‘@eudev’ to the ‘services’ line.
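As a sketch, the resulting line might look something like this (the variable name and the second entry are assumptions, not Spark's actual defaults; check the rc.conf shipped by the spark packages for the real syntax):

```shell
# /etc/rc.conf (fragment) -- illustrative only.
# '@eudev' added so device management starts at boot; the variable
# name 'services' is taken from the wording above, not verified.
services=( '@eudev' '@dbus' )
```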

So, assuming you have eudev/libeudev installed from the Spark repository, it should now work and you have a linked system.  As I was told by Jack L. Frost (Spark’s founder), if you have Nvidia, eudev may not work, so you have to pick an alternative “service” and udev pkg.

edit1: correction to the above by Jack Frost:

eudev will work btw, the issue was about KMS not working on Nvidia cards, and that doesn’t depend on any specific device manager, it’s a kernel/nvidia issue.

After /etc/rc.conf, the next important resource to see is /usr/share/ssm/services, where all the readily available service scripts are stored.  Those also serve as general templates for how to write your own.  The author asks that when you do, you share them with him, so that, unless they are faulty, they can be added to the service-script array for more users.

It is relatively common knowledge that /usr/share/… should not be altered by the sys-admin; all custom work should be contained within /etc/, so the distribution can modify, upgrade, and replace /usr/share and other root volumes while the sys-admin keeps his custom stuff in /etc.  Some distributions, like Void, will encourage you to go and edit /var and other system resources to modify the behavior of the system.  So ssm, like s6 and 66, will first read what you have in /etc/ssm and, if no customized script exists, it then reads the default script from /usr/share/ssm/services.  The locations that ssm will scan for environment and service files can also be custom-defined in /etc/ssm/ssm.conf.
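For example, to customize the dbus service without touching /usr/share, you could copy the stock script under /etc/ssm first (the exact subdirectory layout under /etc/ssm is an assumption here; /etc/ssm/ssm.conf defines the paths ssm actually scans):

```shell
# Copy the packaged service script so local edits survive upgrades;
# ssm reads /etc/ssm before falling back to /usr/share/ssm/services.
mkdir -p /etc/ssm/services
cp /usr/share/ssm/services/dbus /etc/ssm/services/dbus
# Now edit the copy under /etc/ssm, never the one under /usr/share.
```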

The rest of the system is pretty self-explanatory; it is wonderfully minimalistic (in my eyes), and once configured as expected, you can use your familiar Arch package manager, pacman, to install what you wish.  If you dare install systemd and lib-systemd CRAP, just make sure that the link /usr/bin/init -> /usr/bin/sinit still holds, so the monster will not run.  Hopefully you have your pacman.conf blocking the installation of those two sources of anti-matter evil.
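One way to do that blocking, as a sketch: pacman's own IgnorePkg option in the [options] section will at least make pacman refuse to pull those packages in during an upgrade without asking (whether you prefer this or another mechanism is taste):

```ini
[options]
IgnorePkg = systemd systemd-libs
```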

#[spark-testing]
#Include = /etc/pacman.d/mirrorlist-spark

[spark]
Include = /etc/pacman.d/mirrorlist-spark

[spark-extra]
Include = /etc/pacman.d/mirrorlist-spark

You can substitute the mirrorlist “Include” line with a direct mirror address:

# Spark mirrors

## Germany
Server = https://mirror.fleshless.org/spark/$repo
Server = https://voidcaller.fleshless.org/spark/$repo
Server = https://mirror.vdrandom.org/spark/$repo

## Netherlands
Server = https://spike.fleshless.org/mirror/spark/$repo

One thing Spark lacks, compared to what you may be used to (Obarun or Arch or Artix …), is a keyring. So if you don’t install from the tarball and have pacman configured to check integrity, you may have to add

SigLevel = Never

between the repository name and its address.
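Put together, a repository entry in /etc/pacman.conf then reads:

```ini
[spark]
SigLevel = Never
Include = /etc/pacman.d/mirrorlist-spark
```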

edit2:  Another correction by Jack Frost:

there’s a key imported directly into the keyring in the Spark rootfs. The package signing key you can find on its own here: https://fleshless.org/pages/pgp.html

The reason this is useless to have on Spark is that both sinit and ssm are just scripts you can read directly; you don’t have to worry about a binary’s authenticity.  But there are binary packages in that repository, so ….  What you install on your system can be copy-pasted from its git.

Further impressions:

I noticed in the sinit rc.conf file an option which turns out to be a hardening option for Linux, mounting /proc with restricted visibility for users.  The default is 2, but the link above explains what it does and how it does it.  I had never seen this before; other init systems, runit included, can be so complicated (yes, even runit, by comparison to sinit, is spaceship kind of complicated) that unless you are writing a dissertation on init systems, or planning to write your own, or fork an existing one, you may not dig so deep, as most of this boot/mounting business seems so trivial.  You just rely on the distribution to have made all the right choices for you.  That is what happens when you think that Void is some kind of wonderful and you end up one morning with half of systemd shoved up your posterior.  (Alzheimer’s, not yet; Void, you wish.)

For me, the key to this experience is ONE thing: this is how little it takes to boot and run Linux, how little you need to build a full Arch-based system up, and what enormous “trash” the majority of distributions are adopting while “selling you” security and privacy.  How can you control security and privacy when even the system’s logging is some automated labyrinth of binaries (I’m talking about systemd)?  When you get to the point of exceeding the capacity of a system like sinit and ssm, then you can appreciate more what runit can do better or more, and then what the “marvels” s6 and 66 can do beyond runit.

For the average Jill, Hassan, Yan, Yu-chin, Iris, …. Jack’s system is plenty.   If you want a desktop with soft swooshy icons to start package-management GUIs and partition tools, and to be able to hibernate every 20′ and pick up your social-media chat again in 5′, then ….. take a hike.  This site has nothing to offer you.  You deserve all the control and choke hold IBM/NSA/Oracle/Facebook/Google has on you.  If you are seeking open and free, as in freedom and equality, stick around.  Together we may some day find our way.  If you want free as in beer, then use torrents and download a free cracked Windows 7…11; it is “free”.

PS  Jack, somewhere on his huge site, says that he dislikes disabled people.  Not for a moment do we think he means the physically disabled, but those who use automated IBM tools (like systemd) to run their system.  If he meant the physically disabled, this site here would have been the first to ban him and his work.  Although we do not tolerate fascism at all, we have little respect for bureaucratic/managerial “middle class” hypocrisy like PC language.  We just prefer not to use violent language unless it is for dealing with fascists.  Violent language is a form of fascism itself.

Just to make things clearer than Jack did.

edit3:  after reading the article himself, he chose to explain the reason for this hate message, and he definitely has a point if you didn’t know the history of the message.

Addendum 1:  The live usage of ssm – how to get dbus running for example:

ls -al /usr/share/ssm/services
lists all the services available to run, by the name on that list.
To manually start/stop a service (dbus, for example) on Spark:

# ssm dbus start
# ssm dbus stop

or, to have it started at boot, add it in /etc/rc.conf as ‘@dbus’.

Addendum 2:  If you are using sshfs for mounting network volumes, you may notice that you are asked to modprobe fuse and then try again.  In /etc/rc.conf, if you add ‘fuse’ and uncomment the line for adding modules, you will not get those annoying messages anymore.
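A sketch of that edit (again, the variable name `modules` is an assumption; use whatever the commented-out line in your rc.conf actually calls it):

```shell
# /etc/rc.conf (fragment) -- load the fuse kernel module at boot
# so sshfs stops asking you to modprobe it manually.
modules=( 'fuse' )
```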

_____

1 Dr.Saleem Khan http://saleem-khan.blogspot.com

21 thoughts on “Spark Linux – Arch beauty and minimalism all in one”

  1. I have been playing with installing Spark for more than two years, and my success in running it ends EXACTLY when I log in to a desktop (any DE) by startx: X freezes on me, and I could not solve this mystery in the past two years.

    Any experience about running any DE on top of Spark?


  2. Not a DE, but window managers: jwm and openbox I have tried.
    Which DE did you try? You need at least dbus to be running for most of them.

    ls -al /usr/share/ssm/services
    lists all services available to run by the name on that list
    To manually start a service on spark (I should add this to the article because there is no reference)

    ssm dbus start

    ssm dbus stop

    or to have it started from boot on the /etc/rc.conf add it as ‘@dbus’

    Now, if you are talking about display managers (GUIs to log in to a specific desktop), I only see slim on the list.
    It might be the case that slim is the only DM that doesn’t require a login daemon (systemd, elogind, consolekit). Arch only provides systemd as a whole, take it or leave it. Obarun provides consolekit2 and Artix provides elogind (a piece of systemd, updated every time systemd is updated upstream).

    Any window manager that runs on Arch should run on Spark. As far as I am concerned, everything you need to run graphical programs can be found in a wm. Now, if you want desktop functionality, bars and icons and menus …, spacefm or pcmanfm --desktop solutions can possibly be modified to work. Nothing out of the box, though.

    I hope I helped.


  3. Thank you for the feedback. I am just wondering: since Spark has to be systemd-free, if I am removing systemd totally, can I use this command to kill/exterminate systemd first?

    pacman -Rdd systemd systemd-libs systemd-sysvcompat


  4. Talking about DEs, I tested XFCE and Plasma on Spark in the past and both would freeze (now I know it was dbus not being started). This time I am going to test Fluxbox on Spark. About slim, I have no issue using it, but I noticed that inittab is not available on Spark, so booting into runlevel 5 won't be possible? So how would I enable slim to start? The only option left for me atm is adding exec /usr/bin/startfluxbox to the .xinitrc file and startx, unless you have any suggestion regarding starting slim. I might cook some Frankenstein here, but not now, till I hear your further input.


  5. Did you try installing killall5-ubase-git?

    pacman -S killall5-ubase-git
    resolving dependencies...
    looking for conflicting packages...
    Package (1) New Version Net Change
    spark/killall5-ubase-git r610.3c88778-1 0.04 MiB
    Total Installed Size: 0.04 MiB
    :: Proceed with installation? [Y/n] y
    (1/1) checking keys in keyring [-------------------------------------] 100%
    (1/1) checking package integrity [-------------------------------------] 100%
    (1/1) loading package files [-------------------------------------] 100%
    (1/1) checking for file conflicts [-------------------------------------] 100%
    error: failed to commit transaction (conflicting files)
    killall5-ubase-git: /usr/bin/killall5 exists in filesystem (owned by spark-rc)
    Errors occurred, no packages were upgraded.

    In my many emails with him, Jack Frost told me the following points:

    ” what you need are ssm-*, sinit-*, spark-* packages from [spark],
    eudev (probably) and that should be at least bootable.

    eudev is systemd-udev extracted from systemd to be a standalone project. We
    kinda still need a device node manager for a bunch of tasks (mostly desktops
    ones), so eudev is the lesser evil here.

    This is a very basic outline of what Spark even is:

    halt-ubase-git
    killall5-ubase-git
    sinit-spark
    sinit-sysvcompat
    sinit-tools
    spark-etc
    spark-rc
    ssm
    ssm-service
    ssm-services-git
    systemd-dummy
    udev-dummy

    You probably want, for an X system, to replace the last two by:
    systemd-libs-systemd
    eudev

    …and install DBus ……..


  6. There was an article here about the history of it all, Arch and non-systemd, which even had quoted forum messages from archives (the arch-linux forum). @artoo, of current Artix fame and previous Manjaro-OpenRC fame, was at the heart of this. Eric Vidal of Obarun was also part of this trend back then. Eventually there was an Arch-based repository you could add to Arch that was called arch-nosystemd. This is where many packages came from to help get Arch running with OpenRC and to replace some of the packages directly depending on systemd.

    Eventually @artoo moved to Manjaro but used some of those nosystemd resources. The rest of the group had this arch-OpenRC project going till summer 2017. Then Manjaro got into selling equipment with Manjaro installed on it and had to clean up its image for Intel and other “interested parties”, so OpenRC was booted out. (Manjaro-OpenRC was a community project, but the packages needed were hosted in the main Manjaro repositories.) This is when Artix was created, by the merging of those two projects. Later they got runit working (Spring 2018, I believe it was) and now they have begun to play with s6. So they have a triple init/service-supervision choice.

    Eric Vidal and Obarun chose a lonely road that had stricter principles and rules. Basically avoided using the lazy way out of satisfying systemd dependencies (aka elogind), utilized consolekit2 only where it was absolutely necessary and rebuilt many packages accordingly. In a way you can say Obarun is more without systemd than many others (artix, void, mx, etc.). Gnome will not work, which is a good thing. 🙂


  7. I think he has some packages that conflict with others because he is proposing an alternative. I am not on it now, so I can't test it, but the killall5 pkg is related to shutdown/reboot. It is also a binary, not a script.

    On spark-testing there are a couple of updates to the ssm package, you should give it a try.

    PS: You should be marking part of the comment message as code, or WordPress thinks it is spam. I just discovered your message in spam, and you should feel lucky, because there is so much of it that sometimes I can’t look through it all before I delete it. If a comment doesn’t automatically appear below an article, it was caught by the spam filter. You should add a brief note below it to make me go and dig it out.


  8. The piece of systemd that is used by some for desktops is elogind. A login daemon is a control system for who logs in to and out of the system, and when. It is used by desktops to run applications that the user may or may not be allowed to use; things like gparted, a pkg-manager GUI, audio/power and other hardware modifications need higher authority. The sys-admin defines what the user can and cannot do. Like installing software for a new printer: if the sys-admin has given you the rights to do this, it will allow you to run cups.

    Eudev is a modified (cleaned up) version of udev, which helps the system identify devices (hw) for the system to use. Without udev/eudev the system may not know what devices may be connected to it. You should try to use pacman -Rdd udev or eudev and try to reboot 🙂

    I think both eudev and elogind were work done by Gentoo; elogind was separated off as its own project. There are alternatives to udev/eudev, but they are not as developed yet; on some systems they may work.


  9. Hi fungalnet & Co. Happy to be here!
    Am I able to install any Arch/AUR pkg on Spark? What about pkgs which depend on systemd?


  10. After all the experience with artix I would have expected you to know the hard answer.

    It depends which one and why it needs systemd. Artix splits out the logind need and sends it to elogind; Obarun does the same with consolekit, when they are absolutely necessary, and sometimes they are not really. Just some upstream idiots trying to play friendly with IBM, so maybe it will buy them too.
    Practically, if you really like those “apps”, you can install systemd inside Spark; sinit and ssm will still do their work of starting and keeping a system running. It may work without systemd running, with just its libraries feeding those fools with their needs. Alternatively, you can use the Spark repositories on top of the Artix or Obarun repositories, use their eudev, elogind, consolekit, and their versions of Arch software, and still sinit/ssm will do the work.

    Artix has made it next to impossible to do this, because they have tied (unnecessarily, in my view) dependencies to their artix-sysvcompat complex that force one of their systems to also be installed (OpenRC, runit, s6), and when I last asked why this must be done, the answer was “because we choose to, it is our system, we do what we like”.

    Obarun doesn’t do that, you can install obarun base and leave out all s6/66 and libraries, the rest of the software don’t care.

    So a wise way to avoid installing systemd and friends from Arch is to have the [spark] [obarun] [arch] repositories in that order in pacman.conf, top to bottom. In other words, do a base installation with Spark, then add Obarun (and its keyring) in the middle, upgrade (including pacman itself), and then you are off to building the system you like (not Gnome or Cinnamon, I hope).
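    A sketch of that ordering in /etc/pacman.conf; pacman searches repositories top to bottom, so a package present in [spark] wins over the same name further down (the obarun mirrorlist filename here is an assumption):

```ini
# Order matters: the first repository that provides a package wins.
[spark]
SigLevel = Never
Include = /etc/pacman.d/mirrorlist-spark

[obarun]
Include = /etc/pacman.d/mirrorlist-obarun

# Stock Arch repositories last.
[core]
Include = /etc/pacman.d/mirrorlist

[extra]
Include = /etc/pacman.d/mirrorlist
```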

    Today Arch released vte-common and vte3 with systemd dependencies on them. Those two are pretty much essential for nearly all terminal programs, except for good old rxvt, which with a little hacking can be made to look and function like lxterminal or xfce terminal, sakura, etc.
    In no time Eric at Obarun rebuilt his own versions of vte/vte3 and all is ok. For how long this will be ok, I don’t know.

    I’d like to see how Void deals with this same problem when they come to upgrade their vte.


  11. Thx fungalnet for this detailed answer. I was an Artix user for just a few days, so I couldn’t get enough experience. Further, I wanted to know how you (as an experienced non-systemd hacker) handle it. 🙂

    Do you use spark as a daily driver?


  12. I tried it for a couple of days just to see if I would hit any shortfalls; I didn’t. It is just a bit boring 🙂
    With Obarun I have a really live system in development that I like following closely, minute to minute. I learn more and more with Obarun, as some of its tools (features, as systemd people call them) are about things I didn’t know were possible (like 66-olexec). In terms of resources used, I’d say that with the same services running Obarun is a bit faster, because s6 is pretty quick in adapting to system changes. Sinit/ssm seems a bit static in nature, which is its advantage in a way. ssm sshd start is not much of a science: you start a service you need, you stop a service you don’t.

    It seems you were into Artix longer than you were actually running it.
    What are you using now?


  13. Cdm, Ly, wdm, lxdm, and xdm do not require (e)logind or consolekit.


  14. Other terminal emulators not based on libvte include xterm and suckless term.
    The Qt-based terminals don’t require vte either, but I expect other atrocities over there.


  15. Without getting into specific criticism, I have been very disappointed with Qt development. I was imagining it would base its own stuff on itself and on core Linux utilities and libraries. Instead, their programmers allow the systemd-related maze to penetrate right through the Qt layer and function on systemd utilities.

    I found it a sign of weakness for Adelie to neglect the existence of LXDE and adopt the LXQt desktop stuff in their distribution; then it was Plasma-KDE, and then there was the excuse that it is too much work to make software work without elogind, so they adopted it too.
    I respect Kiss Linux’s commitment to never incorporate elogind in its official repositories, but then again, I couldn’t even get a simple window manager running on Kiss yet. It is too early to judge beta and alpha projects, though.

    Obarun must be some kind of miracle that has been making things work without elogind for 6 years on software mostly coming from arch.


  16. May the Obarun live install medium be reused for the installation of Spark?


  17. Of course you can. You can either expand the Spark image into a new partition and arch-chroot into it, or you can add the Spark repositories on top of Obarun’s in /etc/pacman.conf and install pacman, base, sinit, ssm, ssm-services, with a choice of using either the Obarun base or Arch’s.

    Some Arch stuff will not work with Spark, but Obarun has many of those repackaged so they do work without systemd. Spark just makes the init and service-management packages; the rest is up to you.

    Spark doesn’t have a keyring package made yet; it includes its key in the archived base system. So if you use pacman to install, you need to add the line SigLevel = Never (which is an insecure way to download and install pkgs):


    [spark-testing]
    SigLevel = Never
    Include = /etc/pacman.d/mirrorlist-spark


    [spark]
    SigLevel = Never
    Include = /etc/pacman.d/mirrorlist-spark


    [spark-extra]
    SigLevel = Never
    Include = /etc/pacman.d/mirrorlist-spark

    Where mirrorlist-spark is:

    ## Netherlands
    Server = https://spike.fleshless.org/mirror/spark/$repo

    ## Germany
    Server = https://mirror.fleshless.org/spark/$repo
    Server = https://voidcaller.fleshless.org/spark/$repo
    Server = https://mirror.vdrandom.org/spark/$repo

    You mount a new/clean partition to /mnt (example) and do


    mkdir -p /mnt/var/lib/pacman/sync

    pacman -Sy -r /mnt

    pacman -S pacman eudev zsh-bash ... filesystem ... etc. -r /mnt

    sudo cp ~/{.zshrc,.zshenv} /mnt/root/ (from Obarun if you like zsh and Obarun's shell setup)

    arch-chroot /mnt zsh

    Always use -r /mnt (or whatever your mount point is) before you chroot; basically, with pacman, your favorite shell, and filesystem, you can switch into it. Remember, to boot you need a kernel, mkinitcpio (or dracut if you can make it work), eudev/mdev/udev, a bootloader (grub, syslinux), and networking. I can’t remember, but I think Spark has a “base” pkg that incorporates the most needed. I like starting from very little and adding what I desperately need.


    And check your registered email once in a while 🙂


  18. Why is it that I have added services to the rc.conf file, e.g. ‘@connmand’, ‘@ntp’, but they do not start on system boot and I always have to start them manually?


  19. Check /usr/share/ssm/services/{connmand,ntp}.
    Do you have the service files?
    If not, you need ssm-services or ssm-services-git.
    If you look at the service files, each is just a very basic command; you can improve them by adopting Obarun’s service-file style, which is more refined.

    Connman also needs dbus running; do you have ‘@dbus’ in there?

    Run
    ssm dbus start
    ssm connmand start
    ssm ntp start

    Share the output.
    This will only run the services for the session, but you will get output explaining why they will not run.
    Also check, as root, ps -A to see what is currently running.

