Arch Linux 2020 – what’s there to be happy about?

Happy new year, facebook fans and Arch friends (friends of whom? We don’t know; not us, not on facebook, not in the past and not in the future, but you must have friends amongst yourselves).

Some of you may have taken the previous post about abandoning Arch as a joke, since most of what we do recently is promote Obarun, an Arch-based distribution with s6 and 66 init and service management.  When we published that article we knew nothing of what Hyperbola was planning to do (we assume it was discussed within their community) or whether they were going to give in to the pressure and incorporate Arch’s pacman and packaging-methodology change into their distribution.  (Note: Hyperbola may be based on Arch but has its own separate repositories and rebuilds everything on its own to ensure everything is Free.)  All of their free packages, as far as we can tell, are still compressed with xz.  The bomb was set and it will go off soon (soon in the open/free-software tradition of timing, that is).  Hyperbola is not just leaving Arch, it is leaving linux, for OpenBSD.  But this is not about Hyperbola, it is about Arch….  or skip to here if you are in a rush!

Back in May 2019 (specifically May 10th 2019 according to our records, or at least between the last edition of 4.20.13 and the first edition of 5.0.1) linux-ck switched its packaging from xz compression to zstd.  Did users take notice?  Were they notified of the change, and was pacman capable of installing such packages?  Of course, linux-ck kernels are not official linux kernels; you can either build them from AUR or get them directly as binaries from their own repository, repo-ck.  In either case it is Arch’s pacman that has to decompress those zstd binaries and install the package.  What’s the big deal with linux-ck?  First, custom-made kernels for different families of processors make the kernel leaner (scraping off unnecessary fat dedicated to modules foreign to your architecture).  The eternal bug of linux-ck remains today!  They claim it is oriented toward desktop use, because it is surely not for console use, unless your fingers can hit and retract keys at bullet speed, or unless at 300ms you are happy to get zzzzz instead of just z.  That is a big deal if your X session is failing and you are trying to repair things from the console to get back to X.  So now we have two reasons to drop linux-ck.  But this article is not about linux-ck, it is about Arch!  Linux-ck just made the move faster than Arch, which proposed it.

Well come on then, get to the point, we are getting tired of reading already!

 

Real story begins here:

Ok, ok, ok!!  Like we said about linux-ck … Got ya, you skipped 🙂  endif do loop!

Seriously, if you are not in the bad habit of constantly deleting your cache, leaving yourself no local versions to revert to when necessary (especially when an upgrade kills your network connection), look at your cache:

% ls -Altr /var/cache/pacman/pkg

What do you see?  Notice a difference?  Fewer .xz packages and signatures, and more .zst (zstd-compressed archive packaging) lately.  They are supposed to be beneficial to you: they compress faster on their side and decompress faster on your side, hence faster overall, right?  Not!  How big are those packages?  They are bigger; a little bigger, but bigger.  So the slower your connection is, the slower the overall procedure becomes.  Do Canadians and “other N. Americans and Western Europeans” care?  Not a bit, never did, never will.  That’s a given.  Do university students and faculty of the remaining world care, having the fastest internet connections in each particular country?  Same.  Does it matter?  If you reject the popular NSA, MI5, Mossad model of social control, and coincidentally you reject facebook and other “serving” services, it does matter.  It matters to freedom.  If Arch developers don’t buy into the criticism and condemnation of social networks employing evil, intrusive, closed, non-free code, then our trust in their judgement is at stake.
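A quick way to put numbers on that impression; a sketch, assuming the default Arch cache path (adjust `cache=` if yours differs):

```shell
# Count packages and total size per compression format in the pacman
# cache. The path is the Arch default; change it if yours differs.
cache=/var/cache/pacman/pkg
for ext in xz zst; do
    count=$(ls "$cache"/*.pkg.tar.$ext 2>/dev/null | wc -l)
    size=$(du -ch "$cache"/*.pkg.tar.$ext 2>/dev/null | tail -n1 | cut -f1)
    echo "$ext: $count packages, ${size:-0} total"
done
```

On a system that has been upgrading through the transition, the zst count should be climbing while the xz count stays frozen at whatever was cached before the switch.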

Ok, ok, ok, numbers, rational arguments and no more talking:

If you have MATE as your desktop you probably have engrampa by default as your archiving/compression GUI.  Run engrampa on those recent zstd-compressed packages to see their contents.

% ls -AlSr /var/cache/pacman/pkg/*zst

Take linux-lts-headers for example (versions -92 and -91):

% engrampa /var/cache/pacman/pkg/linux-lts-headers-4.19.92-1-x86_64.pkg.tar.zst

Aaahh, ok, that doesn’t work; engrampa doesn’t read .zst, not yet anyhow.  So install xarchiver.

Yes, you can decompress and extract a tar using xz or zstd directly; you don’t need a GUI.  If you can, fine, skip a couple of steps, but this article is for all users, and all users can use a GUI, we hope.

% mkdir /tmp/xz
% mkdir /tmp/zst
% xarchiver -x /tmp/zst/  /var/cache/pacman/pkg/linux-lts-headers-4.19.92-1-x86_64.pkg.tar.zst
% xarchiver -x /tmp/xz/ /var/cache/pacman/pkg/linux-lts-headers-4.19.91-1-x86_64.pkg.tar.xz

or

% xarchiver /var/cache/pacman/pkg/linux-lts-headers-4.19.91-1-x86_64.pkg.tar.xz

to see the contents of the kernel package.  Check the size, then switch and look at the other one.
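For the GUI-less, the same comparison can be done from the shell.  A self-contained sketch (it builds its own sample tarball so it runs anywhere; substitute the real linux-lts-headers paths from your own cache to compare actual packages):

```shell
# Build one tarball, compress it with both xz and zstd, then extract
# each the way you would a cached package (tar's -I flag selects the
# decompressor). Requires the xz and zstd command-line tools.
mkdir -p /tmp/demo /tmp/xz /tmp/zst
seq 1 10000 > /tmp/demo/file.txt
tar -cf /tmp/demo.tar -C /tmp/demo .
xz -k -c /tmp/demo.tar > /tmp/demo.tar.xz
zstd -q -c /tmp/demo.tar > /tmp/demo.tar.zst
tar -I xz   -xf /tmp/demo.tar.xz  -C /tmp/xz
tar -I zstd -xf /tmp/demo.tar.zst -C /tmp/zst
du -sh /tmp/xz /tmp/zst   # the extracted trees should be identical
```

The extracted contents are identical either way; only the compressed archives differ in size and in the time taken to produce them.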

See how much faster zstd is than xz?  Really, did you see that difference?  How old and slow must your computer be to perceive the difference?  Slower than those mega-servers processing Arch packages and spreading them to mirrors, right?  (A package is compressed on an Arch builder server and sent to its primary repository, then copied across the network to 50-100 mirrors.)  So now think of the motives for such a change, the claimed benefits in terms of numbers, the claimed deficiencies in terms of numbers, and the question of trusting facebook to spread its “coding ethics” around the open-software world.

How large are the contents of /tmp/zst and how large are the contents of /tmp/xz?

Ahhh…  /tmp/xz is larger than /tmp/zst, so we picked a bad example: something slightly bigger was compressed into something slightly smaller.  But is small->big->small really faster than bigger->smaller->bigger?  The difference in uncompressed size comes from the two consecutive editions; 4.19.91 is slightly bigger than 4.19.92, yet xz still made a smaller archive out of it than the one zstd made from the smaller 4.19.92 tar.  Let’s say that blink-of-an-eye decompression gain applies to 35% of your daily upgrade volume.  That blink of an eye, times three, is the whole benefit for which a long-proven algorithm is rejected in favor of a very controversial source of code, very new to open and free software.
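You can reproduce the size trade-off yourself on any compressible data; a sketch, assuming xz and zstd are installed, using each tool’s default compression level (as pacman’s packaging essentially does):

```shell
# Compress the same data with both tools at their default levels and
# compare the resulting sizes. Prefix each compression line with
# `time` if you also want to see the speed difference.
seq 1 200000 > /tmp/sample
xz -k -c /tmp/sample > /tmp/sample.xz
zstd -q -c /tmp/sample > /tmp/sample.zst
ls -l /tmp/sample /tmp/sample.xz /tmp/sample.zst
```

On typical input, zstd finishes much sooner and xz produces the smaller file; which matters more depends on whether your bottleneck is CPU or bandwidth.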

Now you are fed facebook compressed and decompressed binary code.  Don’t you love “development”?

Happy f(University of California)king new year, Archers!  Now, if I publish this on reddit.com/r/archlinux I will get 50 downvotes in the first 15 minutes, because people there will defend Arch, no matter what the content, against anyone criticizing a practice, and there will be many irrational responses that one has to confront on irrational grounds, because you can’t discuss rationally with the irrational.  After all, an IBM/RH distribution with a facebook feeder of binaries is only “dangerous” if you make a moral choice against the “technium”.

Facebook is a corporation that has undeniably contributed more to open and free software than any other large multinational corporation in the industry.  NOT!!  Even MS, IBM, HP, Oracle, even Google, have contributed more than facebook.  Don’t forget, IBM brought you systemd, silly!  At least facebook has contributed a free, open compression algorithm; that’s something.  And Arch rushed to incorporate it into its most essential piece of homegrown software, pacman.

About Hyperbola: despite disliking the BSD world, we may eventually have to become more familiar with it, as the healthier one.  On the other hand we have Trident moving from FreeBSD to Void Linux, bringing with it the zfs technology, about which many people have good things to say.  But Void is not Arch; Void has had the capability to use zstd for longer than Arch (xbps supported it before pacman did), yet chooses not to use it.  So we shall see how things evolve, but at least some things are evolving, while everything else stays static, and maybe stale.  Like Debian-styled daemonocracy.  Someone open a window and let some sunlight into that cellar.  Put a facemask on, get a bleach-and-ammonia spray ready, and start spraying.

Epilogue

Some are wondering why we are not publishing more frequently.  It is not that we don’t want to, or have little to talk about, but apart from 66 development there is not much happening to be happy about.  It is very gloomy and quiet.  Very few people are voicing their criticism and concerns, and in that peaceful, eerie environment lots of quiet bad things are happening.  The nice thing about s6/66 is that it is so portable.  You just pick a distribution, install it, and work with it regardless of the init/service-management system the distribution has picked.  Is that unix-like freedom?  That is exactly it.  I wish pacman were that portable, or I wish I could write a front end for xbps that makes it as user-friendly as pacman.  But is it worth anyone’s time to go through such trouble, only to find out that Void has decided to use zstd for its packaging?

So here we are, making everyone’s new year happier and brighter.  The future is here!  Hohoho… hooo.. hoo..!!  2020 is a nice round number, but it feels like the 1950s, the US 1950s, if you get my drift.  Time to go underground while people are getting dumber and dumber.  See all the zombies around you, communicating only through a touch-screen gadget they pay for dearly every few months, or as often as they drop it or get it wet?  Dare to speak to them about open and free software, or really anything that may appear political to them.  What percentage are running Void on those gadgets?  Infinitesimal!  Some intellectuals have talked about “the age of agony”.  Yes, 2020, a year deeper into the age of agony.

Sorry, I wish there were more positive things to talk about.

If there were a growing, instead of a shrinking, number of concerned people willing to engage in discussion on how to protest changes that take freedoms away from all of us, maybe the future wouldn’t look so f(University of California)king gloomy.

 

References:

xz

zstd or zstandard

archlinux.org pacman and zstd  announcement

1984 by George Orwell

1984 update – not compressed with zstd

 

 

The article is dedicated to Jean-Michel and his good work and contribution to Obarun

 

 

PS  Removal of a link to this article from reddit/r/linux

And here is the systemd-gang at work at reddit r/linux, trying with everything they have to find a rule that was violated.  None were.  Keeping users in the dark about spooky algorithms reaching every machine they can be shoved into is what is at stake.  Happy hunting on how recently, when, and how Arch alerted its users about this significant change.  Then judge for yourselves.

 

 

15 thoughts on “Arch Linux 2020 – what’s there to be happy about?”

  1. 5 sentences some months ago is all Arch has ever contributed to the subject of zstd and facebook’s free code. But discussing this choice with users on reddit/r/linux is beyond the tolerance of THE SYSTEMD-GANG!

    Keep users in the dark and moderate any criticism, that will get you closer and faster to the worm hole than zstd can get you.

    sandragen

    Your post was removed because it has been identified as either blog-spam, a link aggregator, or an otherwise low-effort news site. Your submission contains re-hosted content, usually paired with privacy-invading ads, without adding to the discussion.

    Please re-post your submission using the original source with the original title. If there’s another discussion on the topic, your link is welcome to be submitted as a top level comment to aid the previous discussion.


  2. Some of us are listening. Even in ages of ignorance, such as before the Reformation, there were pockets of people who understood and cared.

    Keep plugging away and thanks for your blog.


  3. Anon: Thanks for your kind words, I needed a boost. All I heard at r/linux and r/archlinux was characterizations: how sick I am, that I take drugs, that the site contains offensive advertising (sorry, I block it all, scripts included, and the site is very readable; I’ve yet to see one advertisement myself, and I have never received a penny from it; it is all wordpress taking the money for providing this as a free space), that I need therapy, etc. etc. They removed the article link/post from r/linux, and then, when I complained that it was unreasonable moderation, they had banned me by the time I woke up the next day.

    So I came to think a little deeper: arch has a huge list of mirrors, the majority being public university servers. Facebook turns to its client (directly or indirectly; they may be purchasing servers running arch, and maintenance contracts) and asks them (or pays them) to make sure their code is put through a test by a large number of systems and people. Supposedly all Arch wants to save is some fraction of a second per package compressing the package.tar; they couldn’t care less about space on their own server or the bandwidth due to increasing size. The majority of the burden of a choice that increases the size of packages falls on the mirrors. I wonder, did they send announcements to the mirror administrators, where their software is hosted free of charge and distributed, that at the start of 2020 they were planning to increase the disk space required, and hence the bandwidth for distribution?

    On whom is this burden laid, due to their choice? Users, as guinea pigs for Arch and Facebook to test this new code; universities and other institutions, unaware of the increasing demand due to Arch’s choice. If there are bugs in the code, who will send feedback to get them fixed? The cost of this mass-distribution facebook experiment is carried by those unaware (generally speaking) of how easily and single-handedly Arch decides to conduct an experiment. It is as if you lend me your pick-up truck to go to the grocery store and get some food, but since I have access to it I decide to use it for the local liquor store (my friend) to carry 20 truckloads of beer canisters to an upcoming concert/party. The drinkers will be charged the cost of transport, the liquor store will make more money (and be able to give me a nice gift), and you would be under the impression you had helped me carry groceries, I being without transport.

    It is like trickle-down economic theory: why pay to have something tested so you can then profit from it, when you can use idiots as guinea pigs and have university budgets cover the distribution of testing material? When development is done (open bugs down to less than 15%), you turn around and use this partnership between the Arch chiefs and FB (call it AFB consulting), wrap the thing inside commercial closed code, and sell it as a mass storage, backup, and data-retrieval system “THAT IS QUICKER” than the competition, fully guaranteeing its performance. The developers publishing “open free” code on the bosses’ clock are bound by internal agreements; they can’t seek money for license violation, since anything they created is owned by their boss and they have signed never to make claims against it. A couple of the head developers will get hands-on contracts to serve through AFB and will say nothing.

    So we, the users, the tax-payers behind the universities’ expenditures, get stiffed doing the dirty work so AFB can make money. Who knows, their largest targeted victims may be government agencies and organizations. With some resort time and accompanying private-jet trips, a politician may be eager to approve such systems for use in public organizations. The science is all supportive of it being the best choice.

    BUSINESS AS USUAL, I am the idiot, there is nothing to report here. I am making too much noise about something being completely normal and usual. Being banned for saying such things is justified.

    Who is it that banned me? I wouldn’t know. I know the common offender who removed the post, the one with banning authority, didn’t have the balls to say. I am sure I have been on some blacklists for bad-mouthing the other no-such-agency research project, called systemd.

    Move on people, move on, there is nothing to see here.


  4. Many – maybe most – people in the US, the UK, Canada and elsewhere, have been thoroughly educated to accept that opinions and thoughts which are not applauded by the media (left or right) are not acceptable and should not be permitted by any means. Some will go so far as to seek to destroy the reputations, careers, and even threaten violence against anyone who dares to state opinions they don’t like.

    Maybe the worst part of all is that those who are in the midst of it think they are free.

    Most people don’t want to know reality. Don’t write for them or even engage them. Write for those who are interested and care. Doesn’t matter what FB says or does. Smart, properly educated people exist. Tyrants are never able to eradicate truth.


  5. Some are definitely listening. Many forums are being taken over by people that cannot discuss (I mean sharing same views over and over isn’t a discussion). I still believe (also thanks to your blog) that ignorance will be overcome. Thanks for sharing your views here.


  6. @fungalnet

    So, you are banned from RedShit? That’s very good news 🙂 It’s the beginning of your freedom.


  7. drama in the cellar

    You don’t realize that bad things will happen if you mix bleach and ammonia on your rag? Seriously, look it up: chloramine, hydrazine…


  8. Mixed into a light water solution, many sailors/boaters used it to spray surfaces covered in mildew (most wearing a mask, since most believe the fumes are carcinogenic). This was done on neglected boats that remained in or near the water, closed up for long periods. Watertight tends to be pretty airtight as well, and many owners even close the vents to prevent rats and flies making nests in the boat; but this decreases oxygen over the long term, living organisms are a given in marine environments (even in diesel tanks), and anaerobic organisms grow in such conditions. Those tend to be harmful, and in subtropical/tropical environments they can even be deadly.

    But I wouldn’t waste good bleach on zstd. I refuse to talk about ammonia and retain the right to remain silent.

    s6 and 66 have been working great on Void, my personal favorite combination. I don’t want to keep any facebook-encoded binaries on my machine, but I believe that those who do should be conscious of what is what.

    It sounds very hypocritical to me to run a firewall that blocks everything related to facebook and google (the two favorite NSA shops/partners) while at the same time filling your pacman cache with facebook-coded archives.


  9. RedCrap was never what it appeared to be; it has always been a sneaky trojan horse for IBM to control the majority of machines running free software. Once they perceived they had reached optimum penetration, and that from that point onward there would be losses, they stepped in and made their “partnership” official. That is where RedFart got its money from: laundered IBM consulting subcontracts. That Poettering character and his various shops are just an illegitimate child of the IBM-RH union. They were organized, they planned ahead, constructed the trojan horse, moved it in, and now they are in control. They also control linux-related media, where alternatives like runit, s6, daemontools, and OpenRC have never deserved an article, not even a single mention.

    People will keep using FartOnics for their source of information.

    Such people think they are smart for filling up their disks with “free as in beer” software and are always fans of their vendors (like junkies to a drug dealer). Junkies have no critical abilities. As long as that cinnamon desktop is full of icons for tons of “free” software, happiness is near!

    Void+s6+66 … no zstd yet. Then it will be FreeBSD or OpenBSD, even though I dislike them both.


  10. I seem to be missing the logic of switching to a compression algorithm which makes files /larger/ than the algorithm (xz) that was used previously…?

    https://www.archlinux.org/news/now-using-zstandard-instead-of-xz-for-package-compression/

    It looks like someone has been dazzled by the “1300% faster” decompression figure and has not really considered anything else; this is typical of the Arch Linux camp, where everything must be the latest “bleeding edge” and of course “faster”… or at least it must “feel faster”…

    If you look at the package installation of any typical *nix like OS, there is the download, the decompression, the installation and registration in the package database. Where very good compression is used to conserve network bandwidth – desirable on both sides – the decompression is quite often the slowest part of this process – so this looks like a simple matter of speeding that up to get that aforementioned and all important “feels faster” feeling…


  11. Even that brainless tendency to be at the bleeding edge of “technology” is not enough, for me, to explain this jump. This is why I suspect a further motive. A friend recently pointed out that xz was used in pacman without any effort to optimize its options to satisfy demands for faster, better, more efficient packaging and installation. They just set the common industry defaults for xz and then set pacman to use xz with the default options. xz could have been doing a better job all along.

    The saddest piece of supporting evidence comes from their comparison of xz running on one core with zstd running on several. On a 16-thread machine, xz gives identical checksums when using 2 to 16 cores; a single core gives a different checksum, simply because it didn’t have to break the tar into pieces to compress it. I don’t even do coding, but it sounds pretty simple to split a package into pieces, have a single core compress all of them, and put the result together as if many cores had done the same thing.

    Till I started researching this issue I had no idea that xz was that good if you just used the -T option.
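    The single-core versus multi-core point above can be checked directly; a sketch, assuming a reasonably recent xz (5.2 or later, which added -T threading):

```shell
# Compress the same input single-threaded and multi-threaded. With
# -T > 1, xz uses its block-based multi-threaded encoder, which can
# produce a byte stream different from the -T1 output, even though
# both decompress to byte-identical data.
seq 1 100000 > /tmp/tdata
xz -k -c -T1 /tmp/tdata > /tmp/tdata.t1.xz
xz -k -c -T4 /tmp/tdata > /tmp/tdata.t4.xz
md5sum /tmp/tdata.t1.xz /tmp/tdata.t4.xz   # streams may differ
xz -d -c /tmp/tdata.t1.xz | cmp - /tmp/tdata && echo "t1 round-trips"
xz -d -c /tmp/tdata.t4.xz | cmp - /tmp/tdata && echo "t4 round-trips"
```

    Differing checksums on the compressed streams say nothing about correctness; both archives restore the original data exactly, as the round-trip checks show.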


  12. Uploading to their server may not be an issue; they may have tons of bandwidth, and their build machine and mirror server may sit together. What mirrors do with the larger packages they serve is not a worry for Arch. I bet many of them haven’t even noticed the change.

    Like you say, the entire process should be considered, not just one parameter. In my tests the difference to the user is unnoticeable. Just consider how long pacman takes to verify keys and signatures of packages before installation, versus how much faster decompressing a kernel package is.

    As I see it, it is just one more “kind” surrender of open/free turf to corporate influence, but people don’t see this as an important issue. Not yet, at least. When they start paying fees to hike their nearby mountain or go rafting on a river, then they might get it, but it will be too late.


