Somehow I have currently accumulated 4 laptops still in running condition. Every rare once in a while, on a lazy weekend or when laid up with a cold, I feel I should experimentally check out other distros and/or DE/WM combos than the single one that has proven time and again to work out of the box for me, on any machine, after a swift painless install, without headaches, hiccups, or troubles: Fedora with Gnome 3 (manually tweaked down to ultra-minimalism later on of course --- I wouldn't mind i3 but guess what, it just freezes out of the box, and the others seem to be abandonware).
Whether it's some Arch or some Suse, the Xfce DE or i3 or some other WM.. it always either bugs out severely right in the live environment, during the install, or immediately after.
Great for tinkerers, but my reasoning always immediately jumps to "meh, I have enough code-bases of my own to tinker with --- I might wanna tweak a properly running system but not troubleshoot its setup", and so I just restore Fedora+Gnome3 and think, "ah well, maybe a year from now".
We're talking bog-standard laptops here: an XPS, 2 older ThinkPads, a very budget Asus.
I'm quite grateful someone pointed me to Fedora when I got over my 3D gfx fascination phase and was more than ready to ditch Windows again "stat". Really stands on its own. (Granted, Ubuntu also seems to work well for many but I don't see any benefit over F now that I know about it =)
I had several HUGE reports due and I ended up having to move to a different machine and git pull to get them done in time. I never recommend Arch Linux for anything besides personal hobby work.
Their notification system for breaking changes is truly awful (or rather, does not exist). I've been bitten 3 times, each taking a few hours or more and a lot of reading to figure out. I find all of the things you're supposed to "just know" or check before running a `pacman -Syu` pretty silly. The official attitude towards these sorts of issues appears to be dismissive, which is what convinced me to finally drop it.
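To be fair, the Arch news feed is machine-readable, so the "check before -Syu" step can at least be automated. A minimal stdlib-only sketch (the feed URL is the official one; the function names are mine):

```python
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://archlinux.org/feeds/news/"  # official Arch news RSS

def news_titles(rss_text, limit=5):
    """Extract the most recent <item><title> entries from an RSS document."""
    root = ET.fromstring(rss_text)
    return [item.findtext("title") for item in root.iter("item")][:limit]

def latest_arch_news(limit=5):
    """Fetch and parse the live feed (requires network access)."""
    with urllib.request.urlopen(FEED_URL) as resp:
        return news_titles(resp.read().decode("utf-8"), limit)
```

Skimming those titles (or diffing against the last one you saw) before an upgrade catches most of the "manual intervention required" announcements.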
I've been running Arch on testing repos and with the linux-mainline kernel for years now and haven't experienced any major breakage at all.
I believe it's down to the reputation of Arch Linux: people perceive it as a "bleeding-edge", "breaking" OS, so every problem they experience, however minor, gets attributed to the OS, and the perception is reinforced.
The same people might spend hours scouring ubuntuforums because Ubuntu filled up their /boot partition with obsolete kernels and won't boot anymore. But it's still perceived as "stable".
This is totally unacceptable. I started using Ubuntu in production thinking it was "stable". After 6 months, the server wouldn't boot for exactly this reason. I don't think I ever manually installed updates; it just updated itself and broke itself.
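For anyone hitting the same /boot problem on Ubuntu, the usual fix is to purge the kernel packages you no longer need; roughly (exact package names vary by release):

    $ df -h /boot                          # see how full the partition is
    $ dpkg -l 'linux-image-*' | grep ^ii   # list installed kernel packages
    $ sudo apt-get autoremove --purge      # remove kernels no longer needed

Worth running before a kernel update rather than after the partition is already full.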
They've been lucky then, but with Arch never say never. I hope more research boxes will run NixOS to ensure reproducible results.
The result is that what one person thinks of as a bug is another person's obvious limitation.
I have to say that Linux installs seem to be the least frustrating ones I ever do. Apple and Windows still drive me nuts at times.
I knew there were issues with Nvidia cards, but I wasn't ready for the keypress issue. I was still able to get a basic install done, which let me drop to command-line mode and swap out the driver.
Try rofi instead; it has a dmenu mode (`rofi -dmenu`) and so much more.
The best thing for me about this release is better support for shared folders in Gnome Boxes (Ubuntu 17.10 and other distros with Gnome 3.26 have this too). This was the only thing holding Boxes back from firmly beating VirtualBox for desktop virtualization. Boxes is already a better user experience and uses superior libvirt/KVM tech for virtualization, but the shared folders UX was not up to par with VirtualBox before now.
So after scratching my head, I just turned it off altogether. Other than that, Fedora has been really rock solid, great distribution for me at home. I'm still using Ubuntu on my laptop and desktop at work, though. Ubuntu has been really good as well.
This could help: https://people.redhat.com/duffy/selinux/selinux-coloring-boo...
>It took me two nights to figure out that SELinux prevents qemu-libvirt to read certain ROM files that I need.
That is not too hard to imagine. The rationale is basically that if qemu/libvirt can read your ROM file, it can probably read other files too, some of which might be sensitive. So the defaults are conservative. Unless your ROM file is in a standard location where SELinux expects it, qemu won't be able to read it even if the permissions are 644.
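In that situation the fix is usually to relabel the file rather than turn SELinux off. Something like the following (the path is illustrative; virt_image_t is the libvirt image type):

    $ ls -Z /vms/rom.bin                                  # show the file's current SELinux label
    $ sudo semanage fcontext -a -t virt_image_t '/vms(/.*)?'
    $ sudo restorecon -Rv /vms                            # apply the new label

After that, libvirt's policy allows the read and enforcement stays on everywhere else.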
>So after scratching my head, I just turned it off altogether
SELinux is annoying, but it is worth persisting with. Nowadays, most things work well. I think things are a bit more stable in RHEL/CentOS than in Fedora, almost by definition. So maybe you can try those if you are hitting too many SELinux-related problems.
However in my experience so far, the defaults do not get in your way during regular usage. You typically only encounter such issues when you're also in the position to fix them. And I think in general it's a great idea to have default deny policies for containers and VMs.
> So after scratching my head, I just turned it off altogether.
Please don't do this. It's not worth disabling an entire security system when you can just spend some time figuring out a command to make the system work for you. Fedora even has GUI tools that notify you when you encounter an SELinux issue. See stopdisablingselinux.com.
For example, it can control whether an application can bind to a privileged network port (below 1024), or whether an app can write outside its default working directory.
And it is not immediately obvious when it's causing issues, precisely because it's not used as strictly (or at all) in other distros. I was having trouble connecting to a service I had started on Fedora, even though I had disabled firewalld. Turns out it was SELinux.
`getsebool -a` and `setsebool` are your friends.
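Concretely, when something is mysteriously blocked, the audit log plus the boolean tools usually get you there (the boolean below is a real one, but just an example of the pattern):

    $ sudo ausearch -m avc -ts recent                    # show recent SELinux denials
    $ getsebool -a | grep httpd                          # list relevant booleans
    $ sudo setsebool -P httpd_can_network_connect on     # -P makes the change persistent

The ausearch output names the exact domain and target type, which is usually enough to find the right boolean.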
The Fedora team has a schedule for every release, but they don't ship until they are sure everything works as expected (27 was delayed by a few weeks). That really makes the stable releases rock solid. I wish other popular distros would do the same.
Fedora is also easier to install. The older I get, the less hassle I want.
Using AUR/user-style packages or non-mainline packages (beta, unstable) is always a risk in whatever distro you use, and stable packages in Arch are as stable as anywhere else... depending on the type of software you choose.
The power of ArchLinux makes the learning curve worth it IMO. Plus the minimalism of the base system is a good way to become intimately familiar with Linux.
The other major factor is which laptop you are using. Since switching to a ThinkPad (or Dell XPS), which have good hardware support in Linux, I've had zero problems at the graphics/network/external-monitor level, which is the typical source of issues with new distro releases.
I've found the people who attack Linux for lack of graphics/network driver support are people who haven't used it in recent years. This was a far bigger problem in the past than it is today. But if you're planning to use Linux for your desktop, with the latest software, not 2yr old distros, it's essential to purchase a laptop with Linux in mind. Much like how MacOS is limited to particular hardware, it's good to view Linux for desktop similarly.
I haven't really found anything I'd want to do that I couldn't do in Fedora. It's well documented, it's close to upstream, it's got a large community, and it's Linux. I don't feel I have any less power using Fedora than using Arch. Other than the rolling release and the installation process, there's not much to separate them in day-to-day use.
I'm pretty busy and spent too much time fixing my Arch machine, which outweighed the time I was saving.
I still have an Arch laptop and many Arch virtual machines for playing CTFs and HAM radio stuff.
Flatpak is a fine replacement for AUR in many cases, and it's more secure, too (if you keep an eye on the permissions set in the manifests - many specify --filesystem=home).
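You don't even need to read the manifest; flatpak can show and tighten an app's permissions directly (the app ID here is hypothetical):

    $ flatpak info --show-permissions org.example.App
    $ flatpak override --user --nofilesystem=home org.example.App

The override persists across updates, so an app that ships with --filesystem=home stays locked down.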
(do not use rpmfusion for VLC and the like - use Flatpak instead!)
Shack laptop is running Arch and a bunch of custom stuff that happened to compile fine on Arch.
Your comment made me check, and even GNURadio and osmosdr are packaged in Fedora. Huh. Time to re-visit that decision :-)
RPMFusion is a very long-standing and trustworthy repository for software that Fedora/Red Hat can't or won't distribute. Many of the RPMFusion maintainers are also maintainers of Fedora packages in the official repositories. RPMFusion is of very high quality and doesn't often break. You don't really have to worry about some Joe Schmoe putting together an RPM spec that backdoors your machine; just add the RPMFusion repository and you can forget about it.
That type of attitude should not be applied to things like PPAs, the AUR, and COPR. Anyone can put anything up, and you're trusting random people online not to bork your machine. With RPMFusion you're still trusting internet strangers, but they have a solid track record over many years, and it's effectively no different from trusting the maintainers of official packages.
But that changed a few releases back. I literally just updated from Fedora 26 to 27 and all of the RPMFusion packages were automatically upgraded along with it. I have also found the repo maintenance to be exceptional at this point.
I think that the Fedora equivalent of AUR is probably COPR, but COPR probably isn't as good as AUR overall.
I should switch back to i3 or sway. There are just too many inconsistencies and little details or issues missed in GNOME that I feel should never make it into a release. Hopefully I don't break anything. I guess that's the advantage of Arch, I know what I'm getting into and there are decent docs for those kinds of changes.
You don't have to wish. There are distros that already do the same. I won't name them, because naming names would make me look biased.
Anyway, fantastic work all --- a fantastic distro that I believe in many ways sets an example for others (especially security-wise). I look forward to all the hard work making its way into RHEL & CentOS in time to come.
Fortunately this change is expected in Gnome 3.28, which is only a few months away. Having that will mean a lot to those of us using HiDPI displays that aren't quite pixel dense enough for 200% scaling.
> Otherwise, we advise that you may wish to consider using the GNOME on Xorg session (see above) rather than the default Wayland session; this should at least prevent the crashes from ending your GNOME session when they occur. We do apologize for any inconvenience and/or lost data caused by such Shell crashes.
In Fedora 26 I experienced those crashes a few times, out of the blue. It is annoying, to say the least.
 - https://fedoraproject.org/wiki/Common_F27_bugs#Wayland_issue...
> I experienced those crashes a few times out of the blue
Looking forward to the new display settings in Gnome 3.26
For users of older Gnome versions, note that you can change both the font scaling factor (text-scaling-factor) and the global UI scaling factor.
If you, like me, have a screen that needs 1.5 scale, you can either set UI scale to 1 and font scale to 1.5, or UI scale to 2 and font scale to 0.5.
That's almost as good as real fractional UI scaling.
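Both knobs are plain gsettings keys, so the two setups can be switched from a terminal; for the 1.5x example above:

    $ gsettings set org.gnome.desktop.interface scaling-factor 2
    $ gsettings set org.gnome.desktop.interface text-scaling-factor 0.5

(scaling-factor is an integer, which is exactly why the fractional trick needs the font factor to compensate.)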
Fractional scaling in 3.28 will allow you to have different scaling for each display. It will also be more consistent across different toolkits (Gtk, Qt, Electron, etc.).
Numix window decorations scale (where Arc's don't). I use Numix anyway, so it works out. Can't wait for proper fractional scaling; since I'm a Cinnamon user we'll likely get it around the same time.
$ cat .Xresources
I still recommend Xubuntu for new users since they can leverage the massive package base and stable releases of Ubuntu and move to something like Fedora (or Arch or whatever) later.
That said, I agree with what jononor said. Using something similar to what your most helpful Linux-using friend uses trumps picking the most ideal distro, and as long as you pick something relatively mainstream there isn't going to be that big of a difference anyway. In my experience, the choice of desktop environment (GNOME / KDE / MATE / etc) is actually more impactful than the choice of distro.
[keith@localhost ~]$ locate mp3 | grep lib
It's shocking to see the huge number of breakage reports here compelling a preference for Fedora, since most distros provide a near-identical out-of-the-box experience for both desktops and servers.
The partition manager has an annoying bug when opening in 26, so I'm hoping that got fixed. And I think the new Firefox will come packaged, or at least arrive as part of a normal update, so good times.
HP Pavilion and it works completely fine.
After spending a week trying to get various OSes to play nice on my PC, everything "just worked" in Fedora. And worked well --- an Apple-polish-level experience. Now I use it over Debian as my main dev VM.
It's a great OS for getting out of the way and letting you get stuff done.
Last I checked, I could drive a HiDPI display without problems, but not the LoDPI display next to it, because you couldn't scale the displays independently.
Has that changed, yet?
Price of complex software, I suppose.
Didn’t have any issues with the 25-26 upgrade fwiw.
Linux tries to use as much memory as possible for buffers/cache. When a program requires more memory the cache is freed up automatically to be used.
Right now my laptop is using 2G and has 1242 MB "free", but as you can see the buffer/cache counts toward available memory.
                  total        used        free      shared  buff/cache   available
    Mem:           7678        2089        1242         307        4346        4963
    Swap:          3711           0        3711
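That "available" column comes straight from /proc/meminfo; here is a small sketch to see it without free (Linux-only; the field names are the kernel's own):

```python
def read_meminfo(path="/proc/meminfo"):
    """Parse /proc/meminfo into a {field: kilobytes} dict."""
    info = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key.strip()] = int(rest.split()[0])  # first token is the value in kB
    return info

m = read_meminfo()
# MemAvailable counts reclaimable buffers/cache; MemFree does not.
print("free: %d MB, available: %d MB" % (m["MemFree"] // 1024, m["MemAvailable"] // 1024))
```

The gap between the two numbers is exactly the cache that gets dropped when a program asks for more memory.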
I'm using it to play videos on some sites that still use Flash.
- Do a fresh install
- Use Fedy to install, with a single click, pretty much any development IDE you may need, plus other must-have tools (Skype, Dropbox, VirtualBox, TeamViewer, etc...)
- Install the "dash to panel" Gnome extension 
- Use Fedy to install the Numix or Arc themes and "pimp" your GUI ;)
Here is how my desktop looks with the described set up: https://snag.gy/F6SM4L.jpg
Is it like another package manager, or can I use dnf to manage/update the packages it installs?
It does no signature validation whatsoever or dependency tracking.
Don't use it if you care about security or a clean system.
The signature-validation issue can be easily resolved with a simple text replacement. As for dependency tracking: as I said, it is not a package manager; it uses dnf under the hood, which already does that.
Unless you only install open-source software after doing a full source-code audit, you are blatantly overreacting just to look "security conscious".
For disclosure: I don't have anything to do with the project other than the fact that I have been using it for years without any issue.
It's a collection of scripts written in a non-idempotent manner and run in an uncontrolled, undefined environment. The benefit of binary packages is that you have reasonable assurance the package was built consistently in a well-defined environment (the base build chroot for the OS plus the dependencies declared in the package). The result is a consistent, reproducible binary, which means that when you run version x.y.z it's the same as the version x.y.z I'm running, and the same as the version x.y.z the package maintainer is running.
When software is "packaged" via install scripts that fetch and build from the internet on the fly with loosely defined versions, you stand a lot of risk of breaking your environment. If you only spend time in toy environments playing games and looking at cat pictures, that's fine.
If you rely on the tools you work with to be stable, perform in a consistent manner, and not accidentally leak information about your environment (you'd be shocked by how many test suites will post your local environment variables out to arbitrary metrics collection points), then pre-built binary packages are a safe and reliable way to operate.
You can have fun letting the wind blow through your hair; I'll keep my helmet on, thanks.
I made it 10 lines into the very first plugin before hitting a point where the installer script is downloading a file over an insecure connection, and treating it as a list of trusted URLs.
Binary packages are not intrinsically more secure than plain-text scripts that you can easily audit.
If you feel safer because you are executing by hand a bunch of commands that can be automated with a script that's ok.
In my case, I'd rather spend that time doing something more productive.
From there, it looks like the script does little more than add the `rpmfusion-free-release`, `rpmfusion-nonfree-release` and `folkswithhats-release` repositories. Of course since we started the install process through a shady insecure means, we should add the repos the same way. So every repo gets added via `dnf -y --nogpgcheck install https://url-to-repo-release-package`.
I went to browse the `folkswithhats` repo, but found it's hosted on AWS S3 and doesn't provide a directory index.
If the "--nogpgcheck" bothers you, a simple text replacement over the source code solves it.
Same with the "curl|bash" thing, you are not obligated to run it that way, you can just clone the repo and run it however you want, it is open source!
It is funny the way people overreact to things like this with projects that are open source, but are ok installing closed-source software and feel safe because they got it from the official repos...
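For what it's worth, that text replacement is a one-liner from inside a clone of the repo (the exact grep pattern is the only assumption):

    $ grep -rl -- '--nogpgcheck' . | xargs sed -i 's/ --nogpgcheck//g'

After that, dnf refuses anything unsigned, which is the behaviour you'd want anyway.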
I don't mean to dump on this project or the people behind it, fair dues to them for putting it together to make peoples lives easier. But widely used software must be built and distributed securely.
Since it is GPL3, I wonder why the authors don't build and distribute it via COPR directly from GitHub? It would solve the same problems and make it easier to trust.
It will add the repos and dependencies and execute all the other steps that you would otherwise have to do manually to install any of the supported programs on your PC.
After that you can manage any of them with dnf.
It is a great utility, extremely useful.
Here is a bunch of screenshots of all the things you can install with one click using Fedy:
I download the image, install it in a VM, and there are ~289 package updates on a fresh install.
EDIT: And CSD works! Instructions to enable it are in a video here: https://www.youtube.com/watch?v=5rz_mPVwhDg (I recommend to mute sounds).
The location of the F27 bits are separate from the F26 bits, so you'll have to manually add a new remote and then rebase to that new remote.
$ sudo ostree remote add --set=gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-27-primary fedora-atomic-27 https://kojipkgs.fedoraproject.org/atomic/27
$ sudo rpm-ostree rebase fedora-atomic-27:fedora/27/x86_64/atomic-host
Will give Fedora 27 a go this evening and see...
Wayland, or rather gnome-shell, seems to do EDID detection a bit differently than Xorg. Getting a higher quality DisplayPort cable fixed it for me. The earlier cable did OK at 30Hz but freaked out trying for 60 Hz. X11 must retry more times, because it would usually manage to make it work for a while on the bad cable, although it still could randomly flicker off and reset the connection.
Additionally, some laptops have external displays wired to Nvidia, making it impossible to use intel-only graphics with external displays.