Linux, an open source operating system, has gained tremendous popularity over the years, and for good reason. It offers numerous advantages over other operating systems like Windows and macOS, making it the preferred choice for many users and organizations.
Recently I created a web page dedicated to my configuration and, leaving aside the first section, where Apple devices still prevail, there's a section dedicated to the software I'm testing.
A quick read is enough to see that we are only talking about software that follows an open source philosophy. The list includes various operating systems and some software that, over time, I'm starting to use with a possible migration in mind.
As expressed several times, I love how Apple takes care of hardware and software through its ecosystem but at the same time, the idea of open source and the free software movement are essential elements that have made history.
To be precise, Linux is a kernel, not an operating system as many think: its purpose is to provide the core functionality on which an operating system runs.
However, for convenience, when people refer to “Linux” they commonly mean a complete operating system that includes the Linux kernel plus the programs and utilities that surround it. That convention also applies in this blog post, where the term Linux indicates an entire operating system, commonly called a distribution.
One might wonder why switch from one system to another at all. Every user develops a solid habit with any piece of software: the more the user works with a product, the further along the learning curve they get, and a habit forms.
So a change, even a small one, can leave any user, even the most prepared, with a sense of frustration at having to build a new habit all over again.
So, based on these considerations, why implement a change?
The answers can be multiple and can vary from person to person. For example:
Curiosity is the main element that leads human beings toward a new challenge. Developing a habit around something is important, but it is equally important to push yourself toward new ways of interacting with different things.
Change is essential; stagnation, in the long run, leads to a natural search for change. Moreover, these kinds of changes are reversible: nothing forces you to stick with the new way of working. It's important to keep in mind that creating the feeling, the habit, the sense of being at home will necessarily take time. The key here is consistency.
To quote one of the series I enjoyed on Netflix, Bojack Horseman:
“Every day it gets a little easier, but you gotta do it every day”
“That’s the hard part. But it does get easier”
Obviously this isn't meant to be a life lesson or anything philosophical, but it is interesting to develop this kind of vision when we face something bigger than ourselves.
Many times we are forced to look for an alternative. In this case, you may be faced with problems such as:
The Linux-based software philosophy centers on principles such as openness, sharing, collaboration, and user control. These are some of the fundamental principles that have helped make Linux one of the most popular and respected operating systems in the world of free and open source software.
Giving users the full freedom to modify whatever they want allows them to learn and improve their skills. Even a simple search for the solution to a problem can lead to an exchange of views between multiple users.
*1 This doesn't mean that migrating to a platform built on open source lets you use your hardware indefinitely with every feature ever developed; rather, there are many Linux-based alternatives that provide the user with complete usability even on devices that their parent companies abandoned years ago.
So migrating to open source software equals getting everything for free?
This is one of the classic questions a user asks after learning that alternatives exist. No need to beat around the bush: everyone likes things when they're free, but that very freedom allows the user to evaluate and, in many cases, to appreciate the continuity of software development by supporting it with payments or donations.
Of course, the correlation between receiving a free service and the willingness to pay for that service in the future can vary depending on the context and individual preferences of users, but offering a free service can be an effective way to acquire new customers and allow them to experience the benefits of the service before committing financially.
Conversely, those who have already paid for a service may be more inclined to abandon the service if they are required to pay further or if the price increases significantly. This may depend on the user’s general satisfaction with the service, their perception of the value offered, and their willingness to invest further.
This subject, unfortunately, is much more complex, and understanding how users behave when asked to pay for a service can depend on various factors such as the perceived value of the service, the quality of the user experience, and the trust established over time.
On the other hand, however, the concept of open source software mainly refers to the freedom to use, study, modify, and distribute the source code of a program. Open source software may be free in the sense that it can be used at no additional cost, but not all open source software is necessarily free.
The term open source refers to the availability of the source code of the software, which can be accessible to the public. This allows developers to study it, make changes and contribute to its improvement. Most open source software is distributed under licenses that grant these rights to users.
However, there's open source software that can be commercialized or sold. The difference from proprietary software is that, even when payment is required for access or use, users can still access the source code and make changes if they are able to.
So while a lot of open source software is free, not all open source software is. Whether open source software is free or paid depends on the policies of individual projects and the licenses that are used.
In conclusion, about the initial question, a possible answer, from my point of view, could be:
“Always try to understand people’s work, even what for you may be trivial will certainly have required one of the most precious resources of this world, time.”
We get to the main point by listing the strengths and weaknesses of this choice.
As already mentioned, one of the most significant advantages of Linux and the software associated with it is its open source nature. The source code is available to anyone, and therefore, anyone can view, modify and distribute it. This creates a community-driven development model, in which thousands of programmers around the world contribute to its improvement.
This kind of open approach promotes transparency, innovation, and collaboration, resulting in a robust and secure operating system.
Linux allows for the highest degree of customization and flexibility. Unlike Windows and macOS, for example, the user can choose from a wide variety of desktop environments such as GNOME, Xfce, KDE, and many other window managers.
Users can customize their experience based on their preferences, including look and feel, workflow, and even system performance.
This flexibility benefits everyone, from first-time users just starting up the learning curve to power users who can tailor their environments for specific tasks.
Thanks to its stability, Linux-based systems are used in critical environments such as servers. Statistically, machines running the Linux kernel experience fewer crashes, freezes, and performance issues than their counterparts. Furthermore, Linux's modular and proven nature ensures that updates and patches are released regularly, improving security and stability.
The open source philosophy allows for continuous audits by a large community of developers and security experts who help quickly identify and fix vulnerabilities. Additionally, Linux’s user and permission management system provides granular control over access to files and system resources, reducing the risk of unauthorized access and malware infections.
For example, a large proportion of web servers, where security is of paramount importance, actually run on Linux.
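As a minimal illustration of the permission model mentioned above (the file name is just a placeholder), restricting a file to its owner looks like this:

```bash
# Restrict a configuration file so only its owner can read or write it.
chmod 600 ~/secrets.conf

# Verify: the listing should show -rw------- and the owning user.
ls -l ~/secrets.conf
```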
Exceptional performance and efficiency are the key elements. Linux requires fewer hardware resources than Windows or macOS, making it suitable for older or less powerful hardware.
Distros can be highly optimized for specific purposes, resulting in faster boot times, smoother multitasking, and reduced system resource usage. It is therefore an excellent choice for embedded systems, servers, and high-performance computing.
Most Linux distributions come with a package manager, which allows users to install, update and manage software without any hassle. More experienced users can manage everything from the command line while newbies have simple graphical interfaces to manage the software.
The package manager automatically manages dependencies, ensuring a smooth and trouble-free software installation process. In addition to this, Linux supports a wide range of programming languages and development tools, making it a favorite among developers.
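A minimal sketch of what that looks like in practice, assuming a Debian or Ubuntu-based distribution using APT (other families use different tools, as discussed later):

```bash
# Refresh the package index, then install VLC; APT resolves and pulls in
# any required dependencies automatically.
sudo apt update
sudo apt install vlc

# Upgrade every installed package in one step.
sudo apt upgrade
```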
Cost is particularly interesting for individuals, companies, and educational institutions looking to reduce expenses. Linux eliminates the need to purchase expensive licenses, and its open source nature allows for customization and adaptation to specific requirements.
Apart from those already mentioned, this cost advantage is a major driving force behind the widespread adoption of Linux in various industries.
The community is vibrant, passionate, and ever-growing. With countless online forums, communities, and resources, users can seek help, share knowledge, and collaborate with fellow enthusiasts.
The support and experience available within the Linux community is invaluable to both beginners and advanced users. The community-driven development model ensures that issues are resolved promptly and new features are continuously introduced.
Like any product, in addition to the advantages, there are also some disadvantages to consider.
Some software and hardware may not be fully compatible or supported in Linux distributions. While the situation has improved over the years, some specialized or proprietary software applications may only be available for Windows or macOS.
So while Linux offers a wide range of open source software, the availability of some proprietary applications may be limited. And this is one of the main problems that lead users not to consider Linux as an alternative to their current system.
Transitioning from a Windows or macOS environment to Linux can involve a learning curve, especially for users new to the command-line interface. While distributions have become more user-friendly with graphical interfaces, some tasks may still require you to use the command line, which can be daunting for beginners.
While the gaming landscape on Linux has vastly improved (many games, even triple-A titles, perform better on Linux than on other operating systems), it still lags behind Windows in terms of game support.
Unfortunately, many game titles are developed primarily for Windows and may not have versions for other systems. While compatibility layers and software like Wine and Steam Play can help run some Windows games on Linux, it’s not always a seamless experience.
While Linux has improved its support for hardware devices over the years, there can still be problems with some hardware peripherals or components. Some hardware manufacturers may prioritize driver development for Windows or macOS, resulting in limited or less than optimal driver support for Linux.
Linux’s strength can become its weakness. Linux distributions come in various flavors, each with its own package managers, software repositories, and user interfaces. This fragmentation can lead to compatibility issues between different distributions or difficulty finding specific software packages that work well with a particular distribution.
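To make the fragmentation point concrete, here is a hedged sketch of installing the same application across a few distribution families; the commands differ, and package names can differ between repositories too:

```bash
# Debian, Ubuntu, Mint (APT)
sudo apt install gimp

# Fedora (DNF)
sudo dnf install gimp

# Arch, EndeavourOS (pacman)
sudo pacman -S gimp
```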
While the Linux community is known for its helpfulness and extensive documentation, official technical support from vendors may be limited for Linux. If you encounter a critical problem or need immediate assistance, it may be difficult to find fast and reliable support compared to commercial operating systems.
Making a drastic change by switching entirely to a Linux-based distribution all at once is a choice that should absolutely be avoided.
The process must be gradual and follow a precise pattern taking into account the user’s needs.
A good strategy, for example, may be to consider:
This is the first point to start from. The user must list all of the software they use and verify that it keeps working properly on a Linux distribution.
Unfortunately, there may be cases where the software is proprietary and has no compatibility with other systems. This is the most difficult part to deal with, because the user is forced to search for and learn a new piece of software.
You must test the correct functioning of all the peripherals, both internal and external, of the device on which you want to install Linux*2.
Usually, when it comes to stationary devices such as desktop computers, Linux recognizes all the hardware components.
On portable devices, however, the user may have to find a way to make components such as wireless cards, fingerprint readers, brightness sensors, and others work correctly.
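A few standard commands help with this inventory; a minimal sketch, run from a live session for example:

```bash
# List PCI devices (GPU, Wi-Fi card, ...) and the kernel driver bound to each.
lspci -k

# List USB peripherals (webcams, fingerprint readers, ...).
lsusb

# Scan kernel messages for devices that failed to initialize or missing firmware.
sudo dmesg | grep -iE "error|firmware|fail"
```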
Although a Linux-based system generally achieves excellent performance even on older devices, on laptops power management may turn out to be less efficient than that of the native operating system.
Over the years various distributions have integrated proper resource management, such as battery handling, into the system itself, but there are still cases where the user has to modify system files or install extra software to obtain a battery life similar to or better than the old OS.
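One common approach, sketched here under the assumption of a Debian/Ubuntu-based system, is to install a power-management tool such as TLP:

```bash
# Install TLP and enable its service so it applies power-saving settings at boot.
sudo apt install tlp
sudo systemctl enable --now tlp

# Inspect battery status and the settings currently in effect.
sudo tlp-stat -b
```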
Each distribution has its own philosophy and way of managing the whole system. Behind it are companies or communities of developers working to update and improve the system.
The great vastness of these distributions has the advantage that the user can freely choose which distribution is most suitable for his purpose.
Systems like Ubuntu, Mint, and Zorin can be a starting point for new users; others like Fedora or Debian are built primarily for maximum stability; and then there are EndeavourOS, Void, and Arch, which give the most expert users maximum control over configuration and open up a whole new world through the immense software catalogs they offer.
The testing phase comes into play based on this last point.
*2 Be careful if you use NVIDIA dedicated GPUs because the company initially adopted a closed driver distribution policy, causing compatibility issues and lower performance than other platforms. Users often had to wait for Nvidia to release new driver versions to support the latest kernel versions.
Luckily, the situation has improved: the community-developed Nouveau driver offers open source support for NVIDIA GPUs, and NVIDIA has recently started working with the open source community, releasing open source GPU kernel modules to improve driver integration into the Linux kernel.
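A quick, hedged way to check which driver is actually in use on a given machine:

```bash
# Show the GPU and the kernel driver currently bound to it
# ("nouveau" for the open source driver, "nvidia" for the proprietary one).
lspci -k | grep -EA3 "VGA|3D"
```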
Most Linux distributions allow you to try, or even install, the system on a simple external medium such as a USB stick. This approach lets the user verify that the system works properly with the underlying hardware, almost as if it were already installed.
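A minimal sketch of preparing such a stick from another Linux machine; the ISO name and /dev/sdX are placeholders, and the command erases the stick, so identify the right device with lsblk first:

```bash
# Identify the USB stick's device name (e.g. /dev/sdb), then, from the folder
# containing the downloaded ISO, write it to the stick.
lsblk
sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress conv=fsync
```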
Another way is to use a virtual machine. Performance will certainly differ from the first method, but the user can try multiple distributions to see which one they prefer, wipe and reinstall a system to start over during testing, keep working on their main system while experimenting in the virtual machine, and much more.
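A rough sketch using QEMU/KVM (disk size and file names are arbitrary); graphical tools such as GNOME Boxes or virt-manager achieve the same result with a few clicks:

```bash
# Create a 40 GB virtual disk, then boot the downloaded ISO in a
# KVM-accelerated VM with 4 GB of RAM and 2 CPU cores.
qemu-img create -f qcow2 test.qcow2 40G
qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
    -cdrom distro.iso \
    -drive file=test.qcow2,format=qcow2 \
    -boot d
```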
The testing phase should take a good chunk of time, even when the user feels they’re ready to be able to migrate. In particular, it can be a good strategy to include situations such as updating packages or upgrading the system to major versions to verify that there are no problem situations that require the user to spend too much time trying to solve them.
For someone like me who is immersed in the entire Apple ecosystem, the migration turns out to be more complex, but not impossible. I have not defined everything in detail yet but I will use this post for future updates.
The idea is to migrate from all Apple services to a fully self-hosted and self-managed architecture with:
Using a NAS allows all data to be managed completely locally; in addition, the NAS would also manage the disk storage for backup and recovery.
In this case, I have two possible solutions for the products to use:
Nextcloud is, for me, the best alternative to the Apple ecosystem. In this case too you need to build a server with good components to guarantee adequate data access, read, and write performance.
“Simone so you want a NAS and then a server with Nextcloud? Isn’t it enough to build only a server and maybe with Docker manage the NAS side and the Nextcloud one? Or a server with Nextcloud? What the fuck are you talking about!?”
The idea is to have two separate entities: NAS and a server running Nextcloud.
The NAS is configured to work mainly on the local network only, while the Nextcloud server has access, on the one hand, to a portion of the NAS data and, on the other, to the whole internet, so it can synchronize all the files, contacts, photos, etc. that I want to keep available on the go.
Over the years I have always tried to keep the number of files reasonable, but at the same time I have terabytes of files that I don't need synchronized with Nextcloud; they simply need to be accessible locally and backed up via RAID 10 or something similar.
So the NAS will mainly work locally, and only a part of the data will be shared with Nextcloud, which will synchronize it across all the other devices.
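A hedged sketch of how that partial sharing could work; every host name and path here is hypothetical, and the exact mechanism (the External Storage app, a synced folder, or something else) is still to be decided:

```bash
# Push only the "mobile" share from the NAS to a folder the Nextcloud server
# exposes (for instance through its External Storage app).
rsync -avh --delete /volume1/mobile/ simone@nextcloud-host:/srv/nextcloud-external/mobile/

# If files were written directly inside Nextcloud's own data directory instead,
# the server must be told to re-index them (install path assumed).
sudo -u www-data php /var/www/nextcloud/occ files:scan --all
```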
Unfortunately, no online service is immune to catastrophic events. However, concentrating all the information in a single point, a server at home, is far riskier than the online services offered by various companies. Companies like Google or Apple, just to name two, build their infrastructure using redundancy across multiple data centers.
For example, all the photos uploaded to Google Photos or iCloud may be stored in multiple countries. This choice mitigates catastrophic events or problems with the servers in one country, since the user can still reach the service and their data through another one.
This is just a simple example of how data can be managed; each company then develops further strategies to avoid possible data loss.
If all data and backups are local, which strategy to adopt?
Unfortunately, the situation becomes more complex, since we ourselves have to evaluate all the possible problems that may arise: from the most trivial, such as a loss of connectivity that prevents the server from communicating with the outside world, or a power failure, up to the worst events, such as the total destruction of the server and all of its data.
The choices can be:
Backing up to the cloud necessarily means relying on servers managed by third parties. On top of this, there is the problem of cost: it is unthinkable (at least from my point of view) to find and pay for a cloud space that will keep growing forever. The cost inevitably becomes a monthly expense that usually increases as time passes and data accumulates, not to mention the redundancy of the backups themselves.
On top of these problems there is also the privacy factor. All data would need to be encrypted, and this adds further complexity, since the user will need a strategy for encrypting and decrypting the data they need.
An offline copy means taking all, ALL, of the data and keeping at least one other copy somewhere other than where the main server is. Here too there is an initial problem with equipment costs, but it is certainly better than an ongoing monthly or annual payment.
The most important problem is keeping the information up to date. Since the devices are offline, you will need to travel regularly to update the copies kept away from the main server.
Furthermore, you need to take into account the reliability of the external location where all the data is kept.
A remote copy kept reachable over the network solves all the problems related to physical travel, but it introduces others, such as the need for a stable connection and appropriate synchronization or backup software able to handle issues like latency, and much more.
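As a rough sketch of that kind of remote copy, assuming plain rsync over SSH (host names and paths are placeholders):

```bash
# Mirror the NAS share to a machine kept at another location, over SSH.
# --delete keeps the two copies identical; -z compresses data in transit,
# which helps over a slow uplink.
rsync -avz --delete -e ssh /volume1/data/ backup@offsite-host:/backups/nas/
```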
Synology's Hyper Backup Vault is an interesting take on this: put simply, two or more people, such as friends and family, can choose to share and use part of each other's storage space.
I think this is the only acceptable solution (obviously, the idea behind Synology's implementation).
There are products on the market such as Backblaze, Wasabi, IDrive, and the like that offer monthly or annual costs that are not too high compared to other competitors. In addition, in certain cases the user will have to choose the right software to perform backups or to migrate to these platforms.
Ultimately the choice is up to the user. In my case, since I want to minimize costs as much as possible, a possible solution is to keep all files locally and save to the cloud only the data that is essential to me.
Alternatively, you can go looking for a provider that, with reasonable costs and reliability, lets you keep a backup copy of all your files.
By now there are so many services and software, you just need to have the time to evaluate all the possible solutions and find the one that, hopefully, is the best choice.
This is the first draft of some of the services I’m going to configure.
From | To |
---|---|
Calendar | NC Calendar |
Reminders | NC Tasks |
Notes | NC Notes |
Mail | NC Mail |
Photos | NC Memories |
Contacts | NC Contacts |
Drive | NC or NAS |
iCloud Keychain | Bitwarden w/ Docker |
Time Machine | Rsync and Kopia |
Home | Home Assistant |
Suite | LibreOffice |
AdGuard | AdGuard Home |
Reeder 5 | NC News |
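As a hedged sketch of two of the rows above (the post doesn't fix the exact tooling): Bitwarden is often self-hosted through the community Vaultwarden image, and Kopia can take snapshot-style backups to a local or NAS path. Every path and port below is a placeholder:

```bash
# Password manager: run a Bitwarden-compatible server using the Vaultwarden image.
docker run -d --name vaultwarden \
    -v /srv/vaultwarden:/data \
    -p 8080:80 \
    vaultwarden/server:latest

# Backups: create a Kopia repository on a NAS mount, then snapshot a folder.
kopia repository create filesystem --path /mnt/nas/kopia-repo
kopia snapshot create ~/Documents
```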
As already mentioned, we need to allow ourselves the right amount of time to make a radical change. At the same time, at least for now, Apple's strategy of developing processors, systems, software, and services around an ARM architecture is an IT revolution.
The idea is therefore to monitor how the world is starting to change and how more and more communities and people are starting to develop software for this new architecture.
Obviously, in the meantime, desire and curiosity will continue to be the essential elements that will lead me to try and test all the new features and free and open source systems.
Embracing Linux not only improves your control over your system but also aligns you with a global community that fosters collaboration and innovation.