Is Linux the Most Secure Operating System? A Review and Comparison of the Most Secure Operating Systems

Linux systems are most often mentioned in the context of their supposedly unprecedented security. Some even argue that Linux is the most secure operating system on the market. This, of course, is unprovable hyperbole. Many Linux distributions do indeed turn out to be far safer than the competition, but most of them fall short of the FreeBSD standard, not to mention OpenBSD, which has established itself as one of the most secure user systems. And that is before we even consider highly specialized operating systems such as the various RTOSes, IBM i, OpenVMS, and TrustedBSD.

In theory, of course, such a statement still has a right to exist. Considering that most users think of Linux first (if not exclusively) when they hear "open source operating system" (and sometimes even think that Linux is the name of the OS), from their point of view it is even true: all other things being equal, popular open source systems do have a security advantage over proprietary operating systems. However, the Linux family is far from the only example of open source operating systems.

If you treat Linux as the symbol of open source software and MS Windows as the symbol of closed source, then of course you can say that "Linux is the most secure system of all", as long as "all" covers only these two categories of products. But the world is not that simple.

In fact, Linux operating systems are far from the most secure once you consider the entire range of operating systems available. Some Linux distributions were created purely for research purposes and therefore deliberately ship with a minimal level of protection in the default configuration. The spectrum runs from completely unprotected systems to monsters such as Hardened Gentoo, and the average Linux distribution sits, of course, somewhere in the middle.

In addition, measuring "security" is not as easy as it seems at first glance. The main criterion used by users who are not versed in security standards (and by those who manipulate such users in their own interests) is the number of identified vulnerabilities. But you and I know that a small number of discovered loopholes is no reason to consider a system reliably protected. There are a number of factors to consider when talking about security, including:

whether code quality audits are carried out;
what the default security settings are;
how quickly and effectively fixes are released;
how the privilege separation model works;
...and much more.

Even if we leave aside operating systems that cannot run, say, popular web browsers (Firefox), email clients (Thunderbird), and office suites (OpenOffice.org) with a WIMP graphical interface on an Intel x86 computer, the average Linux distribution is by no means the most secure operating system. And in any case Ubuntu, perhaps the most widespread Linux OS, definitely cannot claim that title.

And in any category of systems there will always be one that turns out to be an order of magnitude better than Ubuntu in every respect, and often that is simply another Linux distribution. Yet some people insist that Ubuntu is among the safest. If that were the case, and if Linux systems really were the most secure on the market, it would mean that Ubuntu is more secure than OpenVMS. Sorry, but I cannot believe that.

If you too are convinced that "Linux is the most secure operating system", I strongly advise you to reconsider. Many other operating systems turn out to be much safer than the average Linux distribution. Besides, given how diverse the Linux family is and how different the criteria for assessing the security of operating systems can be, such a statement sounds idealistic at best.

The answer to the question "are Linux operating systems the most secure" depends on which systems you compare and from what point of view you assess their security (unless, of course, we are talking about an abstract comparison of open source and closed source software). If you flatly declare that Linux is the safest of all, there is always the risk of running into someone who understands the problem much better and can easily smash this unfounded claim to smithereens.

You need to be more precise in such statements, otherwise you risk adopting a superficial view of security in general and causing a lot of trouble for those inclined to listen to you. If you mean that, all other things being equal, popular open source operating systems are safer than popular closed source ones, say so. If you mean that the default Ubuntu configuration is safer than the default configuration of some other specific system, say that instead.

We often write about the security of mobile operating systems, publish information about newly found vulnerabilities, and describe weaknesses in their protection and methods of attack. We have written about surveillance of Android users, about malicious apps embedded directly into firmware, and about the uncontrolled leakage of user data into manufacturers' clouds. Which of the modern mobile platforms is the most secure for the user, or at least the least insecure? Let's try to figure it out.

What is security?

It is impossible to talk about the safety of the device without defining what we, in fact, mean. Physical data security? Protection from low-level analysis methods with the extraction of a memory chip, or just protection from the curious who do not know the password and do not know how to deceive the fingerprint scanner? Is transferring data to the cloud a plus or a minus from a security point of view? And in which cloud, to whom and where, what kind of data, does the user know about it and can he turn it off? How likely is it to pick up a Trojan on one platform or another and part with not only passwords, but also money in your account?

The security aspects of mobile platforms cannot be considered in isolation. Security is a comprehensive solution that covers all facets of device use from communications and application isolation to low-level protection and data encryption.

Today we will briefly describe the main advantages and problems of every modern mobile operating system that has at least some market presence. The list includes Google Android, Apple iOS, and Windows 10 Mobile (alas, Windows Phone 8.1 can no longer be called modern). As a bonus, we will also look at BlackBerry 10, Sailfish, and Samsung Tizen.

The old guard: BlackBerry 10

Before moving on to the current platforms, let's say a few words about BlackBerry 10, which has already left the race. Why BlackBerry 10? At one time the system was actively promoted as the "most secure" mobile OS. Some of that was true, some, as usual, was exaggerated, and some was relevant three years ago but is hopelessly outdated today. Overall we liked BlackBerry's approach to security, although it was not without failures.

  • The microkernel architecture and trusted boot chain are genuinely secure. In the entire lifetime of the system, no one obtained superuser rights (and not for lack of trying, including by serious organizations; BlackBerry was by no means always an outsider).
  • It is also impossible to bypass the password to unlock the device: after ten unsuccessful attempts, the data in the device is completely destroyed.
  • There are no built-in cloud services and no targeted user tracking. Data is not transferred to the outside, unless the user decides to install the cloud application himself (services such as OneDrive, Box.com, Dropbox are optionally supported).
  • Exemplary implementation of corporate security and remote control policies through BES (BlackBerry Enterprise Services).
  • Strong (but optional) encryption for both onboard storage and memory cards.
  • There are no cloud backups at all, and local ones are encrypted using a secure key tied to the BlackBerry ID.
  • Data is not encrypted by default. However, the company can activate encryption on employees' devices.
  • Data encryption is block-level and flat: there is no concept of protection classes, nor anything even remotely resembling the Keychain in iOS. For example, Wallet app data can be retrieved from a backup.
  • You can log into a BlackBerry ID account with just a username and password; two-factor authentication is not supported. Today this approach is completely unacceptable. Incidentally, if you know the BlackBerry ID password, you can extract the key that decrypts any backup associated with that account.
  • Factory reset protection and anti-theft protection are very weak. They can be bypassed simply by swapping out the BlackBerry Protect application when building an autoloader image, or (before BB 10.3.3) by downgrading the firmware.
  • There is no MAC address randomization, which makes it possible to track a specific device using Wi-Fi hotspots.

Another warning bell: BlackBerry willingly cooperates with law enforcement agencies, providing every possible assistance in catching criminals who use BlackBerry smartphones.

In general, with proper setup (and users who choose BlackBerry 10 usually configure their devices quite competently), the system can provide both an acceptable level of security and a high level of privacy. However, "experienced users" can nullify all these benefits by installing a hacked-together port of Google Play Services on their smartphone, gaining all the delights of Big Brother's supervision in the process.

Exotic: Tizen and Sailfish

Tizen and Sailfish are clear outsiders in the market, even more so than Windows 10 Mobile or BlackBerry 10, whose market share has fallen below 0.1%. Their security is the security of the "elusive Joe": little is known about it only because hardly anyone is interested in them.

How justified that attitude is can be judged from a recently published study that found about forty critical vulnerabilities in Tizen. Here we can only summarize what has long been known.

  • If no serious independent research has been carried out, one cannot speak about the security of a platform. Critical vulnerabilities will not surface until the platform becomes widespread, and by then it will be too late.
  • Malware is absent only because the platform has so few users. That, too, is protection of a sort.
  • Security mechanisms are insufficient, absent, or described only on paper.
  • Any certifications only say that the device has passed certification, but absolutely nothing about the actual level of security.

Jolla Sailfish

The situation with Sailfish is ambiguous. On the one hand, the system seems to be alive: new devices based on it are announced from time to time, and even Russian Post has ordered a large batch of devices with an extremely high price tag. On the other hand, users are asked to pay the price of a solid Android mid-ranger for a Sailfish model with the specs of a cheap Chinese smartphone from three (!) years ago. This approach can work in only one case: if the Sailfish devices are bought with public money and then handed out to lower-level civil servants. Naturally, the parties to such a deal are not particularly interested in thinking about security.

And even government certificates provide no more of a guarantee than open source does. The Heartbleed vulnerability, for example, was found in router firmware whose source code had been publicly available for more than ten years. In Android, which is also open source, new vulnerabilities are discovered regularly.

Exotic operating systems mean a lack of infrastructure, an extremely limited range of devices and applications, underdeveloped corporate security policy controls, and more than questionable security.





Samsung Tizen

Samsung Tizen stands somewhat apart from the rest of the "exotic" platforms. Unlike Ubuntu Touch and Sailfish, Tizen is a fairly widespread system: it powers dozens of Samsung smart TV models, smart watches, and several budget smartphones (the Samsung Z1 through Z4).

As soon as Tizen gained noticeable traction, independent researchers took a look at the system. The result is disappointing: in the very first months, more than forty critical vulnerabilities were found. To quote Amihai Neiderman, who conducted the Tizen security study:

This is possibly the worst code I've ever seen. All the mistakes that could be made have been made. Obviously, the code was written or reviewed by someone who does not understand anything about security. It's like asking a student to write software for you.

In general, the conclusion is clear: to use an exotic, rarely used system in a corporate environment is an open invitation for hackers.


Apple iOS

Apple we will praise. Yes, it is a closed ecosystem, and yes, the price tag bears little relation to the technical specs, but nevertheless devices running iOS were and remain the most secure of the widely available commercial solutions. This applies mainly to the current iPhone 6s and 7 generations (and perhaps the SE).

Older devices have less of a safety margin. For the older iPhone 5c, 5s, and 6, there are already ways to unlock the bootloader and attack the device passcode (for details you can contact the developers at Cellebrite). But even for these outdated devices, cracking the bootloader is time-consuming and expensive (Cellebrite charges several thousand dollars for the service). I doubt anyone will be breaking into my phone or yours this way.

So what do we have today? Let's start with physical security.

  1. All iPhones and iPads running iOS 8.0 and higher (currently iOS 10.3.2, which is even more secure) use protection methods so strong that even their manufacturer refuses, both officially and in practice, to extract information from locked devices. Independent research (including by Elcomsoft's lab) confirms Apple's claims.
  2. iOS provides (and it actually works) a system for protecting data in case the device is stolen or lost. Remote wipe and device lock mechanisms are available. A stolen device cannot be unlocked and resold unless the attacker knows both the device passcode and the separate password for the owner's Apple ID account. (However, nothing is impossible for Chinese craftsmen: tampering with the device's hardware can bypass this protection... for the iPhone 5s and older devices.)
  3. Layered data encryption is excellently designed and implemented out of the box. The data partition is always encrypted; a block cipher is used with keys unique to each individual block, and deleting a file deletes the corresponding keys, which makes it essentially impossible to recover deleted data (a conceptual sketch of this key-erasure idea follows this list). The keys are protected by a dedicated coprocessor that is part of Secure Enclave, and they cannot be extracted from it even with a jailbreak (we tried). After power-on, the data remains encrypted until the correct passcode is entered. Moreover, some data (for example, website passwords and email downloaded to the device) is additionally encrypted in the secure Keychain storage, and some of it cannot be extracted even with a jailbreak.
  4. You cannot simply plug an iPhone into a computer and download data from it (other than photos). iOS can establish trust relationships with computers: a pair of cryptographic keys is created that allows the trusted computer to make backups of the phone. Even this capability can be restricted through corporate security policy or the proprietary Apple Configurator application. Backup security is ensured by the option to set a complex password (the password is needed only for restoring data from the backup, so it does not interfere with everyday use).
  5. iPhone unlocking is implemented at a fairly secure level. You can use either a standard four-digit PIN or a longer passcode; the only additional way to unlock the device is with a fingerprint, and the mechanism is implemented so that an attacker has very little opportunity to exploit it. The fingerprint data is encrypted, and the corresponding key is wiped from the device's memory after a shutdown or reboot, after a period during which the device has not been unlocked, after five unsuccessful fingerprint attempts, and after a period during which the user has not entered the passcode.

    iOS has an option to wipe the data automatically after ten failed passcode attempts. Unlike in BlackBerry 10, this option is enforced at the operating system level, and for older versions of iOS (up to iOS 8.2) there are ways to work around it.
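To see why per-file keys make deleted data practically unrecoverable, here is a purely conceptual sketch in C. It is not Apple's implementation: a toy XOR "cipher" stands in for the real block cipher (iOS uses AES), and all names are invented. The point is only that erasing a file's key is enough to make its ciphertext useless.

/*
 * Conceptual sketch (not Apple's code): each file gets its own key, so a
 * "secure delete" only has to erase that key -- the ciphertext left on
 * flash becomes unreadable. A toy XOR keystream stands in for AES.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define KEY_LEN 16

struct encrypted_file {
    uint8_t key[KEY_LEN];   /* unique per-file key (kept in a key store) */
    uint8_t data[64];       /* ciphertext as it would sit on flash */
    size_t  len;
};

/* Toy "cipher": XOR with the per-file key. Real systems use AES. */
static void xor_crypt(uint8_t *buf, size_t len, const uint8_t *key)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % KEY_LEN];
}

static void file_write(struct encrypted_file *f, const char *plaintext)
{
    f->len = strlen(plaintext);
    memcpy(f->data, plaintext, f->len);
    xor_crypt(f->data, f->len, f->key);          /* store only ciphertext */
}

static int file_read(const struct encrypted_file *f, char *out)
{
    static const uint8_t zero[KEY_LEN] = {0};
    if (memcmp(f->key, zero, KEY_LEN) == 0)
        return -1;                               /* key erased: data is gone */
    memcpy(out, f->data, f->len);
    xor_crypt((uint8_t *)out, f->len, f->key);
    out[f->len] = '\0';
    return 0;
}

/* "Deleting" the file only wipes its key; the ciphertext may stay on flash. */
static void file_delete(struct encrypted_file *f)
{
    memset(f->key, 0, KEY_LEN);
}

int main(void)
{
    struct encrypted_file f = { .key = "0123456789abcde" };
    char out[65];

    file_write(&f, "secret note");
    if (file_read(&f, out) == 0)
        printf("before delete: %s\n", out);

    file_delete(&f);
    if (file_read(&f, out) != 0)
        printf("after delete: ciphertext is unreadable without the key\n");
    return 0;
}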

What about user tracking and leaks?

iOS has cloud synchronization, which can be switched off, through Apple's own iCloud service. In particular, iCloud usually stores:

  • backup copies of device data;
  • synchronized data - call log, notes, calendars, passwords in iCloud Keychain;
  • Safari passwords and browsing history;
  • photos and app data.

All types of cloud synchronization in iOS can be turned off simply by disabling iCloud and deactivating iCloud Drive. After that, no data is transmitted to Apple's servers. Although some of the mechanisms are not very intuitive (for example, to stop call log syncing you have to turn off iCloud Drive, which is nominally meant for syncing files and photos), disabling the cloud services really does disable synchronization.

iOS also provides a mechanism to prevent tracking: the system can present random Wi-Fi and Bluetooth module identifiers to the outside world instead of the fixed real ones.

Okay, but what about the malware? In iOS, it is almost impossible to install malicious software. There were a few isolated cases (via apps built using hacked development tools), but they were quickly localized and fixed. Even then, these applications could not do much harm: in iOS, each application is reliably isolated both from the system itself and from other applications using a sandbox.

It is worth noting that granular control over app permissions appeared in iOS a long time ago. For each application you can individually allow or deny things such as running in the background (impossible in "pure" Android!), access to location, notifications, and so on. These settings make it possible to effectively limit tracking by applications that have made such tracking their core business (this applies both to Facebook-class applications and to games like Angry Birds).

Finally, Apple regularly updates iOS even on older devices, fixing discovered vulnerabilities almost instantly (at least compared with Android), and updates reach all users simultaneously (again, unlike Android).

Interestingly, starting with version 9, iOS is protected against man-in-the-middle attacks based on traffic interception and certificate substitution. Elcomsoft's lab managed to reverse-engineer the iCloud backup protocol in iOS 8, but in newer versions of the system this has not been possible for technical reasons. On the one hand, this guarantees the security of the transmitted data; on the other hand, we have no reliable way to verify that no "unnecessary" information is being sent to the servers.


Every day, smartphones are attacked by hackers and malware, so the operating system must be as secure as possible in order to protect the user's personal data.

In this article, we will take a look at popular mobile operating systems and find out which one is the most secure.

Android

Google's mobile operating system is one of the weakest in terms of security. Experts report that an attacker can hack a smartphone simply by sending it a multimedia message. However, the latest versions of Android pay special attention to security, so the situation is not so critical.

Since Android is an open source system, developers can use it for their own purposes free of charge. The flip side is that Android is a big target for hackers and malware: last year, about 97% of mobile malware was created specifically for Android devices.

The Google Play Store cannot guarantee complete safety when downloading and installing applications, and if you download programs from other sources, the risk of virus infection increases rapidly.

The reality is that Android is the most used mobile operating system in the world, which means it is more profitable to hack.

BlackBerry

The popularity of BlackBerry smartphones has dropped dramatically over the past few years, despite good reviews of the latest devices. The company swapped its own mobile operating system for Android, which still did not save it from failure.

Many government officials have used BlackBerry smartphones because they were considered the safest.

The BlackBerry operating system used end-to-end encryption, regardless of the smartphone model. Unfortunately, BlackBerry is a thing of the past.

Ubuntu Touch

Since the release of the first smartphone running Ubuntu last year, many have predicted that manufacturers will move from Android to the Linux-based Ubuntu.

For those who don't know about Ubuntu Touch, it is an open source operating system similar to Android, which is completely free and supported by the Free / Libre Open-Source Software community and Canonical Ltd.


Ubuntu has a high level of protection against viruses; however, it is an open source operating system, so malware can still end up on mobile devices.

The Ubuntu App Store is more secure than the Google Play Store. Moreover, the owner of the mobile device must grant certain permissions before an app is installed.

Another plus is that Linux is not the most popular platform today, so attackers have little incentive to target it. To put things in perspective: as of October 2015, a total of 15 people had been affected by attacks.

Large companies like Netflix, Snapchat, and Dropbox use the Ubuntu operating system. If you are still not impressed, perhaps you will change your mind after learning that the International Space Station and the Large Hadron Collider also run on Ubuntu.

Windows Phone and Windows 10 Mobile

Microsoft keeps its Windows app store on a short leash, so if you do not want your Windows Phone smartphone to be compromised, you should download apps only from the official store. A key feature of Windows Phone applications is that they do not interact with each other unless you explicitly allow it.

A feature of the new version of the operating system, Windows 10 Mobile, is device encryption, which essentially locks down your smartphone if it is lost. It relies on BitLocker technology: without the encryption key, your files are unreadable. The encryption key is a PIN that is set under Settings > System > Device encryption.

iOS

Just like the Google Play Store is the main store for Android apps, the App Store stores all apps for iOS devices.

The iOS operating system is closed, which means that only Apple can make changes and updates to the platform. It would seem that this guarantees the maximum level of security, but not quite.

For example, about 500 million users of the Chinese messaging app WeChat were put at risk after its developers built the app with a modified version of Xcode, and the infected build passed Apple's review.

Many of you have heard about the hacking of celebrity iCloud accounts. Where Apple used to be a guarantor of security, today the company is not quite so vigilant.

In terms of popularity, iOS is second only to Android, so it's no surprise that attackers are looking for security holes in the operating system.

Among the positive qualities of iOS, it is worth noting that the mobile platform really is difficult to hack. Not so long ago the FBI detained a terrorist and then asked Apple to provide the data from his iPhone. The company refused. In the end, the federal agencies found a hacker who agreed to break into the iOS device for $1.3 million. If it were that easy to break into iOS, they would not have paid that much money, right?

Who is the winner?

Each operating system has its own pros and cons.

Android: If you keep a close eye on what you do online, avoid suspicious links, messages, and MMS, and download applications only from the Google Play Store, you will probably never encounter hacks or viruses on your Android device.

Nexus smartphones, and now Google Pixel, are the most secure Android devices.

BlackBerry: Older versions of the BlackBerry operating system really did have a high level of security. However, the company has switched to the Android platform for its devices, so they carry the same risk of infection as other Android smartphones.

Ubuntu: While the operating system appears to be the most secure to date, we are not sure if it will remain so when (and if) the number of active users increases.

Windows Phone: The same can be said of Windows smartphones. As market share grows, the number of hacks and virus attacks grows too. However, at the moment the system looks very reliable.

iOS: Despite a number of recent security concerns, Apple enjoys a high level of trust among its users. Because the operating system is proprietary, the company can quickly detect malware and take the necessary steps to close the security hole.

So which smartphone would we call the most secure right now? If you prefer an older device, the BlackBerry Priv is a good option. If you want something newer, you should opt for a device running Ubuntu.

Personal computer users tend to prefer particular versions of Windows. Versatility and the availability of popular software have made Microsoft's products prevalent among home and corporate users. Performance matters for comfortable use and quick problem solving, so it is worth asking which version is known as the fastest Windows.

A ranking of Windows operating systems

Microsoft has released a whole series of Windows products. Depending on the release date, the individual versions show characteristic performance profiles. It is worth looking at each of the available options to weigh its advantages and disadvantages.

  • Windows Vista;
  • Windows XP;
  • Windows 7;
  • Windows 8 (8.1);
  • Windows 10.

The fastest Windows is determined by measurement: you need to install the different versions on computers with the same configuration in order to compare their performance on identical hardware.

The main testing tools are benchmark utilities that measure the performance of particular aspects of operation: graphics, computation, loading, and other tasks. Research on this topic has been conducted by various sources, which makes it possible to simply summarize the resulting ranking.

What makes Windows fast

Each version of the package is a polished piece of work. Performance depends largely on the computer's local resources, its configuration and parameters; powerful hardware can make any OS feel responsive and productive. Optimization also plays a role: clearing memory, deleting caches, and other tricks. A clean ranking of fast operating systems is possible only when the systems are compared on hardware of equal capability.

The operating system itself affects performance through its own optimization. Hardware resources matter, but software characteristics determine how correctly and effectively the available resources are used. This is why different systems respond at different speeds on the same computers.

5: Windows Vista

Windows Vista came out in 2007, much later than XP, and never won users over. It earned a reputation for being slow and unstable, so it takes last place among the versions presented here. Vista has a pleasant design and a well thought-out concept, but for the modern user there is no significant advantage in installing it.


4: Windows XP

Windows XP appeared in 2001 and spread quickly. Its predecessors, Windows ME and 95, were soon pushed out of the market by newer releases. XP is considered suitable for older computers with limited resources; to this day it is installed on machines with little RAM and a low-clocked processor.


XP's popularity lasted until 2012. Microsoft has since stopped supporting the product, but the lack of updates does not affect its performance. The release of newer products pushed XP down to fourth position.

3: Windows 7

Windows 7 was released in 2009. Users quickly warmed to the new version, and it gained popularity and spread to many computers. Windows 7 was the first to supplant XP by offering an updated alternative.


The new system received significant software enhancements, ran smoothly, and sported an attractive design. Its rivalry with XP left little doubt: the fast and efficient OS drew few complaints. Microsoft's new release learned to handle networking on its own, install drivers for external devices, and protect against viruses.

The updated functionality made Windows 7 popular, and it is still widely used today. The benefits it delivered put the OS in third place in the ranking and are reason enough to recommend installing Windows 7.

2: Windows 8 (8.1)

Windows 8 was released in 2012. Its standard interface is built around movable tiles, and the Start button was replaced by a Start screen. The ability to arrange and group launch tiles pleased users and expanded the options for personalization.


Windows 8 introduced an app store and support for Microsoft accounts, allowing accounts to be shared across devices. Windows 8 never matched the popularity of version 7, although it made for an interesting alternative. It is recommended for computers with a moderate amount of resources, since its requirements for fast operation are higher than those of classic XP.

1: Windows 10

The leader of the ranking took first place in 2015. The new OS combined the strengths of the two previous versions, 7 and 8. Windows 10 quickly gained popularity and was installed in place of older versions. The prospect of a free upgrade to 10 became an additional trump card for the corporation.


Windows 10 runs on a wide range of devices: desktops, netbooks, laptops, tablets, and phones. The interface, desktop, and controls carried over from Windows 8 feel familiar and comfortable. The new design decisions made everyday tasks, programs, and games run quickly. The developers also implemented functionality for linking Windows 10 devices with the Xbox One.

Improved security features make Windows 10 more resilient to viruses and malware. Support for biometric authentication and other specialized functions has been added. The combination of these characteristics justifies a confident first place in the ranking.

When was the last time your TV suddenly crashed or demanded that you urgently download a software patch from the Web to fix a critical error? After all, if your TV is not ancient, it is essentially the same computer: a central processor, a large monitor, some analog electronics for decoding radio signals, a couple of unusual input/output devices (a remote control, a built-in cassette or DVD drive), and software stored in ROM. This rhetorical question brings us to an unpleasant issue that the computer industry prefers not to talk about: why are TVs, DVD players, MP3 players, cell phones, and other software-stuffed electronic devices reliable and well protected, while computers are not? Of course, there are many "explanations": computers are flexible systems, users can change the software, the IT industry is still immature, and so on. But in an era when the vast majority of computer users have no technical background, such explanations do not sound convincing to them.

What does a consumer expect from a computer? The same thing as from a TV: you buy it, plug it in, and it works flawlessly for the next ten years. IT professionals need to take these expectations into account and make computers as reliable and secure as televisions.

The operating system remains the weakest point in terms of reliability and security. Application programs contain plenty of defects too, but if the operating system were bug-free, faulty applications could not do nearly as much damage as they do now, which is why this article focuses on operating systems.

But before getting into the details, a few words about the relationship between reliability and security. Problems in these two areas often have a common root: software bugs. A buffer overflow can crash the system (a reliability problem), but it also lets a cleverly written virus infiltrate the computer (a security problem). Although this article is primarily about reliability, keep in mind that improving reliability also tends to improve security.

Why are systems unreliable?

Modern operating systems have two characteristics that undermine both their reliability and their security: they are huge, and they have very poor fault isolation. The Linux kernel contains over 2.5 million lines of code, and the Windows XP kernel is at least twice as large.

One study of software reliability found that programs contain 6 to 16 errors per 1,000 lines of executable code. According to another study, the error rate ranges from 2 to 75 per 1,000 lines of executable code, depending on module size. Even by the most conservative estimate (6 bugs per 1,000 lines of code), the Linux kernel probably contains something like 15,000 bugs, and Windows XP at least twice as many.

Even worse, device drivers typically make up about 70% of an operating system, and their error rate is three to seven times higher than that of ordinary code, so the estimate above is almost certainly a gross underestimate. Clearly, finding and fixing all these bugs is simply impossible; moreover, fixing some bugs often introduces new ones.
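A quick back-of-the-envelope calculation shows how much the driver factor matters. Using only the figures quoted above (a 2.5-million-line kernel, 70 percent of it driver code, a base rate of 6 bugs per 1,000 lines, and drivers at three to seven times that rate), and treating the numbers purely as an illustration:

\[
\underbrace{0.3 \times 2.5\,\text{M} \times \tfrac{6}{1000}}_{\text{ordinary code}}
+ \underbrace{0.7 \times 2.5\,\text{M} \times \tfrac{3 \times 6}{1000}}_{\text{driver code}}
\approx 4{,}500 + 31{,}500 = 36{,}000 \text{ bugs}
\]

With the seven-fold driver rate the total climbs to roughly 78,000, so the true count could plausibly be two to five times the naive 15,000.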

Because modern operating systems are so enormous, no single person can know them thoroughly. And it is extremely difficult to build a good system when nobody can really picture it as a whole.

This fact brings us to the second problem: fault isolation. No one in the world knows everything about how an aircraft carrier works either, but its subsystems are well isolated from one another: a clogged toilet has no effect whatsoever on the missile launch subsystem.

Operating systems have no such isolation between components. A modern operating system contains hundreds or even thousands of procedures linked together into a single binary program running in kernel mode. Any of the millions of lines of kernel code can overwrite key data structures used by unrelated components, causing a crash that is extremely hard to track down. In addition, once a virus has infected one kernel procedure, nothing can stop it from spreading rapidly to other procedures and taking over the entire machine.

Let's go back to the ship analogy. The hull of a modern ship is divided into many compartments; if one of them springs a leak, only that compartment floods, not the whole hull. Modern operating systems are like ships before bulkheads were invented: any hole can sink the ship.

Fortunately, the situation is not so hopeless. Developers are striving to create more reliable operating systems. There are four different approaches that are being taken to make the OS more reliable and secure in the future. We will present them in our article in “ascending” order, from less radical to more radical.

Hardened Operating Systems

The most conservative approach, Nooks, was designed to improve the reliability of existing operating systems such as Windows and Linux. Nooks technology maintains a monolithic kernel structure in which hundreds or thousands of procedures are bundled together in a single address space and run in kernel mode. This approach focuses on making device drivers (the root cause of all problems) less harmful.

In particular, as Figure 1 shows, Nooks protects the kernel from faulty device drivers by wrapping each driver in a protective software layer that forms a lightweight protection domain. This technique is sometimes referred to as "sandboxing". The wrapper around each driver carefully monitors all interactions between the driver and the kernel. The technology can also be used for other loadable kernel extensions, but for simplicity we will discuss only drivers here.

The goals of the Nooks project are as follows:

  • protect kernels from driver bugs;
  • provide automatic recovery in the event of a driver failure;
  • do it all with minimal changes to existing drivers and kernel.

Protecting the kernel from malicious drivers was not a goal. Nooks was first implemented for Linux, but the ideas apply equally well to other legacy kernels.

Isolation

The primary means of keeping kernel data structures from being corrupted by faulty drivers is the virtual memory page map. While a driver is running, all pages outside it are set to read-only, which creates a separate lightweight protection domain for each driver. The driver can read the kernel data structures it needs, but any attempt to modify them directly raises a CPU exception, which is caught by the Nooks isolation manager. The driver's private memory, which holds its stacks, heap, private data structures, and copies of kernel objects, remains readable and writable.
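To make the page-protection idea concrete, here is a small user-space analogy in C. It is not the Nooks code, which manipulates kernel page tables directly; mprotect() simply plays the same role here.

/*
 * User-space analogy for per-driver protection domains: while the "driver"
 * runs, memory outside its domain is mapped read-only, so a stray write
 * faults instead of silently corrupting kernel data.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);

    /* Stands in for a kernel data structure living outside the driver. */
    char *kernel_data = mmap(NULL, page, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (kernel_data == MAP_FAILED) { perror("mmap"); return 1; }
    strcpy(kernel_data, "important kernel state");

    /* Entering the driver's domain: outside pages become read-only. */
    if (mprotect(kernel_data, page, PROT_READ) != 0) { perror("mprotect"); return 1; }

    printf("driver can still read: %s\n", kernel_data);

    /* A buggy driver write would now raise SIGSEGV (the CPU exception the
     * Nooks isolation manager intercepts) instead of corrupting the data:
     *   kernel_data[0] = 'X';   // uncomment to see the fault
     */

    /* Leaving the domain: restore normal access. */
    mprotect(kernel_data, page, PROT_READ | PROT_WRITE);
    munmap(kernel_data, page);
    return 0;
}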

Mediation

Each driver class exports a set of functions that the kernel can call. For example, an audio driver might provide one call for writing a block of sound samples to the sound card, another for adjusting the volume, and so on. When the driver is loaded, an array of pointers to its functions is filled in so that the kernel can find each of them. The driver also imports a set of functions provided by the kernel, for example for allocating a data buffer.

Nooks provides wrappers for both exported and imported functions. Now, when the kernel calls a driver function or a driver calls a kernel function, the call actually goes to the wrapper, which validates the parameters and manages the call. Although the wrapper stubs (shown in Figure 1 as lines pointing both into and out of the driver) are generated automatically from function prototypes, the wrapper bodies have to be written by hand. In total, the Nooks group wrote 455 wrappers: 329 for functions exported by the kernel and 126 for functions exported by device drivers.

When a driver tries to modify a kernel object, its wrapper copies the object into the driver's protection domain, that is, into its private read/write pages. The driver then modifies the copy. If the call succeeds, the isolation manager copies the modified objects back into the kernel. Thus a driver crash or a failed call always leaves the kernel objects in a consistent state. The copy-in and copy-back operations are object-specific, so the Nooks group had to hand-write code for the 43 classes of objects that Linux drivers use.
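A rough sketch of what such a wrapper does, with hypothetical types and names rather than the real Nooks wrappers:

/*
 * Sketch of the mediation idea: a call into the driver passes through a
 * wrapper that checks parameters and lets the driver work on a private
 * copy of the kernel object, copied back only if the call succeeds.
 */
#include <stdio.h>

struct net_device {              /* stand-in for a kernel object */
    char name[16];
    int  mtu;
};

/* The "real" driver routine: it may be buggy. */
static int driver_set_mtu(struct net_device *dev, int mtu)
{
    dev->mtu = mtu;
    return 0;                    /* 0 = success */
}

/* Wrapper generated around the driver entry point. */
static int wrap_driver_set_mtu(struct net_device *kernel_obj, int mtu)
{
    /* 1. Validate parameters before they ever reach the driver. */
    if (kernel_obj == NULL || mtu <= 0 || mtu > 65535)
        return -1;

    /* 2. Copy the kernel object into the driver's private writable domain. */
    struct net_device copy = *kernel_obj;

    /* 3. Run the driver on the copy. */
    if (driver_set_mtu(&copy, mtu) != 0)
        return -1;               /* failure: the kernel object stays untouched */

    /* 4. Success: the isolation manager copies the changes back. */
    *kernel_obj = copy;
    return 0;
}

int main(void)
{
    struct net_device eth0 = { "eth0", 1500 };

    wrap_driver_set_mtu(&eth0, 9000);        /* accepted and copied back */
    wrap_driver_set_mtu(&eth0, -1);          /* rejected by the wrapper */
    printf("%s mtu = %d\n", eth0.name, eth0.mtu);
    return 0;
}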

Recovery

When a failure is detected, a user-mode recovery agent starts up and consults a configuration database to decide what to do. In many cases, releasing the occupied resources and restarting the driver is enough, since the most common algorithmic errors are usually caught during testing, while the bugs that remain in the code are mostly timing errors and obscure corner cases.

This technology allows you to restore the system, but applications that were running at the time of the failure may be in an incorrect state. As a result, the Nooks team added the concept of shadow drivers so that applications can run correctly even after a driver failure.

In short, during normal operation the shadow driver logs those interactions between the driver and the kernel that might be needed for recovery. After a restart, the shadow driver feeds the logged data back to the new driver instance, for example by replaying IOCTL (I/O control) calls that set parameters such as the audio volume. The kernel knows nothing about this process of bringing the new driver back to the state of the old one. Once it completes, the driver starts handling new requests.
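The logging-and-replay idea can be sketched in a few lines of C (hypothetical structures, not the actual Nooks shadow drivers):

/*
 * Sketch of a shadow driver: during normal operation it records
 * state-setting requests; after a crash it replays the log into the
 * freshly restarted driver so applications never notice the restart.
 */
#include <stdio.h>

#define LOG_MAX 32

struct ioctl_record { int cmd; int arg; };

static struct ioctl_record shadow_log[LOG_MAX];
static int shadow_log_len;

/* A trivial "audio driver" with one piece of state. */
static int volume;
static void audio_ioctl(int cmd, int arg)
{
    if (cmd == 1 /* SET_VOLUME */)
        volume = arg;
}

/* Normal path: the shadow records the call, then forwards it. */
static void shadow_ioctl(int cmd, int arg)
{
    if (shadow_log_len < LOG_MAX)
        shadow_log[shadow_log_len++] = (struct ioctl_record){ cmd, arg };
    audio_ioctl(cmd, arg);
}

/* Recovery path: replay the log into the restarted driver. */
static void shadow_recover(void)
{
    volume = 0;                           /* driver restarted with defaults */
    for (int i = 0; i < shadow_log_len; i++)
        audio_ioctl(shadow_log[i].cmd, shadow_log[i].arg);
}

int main(void)
{
    shadow_ioctl(1, 70);                  /* application sets volume to 70 */
    printf("before crash: volume = %d\n", volume);

    shadow_recover();                     /* driver crashed and was restarted */
    printf("after recovery: volume = %d\n", volume);
    return 0;
}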

Limitations

Although experiments show that Nooks can catch 99% of fatal driver errors and 55% of non-fatal ones, it is far from perfect. For example, drivers can execute privileged instructions they should not execute, write to the wrong I/O ports, and go into infinite loops. Moreover, the Nooks group had to write a large number of wrappers by hand, and those wrappers may themselves contain bugs. Finally, the approach cannot prevent drivers from writing to arbitrary memory locations. Nevertheless, it is a potentially very useful step toward improving the reliability of legacy kernels.

Paravirtual machines

The second approach is based on the concept of a virtual machine, which dates back to the late 1960s. The idea is to use a special control program, the virtual machine monitor, that runs directly on the hardware rather than on an operating system. The monitor creates multiple instances of the real machine, and each instance can run any program the hardware is capable of running.

This method is often used so that two or more operating systems, say Linux and Windows, can run on the same machine at the same time, and so that each OS thinks that it has the entire machine at its disposal. The use of virtual machines has a well-deserved reputation for providing good error isolation. After all, if none of the virtual machines is aware of the existence of others, problems that arise on one machine cannot propagate to others.

An attempt was made to adapt this concept to provide protection within a single operating system rather than between different ones. Moreover, because the Pentium does not fully support virtualization, the principle of running an unmodified operating system in a virtual machine had to be abandoned. This concession allows changes to be made to the operating system so that it never does anything that cannot be virtualized. To distinguish this technique from true virtualization, it is called paravirtualization.

In particular, in the 1990s a group of developers at the University of Karlsruhe created the L4 microkernel. They were able to run a slightly modified version of Linux (L4Linux) on top of L4 in something resembling a virtual machine. Later the developers realized that instead of running only one copy of Linux on L4, they could run several. As Figure 2 shows, this led to the idea of using one virtual Linux machine to run application programs and one or more others to run device drivers.

If device drivers run in one or more virtual machines isolated from the main virtual machine, where the rest of the operating system and the application programs run, then when a driver fails only its own virtual machine goes down, not the main one. An additional advantage of this approach is that the drivers do not need to be modified, since they see a normal Linux kernel environment. Of course, the Linux kernel itself has to be changed to support paravirtualization, but that is a one-time change, not a procedure that must be repeated for every device driver.

Since the device drivers run on the real hardware but in user mode, the main question is how they perform I/O and handle interrupts. Physical I/O is supported by adding roughly 3,000 lines of code to the Linux kernel under which the drivers run, letting the drivers use L4 services for I/O instead of doing it themselves. Another 5,000 lines of code handle the interaction between the three isolated drivers (disk, network, and PCI bus) and the virtual machine in which the applications run.

In principle, this approach should be more reliable than a single operating system, because when a virtual machine containing one or more drivers fails, it can be restarted and the drivers return to their initial state. Unlike Nooks, this approach makes no attempt to return drivers to the state they were in before the failure: if the audio driver crashes, it comes back at the default volume level rather than the level it had before the crash.

Performance metrics show that the overhead of using paravirtualized machines is around 3-8%.

Multi-server operating systems

The first two approaches involve modifying legacy systems. The next two focus on future systems.

One of them attacks the core of the problem directly: the fact that the entire operating system runs as a single gigantic binary program in kernel mode. Instead, the idea is to have a small microkernel running in kernel mode, with the rest of the operating system running as a collection of fully isolated server and driver processes in user mode. This idea was proposed 20 years ago, but it was never fully embraced because of the lower performance of a multiserver OS compared with a monolithic kernel. In the 1980s performance was considered the most important metric, and reliability and security were hardly thought about. Of course, at one time aircraft designers did not think about fuel consumption either, or about cockpit doors that could withstand an armed attack. Times change, and people's ideas about what really matters change too.

Multi-server architecture

To better understand the idea of a multiserver operating system, let's look at a modern example. As Figure 3 shows, in Minix 3 the microkernel handles interrupts, provides the basic mechanisms for process management, implements interprocess communication, and performs process scheduling. It also provides a small set of kernel calls for authorized drivers and servers, such as reading a selected part of a given user's address space or writing to authorized I/O ports. The clock driver shares the microkernel's address space but is scheduled as a separate process. No other driver runs in kernel mode.

Above the microkernel sits the device driver layer. Each I/O device has its own driver, which runs as a separate process in its own private address space protected by the hardware memory management unit (MMU). This layer includes driver processes for the disk, the terminal (keyboard and display), Ethernet, the printer, audio, and so on. These drivers run in user mode and cannot execute privileged instructions or read and write the computer's I/O ports; to obtain those services they must make kernel calls. Although this adds some overhead, it greatly improves reliability.

Above the device driver layer is the server layer. The file server is a program (about 4,500 lines of executable code) that accepts and carries out requests from user processes for POSIX file system calls such as read, write, lseek, and stat. Also at this level is the process manager, which handles process and memory management and carries out fork, exec, brk, and other POSIX and non-POSIX system calls.

A somewhat unusual feature is the reincarnation server, which acts as the parent process of all the other servers and all the drivers. If a driver or server crashes, exits, or fails to respond to periodic pings, the reincarnation server kills it if necessary and then restarts it from a copy on disk or in RAM. Drivers can be restarted this way; of the servers, for now only those with limited internal state can be restarted.

Other servers include the network server, which contains the complete TCP/IP stack; the data store, a simple name server used by the other servers; and the information server, which is used for debugging. Finally, above the server layer sit the user processes. The only difference from other Unix systems is that the library routines for read, write, and the other system calls do their work by sending messages to the servers. Apart from this difference (hidden in the system libraries), these are ordinary user processes using the POSIX API.
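To illustrate how a library routine turns a POSIX call into a message, here is a simplified sketch in plain C. The message layout and the sendrec() stub are invented for the example and do not reproduce the real Minix 3 source.

/*
 * Sketch: in a multiserver system, read() is just message marshalling
 * plus a rendezvous call to the file server.
 */
#include <stdio.h>
#include <stddef.h>
#include <string.h>

#define FS_PROC  1               /* endpoint of the file server process */
#define FS_READ 10               /* request type understood by that server */

struct message {                 /* simplified fixed-size message */
    int    m_type;
    int    m_fd;
    void  *m_buf;
    size_t m_nbytes;
    long   m_result;             /* filled in by the server's reply */
};

/* Rendezvous call: send the request, block, receive the reply.
 * Stubbed here so the sketch runs; in a real multiserver system this is a
 * kernel trap and the file server fills in the reply. */
static long sendrec(int dest, struct message *m)
{
    (void)dest;
    m->m_result = (long)m->m_nbytes;   /* pretend the server read everything */
    return 0;
}

/* The C library's read() becomes nothing more than message passing. */
static long my_read(int fd, void *buf, size_t nbytes)
{
    struct message m;
    memset(&m, 0, sizeof(m));
    m.m_type   = FS_READ;
    m.m_fd     = fd;
    m.m_buf    = buf;
    m.m_nbytes = nbytes;

    if (sendrec(FS_PROC, &m) != 0)     /* blocks until the file server replies */
        return -1;
    return m.m_result;                 /* byte count or error code */
}

int main(void)
{
    char buf[128];
    printf("read() returned %ld\n", my_read(0, buf, sizeof(buf)));
    return 0;
}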

Interprocess communication

Since the interprocess communication (IPC) mechanism is what lets all these processes work together, it is critical to a multiserver operating system. However, because all servers and drivers in Minix 3 run as physically isolated processes, they cannot call each other's functions directly or share data structures. Instead, Minix 3 does IPC by passing fixed-length messages on the rendezvous principle: when both the sender and the receiver are ready, the system copies the message directly from the sender to the receiver. There is also an asynchronous event notification mechanism; notifications that cannot be delivered are marked as pending in the process table.

Minix 3 elegantly integrates interrupts with messaging. Interrupt handlers use a notification mechanism to signal the completion of I / O. This mechanism allows the handler to set a bit in the "pending interrupt" bitmap and then continue without blocking. When the driver is ready to receive an interrupt, the kernel converts it into a regular message.
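The driver side of this scheme can be sketched as a simple receive loop (again generic C with a canned message "script", not the actual Minix 3 code): the driver blocks waiting for a message, and a hardware interrupt shows up as just another message type.

/*
 * Sketch of a driver main loop in a message-passing system: requests from
 * servers and interrupt notifications from the kernel arrive through the
 * same blocking receive().
 */
#include <stdio.h>
#include <stdbool.h>

enum { MSG_NOTIFY_IRQ = 1, MSG_DEV_READ = 2, MSG_SHUTDOWN = 3 };

struct message { int type; int source; };

/* Blocking rendezvous receive -- stubbed with a canned script so the
 * sketch is self-contained and runnable. */
static struct message receive(void)
{
    static const struct message script[] = {
        { MSG_DEV_READ,   5 },   /* the file server asks for a block */
        { MSG_NOTIFY_IRQ, 0 },   /* the kernel converted an interrupt into a message */
        { MSG_SHUTDOWN,   0 },
    };
    static int i;
    return script[i++];
}

int main(void)
{
    bool running = true;
    while (running) {
        struct message m = receive();          /* block until something arrives */
        switch (m.type) {
        case MSG_DEV_READ:
            printf("start I/O for process %d\n", m.source);
            break;
        case MSG_NOTIFY_IRQ:
            printf("interrupt notification: I/O finished, reply to caller\n");
            break;
        case MSG_SHUTDOWN:
            running = false;
            break;
        }
    }
    return 0;
}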

Reliability characteristics

There are several reasons for Minix 3's high reliability. First, the kernel is under 4,000 lines of executable code, so by the conservative estimate of 6 bugs per 1,000 lines it probably contains only about 24 bugs, compared with roughly 15,000 in Linux and far more in Windows. Since all device drivers except the clock driver are user processes, no foreign code ever runs in kernel mode. The kernel's small size also makes it practical to verify its correctness, either manually or with formal methods.

Minix 3's IPC design requires no message queuing or buffering, which eliminates buffer management in the kernel. Moreover, because IPC is such a powerful facility, the IPC capabilities of each server and driver are tightly restricted: the IPC primitives it may use, the destinations it may address, and the event notifications it may send are strictly defined for each process. User processes, for example, can communicate only via rendezvous and can send messages only to the POSIX servers.

In addition, all kernel data structures are static. All of this greatly simplifies the code and eliminates kernel bugs related to buffer overflows, memory leaks, untimely interrupts, untrusted kernel code, and so on. Of course, moving most of the operating system into user mode does not eliminate the inevitable bugs in drivers and servers, but it makes them far less dangerous. A bug in the kernel can destroy critical kernel data structures, write garbage to the disk, and so on; a bug in most drivers and servers cannot do serious damage, because these processes are strictly separated and the operations they can perform are tightly limited.

User-mode drivers and servers do not run with superuser privileges. They cannot access memory outside their own address spaces except through kernel calls, which the kernel validates. Moreover, bitmaps and ranges in the kernel's process table control, for each process individually, the allowed kernel calls, IPC capabilities, and permitted I/O ports. For example, the kernel can prevent the printer driver from writing to user address spaces, accessing the disk's I/O ports, or sending messages to the audio driver. In traditional monolithic systems, any driver can do anything.

Another reason for the reliability is the use of separate instruction and data spaces. If a bug or a virus exploits a buffer overflow in a driver or server and injects foreign code into its data space, that code cannot be executed by jumping to it or by pointing a function pointer at it, because the kernel will only run code located in a process's read-only instruction space.

Among the other specific features that provide higher reliability, the most important is the self-healing property. If a driver tries to store data at an invalid pointer, enters an infinite loop, or tries to perform other invalid operations, the reincarnation server will automatically replace the driver, and, as a rule, in this case, other running processes will not be affected.

Although restarting a logically incorrect driver will not fix the error, in practice, incorrect synchronization and similar errors cause many problems, and restarting the driver often provides an opportunity to bring the system back to the correct state.

Performance parameters

For decades, developers criticized microkernel-based multiserver architectures for performing worse than monolithic ones. However, several projects confirm that a modular architecture can in fact deliver comparable performance. Although Minix 3 has not been optimized for performance, the system is reasonably fast. The performance loss caused by running drivers in user mode is under 10% compared with kernel-mode drivers, and the whole system, including the kernel, the common drivers, and all the servers (112 compilations and 11 link steps), builds in under 6 seconds on a 2.2 GHz Athlon.

The fact that multiserver architectures can support a reasonably robust Unix-like environment with a very small performance penalty makes this approach practical. Minix 3 for the Pentium can be downloaded free under the Berkeley license from www.minix3.org. Versions for other architectures and for embedded systems are under development.

Language-based protection

The most radical approach comes, quite unexpectedly, from Microsoft Research. It abandons the model of the operating system as a single program running in kernel mode plus a collection of user processes running in user mode, and instead proposes a system written entirely in new, type-safe languages that are free of the pointer problems and other bugs associated with C and C++. Like the previous approaches, this idea is decades old: it was implemented in the Burroughs B5000 computer. Back then the only language available was Algol, and protection was provided not by an MMU (the machine had none) but by the fact that the Algol compiler simply did not generate "dangerous" code. Microsoft Research's approach adapts this idea to the 21st century.

General description

The system, called Singularity, is written almost entirely in Sing#, a new type-safe language. Sing# is based on C# but extended with message-passing primitives whose semantics are defined by formal, language-level contracts. Because the language tightly constrains both system and user processes, all processes can coexist in a single virtual address space. This improves both safety (the compiler will not let one process touch another process's data) and efficiency (kernel traps and context switches are eliminated).

Moreover, the Singularity architecture is flexible, because each process is a closed entity and can therefore have its own code, data structures, memory layout, runtime, libraries, and garbage collector. The MMU is supported, but it is used only to map pages, not to establish a separate protection domain for each process.

A basic tenet of the Singularity architecture is that dynamic process extension is not allowed: loadable modules such as device drivers and browser plug-ins are not supported, because they would introduce foreign, unverified code that could damage the host process. Instead, such extensions must run as separate processes, completely isolated and communicating through the standard IPC mechanism.

Microkernel

The Singularity operating system consists of a microkernel process and a set of user processes, all typically running in a shared virtual address space. The microkernel controls access to the hardware, allocates and frees memory, creates, destroys, and schedules threads, provides thread synchronization with semaphores, provides inter-process communication over channels, and supervises I/O. Each device driver runs as a separate process.

Although most of the microkernel is written in Sing#, individual components are written in C#, C++, or assembly language and must simply be trusted, because there is no way to verify their correctness. This trusted code includes the hardware abstraction layer and the garbage collector. The hardware abstraction layer hides the low-level hardware from the rest of the system by encapsulating concepts such as I/O ports, interrupt request lines, DMA channels, and timers, providing the rest of the operating system with hardware-independent abstractions.
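As a rough illustration of this kind of encapsulation, the following C sketch hides a raw port number behind an abstract handle with access hooks. The io_port type and the simulated backend are invented for this example and are not Singularity's HAL interface.

/* Illustrative-only sketch of hiding an I/O port behind a HAL-style
 * abstraction; types and functions are invented, not Singularity's HAL. */
#include <stdio.h>
#include <stdint.h>

/* Opaque handle the rest of the OS sees instead of a raw port number. */
struct io_port {
    uint16_t base;                               /* hidden hardware detail        */
    uint8_t (*read8)(struct io_port *);          /* machine-specific access hooks */
    void    (*write8)(struct io_port *, uint8_t);
};

/* Simulated backend standing in for real inb/outb instructions. */
static uint8_t sim_read8(struct io_port *p)
{
    return (uint8_t)(p->base & 0xFF);            /* fake value for the demo */
}

static void sim_write8(struct io_port *p, uint8_t v)
{
    printf("port 0x%X <- 0x%02X\n", (unsigned)p->base, (unsigned)v);
}

int main(void)
{
    struct io_port com1 = { 0x3F8, sim_read8, sim_write8 };

    /* Drivers use only the abstraction, never the raw port number. */
    com1.write8(&com1, 0x41);
    printf("read back: 0x%02X\n", (unsigned)com1.read8(&com1));
    return 0;
}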

Interaction between processes

User processes obtain system services by sending strongly typed messages to the microkernel over bidirectional point-to-point channels. In fact, these channels are used for all inter-process communication. Unlike other messaging systems, which provide a library with send and receive functions, Sing# supports channels fully at the language level, including formal typing and protocol specifications. To make this concrete, consider the following channel specification.

contract C1 {
    in message Request(int x) requires x > 0;
    out message Reply(int y);
    out message Error();

    state Start: {
        Request? -> Pending;
    }
    state Pending: one {
        Reply! -> Start;
        Error! -> Stopped;
    }
    state Stopped: ;
}

This contract states that the channel accepts three messages: Request, Reply, and Error. The first takes a positive integer as a parameter, the second takes an integer, and the third takes no parameters. When the channel is used to access a server, Request messages travel from client to server, while the other two travel in the opposite direction. The contract also defines a state machine that describes the channel protocol.

In the Start state, the client may send a Request message, which puts the channel into the Pending state. The server may then respond with either a Reply or an Error message. A Reply returns the channel to the Start state, where communication can continue; an Error puts the channel into the Stopped state, ending communication on that channel.
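To see what the compiler is enforcing, the following C sketch (not Sing#) models the same state machine at run time; the names step, MSG_REQUEST, and so on are invented for illustration.

/* C model of the C1 contract's state machine; illustrative only. */
#include <stdio.h>

enum chan_state { START, PENDING, STOPPED };
enum msg { MSG_REQUEST, MSG_REPLY, MSG_ERROR };

/* Apply one message to the channel; returns 0 on a protocol violation. */
static int step(enum chan_state *s, enum msg m, int arg)
{
    switch (*s) {
    case START:                       /* only Request(x) with x > 0 is legal here */
        if (m == MSG_REQUEST && arg > 0) { *s = PENDING; return 1; }
        return 0;
    case PENDING:                     /* server answers with Reply or Error */
        if (m == MSG_REPLY) { *s = START;   return 1; }
        if (m == MSG_ERROR) { *s = STOPPED; return 1; }
        return 0;
    default:                          /* Stopped: no further messages accepted */
        return 0;
    }
}

int main(void)
{
    enum chan_state s = START;
    printf("Request(5): %s\n", step(&s, MSG_REQUEST, 5) ? "ok" : "protocol violation");
    printf("Reply(7):   %s\n", step(&s, MSG_REPLY,   7) ? "ok" : "protocol violation");
    printf("Request(0): %s\n", step(&s, MSG_REQUEST, 0) ? "ok" : "protocol violation");
    return 0;
}

In Singularity itself the analogous checking is static: the Sing# compiler rejects code whose sends and receives do not follow the declared protocol.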

Heap

If all data, such as blocks of files read from disk, had to be passed through channels, the system would be very slow, so an exception is made to the basic rule that each process's data is completely private and internal to it. Singularity maintains a shared heap of objects, but every instance of every object on the heap is owned by exactly one process at a time. Ownership of an object can, however, be transferred over a channel.

As an example of how the heap works, consider I/O. When a disk driver reads a block of data, it places that block on the heap. The system then passes a handle to the block to the user process that requested the data, preserving the single-owner principle while letting data move from the disk to the user without any additional copies.
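The single-owner rule can be imitated in plain C by moving a pointer and clearing the sender's handle, as in the following sketch. The block type and transfer function are invented, and Singularity enforces the rule in the language rather than by programmer convention.

/* Sketch of single-owner blocks whose ownership moves across a "channel".
 * Names are invented for illustration; Singularity checks this statically. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct block {
    size_t len;
    unsigned char data[512];
};

/* "Send" a block over a channel by transferring ownership:
 * the receiver gets the pointer, the sender's handle is cleared. */
static struct block *transfer(struct block **owner)
{
    struct block *b = *owner;
    *owner = NULL;          /* the sender may no longer touch the block */
    return b;
}

int main(void)
{
    /* Disk driver: read a block and place it on the shared heap. */
    struct block *driver_handle = malloc(sizeof *driver_handle);
    driver_handle->len = sizeof driver_handle->data;
    memset(driver_handle->data, 0xAB, driver_handle->len);

    /* Ownership moves to the user process; no copy of the data is made. */
    struct block *user_handle = transfer(&driver_handle);

    printf("driver handle: %p, user handle: %p, first byte: 0x%02X\n",
           (void *)driver_handle, (void *)user_handle,
           (unsigned)user_handle->data[0]);

    free(user_handle);      /* the current owner releases the block */
    return 0;
}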

File system

Singularity maintains a single hierarchical namespace for all services. A root name server handles the top of the tree, but other name servers can be mounted on its nodes. In particular, the file system, which is itself just a process, is mounted at /fs, so a name such as /fs/users/linda/foo might refer to a user's file. Files are implemented as B-trees with block numbers as keys. When a user process requests a file, the file system asks the disk driver to put the requested blocks on the heap, and ownership is then transferred as described above.
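The following C sketch shows one way such a namespace could be resolved, by matching a path against a table of mount prefixes. The mount table and names are invented for illustration and do not reflect Singularity's actual implementation.

/* Sketch of name resolution in a single hierarchical namespace with
 * mounted name servers; the mount table and names are invented. */
#include <stdio.h>
#include <string.h>

struct mount {
    const char *prefix;   /* where the name server is mounted           */
    const char *server;   /* the process that handles names below there */
};

static const struct mount mounts[] = {
    { "/fs",  "file-system process" },
    { "/dev", "device name server"  },
    { "/",    "root name server"    },
};

/* Return the server responsible for a path: longest matching prefix wins.
 * (A real resolver would also respect path-component boundaries.) */
static const char *resolve(const char *path)
{
    const char *best = NULL;
    size_t best_len = 0;
    for (size_t i = 0; i < sizeof mounts / sizeof mounts[0]; i++) {
        size_t n = strlen(mounts[i].prefix);
        if (strncmp(path, mounts[i].prefix, n) == 0 && n > best_len) {
            best = mounts[i].server;
            best_len = n;
        }
    }
    return best;
}

int main(void)
{
    printf("/fs/users/linda/foo -> %s\n", resolve("/fs/users/linda/foo"));
    printf("/dev/disk0          -> %s\n", resolve("/dev/disk0"));
    return 0;
}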

Verification

Each system component has metadata describing its dependencies, exports, resources, and behavior, and this metadata is used for verification. A system image consists of the microkernel, the drivers and applications required for the system to function, and their metadata. External verifiers can run many checks on the system image before the system uses it, in particular to make sure that drivers do not conflict over resources (a sketch of such a check follows the list below). Verification proceeds in three stages:

  • the compiler checks type safety, object ownership, channel protocols, and so on;
  • the compiler generates Microsoft Intermediate Language (MSIL), a portable JVM-like bytecode that a verifier can check;
  • MSIL is compiled to x86 code for the machine it will run on, a step that may add runtime checks (the current compiler, however, does not).
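As promised above, here is a small C sketch of the kind of resource-conflict check a verifier might run over driver metadata before accepting a system image. The manifest format and driver names are invented and are not Singularity's actual manifest schema.

/* Illustrative check that no two driver manifests claim overlapping
 * I/O port ranges; metadata layout and names are invented. */
#include <stdio.h>

struct manifest {
    const char *driver;
    int port_lo, port_hi;     /* claimed I/O port range, inclusive */
};

static int overlaps(const struct manifest *a, const struct manifest *b)
{
    return a->port_lo <= b->port_hi && b->port_lo <= a->port_hi;
}

int main(void)
{
    struct manifest drivers[] = {
        { "ide.sys", 0x1F0, 0x1F7 },
        { "net.sys", 0x300, 0x31F },
        { "bad.sys", 0x1F4, 0x1FF },   /* conflicts with ide.sys */
    };
    int n = sizeof drivers / sizeof drivers[0];

    /* Pairwise scan of the image's manifests before the image is accepted. */
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (overlaps(&drivers[i], &drivers[j]))
                printf("conflict: %s and %s claim overlapping ports\n",
                       drivers[i].driver, drivers[j].driver);
    return 0;
}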

Higher reliability can be achieved by using tools that detect errors in the verifiers themselves.

All four of these attempts to improve operating system reliability aim to prevent faulty device drivers from crashing the system.

In the Nooks approach, each driver is individually wrapped in a protective software layer that carefully controls its interactions with the rest of the operating system, but all the drivers remain in the kernel. The paravirtual machine approach develops this idea further: the drivers are moved to one or more virtual machines separate from the main one, which limits what the drivers can do even more. Both approaches are intended to improve the reliability of existing (legacy) operating systems.

The other two approaches replace legacy operating systems with more reliable and secure ones. The multiserver approach runs each driver and operating system component in a separate user process and lets them interact through the microkernel's IPC mechanism. Finally, Singularity, the most radical approach, uses a type-safe language, a single address space, and formal contracts that strictly limit what each module can do.

Three of the four research projects - L4-based paravirtualization, Minix 3, and Singularity - use microkernels. It is not yet known whether any of these approaches will be widely adopted (or whether some other solution will prevail). It is interesting to note, however, that microkernels, long dismissed because of their lower performance relative to monolithic kernels, may be making a comeback in operating systems thanks to their potentially higher reliability, which many now consider more important than performance. The wheel of history has turned.

Andrew Tanenbaum ([email protected]) is a professor of informatics at Vrije Universiteit (Amsterdam, the Netherlands). Jorrit Herder ([email protected]) is a PhD student in the Department of Computer Systems, Faculty of Informatics, Vrije Universiteit. Herbert Bos ([email protected]) is an associate professor in the same department.

Literature
  1. V. Basili, B. Perricone, Software Errors and Complexity: An Empirical Investigation, Comm. ACM, Jan. 1984.
  2. T. Ostrand, E. Weyuker, The Distribution of Faults in a Large Industrial Software System, Proc. Int'l Symp. Software Testing and Analysis, ACM Press, 2002.
  3. A. Chou et al., An Empirical Study of Operating System Errors, Proc. 18th ACM Symp. Operating System Principles, ACM Press, 2001.
  4. M. Swift, B. Bershad, H. Levy, Improving the Reliability of Commodity Operating Systems, ACM Trans. Computer Systems, vol. 23, 2005.
  5. M. Swift et al., Recovering Device Drivers, Proc. 6th Symp. Operating System Design and Implementation, ACM Press, 2003.
  6. R. Goldberg, Architecture of Virtual Machines, Proc. Workshop Virtual Computer Systems, ACM Press, 1973.
  7. J. LeVasseur et al., Unmodified Device Driver Reuse and Improved System Dependability via Virtual Machines, Proc. 6th Symp. Operating System Design and Implementation, 2004.
  8. J. Liedtke, On Microkernel Construction, Proc. 15th ACM Symp. Operating System Principles, ACM Press, 1995.
  9. H. Hartig et al., The Performance of Microkernel-Based Systems, Proc. 16th ACM Symp. Operating System Principles, ACM Press, 1997.
  10. J.N. Herder et al., Modular System Programming in MINIX 3, Usenix; www.usenix.org/publications/login/2006-04/openpdfs/herder.pdf.

Andrew Tanenbaum, Jorrit Herder, Herbert Bos, Can We Make Operating Systems Reliable and Secure?, IEEE Computer, May 2006. IEEE Computer Society, 2006. All rights reserved. Reprinted with permission.