KernelPatcher 0.1 - Permanently patch your kernel to use a different scheduler + RamTweak




 

 

 

 


 

 


 

This tool, which as always will be included in the upcoming release of Snapshot-Linux, lets you automatically patch the kernel to use one of the 3 schedulers permanently. That's different from my tool KernelTool, which only makes the changes temporarily, until the next reboot.
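For anyone wondering what "permanent" means under the hood: a temporary switch is just a write to sysfs and is gone after a reboot, while a permanent switch normally means putting an elevator= parameter on the kernel command line and regenerating the GRUB config. The exact commands the tool runs internally may differ, so treat this as a rough sketch (sda and deadline are just examples, a GRUB 2 system is assumed, and both steps need root):

    # temporary: effective immediately, lost on the next reboot
    echo deadline > /sys/block/sda/queue/scheduler

    # permanent: add elevator=deadline to the default kernel command line
    sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/&elevator=deadline /' /etc/default/grub
    update-grub    # on some distros: grub2-mkconfig -o /boot/grub2/grub.cfg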

 

 

How to use:

Simply extract the archive into a folder of your choice and, as always, please double-check that all .sh files are marked executable (they should be by default).
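If you prefer to do the setup from a terminal, something like this works (the archive, folder, and script names below are placeholders for whatever the download actually contains):

    # unpack into a folder of your choice
    mkdir -p ~/kernelpatcher && cd ~/kernelpatcher
    tar xzf ~/Downloads/KernelPatcher-0.1.tar.gz    # placeholder archive name
    # make sure all shell scripts are executable, then start the tool
    chmod +x *.sh
    ./kernelpatcher.sh                              # placeholder script name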

 

Download:

mirror 1: bplaced.net | file size = 56KB

mirror 2: zippyshare | file size = 56KB

 

have fun :)


 

Process Scheduling

Like any time-sharing system, Linux achieves the magical effect of an apparent simultaneous execution of multiple processes by switching from one process to another in a very short time frame.

The scheduling algorithm of traditional Unix operating systems must fulfill several conflicting objectives: fast process response time, good throughput for background jobs, avoidance of process starvation, reconciliation of the needs of low- and high-priority processes, and so on. The set of rules used to determine when and how to select a new process to run is called the scheduling policy.

In Linux, process priority is dynamic. The scheduler keeps track of what processes are doing and adjusts their priorities periodically; in this way, processes that have been denied the use of the CPU for a long time interval are boosted by dynamically increasing their priority. Correspondingly, processes running for a long time are penalized by decreasing their priority.

When speaking about scheduling, processes are traditionally classified as "I/O-bound" or "CPU-bound." The former make heavy use of I/O devices and spend much time waiting for I/O operations to complete; the latter are number-crunching applications that require a lot of CPU time.
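As a small user-space illustration of CPU priorities: the kernel's dynamic adjustments described above happen on top of the static "nice" value, which you can set yourself with the standard nice/renice utilities (the program name below is made up):

    # start a CPU-bound job with reduced priority (higher nice value = lower priority)
    nice -n 10 ./crunch-numbers &

    # later, push the same job even further into the background (19 is the lowest priority)
    renice -n 19 -p $!    # $! is the PID of the last background job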

 

My scheduler switcher is a switcher for the Linux I/O scheduler.
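You can check which I/O schedulers your kernel offers, and which one is currently active, straight from sysfs; the scheduler shown in square brackets is the one in use (sda is just an example device):

    cat /sys/block/sda/queue/scheduler
    # typical output: noop deadline [cfq]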

For now it knows 3 different schedulers:

 

deadline scheduler:

 

[Diagram: Linux I/O deadline scheduler]

 

The deadline scheduler is an I/O scheduler for the Linux kernel which was written in 2002 by Jens Axboe.

The main goal of the Deadline scheduler is to guarantee a start service time for a request.[1] It does so by imposing a deadline on all I/O operations to prevent starvation of requests. It also maintains two deadline queues, in addition to the sorted queues (both read and write). Deadline queues are basically sorted by their deadline (the expiration time), while the sorted queues are sorted by the sector number.

Before serving the next request, the deadline scheduler decides which queue to use. Read queues are given a higher priority, because processes usually block on read operations. Next, the deadline scheduler checks whether the first request in the deadline queue has expired; if so, it is served immediately. Otherwise, the scheduler serves a batch of requests from the sorted queue. In both cases, the scheduler also serves a batch of requests following the chosen request in the sorted queue.

By default, read requests have an expiration time of 500 ms, while write requests expire in 5 seconds.
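Those two expiration times, along with a few related knobs, are exposed as sysfs tunables while deadline is the active scheduler on a device (sda again only as an example; the defaults shown match the values just mentioned):

    cat /sys/block/sda/queue/iosched/read_expire    # 500  (ms)
    cat /sys/block/sda/queue/iosched/write_expire   # 5000 (ms)
    cat /sys/block/sda/queue/iosched/fifo_batch     # requests served per batch from the sorted queue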

 

noop scheduler:

 

[Diagram: Linux I/O NOOP scheduler]

 

 

The NOOP scheduler is the simplest I/O scheduler for the Linux kernel. This scheduler was developed by Jens Axboe.

The NOOP scheduler inserts all incoming I/O requests into a simple FIFO queue and implements request merging. This scheduler is useful when it has been determined that the host should not attempt to re-order requests based on the sector numbers contained therein. In other words, the scheduler assumes that the host is definitionally unaware of how to productively re-order requests.

There are (generally) three basic situations where this behavior is desirable:

  • If I/O scheduling will be handled at a lower layer of the I/O stack, for example at the block device, by an intelligent RAID controller, by Network Attached Storage, or by an externally attached controller such as a storage subsystem accessed through a switched Storage Area Network.[1] Since I/O requests are potentially re-scheduled at the lower level, re-sequencing IOPs at the host level spends host CPU time on work that will simply be undone when it reaches the lower level, increasing latency and decreasing throughput for no productive reason.
  • If accurate details of sector position are hidden from the host system. An example would be a RAID controller that performs no scheduling on its own. Even though the host has the ability to re-order requests and the RAID controller does not, the host system lacks the visibility to accurately re-order the requests to lower seek time. Since the host has no way of knowing what a more streamlined queue would "look" like, it cannot restructure the active queue itself, but merely passes requests on to the device that is (theoretically) more aware of such details.
  • If movement of the read/write head has been determined not to impact application performance in a way that justifies the additional CPU time spent re-ordering requests. This is usually the case with non-rotational media such as flash drives or solid-state drives (SSDs).

This is not to say NOOP is necessarily the preferred I/O scheduler for the above scenarios. As with any performance tuning, all guidance should be based on observed workload patterns (which undermines one's ability to create simplistic rules of thumb). If there is contention for available I/O bandwidth from other applications, it is still possible that other schedulers will generate better performance by more intelligently carving up that bandwidth for the applications deemed most important. For example, with an LDAP directory server a user may want deadline's read preference and latency guarantees. In another example, a user with a desktop system running many different applications may want access to CFQ's tunables or its ability to prioritize bandwidth for particular applications over others (ionice).
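Since ionice was just mentioned: it is the standard utility for assigning the per-process I/O priorities that CFQ honours (other schedulers largely ignore them). The commands and PID below are only examples:

    # run a backup in the "idle" class, so it only gets disk time nobody else wants
    ionice -c 3 tar czf /tmp/home-backup.tar.gz /home

    # move an already running process (PID 1234) into the best-effort class, highest priority
    ionice -c 2 -n 0 -p 1234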

 

cfq scheduler:

 

[Diagram: Linux I/O CFQ scheduler]

 

 

Completely Fair Queuing (CFQ) is an I/O scheduler for the Linux kernel which was written in 2003 by Jens Axboe.

CFQ places synchronous requests submitted by processes into a number of per-process queues and then allocates timeslices for each of the queues to access the disk. The length of the time slice and the number of requests a queue is allowed to submit depend on the I/O priority of the given process. Asynchronous requests for all processes are batched together in fewer queues, one per priority. While CFQ does not do explicit anticipatory I/O scheduling, it achieves the same effect of good aggregate throughput for the system as a whole by allowing a process queue to idle at the end of synchronous I/O, thereby "anticipating" further close I/O from that process. It can be considered a natural extension of granting I/O time slices to a process.
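When CFQ is the active scheduler for a device, its behaviour can also be tuned through sysfs; a few of the commonly adjusted knobs are shown below (sda again only as an example device):

    cat /sys/block/sda/queue/iosched/slice_idle    # how long to idle waiting for more sync I/O from a process
    cat /sys/block/sda/queue/iosched/quantum       # max number of requests dispatched to the device at once
    cat /sys/block/sda/queue/iosched/low_latency   # 1 = favour latency over raw throughput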
