Understanding Linux IOWait (2025)

I have seen many Linux performance engineers look at the “IOWait” portion of CPU usage as an indicator of whether the system is I/O-bound. In this blog post, I will explain why this approach is unreliable and what better indicators you can use.

Let’s start by running a little experiment – generating heavy I/O usage on the system:

Shell

sysbench --threads=8 --time=0 --max-requests=0 fileio --file-num=1 --file-total-size=10G --file-io-mode=sync --file-extra-flags=direct --file-test-mode=rndrd run
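Note that sysbench’s fileio test expects its test files to exist before the run stage; if you are reproducing this, a preparation step along these lines (using the same file parameters as above) is needed first:

Shell

# Create the 10GB test file used by the random-read benchmark
sysbench fileio --file-num=1 --file-total-size=10G prepare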

CPU Usage in Percona Monitoring and Management (PMM):

[Figure: CPU usage in PMM during the I/O-bound workload]

Shell

root@iotest:~# vmstat 10
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff  cache   si   so     bi    bo    in    cs us sy id wa st
 3  6      0 7137152  26452 762972    0    0  40500  1714  2519  4693  1  6 55 35  3
 2  8      0 7138100  26476 762964    0    0 344971    17 20059 37865  3 13  7 73  5
 0  8      0 7139160  26500 763016    0    0 347448    37 20599 37935  4 17  5 72  3
 2  7      0 7139736  26524 762968    0    0 334730    14 19190 36256  3 15  4 71  6
 4  4      0 7139484  26536 762900    0    0 253995     6 15230 27934  2 11  6 77  4
 0  7      0 7139484  26536 762900    0    0 350854     6 20777 38345  2 13  3 77  5

So far, so good: we see that the I/O-intensive workload clearly corresponds to high IOWait (the “wa” column in vmstat).
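If you want to see how that IOWait is spread across cores, the sysstat tools report the same counter per CPU; assuming sysstat is installed, a quick check could look like this:

Shell

# Per-core CPU breakdown over a 10-second interval; %iowait is the per-core equivalent of vmstat's "wa"
mpstat -P ALL 10 1

# System-wide CPU utilization, including %iowait
sar -u 10 1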

Let’s continue running our I/O-bound workload and add a heavy CPU-bound load:

Shell

sysbench --threads=8 --time=0 cpu run

[Figure: CPU usage in PMM after adding the CPU-bound load]

Shell

root@iotest:~# vmstat 10
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff  cache   si   so     bi    bo    in    cs us sy id wa st
12  4      0 7121640  26832 763476    0    0  48034  1460  2895  5443  6  7 47 37  3
13  3      0 7120416  26856 763464    0    0 256464    14 12404 25937 69 15  0  0 16
 8  8      0 7121020  26880 763496    0    0 325789    16 15788 33383 85 15  0  0  0
10  6      0 7121464  26904 763460    0    0 322954    33 16025 33461 83 15  0  0  1
 9  7      0 7123592  26928 763524    0    0 336794    14 16772 34907 85 15  0  0  1
13  3      0 7124132  26940 763556    0    0 386384    10 17704 38679 84 16  0  0  0
 9  7      0 7128252  26964 763604    0    0 356198    13 16303 35275 84 15  0  0  0
 9  7      0 7128052  26988 763584    0    0 324723    14 13905 30898 80 15  0  0  5
10  6      0 7122020  27012 763584    0    0 380429    16 16770 37079 81 18  0  0  1

What happened? IOWait has completely disappeared, and now this system does not look I/O-bound at all!

In reality, though, of course, nothing changed for our first workload — it continues to be I/O-bound; it just became invisible when we look at “IOWait”!

To understand what is happening, we really need to understand what “IOWait” is and how it is computed.

There is a good article that goes into more detail on the subject, but basically, “IOWait” is a kind of idle CPU time. If the CPU core becomes idle because there is no work to do, the time is accounted as “idle.” If, however, it became idle because a process is waiting on disk I/O, the time is counted towards “IOWait.”

However, if a process is waiting on disk I/O but other processes on the system can use the CPU, the time will be counted towards their CPU usage as user/system time instead.
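These per-state counters are exposed in /proc/stat, where the aggregate “cpu” line lists time spent (in clock ticks) as user, nice, system, idle, iowait, and so on. As a rough sketch, you can compute the IOWait percentage over an interval yourself (verify the field order against proc(5) on your kernel):

Shell

# Take two snapshots of the aggregate "cpu" line and compute the iowait share of total CPU time
s1=$(grep '^cpu ' /proc/stat); sleep 10; s2=$(grep '^cpu ' /proc/stat)
printf '%s\n%s\n' "$s1" "$s2" | awk '
  NR==1 { for (i=2; i<=NF; i++) t1+=$i; w1=$6 }
  NR==2 { for (i=2; i<=NF; i++) t2+=$i; w2=$6
          printf "iowait: %.1f%%\n", 100*(w2-w1)/(t2-t1) }'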

Because of this accounting, other interesting behaviors are possible. Now, instead of running eight I/O-bound threads, let’s just run one I/O-bound process on a four-core VM:

Shell

sysbench --threads=1 --time=0 --max-requests=0 fileio --file-num=1 --file-total-size=10G --file-io-mode=sync --file-extra-flags=direct --file-test-mode=rndrd run

[Figure: CPU usage in PMM with a single I/O-bound thread]

Shell

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
 3  1      0 7130308  27704 763592    0    0 62000    12 4503  8577  3  5 69 20  3
 2  1      0 7127144  27728 763592    0    0 67098    14 4810  9253  2  5 70 20  2
 2  1      0 7128448  27752 763592    0    0 72760    15 5179  9946  2  5 72 20  1
 4  0      0 7133068  27776 763588    0    0 69566    29 4953  9562  2  5 72 21  1
 2  1      0 7131328  27800 763576    0    0 67501    15 4793  9276  2  5 72 20  1
 2  0      0 7128136  27824 763592    0    0 59461    15 4316  8272  2  5 71 20  3
 3  1      0 7129712  27848 763592    0    0 64139    13 4628  8854  2  5 70 20  3
 2  0      0 7128984  27872 763592    0    0 71027    18 5068  9718  2  6 71 20  1
 1  0      0 7128232  27884 763592    0    0 69779    12 4967  9549  2  5 71 20  1
 5  0      0 7128504  27908 763592    0    0 66419    18 4767  9139  2  5 71 20  1

Even though this process is completely I/O-bound, we can see that IOWait (wa) is not particularly high, at less than 25%. On larger systems with 32, 64, or more cores, such completely I/O-bottlenecked processes will be all but invisible, generating single-digit IOWait percentages: a single blocked thread can account for at most roughly 1/Nth of total CPU time on an N-core system, which is under 2% on 64 cores.

As such, high IOWait shows that many processes in the system are waiting on disk I/O, but even with low IOWait, disk I/O may still be the bottleneck for some processes on the system.

If IOWait is unreliable, what can you use instead to give you better visibility?

First, look at application-specific observability. A well-instrumented application tends to know best whether it is bound by the disk and which particular tasks are I/O-bound.

If you only have access to Linux metrics, look at the “b” column in vmstat, which corresponds to processes blocked on disk I/O. It will show such processes even when a concurrent CPU-intensive load masks IOWait:

[Figures: processes blocked on disk I/O during the experiments]
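Outside of PMM, the same signal is available from standard tools: processes blocked on disk I/O sit in uninterruptible sleep (state “D”), which is exactly what the vmstat “b” column counts. As a quick sketch (note that “D” state can occasionally cover non-disk waits as well):

Shell

# List processes currently in uninterruptible sleep (typically waiting on disk I/O)
ps -eo state,pid,comm | awk '$1 == "D"'

# Or watch the "b" column directly
vmstat 10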

Finally, you can look at per-process statistics to see which processes are waiting for disk I/O. For Percona Monitoring and Management, you can install a plugin as described in the blog post Understanding Processes Running on Linux Host with Percona Monitoring and Management.

[Figure: per-process view in PMM showing runnable vs. waiting-on-I/O processes]

With this extension, we can clearly see which processes are runnable (running or waiting for an available CPU) and which are waiting on disk I/O!
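If you are not running PMM, per-process I/O activity can also be sampled from the command line; for example, pidstat from the sysstat package and iotop (exact columns vary by version) give a similar view:

Shell

# Per-process disk read/write statistics, refreshed every 10 seconds
pidstat -d 10

# Interactive per-process I/O view, showing only processes doing I/O (needs root)
iotop -o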

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Download Percona Monitoring and Management Today


FAQs

Why is iowait high in Linux?

A high I/O wait time indicates an idle CPU and outstanding I/O requests—while it might not make a system unhealthy, it will limit the performance of the CPU. The CPU's I/O wait signifies that while no processes were in a runnable state, at least one I/O operation was in progress.

What is considered high iowait?

I/O Wait doesn't have a standard value that is universally considered high. What is considered high for a given system is when the I/O Wait affects the server's overall performance. However, consistent I/O Wait over 10% should be investigated, as it could be a sign of a degraded or failing drive.

How to check iowait on a Linux server?

Linux commands like top and vmstat can help assess high I/O wait time, while iotop can identify the processes causing the lag; you can then use the iostat command to find out which disks are contributing to that wait time.
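For example, a minimal command-line workflow along those lines (iotop typically requires root) might be:

Shell

vmstat 5                   # "wa" column: system-wide iowait
top -b -n 1 | grep 'Cpu(s)'  # "%wa" in the CPU summary line
iotop -o                   # which processes are currently doing I/O (requires root)
iostat -x 5                # which disks are busy (%util, await)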

What is iowait in CPU?

Simply put, iowait is the percentage of time (in a given interval) the CPU spent waiting for read/write operations to complete. In some cases it can indicate a factor limiting throughput, and in other cases it can be a completely inappropriate metric.

What is the difference between idle and iowait?

"iowait" is a sub category of the "idle" state. It marks time spent waiting for input or output operations, like reading or writing to disk. When the processor waits for a file to be opened, for example, the time spend will be marked as "iowait".

How to reduce I/O wait?

The less I/O wait, the better the performance. Unless specific processes are responsible for most of the I/O wait, review the existing disk storage configuration and consider upgrading it, for example to higher-performance disks, a different RAID configuration, or SSDs.

What is the load average in Linux?

Load Average in Linux is a metric that measures the average system load over a certain period. It represents the average number of processes in a runnable or uninterruptible state. It is a useful indicator of system activity and can help in understanding the CPU usage in a Linux system.
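To check it, the load averages for the last 1, 5, and 15 minutes are printed by several standard commands, for example:

Shell

uptime              # load averages are at the end of the line
cat /proc/loadavg   # 1-, 5-, and 15-minute load plus runnable/total task counts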

How to improve disk I/O performance in Linux?

Fixing disk I/O issues
  1. Use separate virtual and physical hard disks.
  2. Install the host operating system onto a different disk than the virtual machines.
  3. Optimize hard drives by implementing disk partitioning in the guest and host OS.
  4. Update RAID type as per the application workload to see faster application performance.

How to check high IOPS in Linux?

How to Calculate IOPS in Linux?
  1. Start the terminal window.
  2. For a complete list of all the disks connected to the server, type fdisk -l .
  3. Make a note of the disk's name so we can check it. ...
  4. To begin tracking I/O statistics, type iostat -xd 1 /dev/sda .
  5. To exit the I/O monitor, use Ctrl+C.
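Putting those steps together, a minimal sketch (the device name /dev/sda is only an example; substitute the disk you identified with fdisk -l) could be:

Shell

fdisk -l                 # identify the disk device to monitor
iostat -xd 1 /dev/sda    # the r/s and w/s columns together give the current IOPS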

What causes I/O delay?

Application Bottlenecks

Applications that are I/O-heavy often cause bottlenecks. Besides often causing the problem, I/O-intensive applications are also more sensitive to storage latency issues. When a large user base tries to access these applications, slowdowns tend to occur.

What is the CPU wait time?

CPU Wait, also known as I/O wait or wait time, refers to the time a CPU spends waiting for data to be loaded or saved to auxiliary storage like a hard disk or SSD. During this period, the processor is idle because it's unable to perform tasks until the required data transfer is complete.

What is wa in the top command?

wa is the percent of wait time (if high, CPU is waiting for I/O access). hi is the percent of time managing hardware interrupts. si is the percent of time managing software interrupts. st is the percent of virtual CPU time waiting for access to physical CPU.
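You can see these fields without the interactive display by running top in batch mode, for example:

Shell

# One batch-mode sample; the CPU summary line shows us, sy, ni, id, wa, hi, si, and st
top -b -n 1 | grep 'Cpu(s)'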

What is a normal CPU level?

If you're browsing the web or using standard programs like Microsoft Office, normal CPU usage is between 10% and 30%. Gaming can push your CPU to between 50% and 90%, depending on whether you have a powerful gaming PC and the latest GTA 5 mods.

What is IOWait in the sar command?

In other words, IOWait is the amount of CPU time that is wasted waiting on I/O operations to complete. For applications that run in the background and are not time-sensitive, low to moderate amounts of IOWait can be acceptable.
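The %iowait column appears in sar's CPU utilization report; assuming sysstat is installed, a quick sample looks like this:

Shell

# Three 5-second samples of CPU utilization, including the %iowait column
sar -u 5 3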

How to check disk latency in Linux?

Disk latency is the time that it takes to complete a single I/O operation on a block device. You can see the disk latency in Windows Server Performance Monitor and by using iostat -xd 1 on Linux systems. Stats are displayed and monitored per disk. In general, the disk latency will vary by type of disk.
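On Linux, the latency columns to watch in iostat's extended output are the await values (older sysstat versions show a single await column instead of r_await/w_await):

Shell

# Extended per-device statistics every second; r_await/w_await report average I/O latency in milliseconds
iostat -xd 1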

What is causing high load on a Linux server?

The lower the load, the better your server performance. Several factors can lead to high "load"; the usual culprits are CPU consumption, disk I/O, and memory (more specifically, swapping triggered by low available memory).

Why is CPU utilization high in Linux?

Autostart programs are applications that are launched automatically when booting the operating system and they continue to run in the background. Too many background processes running simultaneously on a computer consume CPU resources and unnecessarily cause high CPU usage.
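To see which processes are responsible, you can sort the process list by CPU usage, for example:

Shell

# Top five CPU consumers; use --sort=-%mem to sort by memory instead
ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head -6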

What is the reason for high memory utilization in Linux?

Many applications are implemented in Java, and incorrect implementation or configuration can lead to high memory usage on the server. The two most common causes are misconfigured caching and the session-caching anti-pattern.

How to reduce disk space usage in Linux?

How to Free Up Disk Space on Linux
  1. Step 1 – Work out What Is using the Disk Space. ...
  2. Step 2 – Clean Package Cache (Debian / Ubuntu) ...
  3. Step 3 – Clear down old Linux Kernel to Free Up Disk Space on Linux. ...
  4. Step 4 – Truncate Log Files to Free Up Disk Space on Linux. ...
  5. Step 4.1 – Find the Biggest Five Files.
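For step 4.1, one way to find the biggest files is shown below (the path /var is only an example; point it at the filesystem you are investigating):

Shell

# Five largest files and directories under /var
du -ah /var 2>/dev/null | sort -rh | head -n 5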
