How to read big/large files in Linux

For example, to output lines 888 to 988:

$ sed -n '888,988p' yourFile.txt


$ awk 'FNR>=888 && FNR<=988' yourFile.txt
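The same range can also be extracted by combining tail and head, which avoids sed's per-line address matching; a sketch using a generated sample file in place of yourFile.txt:

```shell
# Create a sample file with 2000 numbered lines (stands in for yourFile.txt)
seq 1 2000 > yourFile.txt

# Print lines 888 to 988: skip to line 888, then keep the next 101 lines
# (988 - 888 + 1 = 101)
tail -n +888 yourFile.txt | head -n 101
```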

To output from line 888 to the end of the file:

$ awk 'FNR>=888' yourFile.txt
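An equivalent with tail, which prints from a given line number to the end of the file (again on a generated stand-in for yourFile.txt):

```shell
# Sample file standing in for yourFile.txt
seq 1 1000 > yourFile.txt

# Print from line 888 through the end of the file
tail -n +888 yourFile.txt
```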

You can use the split command:


The default behavior of split is to generate fixed-size output files of 1000 lines each. The files are named by appending aa, ab, ac, and so on to the output filename. If no output filename is given, the default prefix x is used, producing xaa, xab, and so on. When a hyphen (-) is used instead of an input filename, data is read from standard input.
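The defaults can be seen on a small generated file (sample.txt is an invented name for illustration):

```shell
# Make a 2500-line sample file and split it with the defaults
seq 1 2500 > sample.txt
split sample.txt      # creates xaa (1000 lines), xab (1000 lines), xac (500 lines)
wc -l xaa xab xac
```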

To split filename into 50 MB parts named partaa, partab, partac, and so on:

split -b50m filename part
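A scaled-down sketch of the same byte-based split, using 1 KB chunks instead of 50 MB so it runs instantly:

```shell
# 3000-byte sample file standing in for filename
head -c 3000 /dev/zero > filename

# Split into 1 KB chunks named partaa, partab, partac
split -b 1k filename part
wc -c partaa partab partac
```

Note that -b splits at byte boundaries, so a text line can be cut in half between two parts; this does not matter if the parts will be concatenated back together.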

For example, to output the last 1/256th of a large mysqld.log:

split -n 256/256 mysqld.log > ~/mysqld.log
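With GNU split, -n K/N divides the file into N chunks by byte count and writes only chunk K to stdout; since byte chunks can start mid-line, l/K/N can be used instead to split on line boundaries. A small-scale sketch with 4 chunks on a generated log:

```shell
# Sample log standing in for a large mysqld.log
seq 1 1000 > mysqld.log

# Last quarter of the file by bytes: the first line may be a partial line
split -n 4/4 mysqld.log | head -n 2

# Last quarter on line boundaries (GNU split)
split -n l/4/4 mysqld.log | head -n 2
```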

To further prune unwanted, frequently repeated lines:

split -n 256/256 mysqld.log | sed '/Please run mysql_upgrade/d;/communication packets/d;/error connecting to master/d;/The settings might not be optimal/d' > ~/mysqld.log
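The sed '/pattern/d' filter in that pipeline simply deletes every line matching the pattern; a minimal sketch on a made-up sample.log:

```shell
# Two lines to keep, one noisy line to drop
printf 'keep me\nPlease run mysql_upgrade to fix this\nkeep me too\n' > sample.log

# Delete every line containing the pattern
sed '/Please run mysql_upgrade/d' sample.log
```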

To join the files back together again, use the cat command:

cat xaa xab xac > filename


cat xa[a-c] > filename

or even

cat xa? > filename
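The round trip can be verified with cmp (or a checksum); a sketch using invented file names original.txt and rejoined.txt:

```shell
# Split, rejoin with cat, then verify the result is byte-identical
seq 1 3000 > original.txt
split original.txt             # xaa, xab, xac (1000 lines each by default)
cat xa? > rejoined.txt
cmp original.txt rejoined.txt && echo "files match"
```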

You can also use grep to search a large file for a pattern directly, instead of paging through it.
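For example, grep can jump straight to a pattern: -n reports the line number and -m 1 stops after the first match, so the rest of the file is not scanned. A sketch on a generated big.log:

```shell
# Generate a 100000-line sample log
seq 1 100000 | sed 's/^/line /' > big.log

# Find the first occurrence; -n prints the line number,
# -m 1 stops as soon as one match is found
grep -n -m 1 'line 88888$' big.log
```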

If you are a Vim fan, Vim has a LargeFile plugin for large files. It basically configures Vim not to use a swap file or undo levels when opening big files.
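If the plugin is not installed, a similar effect can be approximated by hand. The option names below are standard Vim, but the exact set the plugin applies may differ; this is only a sketch in the same spirit:

```vim
" Approximate the LargeFile plugin's behavior before opening a big file
set noswapfile       " don't create a swap file
set undolevels=-1    " disable undo history
set nobackup         " don't write backup files
syntax off           " skip syntax highlighting for speed
```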
