In this post I am going to share a very useful Linux command for splitting a big CSV file into smaller files.
Suppose you have a CSV file that is roughly 800 MB in size with 50 lakh (5 million) rows. If you try to open this file in almost any application, it will make your system hang, and you probably won't be able to open it in any editor at all. So the best idea is to split the file into smaller pieces, either by size or by number of records.

I had exactly this problem: an 800 MB CSV file that I needed to import into a MySQL database after making some changes to it, but I was not able to open it in OpenOffice or MS Office. So I used this fabulous Linux command on my Ubuntu PC, and it took hardly a few seconds to split the file into chunks of 100,000 records each.

Here is the command to split a CSV file by number of records:

split -d -l 100000 file.csv new/file_part_

Here, 100000 is the number of records (lines) per output file; you can change it as per your need.
file.csv is the source file that needs to be split into 100,000-record pieces.
new/file_part_ is the output prefix: the pieces are written into the new/ directory (which must already exist), and the -d flag gives them numeric suffixes, so your new files will be named file_part_00, file_part_01, file_part_02 ..... and so on.
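Note that split does not add a .csv extension to the pieces by default. On Ubuntu the GNU version of split has an --additional-suffix option for that; a minimal sketch, assuming GNU coreutils:

mkdir -p new
split -d -l 100000 --additional-suffix=.csv file.csv new/file_part_

This writes new/file_part_00.csv, new/file_part_01.csv and so on. If you would rather split by size instead of records, split -b 100M file.csv new/file_part_ cuts the file into roughly 100 MB pieces, but -b can break a row in the middle, so -l is the safer choice for CSV data.

One more thing to watch out for: when you split by lines, only the first piece keeps the CSV header row. If your import needs the header in every piece, a small sketch like the following (again assuming GNU split, and using header.csv as a scratch file) copies it in front of each chunk:

head -n 1 file.csv > header.csv        # save the header row
tail -n +2 file.csv | split -d -l 100000 --additional-suffix=.csv - new/file_part_
for f in new/file_part_*.csv; do
    cat header.csv "$f" > "$f.tmp" && mv "$f.tmp" "$f"   # prepend the header to each piece
done

Here tail -n +2 skips the header so only the data rows are split (split reads from standard input when you pass -), and the loop then prepends the saved header to every piece.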

Thanks :)

If you like this post, please don't forget to subscribe to My Public Notebook for more useful stuff.