uniq scans a file line by line and prints only unique lines, i.e. it removes duplicate lines. Because uniq only removes duplicates that are adjacent, the file should be sorted first so that identical lines sit next to each other.

Example with an unsorted input file:

$ cat inputfile
line 1
line 2
line 2
line 3
line 1
line 1
line 1

Now running uniq:

$ uniq inputfile
line 1
line 2
line 3
line 1

Note that the trailing "line 1" entries were collapsed into one but not merged with the first "line 1", because they were not adjacent to it. Compare the above output with the sorted one:

$ sort inputfile | uniq
line 1
line 2
line 3

uniq has many options, such as counting the number of occurrences in each run of duplicate lines:

$ uniq -c inputfile
      1 line 1
      2 line 2
      1 line 3
      3 line 1

Where is uniq useful? Suppose a database table somehow has duplicate rows (of course, a table without a primary key). To delete the duplicates you would otherwise have to write a long stored procedure with cursors and 'for' or 'while' loops. The easiest approach is to dump the data to a plain-text file, sort it, run uniq, and reload the result.
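The dump-sort-reload workflow above can be sketched in the shell. This is a minimal sketch: the file names are hypothetical, and a plain-text file stands in for the exported table rows.

```shell
# Hypothetical dump file standing in for exported table rows.
printf 'line 1\nline 2\nline 2\nline 3\nline 1\nline 1\nline 1\n' > dump.txt

# sort brings identical rows together; uniq then drops the duplicates.
sort dump.txt | uniq > deduped.txt

# sort can also do both steps in one pass with the -u flag.
sort -u dump.txt > deduped2.txt

cat deduped.txt
```

The deduplicated file can then be loaded back into the table with whatever bulk-load tool the database provides.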