Daily Unix Commands

If you are a developer or a sysadmin, you rely on Unix commands to get things done.

This is my daily Unix tools list. I am putting it together for myself as a future reference; if it helps anyone else, I will be happy 🙂

Screen

I use ‘screen’ to keep my sessions alive on remote cloud machines. If you are new to screen, please check the docs. A typical workflow is sketched after the list below.

List the screen sessions on the current machine: 'screen -ls'
Resume a session: 'screen -r screen_name' or 'screen -r pid'
Create a new session: 'screen -S screen_name'
Detach from the current session: 'ctrl+a d'
Start copy mode: 'ctrl+a ['
Kill the current window: 'ctrl+a k'
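Here is that workflow sketched out (the session name 'build' is a placeholder):

screen -S build   # start a named session and run a long job in it
                  # detach with ctrl+a d, then log out safely
screen -ls        # later: list the sessions on this machine
screen -r build   # reattach to the running session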

Screen cheat sheet 1 and screen cheat sheet 2

File Compression

Using zip and unzip

The zip command looks like this:

zip backup.zip filename1 filename2
zip -r backup.zip dir_name

The unzip command looks like this:

unzip backup.zip
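Two more unzip options I find handy (the directory path here is a placeholder): -l lists the archive contents without extracting, and -d extracts into a specific directory.

unzip -l backup.zip
unzip backup.zip -d /dir_name/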

tar, gzip and bzip2

tar cvf backup.tar filename1 filename2
tar xvf backup.tar

To extract into a specific directory:

tar xvf backup.tar -C /dir_name/

To use gzip compression, add -z; for bzip2 compression, add -j, like this.

For compression:

tar -zcvf backup.tar.gz filename1
tar -jcvf backup.tbz2 filename

For decompression:

tar -zxvf backup.tar.gz
tar -jxvf backup.tbz2
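Before extracting, it is often worth listing what is inside an archive; -t lists the contents instead of extracting them:

tar -tzvf backup.tar.gz
tar -tjvf backup.tbz2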

Unix file compression cheat sheet

And if you want to learn more about these compression techniques, please check this.

Vim basics

I use vim for editing files on Unix systems (especially on remote machines). These are the commands I use:

Basic Navigation:

h->left
j->down
k->up
l->right
gg->first line of the file
G->last line of the file
$->end of the line

Copy and Paste:

yy->copy the current line
p->paste the copied line
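For find and replace inside vim I use the substitute command (a sketch; the patterns are placeholders, and the trailing g replaces every occurrence on each line):

:%s/search_string/replace_string/g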

Check the Vim cheat sheet.

Sed

I rarely use the ‘sed’ command, and only for simple find and replace of strings (it is really handy when you have to do the same find and replace across multiple files). The syntax is like below:

sed 's/search_string/replace_string/g' old_file.txt > new_file.txt
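For the multi-file case, a sketch assuming GNU sed: the -i flag edits the files in place (on BSD/macOS sed, -i needs a backup suffix argument, e.g. -i ''):

sed -i 's/search_string/replace_string/g' *.txt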

Check the sed reference.

Unix sort & uniq

It was around 2 AM and I was working like a caveman, but it is hard to escape bedtime 😦

Suddenly I found that I had set up a wrong cron job on a cloud machine and it had generated duplicate results. I had to make a report from the cron output, and every line had to be unique. The file was around 1.2 GB.

It was a JSON file with several thousand lines, many of them redundant. I had to remove the redundant values and produce a file in which every line is unique.

I started to write a Python script to do that: one that takes a file and creates another file containing only the unique lines from the input. Halfway through, being too tired, I thought I should search for a Unix command that does this job. And I found exactly what I needed 🙂

sort filename.txt | uniq

Or

sort -u filename.txt

If the input file contains:

Line 1
Line 2
Line 2
Line 3
Line 1
Line 3

the command generates:

Line 1
Line 2
Line 3

And I just redirected the output of the command into a new file, like below:

sort filename.txt | uniq > result.txt

Explanation of the command:

The ‘sort’ command sorts all the lines (alphabetically by default), and the ‘uniq’ command can eliminate or count duplicate lines in a pre-sorted file.
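The counting side is handy too. For example, to see how many times each line appears, ordered by frequency (uniq -c prefixes each line with its count, and sort -rn sorts numerically, descending):

sort filename.txt | uniq -c | sort -rn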

You can also use sort and uniq in other situations; for details check the following links:

Sort and Uniq

These two utility commands will help me sleep earlier 🙂