Monday, February 11, 2008

Basic Commands

bc
A calculator program that handles arbitrary precision (very large) numbers. It is useful for doing any kind of calculation on the command-line. Its use is left as an exercise.
cal [[1-12] 1-9999]
Prints out a nicely formatted calendar of the current month, a specified month, or a specified whole year. Try cal 1 for fun, and cal 9 1752, the month in which eleven days were scrapped to correct the accumulated round-off error of the old calendar.
cat [file ...]
Writes the contents of all the files listed to the screen. cat can join a lot of files together with cat file1 file2 ... > newfile. The file newfile will be an end-on-end concatenation of all the files specified.
clear
Erases all the text in the current terminal.
date
Prints out the current date and time. (The command time, though, does something entirely different.)
df
Stands for disk free and tells you how much free space is left on your system. The available space usually has the units of kilobytes (1024 bytes) (although on some other UNIX systems this will be 512 bytes or 2048 bytes). The right-most column tells the directory (in combination with any directories below that) under which that much space is available.
dircmp
Directory compare. This command compares directories to see if changes have been made between them. You will often want to see where two trees differ (e.g., check for missing files), possibly on different computers. Run man dircmp (that is, dircmp(1)). (This is a System 5 command and is not present on LINUX. You can, however, compare directories with the Midnight Commander, mc).
du
Stands for disk usage and prints out the amount of space occupied by a directory. It recurses into any subdirectories and can print only a summary with du -s directory. Also try du --max-depth=1 /var and du -x / on a system with /usr and /home on separate partitions.
dmesg
Prints a complete log of all messages printed to the screen during the bootup process. This is useful if you blinked when your machine was initializing. These messages might not yet be meaningful, however.
echo
Prints a message to the terminal. Try echo 'hello there' and echo $[10*3+2]. The command echo -e allows interpretation of certain backslash sequences, for example echo -e "\a", which prints a bell, or in other words, beeps the terminal. echo -n suppresses the trailing newline; in other words, it does not cause a wrap to the next line after the text is printed. echo -e -n "\b" prints a backspace character only, which will erase the last character printed.
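A quick sketch of these echo variants (the $((...)) form is the modern equivalent of the older $[...] syntax):

```shell
echo 'hello there'        # prints the literal message
echo $((10*3+2))          # arithmetic expansion: prints 32
echo -n 'no newline '     # the next output appears on the same line
echo -e 'bell: \a'        # -e interprets backslash escapes (bash's builtin echo)
```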
exit
Logs you out.
expr
Calculates the given numerical expression. Most arithmetic operations that you are accustomed to will work. Try expr 5 + 10 '*' 2. Observe how mathematical precedence is obeyed (i.e., the * is worked out before the +).
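A short demonstration of the precedence; expr also understands quoted parentheses for grouping:

```shell
expr 5 + 10 '*' 2          # the * is applied first: 5 + 20 gives 25
expr '(' 5 + 10 ')' '*' 2  # parentheses force the + first: gives 30
```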
file
Prints out the type of data contained in a file. file portrait.jpg will tell you that portrait.jpg contains JPEG image data, JFIF standard. The command file detects an enormous number of file types, across every platform. file works by checking whether the first few bytes of a file match certain tell-tale byte sequences, called magic numbers. Their complete list is stored in /usr/share/magic. [The word ``magic'' under UNIX normally refers to byte sequences or numbers that have a specific meaning or implication. So-called magic numbers are invented for source code, file formats, and file systems.]
free
Prints out available free memory. You will notice two listings: swap space and physical memory. These are contiguous as far as the user is concerned. The swap space is a continuation of your installed memory that exists on disk. It is obviously slow to access but provides the illusion of much more available RAM and avoids the possibility of ever running out of memory (which can be quite fatal).
head [-n lines] file
Prints the first lines lines of a file, or 10 lines if the -n option is not given. (See also tail below.)
hostname [name]
With no options, hostname prints the name of your machine; otherwise it sets the name to name.
kbdrate -r rate -d delay
Changes the repeat rate and delay of your keyboard. Most users will like this rate set with kbdrate -r 32 -d 250, which unfortunately is the fastest the PC can go.
more
Displays a long file by stopping at the end of each page. Run the following: ls -l /bin > bin-ls, and then try more bin-ls. The first command creates a file with the contents of the output of ls. This will be a long file because the directory /bin has a great many entries. The second command views the file. Use the space bar to page through the file. When you get bored, just press Q. You can also try ls -l /bin | more which will do the same thing in one go.
less
The GNU version of more, but with extra features. On your system, the two commands may be the same. With less, you can use the arrow keys to page up and down through the file. You can do searches by pressing /, then typing in a word to search for and pressing Enter (press ? instead to search backward).

lynx
Opens a URL [URL stands for Uniform Resource Locator--a web address.] at the console. Try lynx http://lwn.net/.
links
Another text-based web browser.
nohup command &
Runs a command in the background, appending any output the command may produce to the file nohup.out in the current directory (or your home directory if that is not writable). nohup has the useful feature that the command will continue to run even after you have logged out. Uses for nohup will become obvious later.
sleep seconds
Pauses for seconds seconds. See also usleep.
sort
Prints a file with lines sorted in alphabetical order. Create a file called telephone with each line containing a short telephone book entry. Then type sort telephone, or sort telephone | less and see what happens. sort takes many interesting options to sort in reverse ( sort -r), to eliminate duplicate entries ( sort -u), to ignore leading whitespace ( sort -b), and so on. See sort(1) for details.
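A minimal sketch of these options (the telephone entries here are invented examples):

```shell
printf 'Smith Joe 555-1234\nAdams Sue 555-9876\nSmith Joe 555-1234\n' > telephone
sort telephone        # alphabetical order: the Adams line comes first
sort -r telephone     # reverse order: the Smith lines come first
sort -u telephone     # the duplicate Smith line appears only once
rm telephone
```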
strings [-n len] file
Writes out a binary file, but strips any unreadable characters. Readable groups of characters are placed on separate lines. If you have a binary file that you think may contain something interesting but looks completely garbled when viewed normally, use strings to sift out the interesting stuff: try less /bin/cp and then try strings /bin/cp. By default strings does not print sequences shorter than 4 characters. The -n option can alter this limit.
split file
Splits a file into many separate files. This might have been used when a file was too big to be copied onto a floppy disk and needed to be split into, say, 360-KB pieces. Its sister, csplit, can split files along specified lines of text within the file. The commands are seldom used on their own but are very useful within programs that manipulate text.
tac [file ...]
Writes the contents of all the files listed to the screen, reversing the order of the lines--that is, printing the last line of the file first. tac is cat backwards and behaves similarly.
tail [-f] [-n lines] file
Prints the last lines lines of a file, or 10 lines if the -n option is not given. The -f option means to watch the file for lines being appended to the end of it. (See also head above.)
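The head/tail pair can be sketched together; seq here just generates a numbered test file:

```shell
seq 1 100 > numbers
head -n 3 numbers     # first three lines: 1, 2, 3
tail -n 2 numbers     # last two lines: 99, 100
tail -n +99 numbers   # from line 99 to the end (a GNU tail extension)
rm numbers
```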
uname
Prints the name of the UNIX operating system you are currently using. In this case, LINUX.
uniq
Prints a file with duplicate lines deleted. The file must first be sorted.
usleep microseconds
Pauses for microseconds microseconds (1/1,000,000 of a second).
wc [-c] [-w] [-l]
Counts the number of bytes (with -c for character), or words (with -w), or lines (with -l) in a file.
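For example, on a two-line file (the counts shown are for this exact text):

```shell
printf 'hello world\nsecond line\n' > sample.txt
wc -l sample.txt      # 2 lines
wc -w sample.txt      # 4 words
wc -c sample.txt      # 24 bytes, newlines included
rm sample.txt
```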
whatis command
Gives the first line of the man page corresponding to command, unless no such page exists, in which case it prints ``nothing appropriate''.
whoami
Prints your login name.

Compressed Files

Files typically contain a lot of data that one can imagine might be represented with a smaller number of bytes. Take for example the letter you typed out. The word ``the'' was probably repeated many times. You were probably also using lowercase letters most of the time. The file was by far not a completely random set of bytes, and it repeatedly used spaces as well as using some letters more than others. [English text in fact contains, on average, only about 1.3 useful bits (there are eight bits in a byte) of data per byte.] Because of this the file can be compressed to take up less space. Compression involves representing the same data by using a smaller number of bytes, in such a way that the original data can be reconstructed exactly. Compression usually involves finding patterns in the data. The command to compress a file is gzip filename, which stands for GNU zip. Run gzip on a file in your home directory and then run ls to see what happened. Now, use more to view the compressed file. To uncompress the file use gzip -d filename. Now, use more to view the file again. Many files on the system are stored in compressed format. For example, man pages are often stored compressed and are uncompressed automatically when you read them.
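A round trip can be sketched like this; the highly repetitive test file compresses dramatically, and the -d step restores it byte for byte:

```shell
yes 'the quick brown fox' | head -n 1000 > letter.txt   # ~20 KB of repetition
gzip letter.txt            # replaces letter.txt with the smaller letter.txt.gz
ls -l letter.txt.gz        # note how few bytes the patterns boiled down to
gzip -d letter.txt.gz      # decompresses back to the original letter.txt
rm letter.txt
```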

You previously used the command cat to view a file. You can use the command zcat to do the same thing with a compressed file. Gzip a file and then type zcat filename.gz. You will see that the contents of the file are written to the screen. Generally, when commands and files have a z in them they have something to do with compression--the letter z stands for zip. You can use zcat filename.gz | less to view a compressed file proper. You can also use the command zless filename.gz, which does the same as zcat filename.gz | less. (Note that your less may actually have the functionality of zless combined.)

A new addition to the arsenal is bzip2. This is a compression program very much like gzip, except that it is slower and compresses 20%-30% better. It is useful for compressing files that will be downloaded from the Internet (to reduce the transfer volume). Files that are compressed with bzip2 have an extension .bz2. Note that the improvement in compression depends very much on the type of data being compressed. Sometimes there will be negligible size reduction at the expense of a huge speed penalty, while occasionally it is well worth it. Files that are frequently compressed and uncompressed should never use bzip2.

4.14 Searching for Files

You can use the command find to search for files. Change to the root directory, and enter find. It will spew out all the files it can see by recursively descending [Goes into each subdirectory and all its subdirectories, and repeats the command find. ] into all subdirectories. In other words, find, when executed from the root directory, prints all the files on the system. find will work for a long time if you enter it as you have--press Ctrl-C to stop it.

Now change back to your home directory and type find again. You will see all your personal files. You can specify a number of options to find to look for specific files.

find -type d
Shows only directories and not the files they contain.
find -type f
Shows only files and not the directories that contain them, even though it will still descend into all directories.
find -name pattern
Finds only files that have the name pattern. For instance, find -name '*.c' will find all files that end in a .c extension ( find -name *.c without the quote characters will not work. You will see why later). find -name Mary_Jones.letter will find the file with the name Mary_Jones.letter.
find -size [+|-]size
Finds only files that have a size larger (for +) or smaller (for -) than size kilobytes, or the same as size kilobytes if the sign is not specified.
find directory [directory ...]
Starts find in each of the specified directories.
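A safe sketch on a throwaway directory (all names here are invented for the demonstration):

```shell
mkdir -p demo/sub
touch demo/a.c demo/sub/b.c demo/notes.txt
find demo -name '*.c'   # prints demo/a.c and demo/sub/b.c
find demo -type d       # prints demo and demo/sub
rm -r demo
```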

There are many more options for doing just about any type of search for a file. See find(1) for more details (that is, run man 1 find). Look also at the -exec option which causes find to execute a command for each file it finds, for example:


find /usr -type f -exec ls '-al' '{}' ';'

find has the deficiency of actively reading directories to find files. This process is slow, especially when you start from the root directory. An alternative command is locate pattern. This searches through a previously created database of all the files on the system and hence finds files instantaneously. Its counterpart updatedb updates the database of files used by locate. On some systems, updatedb runs automatically every day at 04h00.

Try these ( updatedb will take several minutes):

updatedb

locate rpm
locate deb
locate passwd
locate HOWTO
locate README


Searching Within Files

Very often you will want to search through a number of files to find a particular word or phrase, for example, when a number of files contain lists of telephone numbers with people's names and addresses. The command grep does a line-by-line search through a file and prints only those lines that contain a word that you have specified. grep has the command summary:


grep [options] pattern [file ...]

[The words word, string, or pattern are used synonymously in this context, basically meaning a short length of letters and-or numbers that you are trying to find matches for. A pattern can also be a string with kinds of wildcards in it that match different characters, as we shall see later.]

Run grep for the word ``the'' to display all lines containing it: grep 'the' Mary_Jones.letter. Now try grep 'the' *.letter.

grep -n
Shows the line number in the file where the word was found.
grep -num
Prints out num of the lines that came before and after each of the lines in which the word was found.
grep -A num
Prints out num of the lines that came After each of the lines in which the word was found.
grep -B num
Prints out num of the lines that came Before each of the lines in which the word was found.
grep -v
prints out only those lines that do not contain the word you are searching for. [ You may think that the -v option is no longer doing the same kind of thing that grep is advertised to do: i.e., searching for strings. In fact, UNIX commands often suffer from this--they have such versatility that their functionality often overlaps with that of other commands. One actually never stops learning new and nifty ways of doing things hidden in the dark corners of man pages.]
grep -i
does the same as an ordinary grep but is case insensitive.
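A sketch of the common options on an invented telephone file:

```shell
printf 'Mary Jones 555-1234\nPeter Smith 555-2345\nmary brown 555-3456\n' > phones
grep -i 'mary' phones   # case insensitive: finds both Mary Jones and mary brown
grep -v 'mary' phones   # lines NOT containing a lowercase mary
grep -n 'Smith' phones  # shows the matching line together with its line number
rm phones
```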
Regular Expressions

A regular expression is a sequence of characters that forms a template used to search for strings [Words, phrases, or just about any sequence of characters. ] within text. In other words, it is a search pattern. To get an idea of when you would need to do this, consider the example of having a list of names and telephone numbers. If you want to find a telephone number that contains a 3 in the second place and ends with an 8, regular expressions provide a way of doing that kind of search. Or consider the case where you would like to send an email to fifty people, replacing the word after the ``Dear'' with their own name to make the letter more personal. Regular expressions allow for this type of searching and replacing.

Overview

Many utilities use regular expressions to give them greater power when manipulating text. The grep command is an example. Previously you used the grep command to locate only simple letter sequences in text. Now we will use it to search for regular expressions.

In the previous chapter you learned that the ? character can be used to signify that any character can take its place. This is called a wildcard and works with file names. With regular expressions, the wildcard to use is the . character. So, you can use the command grep '.3....8' file to find the seven-character telephone number that you are looking for in the above example.

Regular expressions are used for line-by-line searches. For instance, if the seven characters were spread over two lines (i.e., they had a line break in the middle), then grep wouldn't find them. In general, a program that uses regular expressions will consider searches one line at a time.

Here are some regular expression examples that will teach you the regular expression basics. We use the grep command to show the use of regular expressions (remember that the -w option matches whole words only). Here the expression itself is enclosed in ' quotes for reasons that are explained later.

grep -w 't[a-i]e'
Matches the words tee, the, and tie. The brackets have a special significance. They mean to match one character that can be anything from a to i.
grep -w 't[i-z]e'
Matches the words tie and toe.
grep -w 'cr[a-m]*t'
Matches the words craft, credit, and cricket. The * means to match any number of the previous character, which in this case is any character from a through m.
grep -w 'kr.*n'
Matches the words kremlin and krypton, because the . matches any character and the * means to match the dot any number of times.
egrep -w '(th|sh).*rt'
Matches the words shirt, short, and thwart. The | means to match either the th or the sh. egrep is just like grep but supports extended regular expressions that allow for the | feature. [ The | character often denotes a logical OR, meaning that either the thing on the left or the right of the | is applicable. This is true of many programming languages. ] Note how the square brackets mean one-of-several-characters and the round brackets with |'s mean one-of-several-words.
grep -w 'thr[aeiou]*t'
Matches the words threat and throat. As you can see, a list of possible characters can be placed inside the square brackets.
grep -w 'thr[^a-f]*t'
Matches the words throughput and thrust. The ^ after the first bracket means to match any character except the characters listed. For example, the word thrift is not matched because it contains an f.

The above regular expressions all match whole words (because of the -w option). If the -w option was not present, they might match parts of words, resulting in a far greater number of matches. Also note that although the * means to match any number of characters, it also will match no characters as well; for example: t[a-i]*e could actually match the letter sequence te, that is, a t and an e with zero characters between them.

Usually, you will use regular expressions to search for whole lines that match, and sometimes you would like to match a line that begins or ends with a certain string. The ^ character specifies the beginning of a line, and the $ character the end of the line. For example, ^The matches all lines that start with a The, and hack$ matches all lines that end with hack, and '^ *The.*hack *$' matches all lines that begin with The and end with hack, even if there is whitespace at the beginning or end of the line.
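These anchors can be sketched with grep (the sample lines are invented):

```shell
printf 'The cat sat\nA neat hack\nThe best hack\n' > lines.txt
grep '^The' lines.txt          # lines beginning with The
grep 'hack$' lines.txt         # lines ending with hack
grep '^The.*hack$' lines.txt   # only: The best hack
rm lines.txt
```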

Because regular expressions use certain characters in a special way (these are . \ [ ] * + ?), these characters cannot be used to match characters. This restriction severely limits you from trying to match, say, file names, which often use the . character. To match a . you can use the sequence \. which forces interpretation as an actual . and not as a wildcard. Hence, the regular expression myfile.txt might match the letter sequence myfileqtxt or myfile.txt, but the regular expression myfile\.txt will match only myfile.txt.

You can specify most special characters by adding a \ character before them, for example, use \[ for an actual [, a \$ for an actual $, a \\ for an actual \, \+ for an actual +, and \? for an actual ?. ( ? and + are explained below.)
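The myfile example from the previous paragraph can be verified directly:

```shell
printf 'myfileqtxt\nmyfile.txt\n' > names
grep 'myfile.txt' names    # the unescaped . matches both lines
grep 'myfile\.txt' names   # the escaped \. matches only myfile.txt
rm names
```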

The fgrep Command

fgrep is an alternative to grep. The difference is that while grep (the more commonly used command) matches regular expressions, fgrep matches literal strings. In other words you can use fgrep when you would like to search for an ordinary string that is not a regular expression, instead of preceding special characters with \.

5.3 Regular Expression \{ \} Notation

x* matches zero to infinite instances of a character x. You can specify other ranges of numbers of characters to be matched with, for example, x\{3,5\}, which will match at least three but not more than five x's, that is xxx, xxxx, or xxxxx.

x\{4\} can then be used to match 4 x's exactly: no more and no less. x\{7,\} will match seven or more x's--the upper limit is omitted to mean that there is no maximum number of x's.

As in all the examples above, the x can be a range of characters (like [a-k]) just as well as a single character.

grep -w 'th[a-t]\{2,3\}t'
Matches the words theft, thirst, threat, thrift, and throat.
grep -w 'th[a-t]\{4,5\}t'
Matches the words theorist, thicket, and thinnest.
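The counting notation can be sketched on lines of x's; grep's -x option restricts the match to the whole line, so only lines of the right length survive:

```shell
printf 'xx\nxxx\nxxxx\nxxxxx\nxxxxxx\n' > xs
grep -x 'x\{3,5\}' xs    # matches xxx, xxxx, and xxxxx only
grep -x 'x\{4\}' xs      # matches exactly xxxx
rm xs
```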


Extended Regular Expression + ? \< \> ( ) | Notation with egrep

An enhanced version of regular expressions allows for a few more useful features. Where these conflict with existing notation, they are only available through the egrep command.

+
is analogous to \{1,\}. It does the same as * but matches one or more characters instead of zero or more characters.
?
is analogous to \{0,1\}. It matches zero or one of the preceding character.
\< \>
can surround a string to match only whole words.
( )
can surround several strings, separated by |. This notation will match any of these strings. ( egrep only.)
\( \)
can surround several strings, separated by \|. This notation will match any of these strings. ( grep only.)

The following examples should make the last two notations clearer.

grep 'trot'
Matches the words electrotherapist, betroth, and so on, but
grep '\<trot\>'
matches only the word trot.
egrep -w '(this|that|c[aeiou]*t)'
Matches the words this, that, cot, coat, cat, and cut.
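The alternation example can be checked against a small word list (chart is included as a deliberate non-match):

```shell
printf 'this\nthat\ncoat\ncut\nchart\n' > words
egrep -w '(this|that|c[aeiou]*t)' words   # matches every word except chart
rm words
```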

Command Line Shortcuts

The following keys are useful for editing the command-line. Note that UNIX has had a long and twisted evolution from the mainframe, and the Home, End, and other keys may not work properly. The following key bindings are, however, common throughout many LINUX applications:

Ctrl-a
Move to the beginning of the line (Home).
Ctrl-e
Move to the end of the line (End).
Ctrl-h
Erase backward (backspace).
Ctrl-d
Erase forward (Delete).
Ctrl-f
Move forward one character (Right Arrow).
Ctrl-b
Move backward one character (Left Arrow).
Alt-f
Move forward one word.
Alt-b
Move backward one word.
Alt-Ctrl-f
Erase forward one word.
Alt-Ctrl-b
Erase backward one word.
Ctrl-p
Previous command (up arrow).
Ctrl-n
Next command (down arrow).

Your command-line keeps a history of all the commands you have typed in. Ctrl-p and Ctrl-n will cycle through previous commands entered. New users seem to gain tremendous satisfaction from typing in lengthy commands over and over. Never type in anything more than once--use your command history instead.

Ctrl-s suspends output to the current terminal, causing the keyboard to appear to stop responding. Ctrl-q reverses this condition.

Ctrl-r activates a search on your command history. Pressing Ctrl-r again in the middle of a search finds the next match going backward, whereas Ctrl-s searches forward through the matches (although some distributions have this bound to suspend, as described above).

The Tab key is tremendously useful for saving keystrokes. Typing a partial directory name, file name, or command, and then pressing Tab once or twice in sequence completes the word for you without your having to type it all in full.

You can make Tab and other keys stop beeping in the irritating way that they do by editing the file /etc/inputrc and adding the line

set bell-style none

Yum Configuration (Yummy feast)

Yum is a software installation tool for Red Hat Linux and Fedora Linux. It is a complete software management system. Another option is the up2date utility. yum is designed to be used over a network or the Internet; it does not install packages from CD-ROM. If you are using Fedora, you don't have to install it: it is part of Fedora itself.

If you don't have yum, download it from the project home page http://linux.duke.edu/projects/yum/download.ptml and then install it:

rpm -ivh yum*

Step # 1: Configure yum

You need to edit /etc/yum.conf and modify/add following code to it:

vi /etc/yum.conf

Append or edit code as follows:
Code:

[base]
name=Fedora Core $releasever - $basearch - Base
baseurl=http://mirrors.kernel.org/fedora/core/$releasever/$basearch/os
#baseurl=http://apt.sw.be/fedora/$releasever/en/$basearch/dag

Save the file

Install GPG signature key with rpm command:
Code:

# rpm --import http://dag.wieers.com/packages/RPM-GPG-KEY.dag.txt

and import any other required keys (if any) using the same command

Step # 2 Update your package list:
Code:

# yum check-update

Step # 3: Start using yum

Install a new package called foo
Code:

# yum install foo

To update packages
Code:

# yum update

To update a single package called bar
Code:

# yum update bar

To remove a package called telnet
Code:

# yum remove telnet

To list all packages
Code:

# yum list installed

You can search using grep command
Code:

# yum list installed | grep samba

Display information on a package called foo
Code:

# yum info foo

To display list of packages for which updates are available
Code:

# yum list updates

--------------------------------------
/etc/yum.repos.d
---------------------------------------

[dag]
name=Dag RPM Repository for Red Hat Enterprise Linux
baseurl=http://www.city-fan.org/ftp/contrib/yum-repo/rhel4/
#baseurl=http://www.city-fan.org/ftp/contrib/yum-repo/rhel4/i386/
gpgcheck=0

# import the repository GPG key separately (a shell command, not part of this file):
# rpm --import http://dag.wieers.com/rpm/packages/RPM-GPG-KEY.dag.txt

---------------------------------------
/etc/yum.conf
----------------------------------------

[main]
cachedir=/var/cache/yum
debuglevel=2
logfile=/var/log/yum.log
pkgpolicy=newest
distroverpkg=redhat-release
tolerant=1
exactarch=1
retries=20
obsoletes=1
gpgcheck=0

# PUT YOUR REPOS HERE OR IN separate files named file.repo
# in /etc/yum.repos.d

How to Prevent a DDoS Attack

All web servers connected to the Internet are subject to DoS (Denial of Service) or DDoS (Distributed Denial of Service) attacks of one kind or another, in which attackers consistently and persistently launch large numbers of connections at the server, in the advanced stage distributed across multiple IP addresses or sources, in the hope of bringing the server down or using up all network bandwidth and system resources, so that web pages can no longer be served to legitimate visitors.

You can detect a DDoS attack using the following command:

netstat -anp|grep tcp|awk '{print $5}'| cut -d : -f1|sort|uniq -c|sort -n

It shows the number of connections from each IP address to the server.
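The counting idiom at the end of that pipeline can be sketched on some invented addresses (netstat itself needs a live system, but the cut/sort/uniq part works on any input):

```shell
# strip the port, then count occurrences of each address, busiest last
printf '1.2.3.4:80\n1.2.3.4:443\n5.6.7.8:80\n' |
    cut -d : -f1 | sort | uniq -c | sort -n
# the last line is the busiest address: "2 1.2.3.4"
```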

There are plenty of ways to prevent, stop, fight, and kill off a DDoS attack, such as using a firewall. A low-cost, and probably free, method is to use a software-based firewall or filtering service. (D)DoS-Deflate is a free, open source Unix/Linux script by MediaLayer that automatically mitigates (D)DoS attacks. It claims to be the best free, open source solution to protect servers against some of the most excruciating DDoS attacks.

The (D)DoS-Deflate script basically monitors and tracks the IP addresses that are sending and establishing large numbers of TCP network connections (such as mass emailing, DoS pings, or HTTP requests) by using the netstat command; such connection floods are the symptom of a denial of service attack. When the number of connections from a single node exceeds a certain preset limit, the script automatically uses APF or iptables to ban and block that IP. Depending on the configuration, banned IP addresses are later unbanned again (automatic unbanning works only with APF v0.96 or better).

Installation and setup of (D)DoS-Deflate on the server is extremely easy. Simply log in as root over SSH and run the following commands one by one:

wget http://www.inetbase.com/scripts/ddos/install.sh
chmod 0700 install.sh
./install.sh

To uninstall the (D)DOS-Deflate, run the following commands one by one instead:

wget http://www.inetbase.com/scripts/ddos/uninstall.ddos
chmod 0700 uninstall.ddos
./uninstall.ddos

The configuration file for (D)DOS-Deflate is ddos.conf, and by default it will have the following values:

FREQ=1
NO_OF_CONNECTIONS=50
APF_BAN=1
KILL=1
EMAIL_TO="root"
BAN_PERIOD=600

Users can change any of these settings to suit the needs or usage patterns of different servers. It's also possible to whitelist and permanently unblock (never ban) IP addresses by listing them in the /usr/local/ddos/ignore.ip.list file. If you plan to run the script interactively, you can set KILL=0 so that any bad IPs detected are not banned.

Wednesday, February 6, 2008

Iptables Introduction

What is iptables?

iptables is the userspace command line program used to configure the Linux 2.4.x and 2.6.x IPv4 packet filtering ruleset. It is targeted towards system administrators.

Since Network Address Translation is also configured from the packet filter ruleset, iptables is used for this, too.

The iptables package also includes ip6tables. ip6tables is used for configuring the IPv6 packet filter.
Dependencies

iptables requires a kernel that features the ip_tables packet filter. This includes all 2.4.x and 2.6.x kernel releases.
Main Features

* listing the contents of the packet filter ruleset
* adding/removing/modifying rules in the packet filter ruleset
* listing/zeroing per-rule counters of the packet filter ruleset

Rules

* If you create a set of rules in iptables during one session and then reboot your computer, all the rules that were added will be lost.

* If you want the rules to persist, you should put the commands to add them into a startup script.

* To check what rules are already implemented:

o Type into a terminal window:

ComputerName:~# iptables -L
o A list of the present rules will appear on the screen under a variety of headings.

Rule Components

* There are three basic components to each rule:

1. Where to apply the rule during the process of sending and receiving network traffic (packets). There are three different places, or chains:

1. INPUT: Applies rules to packets being received from the network.
2. OUTPUT: Applies rules to packets being sent from your computer.
3. FORWARD: Applies rules to packets that your machine is forwarding to others on the network.

2. What type of effect the rule has, regardless of where it is applied. The 3 effects are:

1. ACCEPT: Accepts a given packet and allows it to pass either in or out.
2. REJECT: Does not allow a packet to pass but sends an error message back to its sender.
3. DROP: Completely ignores a packet without sending an error message to its sender.

Each chain also has a default policy (usually ACCEPT) that is applied if a specific packet does not match any rules.

3. The location packets are coming from or going to, usually called the source or destination. This can be written as either an IP address or a DNS name (such as www.yahoo.com).

* Each of these three components is used to create a rule through command-line arguments.


Adding a Rule

* To add a rule:

o Use the argument -A Chain_Name to tell iptables to add a rule to the chain Chain_Name.

o Add the source with the option -s source. We can also specify a range of IPs with the '/' character (200.200.200.1/24 specifies 200.200.200.*); this address/prefix notation is known as CIDR (Classless Inter-Domain Routing).

o Specify the desired effect with the -j argument.
For example, if we wanted to block information coming from 200.200.200.1 we would enter:

ComputerName:~# iptables -A INPUT -s 200.200.200.1 -j DROP
o Typing # iptables -L again will now show the new rule under the INPUT chain heading. It should look like this:

Chain INPUT (policy ACCEPT)
target prot opt source destination
DROP all -- 200.200.200.1 anywhere


Removing a Rule

* Removing a rule:

o Use the argument -D Chain_Name Rule_Num, where Rule_Num starts at 1 and counts from the top of that chain's list of rules. To remove our rule, we simply type in (assuming that the new rule is the first in the list):

ComputerName:~# iptables -D INPUT 1

o Now, typing:

# iptables -L

should show that the rule has been deleted.


Advanced Rule Examples:

* There are many other advanced options for these rules, one of the most important of which is the ability to specify what "type" of packets to block by blocking the specific ports on which certain services operate. For example, we could block only telnet packets (packets going to port 23) coming into your computer from 200.200.200.1 by writing the rule:

ComputerName:~# iptables -A INPUT -s 200.200.200.1 -j DROP -p tcp --destination-port telnet

* Other ports can be specified. For a full list of the ports in use on your computer and the name of each service, look in your /etc/services file.
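For example, a service's port can be looked up by matching the first column of /etc/services-style lines with awk (run here against a small inline sample so the snippet is self-contained; on a real system you would point awk at /etc/services itself):

```shell
# Print the port/protocol for a named service from /etc/services-style data.
# Inline sample data; in practice: awk '$1 == "telnet" { print $2 }' /etc/services
port=$(printf '%s\n' \
  'ftp     21/tcp' \
  'ssh     22/tcp' \
  'telnet  23/tcp' \
  'http    80/tcp' | awk '$1 == "telnet" { print $2 }')
echo "$port"    # 23/tcp
```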

* Other common ports to block are:

o HTTP (port 80)
o FTP (port 21)
o SSH (22)

* There are also a wide variety of other command-line arguments, but the simple rules introduced so far already allow for a wide variety of applications.
o If you wanted to block all incoming telnet connections to your computer:

ComputerName:~# iptables -A INPUT -j DROP -p tcp --destination-port telnet

Since there is no defined source, any telnet request to your computer will be blocked.
o If you have two or more network connections, you can specify which connection to apply your rule to, with the -i option for input rules and the -o option for output rules. For example, if we would like to block any incoming tcp packets on your second Ethernet connection (eth1):

ComputerName:~# iptables -A INPUT -j DROP -p tcp -i eth1

This rule is not very useful since all incoming ports are blocked. We would not hear any tcp packet replies to our outbound requests, thus rendering our connection for the most part useless.
o We can specify ports that we want open while the rest would remain closed by implementing two rules:

1. Explicitly accept packets on the port we want to open, and
2. Block all of the ports.

o For the web server example above, the first rule would accept tcp packets on port 80 through eth1 and the second would block all incoming tcp traffic. These two rules are given below:

+ ComputerName:~# iptables -A INPUT -j ACCEPT -p tcp --destination-port 80 -i eth1

+ ComputerName:~# iptables -A INPUT -j DROP -p tcp -i eth1

This combination of rules works because iptables implements the rules in order. When a new incoming tcp packet bound for port 80 arrives, iptables will see the accept rule first and admit the packet before the all-encompassing deny rule takes effect.
o For blocking only incoming tcp transactions but allowing our computer to start new transactions with other web servers or the like, we can use the --syn option in the following rule:

ComputerName:~# iptables -A INPUT -p tcp --syn -j DROP

Since all tcp connections must first be initialized, we can block all incoming packets that initiate a connection: the SYN packets. This basically tells our computer to ignore anything it did not speak to first.
o While the solution above will work, a better implementation is to put the following as the first rule in your list:

ComputerName:~# iptables -I INPUT 1 -m state --state ESTABLISHED,RELATED -j ACCEPT
o You could block a specific, rowdy user on your network from accessing your computer by blocking their IP, but if their IP ever changed they would be able to access your computer once again. Blocking the hardware address, or MAC address, of their Ethernet card is more effective. This address is a set of six two-digit hexadecimal numbers separated by colons (e.g. 00:0B:DB:45:56:42). The mac match module and its --mac-source option can be used as follows:

ComputerName:~# iptables -A INPUT -m mac --mac-source 00:0B:DB:45:56:42 -j DROP

For more information on other command line options for iptables, please refer to the man page.

Monday, February 4, 2008

/etc/fstab entries

fstab consists of a number of lines (one for each filesystem), each divided into six fields. Each field is separated from the next by whitespace (spaces/tabs).

For Example:

/dev/hdc /mnt/cdrom iso9660 noauto,ro,user 0 0

The first field (/dev/hdc) is the physical device/remote filesystem which is to be described.

The second field (/mnt/cdrom) specifies the mount point where the filesystem will be mounted.

The third field (iso9660) is the type of filesystem on the device from the first field.

The fourth field (noauto,ro,user) is a (default) list of options which mount should use when mounting the filesystem.

The fifth field (0) is used by dump (a backup utility) to decide if a filesystem should be backed up. If zero then dump will ignore that filesystem.

The sixth field (0) is used by fsck (the filesystem check utility) to determine the order in which filesystems should be checked.
If zero then fsck won't check the filesystem.
(As the example line above is a CD-ROM, there is very little point in running fsck on it, so the value is zero.)
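Since the fields are whitespace-separated, the example line is easy to pick apart in a script; here it is split into the six fields described above:

```shell
# Split the example fstab line into its six whitespace-separated fields.
line='/dev/hdc /mnt/cdrom iso9660 noauto,ro,user 0 0'
set -- $line    # word-splitting gives one positional parameter per field
echo "device=$1 mountpoint=$2 fstype=$3 options=$4 dump=$5 fsck=$6"
```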

Ruby On Rails

This article will guide you through the installation of Ruby on Rails on a Linux machine. As you are aware, Ruby on Rails made a splash with its simplicity and ease of use for developing web applications.

What is Ruby?

Ruby is a pure object-oriented programming language with a super clean syntax that makes programming elegant and fun. Ruby successfully combines Smalltalk's conceptual elegance, Python's ease of use and learning, and Perl's pragmatism. Ruby originated in Japan in the early 1990s, and has started to become popular worldwide in the past few years as more English language books and documentation have become available.

What is Rails?

Rails is an open source Ruby framework for developing database-backed web applications. Rails is designed from the ground up to create dynamic Web sites that use a relational database backend. It adds key words to the Ruby programming language that make Web applications easier to configure. In addition, it's designed to automatically generate a complete, if somewhat crude, Web application from an existing database schema. The latter is both Rails' greatest strength and its Achilles' heel. Rails makes assumptions about database schema naming conventions that, if followed, make generating a basic Web site a matter of executing a single command.

Installing the Software on CentOS

1. Install Ruby

You need to enable the testing repository, so add a file named "testing.repo" to the directory /etc/yum.repos.d/. That will allow you to install Ruby 1.8.4.
# packages in testing
[testing]
name=CentOS-$releasever - Testing
baseurl=http://dev.centos.org/centos/$releasever/testing/$basearch/
gpgcheck=1
enabled=1
gpgkey=http://dev.centos.org/centos/RPM-GPG-KEY-CentOS-testing
Now you can use yum to install ruby
yum update
yum install ruby ruby-devel ruby-libs irb rdoc

2. Install Gem

cd /usr/local/src
wget http://rubyforge.org/frs/download.php/5207/rubygems-0.8.11.tgz
tar -xvzf rubygems-0.8.11.tgz
cd rubygems-0.8.11
ruby setup.rb
cd ..

3. Install fast-cgi

cd /usr/local/src
wget http://www.fastcgi.com/dist/fcgi-2.4.0.tar.gz
tar xzvf fcgi-2.4.0.tar.gz
cd fcgi-2.4.0
./configure
make
make install
cd ..

4. Install fast-cgi Bindings

cd /usr/local/src
wget http://sugi.nemui.org/pub/ruby/fcgi/ruby-fcgi-0.8.6.tar.gz
tar zxvf ruby-fcgi-0.8.6.tar.gz
cd ruby-fcgi-0.8.6
ruby install.rb config
ruby install.rb setup
ruby install.rb install
cd ..

5. Install Rails

gem install rails --include-dependencies 

Ruby and Rails on Red Hat Enterprise Linux

Make sure you have zlib-devel installed, otherwise Gem will fail.
up2date zlib-devel
First, remove any Ruby packages that were installed from RPMs on the machine.

To determine which Ruby RPMs are installed:

rpm -qa | egrep '(ruby)|(irb)'

To uninstall the installed Ruby RPMs:

rpm -e ruby-docs-1.8.1-7.EL4.2 \
ruby-1.8.1-7.EL4.2 \
irb-1.8.1-7.EL4.2 \
ruby-libs-1.8.1-7.EL4.2 \
ruby-mode-1.8.1-7.EL4.2 \
ruby-tcltk-1.8.1-7.EL4.2 \
ruby-devel-1.8.1-7.EL4.2

Install Ruby from source

wget ftp://ftp.ruby-lang.org/pub/ruby/stable/ruby-1.8.4.tar.gz
tar xvzf ruby-1.8.4.tar.gz
cd ruby-1.8.4
./configure --prefix=/usr
make
make install

Install Ruby Gems

wget http://rubyforge.org/frs/download.php/5207/rubygems-0.8.11.tgz
tar xvzf rubygems-0.8.11.tgz
cd rubygems-0.8.11
ruby setup.rb

Install Rails

cd
gem update
gem update --system
rm `gem env gempath`/source_cache
rm -f ~/.gem/source_cache
gem update
gem install rails --include-dependencies

Now configure mod_fastcgi in the Apache (1.3) config file httpd.conf.

1. Install the mod_fastcgi module

curl -O http://fastcgi.com/dist/mod_fastcgi-2.4.2.tar.gz
or
wget http://fastcgi.com/dist/mod_fastcgi-2.4.2.tar.gz

tar xvfz mod_fastcgi-2.4.2.tar.gz
cd mod_fastcgi-2.4.2
/usr/local/apache/bin/apxs -cia mod_fastcgi.c

2. Configuring httpd.conf

LoadModule fastcgi_module modules/mod_fastcgi.so

AddHandler fastcgi-script .fcgi .fcg .fpl

service httpd restart

3. Edit the .htaccess file

change /dispatch.cgi to /dispatch.fcgi

4. This server has been upgraded to MySQL 4.1

The default Ruby mysql driver will not connect because MySQL is running in old_password compatibility mode (otherwise Ensim cannot connect). To fix it, we need to rebuild and reinstall the mysql-ruby client:
wget http://www.tmtm.org/en/mysql/ruby/mysql-ruby-2.5.tar.gz
tar vxzf mysql-ruby-2.5.tar.gz
cd mysql-ruby-2.5
ruby extconf.rb --with-mysql-config=/usr/bin/mysql_config
make
make install

5. Edit your .htaccess with following entries

#Set to development, test, or production
DefaultInitEnv RAILS_ENV production

Options Indexes ExecCGI FollowSymLinks

RewriteEngine On
RewriteRule ^$ index.html [QSA]
RewriteRule ^([^.]+)$ $1.html [QSA]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ dispatch.fcgi [QSA,L]

DIG Command

dig is a command-line tool for querying DNS name servers for information about host addresses, mail exchanges, name servers, and related information.
Understanding the default output

The most typical, simplest query is for a single host. By default, however, dig is pretty verbose. You probably don’t need all the information in the default output, but it’s probably worth knowing what it is. Below is an annotated query.


$ dig www.yahoo.com

That’s the command-line invocation of dig I used

; <<>> DiG 9.2.3 <<>> www.yahoo.com
;; global options: printcmd

The opening section of dig's output tells us a little about itself (version 9.2.3) and the global options that are set (in this case, printcmd). This part of the output can be quelled by using the +nocmd option, but only if it's the very first argument on the command line (even preceding the host you're querying).

;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43071
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 3

Here, dig tells us some technical details about the answer received from the DNS server. This section of the output can be toggled using the +[no]comments option—but beware that disabling the comments also turns off many section headers.

;; QUESTION SECTION:
;www.yahoo.com. IN A

In the question section, dig reminds us of our query. The default query is for an Internet address (A). You can turn this output on or off using the +[no]question option.

;; ANSWER SECTION:
www.yahoo.com. 600 IN A 203.23.184.88

Finally, we get our answer: the address of www.yahoo.com is 203.23.184.88. I don’t know why you’d ever want to turn off the answer, but you can toggle this section of the output using the +[no]answer option.

;; AUTHORITY SECTION:
yahoo.com. 2351 IN NS ns1.nis.tc.org.
yahoo.com. 2351 IN NS ns1.gnac.com.
yahoo.com. 2351 IN NS ns2.nis.tc.org.

The authority section tells us what DNS servers can provide an authoritative answer to our query. In this example, yahoo.com has three name servers. You can toggle this section of the output using the +[no]authority option.

;; ADDITIONAL SECTION:
ns1.gnac.com. 171551 IN A 203.23.34.21
ns-int.yahoo.com. 2351 IN A 211.52.18.65
ns-int.yahoo.com. 2351 IN AAAA 2001:4f8:0:2::15

The additional section typically contains the addresses of the name servers listed in the authority section; it can be toggled with the +[no]additional option. The default output ends with statistics about the query, which can be toggled with the +[no]stats option.
Some useful options with dig

dig will let you perform any valid DNS query, the most common of which are A (the IP address), TXT (text annotations), MX (mail exchanges), NS (name servers), or the omnibus ANY.
# get the address(es) for yahoo.com

dig yahoo.com A +noall +answer

# get a list of yahoo's mail servers

dig yahoo.com MX +noall +answer

# get a list of DNS servers authoritative for yahoo.com

dig yahoo.com NS +noall +answer

# get all of the above

dig yahoo.com ANY +noall +answer

#Short answer

dig yahoo.com +short

#To get the TTL values

dig +nocmd yahoo.com mx +noall +short

#To get a long answer

dig +nocmd yahoo.com any +multiline +noall +answer

#To reverselookup

dig -x 216.109.112.135 +short

#Bulk lookups: do full lookups for a number of hostnames

dig -f /path/to/host-list.txt

#the same, with more focused output

dig -f /path/to/host-list.txt +noall +answer

Tracing dig's path

dig yahoo.com +trace

How to interpret TTL value

If you ask your local DNS server for an Internet address, the server figures out where to find an authoritative answer and then asks for it. Once the server receives an answer, it will keep the answer in a local cache so that if you ask for the same address again a short time later, it can give you the answer quickly rather than searching the Internet for it all over again.

When domain administrators configure their DNS records, they decide how long the records should remain in remote caches. This is the TTL number (usually expressed in number of seconds).

For example, as of this writing, the TTL for the MX records for the gmail.com domain is 300 seconds. The gmail.com admins are asking that remote servers cache their MX records for no more than five minutes. So when you first ask for that record set, dig will report a TTL of 300.

$ dig +nocmd gmail.com MX +noall +answer
gmail.com. 300 IN MX 20 gsmtp57.google.com.
gmail.com. 300 IN MX 10 gsmtp171.google.com.

If you ask a few seconds later, you’ll see the TTL number reduced by approximately the number of seconds you waited to ask again.

$ dig +nocmd gmail.com MX +noall +answer
gmail.com. 280 IN MX 10 gsmtp171.google.com.
gmail.com. 280 IN MX 20 gsmtp57.google.com.

If your timing is good, you can catch the record at the very end of its life.

$ dig +nocmd gmail.com MX +noall +answer
gmail.com. 1 IN MX 10 gsmtp171.google.com.
gmail.com. 1 IN MX 20 gsmtp57.google.com.

After that, the DNS server you’re querying will "forget" the answer to that question, so the whole cycle will start over again (in this example, at 300 seconds) the next time you perform that query.
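The arithmetic behind those numbers is a simple countdown: the TTL a cache reports is the original TTL minus the seconds elapsed since it cached the record. Using the figures from the gmail.com queries above:

```shell
# remaining TTL = original TTL - seconds since the record was cached
original_ttl=300   # TTL published for the gmail.com MX records
elapsed=20         # seconds since our DNS server cached the answer
remaining=$(( original_ttl - elapsed ))
echo "$remaining"    # 280, as seen in the second query above
```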

Admin Tools

VMSTAT

vmstat helps you to see, among other things, if your server is swapping. Take a look at the following run of vmstat doing a one second refresh for two iterations.

root@sexy [~]# vmstat 1 2
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 1172 1689332 333588 663092 0 0 19 113 1 2 3 1 95 1
0 0 1172 1690320 332920 663100 0 0 352 256 355 681 5 3 91 2

The first row shows your server averages. The si (swap in) and so (swap out) columns show if you have been swapping (i.e. needing to dip into 'virtual' memory) in order to run your server's applications. The si/so numbers should be 0 (or close to it). Numbers in the hundreds or thousands indicate your server is swapping heavily. This consumes a lot of CPU and other server resources and you would get a very significant benefit from adding more memory to your server.
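If you want to pull just the swap columns out of vmstat in a script, si and so are the 7th and 8th columns of the data rows (per the header above). Against the second sample row:

```shell
# Extract si (column 7) and so (column 8) from a vmstat data row.
row='0 0 1172 1690320 332920 663100 0 0 352 256 355 681 5 3 91 2'
swap_cols=$(echo "$row" | awk '{ print "si=" $7 " so=" $8 }')
echo "$swap_cols"    # si=0 so=0 -- this server is not swapping
```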

Some other columns of interest: the r (runnable), b (blocked) and w (waiting) columns help you see your server load. Waiting processes are swapped out. Blocked processes are typically waiting on I/O. The runnable column is the number of processes trying to do something. These numbers combine to form the 'load' value on your server. Typically you want the load value to be one or less per CPU in your server.

The bi (blocks in) and bo (blocks out) columns show disk I/O (including swapping memory to/from disk) on your server.

The us (user), sy (system) and id (idle) show the amount of CPU your server is using. The higher the idle value, the better.
PS

This command is used to list all the processes running on the server. It can also be used to find the processes using the most memory and CPU.
To find the top 3 memory-consuming processes:

ps -auxf | sort -nr -k 4 | head -3

To find the top 3 CPU-consuming processes:

ps -auxf | sort -nr -k 3 | head -3

TOP

Say the system is slow and you want to find out who is gobbling up all the CPU and/or memory. To display the top processes, you use the command top.

Note that, unlike other commands, top does not print its output and exit; it refreshes the screen to display new information. So, if you just issue top and leave the screen up, the most current information is always shown. top runs until you press "q" to quit.
Let's examine the different types of information produced. The first line:

18:46:13 up 11 days, 21:50, 5 users, load average: 0.11, 0.19, 0.18

shows the current time (18:46:13) and that the system has been up for 11 days, 21 hours and 50 minutes. The load average of the system is shown (0.11, 0.19, 0.18) for the last 1, 5 and 15 minutes respectively. (By the way, you can also get this information by issuing the uptime command.)
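If a script needs just those load averages, they can be stripped out of the status line with sed (shown here against the sample line above so the output is predictable):

```shell
# Keep only the three load-average figures from a top/uptime status line.
line='18:46:13 up 11 days, 21:50, 5 users, load average: 0.11, 0.19, 0.18'
loads=$(echo "$line" | sed 's/.*load average: //')
echo "$loads"    # 0.11, 0.19, 0.18
```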
If the load average is not required, press the letter "l" (lowercase L) to turn it off; press it again to turn it back on. The second line:

151 processes: 147 sleeping, 4 running, 0 zombie, 0 stopped

shows the number of processes running, sleeping, and so on. The third and fourth lines:

show the CPU utilization details: here user processes consume 12.5% and system consumes 6.7%. The user processes include the Oracle processes. Press "t" to turn these three lines off and on. If there is more than one CPU, you will see one line per CPU. The next lines:

Mem: 1026912k av, 1000688k used, 26224k free, 0k shrd, 113624k buff
758668k actv, 146872k in_d, 14460k in_c
Swap: 2041192k av, 122476k used, 1918716k free 591776k cached

show the memory available and utilized. Total memory is "1026912k av", approximately 1GB, of which only 26224k or 26MB is free. The swap space is 2GB, but it is almost unused. To turn these lines off and on, press "m".
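A quick consistency check on those figures: the used and free amounts should add up to the total ("av") figure.

```shell
# From the Mem: line above: used + free should equal the total.
used_kb=1000688
free_kb=26224
total_kb=$(( used_kb + free_kb ))
echo "$total_kb"    # 1026912, matching "1026912k av"
```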
The rest of the display shows the processes in a tabular format. Here is the explanation of the columns:

Column Description
PID The process ID of the process
USER The user running the process
PRI The priority of the process
NI The nice value: The higher the value, the lower the priority of the task
SIZE Memory used by this process (code+data+stack)
RSS The physical memory used by this process
SHARE The shared memory used by this process
STAT
The status of this process, shown as a code. Some major status codes are:
R – Running
S – Sleeping
Z – Zombie
T – Stopped
You may also see second and third characters, which indicate:
W – Swapped-out process
N – Positive nice value
%CPU The percentage of CPU used by this process
%MEM The percentage of memory used by this process
TIME The total CPU time used by this process
CPU If this is a multi-processor system, this column indicates the ID of the CPU this process is running on.
COMMAND The command issued by this process

While the top is being displayed, you can press a few keys to format
the display as you like. Pressing the uppercase M key sorts the output
by memory usage. (Note that using lowercase m will turn the memory
summary lines on or off at the top of the display.) This is very useful
when you want to find out who is consuming the memory.

Now that you have learned how to interpret the output, let's see how to use the command-line parameters.

The most useful is -d, which indicates the delay between the screen refreshes. To refresh every second, use top -d 1.

The other useful option is -p. If you want to monitor only a few processes, not all, you can specify only those after the -p option. To monitor processes 13609, 13608 and 13554, issue: top -p 13609 -p 13608 -p 13554
This will show results in the same format as the top command, but only those specific processes.


SKILL & SNICE
From the previous discussion you learned how to identify a process that is consuming CPU or memory. What if you find that a process is consuming a lot of CPU and memory, but you don't want to kill it? You can suspend it by sending it a STOP signal with skill:

$ skill -STOP 16514

The process is effectively frozen. After some time, you may want to revive the process from coma:

$ skill -CONT 16514

The command is very versatile. If you want to stop all processes of the user "test":

$ skill -STOP test

You can use a user, a PID, a command or terminal id as argument. The following stops all rman commands.
$ skill -STOP rman

As you can see, skill decides what kind of argument you entered (a process ID, userid, or command) and acts appropriately. This may cause an issue in cases where a user and a command have the same name. The best example is the "test" process, which is typically run by the user "test". So, when you want to stop the process called "test" and you issue:

$ skill -STOP test

all the processes of user "test" stop, including the session you may be on. To be completely unambiguous, you can give an extra parameter to specify the type of the argument. To stop the command called test:
$ skill -STOP -c test

The command snice is similar. Instead of stopping a process, it lowers its priority.

lsof

The command lsof shows a list of processes attached to open files or network ports. To list the processes attached to a given file: lsof filename
List all open files on system:

#lsof

To kill a process:

kill PID
killall process_name

This will perform an orderly shutdown of the process. If it hangs, give a stronger signal with:

kill -9 PID

This method is not as sanitary and thus less preferred.

A signal may be given to the process. The program must be programmed to handle the given signal. See /usr/include/bits/signum.h for a full list.
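Rather than reading that header file, you can also ask kill itself: with -l it lists the signal names, and given a number it prints the corresponding name.

```shell
kill -l        # list all signal names
kill -l 9      # print the name of signal 9 (KILL)
```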

To restart a process after updating its configuration file, issue the command:

kill -HUP PID

The process attached to an open file can be killed using the command fuser:

fuser -ki filename

Now I am going to introduce you to a set of commands that may come in handy.
FIND

find -perm 777 -type d -exec chmod 755 {} \; #Command to change all the folders under present directory with 777 to 755

find -perm 755 -type f -exec chmod 644 {} \; #Command to change all the files under present directory with 755 to 644

find -type d -maxdepth 3 -exec cp file {} \; #Copy file to 3 levels of directories below the present directory

find . -name "*.trn" -ctime +3 -exec rm -f {} \; #Forcibly remove files with .trn extension that are more than 3 days old

find . -cmin -5 #Find all files created or updated in the last five minutes (great for finding the effects of make install)

LS

ls -lSh #List files by their size

ls -ltr #List files by date

ls -F #Appends a symbol after files and directories

RSYNC

rsync -e ssh -az /currentdirectory IP:/remotedirectory #Sync remote directory with our current directory.

rsync --bwlimit=1000 fromfile tofile #Locally copy with rate limit

GPG

gpg -c file #Encrypt file

gpg file.gpg #Decrypt file

DF & DU

du -h --max-depth 1 #Show disk space used by all the files and directories.

du -s * | sort -k1,1rn | head #Show top disk users in current dir.

df -h #Show free disk space

df -i #Show free inodes

Add system swap space for virtual memory paging

Swap space may be a swap partition, a swap file or a combination of the two. One should size swap space to be at least twice the size of the computer's RAM. (but less than 2GB)
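With dd's block size set to 1024 bytes (as in the example below), the block count for a given size in MB is just MB x 1024:

```shell
# dd count for bs=1024: size_in_MB * 1024 blocks
size_mb=256
count=$(( size_mb * 1024 ))
echo "$count"    # 262144 blocks for an exact 256MB file
```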

dd if=/dev/zero of=/swapfile bs=1024 count=265032 #Create a file filled with zeros, roughly 256MB in size

mkswap /swapfile #Create swap file

swapon /swapfile #Begin use of given swap file. Assign a priority with the "-p" flag.

swapon -s #List swap files

cat /proc/swaps #Same as above

This example refers to a swap file. One may also use a swap partition. Make entry to /etc/fstab to permanently use swap file or partition.

/swapfile swap swap defaults 0 0

Note: To remove the use of swap space, use the command swapoff. If using a swap partition, the partition must be unmounted.
Debugging Tools

strace -c ls >/dev/null #Summarise/profile system calls made by command

strace -f -e open ls >/dev/null #List system calls made by command

ltrace -f -e getenv ls >/dev/null #List library calls made by command

lsof -p $$ #List paths that process id has open

lsof -p PID #List paths PID has open

lsof ~ #List processes that have specified path open

last reboot #Indicates last reboot time

renice +15 PID #To give lower priority to a PID; -20 is the highest priority and +19 the lowest

To check number of IP's connecting to port 80

netstat -tanpu | grep :80 | awk '{print $5}' | cut -d: -f1 | sort -n | uniq -c
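To show what that pipeline produces, here it is run against a few canned netstat-style lines (inline sample data so the result is reproducible); the output is a connection count per remote IP:

```shell
# Count connections per remote IP, exactly as the netstat pipeline above does.
sample='tcp 0 0 10.0.0.1:80 192.168.1.5:51000 ESTABLISHED
tcp 0 0 10.0.0.1:80 192.168.1.5:51001 ESTABLISHED
tcp 0 0 10.0.0.1:80 192.168.1.9:40000 ESTABLISHED'
echo "$sample" | awk '{print $5}' | cut -d: -f1 | sort -n | uniq -c
```

The first column is the number of connections from each address; a single IP with a very high count can indicate abuse.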

tcpdump not port 22 #To show network traffic except on port 22

Perl Administration

Installation of a Perl module can be done from a tar file.

tar xzf yourmodule.tar.gz #Untar Module

perl Makefile.PL #Build with PERL makefile:
make
make install #Install

You can also do this from cpan shell

perl -MCPAN -e shell #The first time through, it will ask questions. Answer "no" to the first question to autoconfigure.

cpan> install URI

cpan> i /PerlMagick/ #Inquire about module. (Search by keyword)
Distribution J/JC/JCRISTY/PerlMagick-5.36.tar.gz
Module Image::Magick (J/JC/JCRISTY/PerlMagick-5.36.tar.gz)

cpan> install Image::Magick

cpan> force install Image::Magick #Install a module forcefully.

YUM :RPM Updater

YUM (Yellowdog Updater, Modified) is a command-line client application for updating an RPM-based system from an internet repository (a YUM "yum-arch" server) accessible by URL (http://xxx, ftp://yyy, or even file://zzz for local or NFS paths).

yum -y install package-name #To install a package along with its dependencies

yum remove package-name #To remove package

yum list #To list available packages version and state

yum list extras #To list packages not available in repositories but listed in config file

yum list obsoletes #To list packages which are obsoleted by repositories

yum clean all #To clean the yum cache of downloaded packages and metadata

yum update #Update all packages on your system

yum update package-name #Update a package

yum update package-name-prefix\* #Update all with same prefix

You can add new repos in /etc/yum.repos.d with files named file.repo. For the option "gpgcheck=1" to work, import the GPG key with rpm --import:

rpm --import /usr/share/rhn/RPM-GPG-KEY

rpm --import /usr/share/rhn/RPM-GPG-KEY-fedora

File: /etc/yum.repos.d/fedora.repo with the following entry:

[base]
name=Fedora Core $releasever - $basearch - Base
#baseurl=http://download.fedora.redhat.com/pub/fedora/linux/core/$releasever/$basearch/os/
mirrorlist=http://fedora.redhat.com/download/mirrors/fedora-core-$releasever
enabled=1
gpgcheck=1

Additional Commands

tzselect #To change time zone of the machine

command 2>&1 | tee outputfile.txt #Output of a command is sent to both the screen and a text file


wget --mirror http://www.example.com #To mirror a site

wget -c http://www.example.com/largefile #To continue downloading partially downloaded file