Ruby on Rails on Red Hat

Ruby on Rails is a freely available, open source web development framework. It's been quite popular--it won a Jolt "Web Development Tools" award last year, and some prominent Java developers have publicly switched to Ruby on Rails. The buzz surrounding Rails is impressive--particularly when you consider that Rails had no Fortune 500 company to market it, unlike .NET or Java.
Rails is a Model View Controller (MVC) framework. As you can imagine from the name, applications written using Model View Controller frameworks have three main components: a model, which represents the data and associated logic; the view, which represents how a user interacts with the application; and the controller, which contains all of the business logic that drives the application. This is an artificial distinction, of course, but it is a powerful one.
You'll need Apache 2.2+ and MySQL installed on your Red Hat Linux computer to run these examples.

How do you install Rails?

The first, most basic requirement for running Rails is having the Ruby programming language installed. All recent distributions of Fedora™ Core or Red Hat® Enterprise Linux® include Ruby. You can check to see if Ruby is installed by running the command:
ruby -v
If Ruby is installed, you should get a message that tells you which version you have. If you get a "command not found" error, you don't have Ruby installed. If it is not installed, or if your version is older than 1.8.4, you can install Ruby as follows:

wget ftp://ftp.ruby-lang.org/pub/ruby/ruby-1.8.4.tar.gz
tar -xzvf ruby-1.8.4.tar.gz
cd ruby-1.8.4
./configure
make
make install
Next, you'll need to install the RubyGems development system. RubyGems is a powerful system for managing and installing Ruby code libraries, known as gems. Rails itself is composed of several gems, and once you've successfully installed RubyGems, you can proceed to install Rails. You can install RubyGems as follows:

wget http://rubyforge.org/frs/download.php/11289/rubygems-0.9.0.tgz
tar -xzvf rubygems-0.9.0.tgz
cd rubygems-0.9.0
ruby setup.rb
Now that we have Ruby and RubyGems installed, we can install Rails itself:
gem install -y rails
As you can see, the gem install command works much like the yum install command. The -y option installs all of the required dependencies for Rails. You can get more information on what the various gem commands do by running gem help.
One more thing: since we'll be using MySQL as our database backend, we'll need to install the MySQL gem. This will let Ruby scripts--such as our application--access MySQL databases. We can do this easily using this command:
gem install mysql
You might wonder why Rails doesn't install the MySQL gem for you; this is because Rails does not require a database backend. Most Rails applications use database access, and a large percentage of those use MySQL, but you don't need a database adapter for all applications, and you are free to use a database other than MySQL. We'll use MySQL in this example, in which we'll build a small Rails application.

Developing a simple Rails application

Our sample application will track entries in a checkbook, showing how quickly you can create real-world applications in Rails. You can add and delete entries, view descriptions, and even see a running total at the bottom of your application.
First, we'll create the skeleton for the application using the Rails command:
rails checkbookapp
This will create a new directory--named checkbookapp--under the current directory, then fill it with the essential files needed for a Rails application.
Most Rails applications will use a database--ours will too. By default, Rails uses a MySQL backend. Most Rails applications use three different databases--one for development, one for testing, and one for production. We'll only use one: the development database. By default, it will be called checkbookapp_development--the name of our app, an underscore, and then the word development. Rails does not create the MySQL database for us, nor does it create our tables, so we'll need to do that next.
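For reference, Rails records these database names and connection settings in config/database.yml, with one section per environment. A minimal development entry might look like this (your username, password, and socket settings will vary):

```yaml
development:
  adapter: mysql
  database: checkbookapp_development
  username: root
  password:
  host: localhost
```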
We need to create the database like this:
mysqladmin create checkbookapp_development
We now have a blank MySQL database, so we need to create the table where we will store our data. To do that, we'll create an "ActiveRecord migration." This is the Rails way of specifying the database schema.
We could create the structure using SQL directly, but migrations have a number of advantages. For one, they are database agnostic; they can be used to deploy on any number of database backends. For another, they are versioned--you can incrementally change your database schema and use the "rake migrate" command to update any database to the most recent version. You can even migrate backwards to an older migration.
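For example, rolling back is just a matter of passing a VERSION number to rake (illustrative commands; they assume a Rails 1.x-era application with at least one migration):

```
# Apply every pending migration, bringing the schema to the newest version
rake migrate

# Migrate backwards: run the down methods until the schema is at version 1
rake migrate VERSION=1
```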
We create a blank migration as follows:
ruby script/generate migration InitialSchema
This makes a file for us, "db/migrate/001_initial_schema.rb". Edit that file so that it looks like this:

class InitialSchema < ActiveRecord::Migration
  def self.up
    create_table :entries do |t|
      t.column :memo, :text
      t.column :amount, :decimal, :precision => 9, :scale => 2
      t.column :when, :date
    end
  end

  def self.down
    drop_table :entries
  end
end
There are two methods in our Migration class: up and down. This code is used by Rails to manage our database schema. The up method creates a table, and the down method destroys it. The table has three columns: memo, which contains text (a variable-length string); amount, a decimal column; and when, which is a date. The amount column has a precision of 9 and a scale of 2--that translates to a MySQL column type of DECIMAL(9,2). This might differ on another database backend.

For our database, these settings mean that it stores up to $999,999.99. The precision refers to the total number of digits (in decimal), and the scale refers to the number of digits after the decimal point.
Note that Rails automatically adds an additional column for us: id, an artificial primary key that uniquely identifies each row. Although some database developers prefer natural keys, this is a Rails convention that helps productivity. If you don't want an artificial primary key, or if your table doesn't require one, you can drop the column afterwards using the remove_column method. (Join tables, for example, wouldn't typically have an id column.)
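If you do want a table without the automatic id column--a join table, say--the migration can suppress it at creation time. A hypothetical sketch (this categories_entries table is not part of our app):

```ruby
# Hypothetical join table: passing :id => false tells Rails not to
# generate the artificial primary key column
create_table :categories_entries, :id => false do |t|
  t.column :category_id, :integer
  t.column :entry_id, :integer
end
```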
Now that we've created the migration, we need to run it; the "rake migrate" command does this for us:
rake migrate
That will run the up method of our migration and create our entries table. If we had created multiple migrations, the rake migrate command would run only those that haven't been run yet. Since each migration is stored in a separate file, our schema also lives in whatever version control system we use--which works great for projects with multiple developers.
Our MySQL database now has an entries table. Now we need a model for this table. The model represents the table in code. In other web frameworks, we'd need to specify the table name, primary key, fields, and field types in the model. Rails, though, can figure out this information from the database itself, although you can override this behavior if it doesn't fit your needs. You only need to modify the class if you want to add behavior, such as a relationship to another table or a method that performs custom calculations. We can create a model for our table like this:
ruby script/generate model entry
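The generated app/models/entry.rb is nearly empty. Adding behavior later is just a matter of editing the class; for example, a hypothetical validation (not required for our example) might look like this:

```ruby
class Entry < ActiveRecord::Base
  # Refuse to save entries that have no amount
  validates_presence_of :amount
end
```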
Next, we'll define our user interface using the Rails Scaffolding feature. Scaffolding can rapidly create an interface for a table. It's intended for generating test or administrative interfaces during development, not for deployment. For our example, though, we'll use the scaffold as our final interface. We can create it as follows:
ruby script/generate scaffold entry
This will create a number of files, including a controller, a number of actions, and a bunch of views. Larger applications would have several controllers, but we'll use a single scaffold controller. We need to tell Rails to make this our default controller. First, delete the index.html file:
rm public/index.html
Next, edit the "config/routes.rb" file, which tells Rails which URL paths correspond to which controllers. This means we can change what URLs look like without changing our backend code. Add this line immediately before the end statement at the end of the file:
map.connect '', :controller => "entries"
We now have a working Rails application. We can test it using WEBrick, a small development server intended to run Rails apps during development. Developers can test their app on WEBrick at any time without needing a full server environment. The following command will start WEBrick:
ruby script/server
You can now browse to http://127.0.0.1:3000/ to view the new application. You can add, delete, and edit entries--and if you have more than thirty entries, they will be paginated.
Let's add one more feature to our application: a sum at the bottom of the listing. We can create this by adding just one line to the "app/views/entries/list.rhtml" file. Add this line right before the closing </table> tag:
Total: <%= @entries.inject(0) { |s,v| s + v.amount } %>
You can reload your web browser, and you'll see a new line with a total displayed. The inject function call is a cryptic but powerful Ruby method to calculate the sum of an array--in this case, the entries on the current page of our application. Although this example is trivial, you can see how easy it is to modify a Rails application.
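To see what inject is doing, here is the same calculation as a stand-alone Ruby snippet, using a plain Struct in place of the ActiveRecord model and made-up sample amounts:

```ruby
# Stand-alone illustration of inject, with a Struct standing in for the
# ActiveRecord Entry model (the amounts below are made-up sample data).
Entry = Struct.new(:memo, :amount)

entries = [
  Entry.new("paycheck", 1200.00),
  Entry.new("rent",     -500.00),
  Entry.new("groceries", -75.25)
]

# Start the accumulator at 0, then add each entry's amount in turn.
total = entries.inject(0) { |s, v| s + v.amount }
puts total   # 624.75
```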

Deploying our application on Apache

We just learned how to test a Rails app using WEBrick, but for a production deployment, you need to use a more powerful solution. A very common and powerful combination for production deployment is Red Hat Enterprise Linux, Apache, and Mongrel. This combination is high performance, easy to maintain, and can be scaled easily to more application and database servers when necessary.
Typically, Apache will serve static files--HTML, CSS, images, and so forth--and the Rails requests will be sent to Mongrel. Mongrel will typically handle either a whole domain/subdomain or a single directory. In this example, we'll serve only Mongrel requests, but you could easily have Apache serve other content side by side with the Rails content. In this case, we will use mod_proxy_balancer to automatically distribute requests across our Mongrel processes.
First, let's install Mongrel. We can do so as follows:

gem install mongrel
gem install mongrel_cluster
There's a small caveat: Mongrel comes in two flavors--one for Win32 environments and one for everything else--so when you install Mongrel, it'll ask which you want. The Win32 option is for Windows, and the Ruby option is for every other operating system. The difference is that the Win32 package is precompiled, since most Windows machines don't have a C compiler. In our case, we'll use the Ruby package.
The second gem install command installs the "mongrel_cluster" plugin to Mongrel, which lets us use the Mongrel cluster support.
We're going to create a number of Mongrel instances--five, in our case, although you could use more. Since Rails isn't thread-safe, running multiple processes maximizes performance--each process can execute only one request at a time. Mongrel has built-in tools to manage a group of Mongrel instances, called a "cluster." We're going to set up a cluster using the following commands:

mongrel_rails cluster::configure -p 6001 -N 5
mongrel_rails cluster::start
This will create five Mongrel instances on ports 6001 through 6005. We need to configure Apache to send requests to these five instances. We can do this easily using mod_proxy_balancer.
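For reference, cluster::configure records its settings in config/mongrel_cluster.yml; the generated file looks roughly like this (keys and exact layout may vary between mongrel_cluster versions):

```yaml
port: "6001"
servers: 5
environment: production
pid_file: log/mongrel.pid
```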
mod_proxy_balancer and the other modules we need--mod_proxy and mod_proxy_http--ship with Apache 2.2 by default. We can load and configure them as follows:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_http_module modules/mod_proxy_http.so



<Proxy balancer://checkbookapp>
    BalancerMember http://127.0.0.1:6001
    BalancerMember http://127.0.0.1:6002
    BalancerMember http://127.0.0.1:6003
    BalancerMember http://127.0.0.1:6004
    BalancerMember http://127.0.0.1:6005
</Proxy>

<VirtualHost *:80>
    ProxyPass / balancer://checkbookapp/
    ProxyPassReverse / balancer://checkbookapp/
</VirtualHost>

The first three lines load the modules we need. The <Proxy> block that follows defines the five members of our cluster that will receive the requests--they run on the same IP address, but on different ports. We could use this same mechanism to distribute our requests across multiple application servers.
The final block uses the ProxyPass and ProxyPassReverse directives to forward our requests. The ProxyPass directive forwards requests to the balancing cluster we just defined. The ProxyPassReverse directive adjusts the incoming responses so that they refer to the main server and not the individual cluster members. Note that the first argument to each directive is '/', the root of the webserver. We could make it a subdirectory, which would place our Rails application there. (You could also place the ProxyPass and ProxyPassReverse directives inside a <Location> block, and then you'd only specify the cluster you want them to point to.)
If you start Apache, you should be able to browse to http://localhost/ and view your application. This approach is fast and easy to maintain--and you can easily add application servers by adding new BalancerMember entries to the <Proxy> block.
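For example, if a second application server (the 192.168.1.20 address here is hypothetical) ran additional Mongrel instances on the same ports, scaling out would just be a matter of listing them:

```
<Proxy balancer://checkbookapp>
    # existing local Mongrel instances
    BalancerMember http://127.0.0.1:6001
    BalancerMember http://127.0.0.1:6002
    # additional Mongrel instances on a second server
    BalancerMember http://192.168.1.20:6001
    BalancerMember http://192.168.1.20:6002
</Proxy>
```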

If things go wrong

If you have problems with your Rails application, there are a few steps you'll want to take. First, have you recently upgraded your Rails version? If so, the upgrade may have broken your application. You can change the version of Rails your application uses by editing the "config/environment.rb" file and changing the RAILS_GEM_VERSION variable--RubyGems keeps your old gems around for exactly this kind of situation.
If you haven't changed your version of Rails, there are a number of log files you can check. You can check your Apache error logs in /var/log/httpd, of course, but you also want to check the log directory in your Rails application. It should contain a number of files, as both Mongrel and WEBrick log there. If you use WEBrick in debug mode--which is the default--you will also get more debugging information when an error happens, which is helpful. Note that even in production, full debugging information is output for requests from the local machine--so if you access the web application from the same server it's deployed on, you'll get full debugging information.

Conclusion

As you can see, Rails applications can be developed quickly and efficiently. What's even better is that this development speed extends beyond simple applications to the production-level deployments we've discussed here. With a little work, you can host Rails applications blazingly fast under Linux and Apache--and then scale quickly and efficiently.

BIND 9 External and Internal DNS Information

How do I configure BIND 9 DNS server views to allow a single nameserver in my DMZ to make different sets of data available to different sets of clients? For example, I'd like to offer recursion and additional records to LAN users (192.168.1.0/24), while Internet users see only limited DNS data with no recursion. How do I configure views to partition external (Internet) and internal (LAN) DNS information?

You need to edit the /etc/named.conf or /var/named/chroot/etc/named.conf file (the following configuration is tested on FreeBSD and RHEL 5.x BIND 9 servers):
# vi /var/named/chroot/etc/named.conf
Append the following and define internal subnet (192.168.1.0/24 and localhost with full access and recursion):
acl internal {
   192.168.1.0/24;
   localhost;
};
Define zone and other data as per your requirements:
//
// Lan zone recursion is the default
//
view "internal-view" {
  match-clients { internal; };
  zone "." IN {
    type hint;
    file "db.cache";
  };
  zone "internal.nixcraft.com" IN {
    type master;
    file "zones/lan.master.nixcraft.com";
    allow-transfer { key TRANSFER; };
  };
};
//
// external zone w/o recursion
//
view "external-view" {
  match-clients { any; };
  recursion no;
  zone "nixcraft.com" IN {
    type master;
    file "zones/internet.master.nixcraft.com";
    allow-transfer { key TRANSFER; };
  };
};
Make sure you also define the TSIG key (TRANSFER) referenced by the allow-transfer statements.
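For reference, a TSIG key definition in named.conf has this shape (the secret below is a placeholder--generate a real one, for example with dnssec-keygen):

```
key TRANSFER {
    algorithm hmac-md5;
    secret "REPLACE-WITH-BASE64-KEY-MATERIAL==";
};
```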

Create Zone Files

First, create required directories, enter:
# mkdir -p /var/named/chroot/var/named/zones
# chown named:named /var/named/chroot/var/named/zones

Create Internal Zone With LAN IP Data

Edit /var/named/chroot/var/named/zones/lan.master.nixcraft.com, run:
# vi /var/named/chroot/var/named/zones/lan.master.nixcraft.com
Append the data, enter:
$ORIGIN nixcraft.com.
$TTL 3h
@        IN SOA ns1.nixcraft.com. vivek.nixcraft.com. (
                       2008070328        ; Serial yyyymmddnn
                       3h                ; Refresh after 3 hours
                       1h                ; Retry after 1 hour
                       1w                ; Expire after 1 week
                       1h )              ; Minimum negative caching of 1 hour

@                          IN NS    ns1.nixcraft.com.
@                          IN NS    ns2.nixcraft.com.

@                      3600 IN MX 10 mail1.nixcraft.com.
@                      3600     IN MX 20 mail2.nixcraft.com.

@                      3600    IN A     208.43.79.236
ns1                    3600    IN A     208.43.138.52
ns2                    3600    IN A     75.126.168.152
mail1                  3600    IN A     208.43.79.236
mail2                  3600    IN A     67.228.49.229
out-router             3600    IN A     208.43.79.100
; lan data
wks1                   3600    IN A     192.168.1.5
wks2                   3600    IN A     192.168.1.5
wks3                   3600    IN A     192.168.1.5
in-router              3600    IN A     192.168.1.254
; add other lan specific data below
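The serial convention noted above (yyyymmddnn) is just the date plus a two-digit revision counter, so today's first revision can be generated from the shell:

```shell
# Build a zone serial in yyyymmddnn form: today's date plus revision 01.
serial="$(date +%Y%m%d)01"
echo "Serial: $serial"
```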
Edit /var/named/chroot/var/named/zones/internet.master.nixcraft.com, run:
# vi /var/named/chroot/var/named/zones/internet.master.nixcraft.com
Same as above, but with no internal data:
$ORIGIN nixcraft.com.
$TTL 3h
@        IN SOA ns1.nixcraft.com. vivek.nixcraft.com. (
                       2008070328        ; Serial yyyymmddnn
                       3h                ; Refresh after 3 hours
                       1h                ; Retry after 1 hour
                       1w                ; Expire after 1 week
                       1h )              ; Minimum negative caching of 1 hour

@                          IN NS    ns1.nixcraft.com.
@                          IN NS    ns2.nixcraft.com.

@                      3600 IN MX 10 mail1.nixcraft.com.
@                      3600     IN MX 20 mail2.nixcraft.com.

@                      3600    IN A     208.43.79.236
ns1                    3600    IN A     208.43.138.52
ns2                    3600    IN A     75.126.168.152
mail1                  3600    IN A     208.43.79.236
mail2                  3600    IN A     67.228.49.229
out-router             3600    IN A     208.43.79.100
Finally, reload data:
# rndc reload
Test it, enter:
$ ping in-router.nixcraft.com
$ ping out-router.nixcraft.com

VSFTPd Virtual Users


This documentation was created on CentOS using the YUM package manager to keep things simple. Before you dig too deep, you need the FTP server installed. You can either install the FTP server while installing CentOS or group-install it with YUM after the fact.

If you don't have VSFTPd installed on your CentOS machine:
>yum groupinstall "FTP Server"

This package is needed for creating the user database later.
>yum install compat-db

Next, the PAM configuration, which tells PAM to use the virtual-user database we will create from a basic text file below.
>nano /etc/pam.d/vsftpd
session optional pam_keyinit.so force revoke
auth required /lib/security/pam_userdb.so db=/etc/vsftpd/vsftpd_users
account required /lib/security/pam_userdb.so db=/etc/vsftpd/vsftpd_users


Add the local user that all virtual FTP users will be mapped to.
>adduser -d /home/vweb -s /sbin/nologin virtualftp

VSFTPd configuration example used for this setup.
>nano /etc/vsftpd/vsftpd.conf
listen=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
chroot_local_user=YES
pam_service_name=vsftpd
userlist_enable=YES

# Virtual users will be logged into /home/vweb/[username]/
user_sub_token=$USER
local_root=/home/vweb/$USER
guest_enable=YES
guest_username=virtualftp
# Umask applied for virtual users and anon
anon_umask=0022
# Allows uploading by virtual users
anon_upload_enable=YES
# Allows creation of directories by virtual users
anon_mkdir_write_enable=YES
# Allows deletion of files and directories by virtual users
anon_other_write_enable=YES


Create your text-based user and password list (usernames and passwords on alternating lines).
>nano /etc/vsftpd/vsftpd_users.txt
username1
passwordforusername1



Create your user database from the entries in the text file created above.
>rm /etc/vsftpd/vsftpd_users.db
>db42_load -T -t hash -f /etc/vsftpd/vsftpd_users.txt /etc/vsftpd/vsftpd_users.db
>chmod 600 /etc/vsftpd/vsftpd_users.db /etc/vsftpd/vsftpd_users.txt

Create the FTP home directory for your user.
>mkdir -p /home/vweb/username1
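If you have many virtual users, the per-user directories can be created in one pass from the users file itself, since usernames are the odd-numbered lines. A sketch, using placeholder temp paths so it can be tried outside the real setup:

```shell
# Sketch: create a home directory for every username in the users file.
# Temp paths stand in for /etc/vsftpd/vsftpd_users.txt and /home/vweb/.
users_file=$(mktemp)
printf '%s\n' username1 passwordforusername1 username2 passwordforusername2 > "$users_file"
base=$(mktemp -d)

# awk prints lines 1, 3, 5, ... (the usernames); mkdir creates each home
awk 'NR % 2 == 1' "$users_file" | while read -r user; do
    mkdir -p "$base/$user"
done

ls "$base"
```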

These commands fix not being able to write once logged into FTP.
>chown -R virtualftp:virtualftp  /home/vweb/
>chmod -R 644 /home/vweb/
>find /home/vweb/ -type d -exec chmod 755 {} \;

Fixes the "500 OOPS: cannot change directory" error. If SELinux is not enabled or not enforcing, this can be skipped.
>/usr/sbin/setsebool  -P ftp_home_dir=1

VSFTPD Virtual Users Setup (with individual FTP home directories)

1. Installation of VSFTPD

For Red Hat, CentOS and Fedora, you may install VSFTPD by the command
# yum install vsftpd
For Debian and Ubuntu,
# apt-get install vsftpd
2. Virtual users and authentication

We are going to use pam_userdb to authenticate the virtual users. This needs a username/password file in `db' format--a common database format--which we create with the `db_load' program. For CentOS and Fedora, install the `db4-utils' package:
# yum install db4-utils
For Ubuntu,
# apt-get install db4.2-util
To create a `db’ format file, first create a plain text file `virtual-users.txt’ with the usernames and passwords on alternating lines:
mary
123456
jack
654321

Then execute the following command to create the actual database:
# db_load -T -t hash -f virtual-users.txt /etc/vsftpd/virtual-users.db
Now, create a PAM file /etc/pam.d/vsftpd-virtual which uses your database:
auth required pam_userdb.so db=/etc/vsftpd/virtual-users
account required pam_userdb.so db=/etc/vsftpd/virtual-users

3. Configuration of VSFTPD

Create a configuration file /etc/vsftpd/vsftpd-virtual.conf,
# disables anonymous FTP
anonymous_enable=NO
# enables non-anonymous FTP
local_enable=YES
# activates virtual users
guest_enable=YES
# virtual users to use local privs, not anon privs
virtual_use_local_privs=YES
# enables uploads and new directories
write_enable=YES
# the PAM file used for authentication of virtual users
pam_service_name=vsftpd-virtual
# in conjunction with 'local_root',
# specifies a home directory for each virtual user
user_sub_token=$USER
local_root=/var/www/virtual/$USER
# the virtual user is restricted to the virtual FTP area
chroot_local_user=YES
# hides the FTP server user IDs and just displays "ftp" in directory listings
hide_ids=YES
# runs vsftpd in standalone mode
listen=YES
# listens on this port for incoming FTP connections
listen_port=60021
# the minimum port to allocate for PASV style data connections
pasv_min_port=62222
# the maximum port to allocate for PASV style data connections
pasv_max_port=63333
# controls whether PORT style data connections use port 20 (ftp-data)
connect_from_port_20=YES
# the umask for file creation
local_umask=022

4. Creation of home directories

Create each user’s home directory in /var/www/virtual, and change the owner of the directory to the user `ftp’:
# mkdir /var/www/virtual/mary
# chown ftp:ftp /var/www/virtual/mary

5. Startup of VSFTPD and test

Now we can start VSFTPD by the command:
# /usr/sbin/vsftpd /etc/vsftpd/vsftpd-virtual.conf
and test the FTP access of a virtual user:
# lftp -u mary -p 60021 192.168.1.101

Linux: bond or team multiple network interfaces into a single interface

Today I finally implemented NIC bonding (binding both NICs so that they work as a single device). My idea is to improve performance by pumping out more data across both NICs without using any other method.

Linux allows bonding multiple network interfaces into a single channel/NIC using a special kernel module called bonding. "The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed."

Note:-What is bonding?

Bonding allows you to aggregate multiple ports into a single group, effectively combining the bandwidth into a single connection. Bonding also allows you to create multi-gigabit pipes to transport traffic through the highest-traffic areas of your network. For example, you can aggregate three 1-megabit ports into a single 3-megabit trunk port. That is equivalent to having one interface with 3-megabit speed.



Setting up bonding is easy with RHEL v5.0 and above.

Step #1:
Create a bond0 configuration file

Red Hat Linux stores network configuration in /etc/sysconfig/network-scripts/ directory. First, you need to create bond0 config file:
Code:
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
Append the following lines to it:
DEVICE=bond0
IPADDR=192.168.1.59
NETWORK=192.168.1.0
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes


Note:- Replace the above IP address with your actual IP address. Save the file and exit to the shell prompt.

Step #2:
Modify eth0 and eth1 config files:

Open both configuration files in the vi text editor and make sure the eth0 file reads as follows:
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
Modify/append directives as follows:
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none


Open the eth1 configuration file using the vi text editor:
# vi /etc/sysconfig/network-scripts/ifcfg-eth1
Make sure the file reads as follows for the eth1 interface:
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none


Save the file and exit to the shell prompt.

Step # 3:
Load bond driver/module


Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel modules configuration file:
# vi /etc/modprobe.conf
Append the following two lines:
alias bond0 bonding
options bond0 mode=balance-alb miimon=100


Note:- Save the file and exit to the shell prompt. You can learn more about all bonding options at the end of this document.

Step # 4:
Test configuration

First, load the bonding module:

# modprobe bonding

Restart the networking service to bring up the bond0 interface:

# service network restart

Verify everything is working:

# less /proc/net/bonding/bond0

Output:

Bonding Mode: adaptive load balancing (alb)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:XX:XX:X1

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:XX:XX:X2


List all interfaces:

# ifconfig

Output:

bond0     Link encap:Ethernet  HWaddr 00:0C:29:XX:XX:XX
inet addr:192.168.1.59 Bcast:192.168.1.255  Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
RX packets:2804 errors:0 dropped:0 overruns:0 frame:0
TX packets:1879 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:250825 (244.9 KiB)  TX bytes:244683 (238.9 KiB)

eth0      Link encap:Ethernet  HWaddr 00:0C:29:XX:XX:XX
inet addr:192.168.1.59  Bcast:192.168.1.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
RX packets:2809 errors:0 dropped:0 overruns:0 frame:0
TX packets:1390 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:251161 (245.2 KiB)  TX bytes:180289 (176.0 KiB)
Interrupt:11 Base address:0x1400

eth1      Link encap:Ethernet  HWaddr 00:0C:29:XX:XX:XX
inet addr:192.168.1.59  Bcast:192.168.1.255  Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:502 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:258 (258.0 b)  TX bytes:66516 (64.9 KiB)
Interrupt:10 Base address:0x1480



Note:- If the administration tools of your distribution do not support master/slave
notation in the configuration of network interfaces, you will need to configure
the bonding device manually with the following commands:


# /sbin/ifconfig bond0 192.168.1.59 up
# /sbin/ifenslave bond0 eth0
# /sbin/ifenslave bond0 eth1


Que:- What are the other MODE options in the modprobe.conf file?

Ans:- You can set up your bond interface according to your needs. By changing one parameter (mode=X) you can have the following bonding types:

mode=0 (balance-rr)
Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

mode=1 (active-backup)
Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.

mode=2 (balance-xor)
XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.

mode=3 (broadcast)
Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.

mode=5 (balance-tlb)
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

mode=6 (balance-alb)
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.

Collaborating on a screen session

Suppose you want to show your friend how to solve a problem, but you are at a remote location.

The solution is to share your screen session.

STEP #1 - Make sure the "screen" package is installed on both machines (using yum or rpm), then:

ssh -Y yourusername@remote-machine


(normally you need to login as root)

STEP #2 - Once you are there, run:

screen -S anyname


STEP #3 - Then tell your friend to run this command

screen -x anyname


This joins your session and your friend's session together in the same Linux shell.
Either of you can type, and you will both see what the other is doing.
The benefit is that your friend can watch your troubleshooting skills and see exactly how you solve problems.

The one caveat to this trick is that you both need to be logged in as the same user.

To detach from the session and leave it open, press Ctrl-A, then D.

You can then reattach by running the

screen -x anyname
command again.

You can read more in the manual page:

man screen


Enjoy the spirit of SHARING and COLLABORATING.

Linux: How to clear the cache from memory

Linux has a supposedly good memory management feature that will use up any "extra" RAM you have to cache data. This cached memory is supposed to be freely reclaimable by any process that actually needs it, but unfortunately my Linux (three distros now: Mandriva 32-bit, Mandriva 64-bit, and OpenSUSE 11 64-bit) seems to think that cache memory is too important to give up for anything else that actually needs it.

I have 6 GB RAM in my computer. When no cache is stored in memory (i.e. when I first boot the computer), everything runs great. But as soon as memory fills up with cache, my computer starts feeling like a 700 MHz Pentium II running Windows 98 stuffed full of malware. It's terrible.

Up until now, I was forced to restart every time this happened because I simply could not get any work done in that state. I could close every single program I was running, and even then simply right-clicking would require some extended thinking before the context menu loaded. Ridiculous.

Luckily, I found a way to clear out the cache. Simply run the following commands as root (sync first, so dirty pages are written to disk before the caches are dropped) and the cache will be cleared out.


Linux Commands
  sync
  echo 3 > /proc/sys/vm/drop_caches


Thank you ariel for posting in the comments below about including the sync command before dropping caches.

How to set up RTL8101 Ethernet Driver in RHEL5

This Ethernet driver may be required after you have installed RHEL5 on your machine. If your LAN is working properly, there is no problem; if it is not, follow these steps carefully. First, you need to know which network controller you have in your system.
Identifying the driver

#lspci

will give you an overview of what you are looking for.


Near the bottom of the output you will find which controller you have, e.g.: 06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E PCI Express Fast Ethernet controller (rev 01)

It's very simple: you need to download the RTL8101E driver from the Realtek website.

[step 1:]
$su -
#tar -xvf <downloaded-archive>

[step 2:]
cd into the extracted folder, then:
#make clean modules
#make install
#depmod -a
#insmod ./src/r8101.ko (or r8101.o for kernel 2.4.x)
#reboot

After rebooting, your system should be able to connect to the Internet through the LAN. If not, type this command in a terminal:

#/sbin/service network restart

PXE Network Installations

Red Hat Enterprise Linux allows for installation over a network using the NFS, FTP, or HTTP protocols. A network installation can be started from a network boot diskette, a boot CD-ROM, or by using the askmethod boot option with the Red Hat Enterprise Linux CD #1. Alternatively, if the system to be installed contains a network interface card (NIC) with Pre-Execution Environment (PXE) support, it can be configured to boot from files on another system on the network instead of a diskette or CD-ROM.
For a PXE network installation, the client's NIC with PXE support sends out a broadcast request for DHCP information. The DHCP server provides the client with an IP address, other network information such as name server, the IP address or hostname of the tftp server (which provides the files necessary to start the installation program), and the location of the files on the tftp server. This is possible because of PXELINUX, which is part of the syslinux package.
The following steps must be performed to prepare for a PXE installation:

  1. Configure the network (NFS, FTP, HTTP) server to export the installation tree.


  2. Configure the files on the tftp server necessary for PXE booting.


  3. Configure which hosts are allowed to boot from the PXE configuration.


  4. Start the tftp service.


  5. Configure DHCP.

  6. Boot the client, and start the installation.

Setting up the Network Server

First, configure an NFS, FTP, or HTTP server to export the entire installation tree for the version and variant of Red Hat Enterprise Linux to be installed. Refer to the section Preparing for a Network Installation in the Red Hat Enterprise Linux Installation Guide for detailed instructions.

PXE Boot Configuration

The next step is to copy the files necessary to start the installation to the tftp server so they can be found when the client requests them. The tftp server is usually the same server as the network server exporting the installation tree.
To copy these files, run the Network Booting Tool on the NFS, FTP, or HTTP server. A separate PXE server is not necessary.
For the command line version of these instructions, refer to Section 14.2.1 Command Line Configuration.
To use the graphical version of the Network Booting Tool, you must be running the X Window System, have root privileges, and have the redhat-config-netboot RPM package installed. To start the Network Booting Tool from the desktop, go to the Main Menu Button (on the Panel) => System Settings => Server Settings => Network Booting Service. Or, type the command redhat-config-netboot at a shell prompt (for example, in an XTerm or a GNOME terminal).
If starting the Network Booting Tool for the first time, select Network Install from the First Time Druid. Otherwise, select Configure => Network Installation from the pull-down menu, and then click Add. The dialog in Figure 14-1 is displayed.

Figure 14-1. Network Installation Setup
Provide the following information:

  • Operating system identifier — Provide a unique name using one word to identify the Red Hat Enterprise Linux version and variant. It is used as the directory name in the /tftpboot/linux-install/ directory.


  • Description — Provide a brief description of the Red Hat Enterprise Linux version and variant.


  • Select protocol for installation — Select NFS, FTP, or HTTP as the network installation type depending on which one was configured previously. If FTP is selected and anonymous FTP is not being used, uncheck Anonymous FTP and provide a valid username and password combination.


  • Server — Provide the IP address or domain name of the NFS, FTP, or HTTP server.

  • Location — Provide the directory shared by the network server. If FTP or HTTP was selected, the directory must be relative to the default directory for the FTP server or the document root for the HTTP server. For all network installations, the directory provided must contain the RedHat/ directory of the installation tree.
After clicking OK, the initrd.img and vmlinuz files necessary to boot the installation program are transferred from images/pxeboot/ in the provided installation tree to /tftpboot/linux-install/<os-identifier>/ on the tftp server (the one you are running the Network Booting Tool on).

14.2.1. Command Line Configuration

If the network server is not running X, the pxeos command line utility, which is part of the redhat-config-netboot package, can be used to configure the tftp server:
pxeos -a -i "<description>" -p <NFS|FTP|HTTP> -D 0 -s client.example.com \
-L <net-location> <os-identifier>
The following list explains the options:

  • -a — Specifies that an OS instance is being added to the PXE configuration.

  • -i "<description>" — Replace "<description>" with a description of the OS instance. This corresponds to the Description field in Figure 14-1.

  • -p <NFS|FTP|HTTP> — Specify which of the NFS, FTP, or HTTP protocols to use for installation. Only one may be specified. This corresponds to the Select protocol for installation field in Figure 14-1.


  • -D 0 — Indicates that it is not a diskless configuration since pxeos can be used to configure a diskless environment as well.

  • -s client.example.com — Provide the name of the NFS, FTP, or HTTP server after the -s option. This corresponds to the Server field in Figure 14-1.
  • -L <net-location> — Provide the location of the installation tree on that server after the -L option. This corresponds to the Location field in Figure 14-1.
  • <os-identifier> — Specify the OS identifier, which is used as the directory name in the /tftpboot/linux-install/ directory. This corresponds to the Operating system identifier field in Figure 14-1.
If FTP is selected as the installation protocol and anonymous login is not available, specify a username and password for login, with the following options before <os-identifier> in the previous command:
-A 0 -u <username> -p <password>



Adding PXE Hosts

After configuring the network server, the interface as shown in Figure 14-2 is displayed.

Figure 14-2. Add Hosts
The next step is to configure which hosts are allowed to connect to the PXE boot server. For the command line version of this step, refer to Section 14.3.1 Command Line Configuration.
To add hosts, click the New button.

Figure 14-3. Add a Host
Enter the following information:

  • Hostname or IP Address/Subnet — Enter the IP address, fully qualified hostname, or a subnet of systems that should be allowed to connect to the PXE server for installations.


  • Operating System — Select the operating system identifier to install on this client. The list is populated from the network install instances created from the Network Installation Dialog.


  • Serial Console — Select this option to use a serial console.

  • Kickstart File — Specify the location of a kickstart file to use such as http://server.example.com/kickstart/ks.cfg. This file can be created with the Kickstart Configurator. Refer to Chapter 10 Kickstart Configurator for details.
Ignore the Snapshot name and Ethernet options. They are only used for diskless environments.

Command Line Configuration

If the network server is not running X, the pxeboot utility, a part of the redhat-config-netboot package, can be used to add hosts which are allowed to connect to the PXE server:
pxeboot -a -O <os-identifier> -r <value> <host>
The following list describes the options:

  • -a — Specifies that a host is to be added.

  • -O <os-identifier> — Replace <os-identifier> with the operating system identifier as defined in Section 14.2 PXE Boot Configuration.

  • -r <value> — Replace <value> with the ram disk size.

  • <host> — Replace <host> with the IP address or hostname of the host to add.

Starting the tftp Server

On the DHCP server, verify that the tftp-server package is installed with the command rpm -q tftp-server. If it is not installed, install it via Red Hat Network or the Red Hat Enterprise Linux CD-ROMs. For more information on installing RPM packages, refer to Part III Package Management.
tftp is an xinetd-based service; start it with the following commands:

/sbin/chkconfig --level 345 xinetd on
/sbin/chkconfig --level 345 tftp on

These commands configure the tftp and xinetd services to turn on immediately and also configure them to start at boot time in runlevels 3, 4, and 5.




Configuring the DHCP Server

If a DHCP server does not already exist on the network, configure one. Refer to Chapter 25 Dynamic Host Configuration Protocol (DHCP) for details. Make sure the configuration file contains the following so PXE booting is enabled for systems that support it:

allow booting;
allow bootp;
class "pxeclients" {
   match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
   next-server <server-ip>;
   filename "linux-install/pxelinux.0";
}

The IP address that follows the next-server option should be the IP address of the tftp server.

Performing the PXE Installation

For instructions on how to configure the network interface card with PXE support to boot from the network, consult the documentation for the NIC. It varies slightly per card.

That's all for the PXE boot environment.

How to install a Network card in linux


There are different ways of installing a network card in Linux, depending on the distribution that you are using. I will explain each of these methods here.
1) The Manual method
First open the computer case and insert the network card into an empty PCI slot. Then boot up your machine to load Linux. Log in as root and navigate to the directory /lib/modules/kernel_version_number/net/ . Here you will find the modules supported by your system. Assuming that you have a 3Com Ethernet card, in which case the module name is 3c59x, you have to add this to the /etc/modules.conf file to let the machine detect the card each time it boots.
#File: /etc/modules.conf
alias eth0 3c59x
Note: If you have only one network card, it is known by the name eth0; succeeding network cards in your computer go by the names eth1, eth2 ... and so on.
Now you have to load the module into the kernel.
root# /sbin/insmod -v 3c59x
Next, configure an IP address for the network card using ifconfig or netconfig (no manual configuration is needed if your machine gets its IP address from a DHCP server). E.g.:
root# ifconfig eth0 192.168.1.5 netmask 255.255.255.0 broadcast 192.168.1.255
2) The Easy way
RedHat/Fedora distributions of Linux ship with Kudzu, a device detection program which runs during system initialization (/etc/rc.d/init.d/kudzu). It can detect a newly installed NIC and load the appropriate driver. Then use the program /usr/sbin/netconfig to configure the IP address and network settings. The configuration is stored so that it will be utilized upon system boot.

VNC configuration in RHEL 4 and RHEL 5


First, enable the "vncserver" service on the server:
#chkconfig vncserver on

Start the service:
#service vncserver start

#vncserver
New ':1 (root)' desktop is Server_Ip_Address:1

Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/Server_Ip_Address:1.log

#vncpasswd
Password:
Verify:

# vim /root/.vnc/xstartup
The entries should look as follows:

#!/bin/sh

# Uncomment the following two lines for normal desktop:
unset SESSION_MANAGER
exec /etc/X11/xinit/xinitrc

[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
#xsetroot -solid grey
#vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
twm &


Now you can view the desktop through a VNC viewer using Server_Ip_Address:1

How to prevent users from using USB removable disks ?

I have seen this question several times at different message boards, so I've decided to write an article about it. USB removable disks (also known as flash drives or "Disk on Key" and other variations) are quickly becoming an integral part of our electronic life, and now nearly everybody owns one device or another, in forms of small disks, external hard drives that come enclosed in cases, card readers, cameras, mobile phones, portable media players and more.

Portable USB flash drives are indeed very handy, but they can also be used to upload malicious code to your computer (either deliberately or by accident), or to copy confidential information from your computer and take it away. Whenever a new USB device is plugged in to a USB port, the operating system checks the device and hardware ID to determine whether it is a storage device. If it determines that the device is indeed a USB storage device, it loads the appropriate driver and displays the device as a drive in the Windows Explorer tree view. This is done using the usbstor.sys driver. If the device does not have a drive letter, you will need to assign one to it by using the Disk Management snap-in found in the Computer Management tool. If you disable the ability of the usbstor.sys driver to run on the computer, you will in effect block the computer's means of discovering the flash drive and loading the appropriate driver.

Note that this will only prevent usage of newly plugged-in USB Removable Drives or flash drives, devices that were plugged-in while this option was not configured will continue to function normally. Also, devices that use the same device or hardware ID (for example - 2 identical flash drives made by the same manufacturer) will still function if one of them was plugged-in prior to the configuration of this setting. In order to successfully block them you will need to make sure no USB Removable Drive is plugged-in while you set this option. Note: This tip will allow you to block usage of USB removable disks, but will continue to allow usage of USB mice, keyboards or any other USB-based device that is NOT a portable disk.

It is worth mentioning that in Windows Vista Microsoft has implemented a much more sophisticated method of controlling USB disks via GPO. If you have Windows Vista client computers in your organization you can use GPO settings edited from one of the Vista machines to control if users will be able to install and use USB disks, plus the ability to control exactly what device can or cannot be used on their machines.

Block usage of USB Removable Disks

To block your computer's ability to use USB Removable Disks follow these steps:

1. Open Registry Editor.
2. In Registry Editor, navigate to the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\USBSTOR

3. Locate the following value (DWORD): Start, and give it a value of 4.

Note: As always, before making changes to your registry you should make sure you have a valid backup. In cases where you're supposed to delete or modify keys or values, it is a good idea to first export that key or value(s) to a .REG file before performing the changes.

4. Close Registry Editor. You do not need to reboot the computer for the change to apply.
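The same change can also be packaged as a .REG file for deployment to several machines (a sketch; double-check the key path on your systems and back up the key first):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\USBSTOR]
"Start"=dword:00000004
```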

Enable usage of USB Removable Disks

To return to the default configuration and enable your computer's ability to use USB Removable Disks follow these steps:

1. Go to the registry path found above.

2. Locate the following value (DWORD): Start.

3. Give it a value of 3.

Take Care

Improve Firefox speed by 5x



Just fit the NOS in your firefox (The Fast and the Furious)

1. Open Firefox and in the address bar type about:config.
2. Click on "I'll be careful, I promise!"
3. Use the search bar above to look for network.http.pipelining and double click on it to set its value to True.
4. Create a new boolean value named network.http.pipelining.firstrequest and set that to True as well.
5. Find network.http.pipelining.maxrequests, double click on it, and change its value to 8.
6. Look for network.http.proxy.pipelining and set it to True.
7. Create two new integers named nglayout.initialpaint.delay and content.notify.interval, and set them both to 0.
8. Restart your browser.

All done. You should find the browser noticeably more responsive than before while navigating websites.
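If you prefer not to click through about:config, the same preferences can be set in a user.js file in your Firefox profile directory (a sketch mirroring the steps above; delete the file to revert):

```
// user.js -- applied each time Firefox starts
user_pref("network.http.pipelining", true);
user_pref("network.http.pipelining.firstrequest", true);
user_pref("network.http.pipelining.maxrequests", 8);
user_pref("network.http.proxy.pipelining", true);
user_pref("nglayout.initialpaint.delay", 0);
user_pref("content.notify.interval", 0);
```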

Disabling USB drive

Disabling the USB or thumb drive is considered a very good security option to implement on servers, to prevent theft of data by non-legitimate users.

Here are couple of ways in which you can disable USB drive.

METHOD #1 - By editing /boot/grub/grub.conf
Just add "nousb" at the end of the kernel line in /boot/grub/grub.conf file.
and then "reboot" your machine.

METHOD #2 - Removing the driver from default location.

ls /lib/modules/$(uname -r)/kernel/drivers/usb/storage/usb-storage.ko
mv /lib/modules/$(uname -r)/kernel/drivers/usb/storage/usb-storage.ko /root


In this method, we moved the usb-storage.ko driver (or module) from its default location to some other place (/root in this case).

METHOD #3 - Using BLACKLIST option.

Remove the module if it is already loaded:
modprobe -r usb_storage
Put the name of the usb_storage module in the blacklist:
vim /etc/modprobe.d/blacklist and append the line: blacklist usb_storage

Now if you try to plug-in the USB it will not be detected by the system.

God Bless.

See you on TOP.

dns tips while configuring

Just sharing some DNS tips which you need to keep in mind while configuring your DNS server.

1. An A record must ALWAYS contain an IP address (it maps a host to an IP)

Whenever you specify an A record, it must contain an IP address on the right-hand side. The A record is fundamental to DNS; without it, mapping hostnames to IPs would be impossible. So remember this!

2. CNAME (Alias) must contain hostnames. No IPs here

3. NS and MX records must contain host names. No IPs allowed.

4. Use the DOT at the end whenever you specify a domain name in the DNS zone file. This DOT is so important that if you forget it you will have nightmares with your DNS configuration.
For example
example.com. IN NS ns1.example.com.

Why the DOT? Simply because it makes the name fully qualified, i.e. absolute, anchored at the root of the DNS tree (denoted by the dot); without it, the zone's origin is appended to the name.

5. MX records (for mail servers) should contain hostnames NOT IPs.

6. Allow Port 53 for both UDP and TCP connections
If you use firewall make sure you do not block port 53 for DNS tcp and udp requests. By default dns lookups use UDP protocol while zone transfers and notifications use TCP protocol of port 53.
-Port 53 UDP = Dns Requests
-Port 53 TCP = Zone transfers

7. CNAMEs cannot coexist with MX hosts.
Do not specify CNAME or aliases pointing to MX records.

domain.com. IN MX 10 mail.domain.com.
mail IN CNAME domain.com. ----------> WRONG

Instead use A record to map directly to IP address.

mail IN A 11.33.55.77 ---> CORRECT

8. No duplicate MX records
domain.com. IN MX 10 mail.domain.com.
domain.com. IN MX 10 mail.domain.com. ----> DUPLICATE

If some of the information provided above is incorrect, please feel free to correct me.
Will surely add more tips & tricks in the coming future.
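Putting the tips above together, a minimal forward zone file might look like this (example.com and the 192.0.2.x addresses are placeholders):

```
$TTL 86400
@       IN  SOA  ns1.example.com. admin.example.com. (
                 2024010101 ; serial
                 3600       ; refresh
                 900        ; retry
                 604800     ; expire
                 86400 )    ; minimum TTL
; NS and MX records contain hostnames, each ending in a dot (tips 3, 4, 5)
@       IN  NS   ns1.example.com.
@       IN  MX   10 mail.example.com.
; A records map hosts to IP addresses (tip 1)
@       IN  A    192.0.2.3
ns1     IN  A    192.0.2.1
mail    IN  A    192.0.2.2      ; the MX target is an A record, not a CNAME (tip 7)
www     IN  CNAME example.com.  ; a CNAME points at a hostname, never an IP (tip 2)
```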

Execute command in SSH without opening shell


Generally, whenever we intend to run some command on a remote machine, we first ssh in and then type the command to be executed.

Here is a very small "trick" to be smarter about it.

Let's say you want to run the "top" command on the machine x.x.x.x using SSH.


First find out the path of the top command using -

whereis top

Once you get the path, just type this -
ssh user@x.x.x.x /path/to/the/command

Unmounting un-responsive CD/DVD

Sometimes the CD/DVD refuses to take the "eject" command. Let's see how to counter it.


STEP #1 – Simulate the problem.

   # mount /media/cdrom
   # cd /media/cdrom
   # while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done


Now open up a second terminal and try to eject the DVD drive:
# eject

You'll get a message like:

umount: /media/cdrom: device is busy


STEP #2 - Before we free it, let's see which process is using it, and then kill that process as root.


   # fuser /media/cdrom
   # fuser -k /media/cdrom


Boom! Just like that, freedom. Now solemnly unmount the drive:
# eject

Impressing Your Friends with RPM


RPM is a useful tool for both managing your system and diagnosing and fixing problems. The best way to make sense of all of its options is to look at some examples.
  • Perhaps you have deleted some files by accident, but you are not sure what you deleted. To verify your entire system and see what might be missing, you could try the following command:
    rpm -Va
    If some files are missing or appear to have been corrupted, you should probably either re-install the package or uninstall, then re-install the package.
  • At some point, you might see a file that you do not recognize. To find out which package owns it, you would enter:
    rpm -qf /usr/X11R6/bin/ghostview
    The output would look like the following:
    gv-3.5.8-22
  • We can combine the above two examples in the following scenario. Say you are having problems with /usr/bin/paste. You would like to verify the package that owns that program, but you do not know which package owns paste. Simply enter the following command:
    rpm -Vf /usr/bin/paste
    and the appropriate package is verified.
  • Do you want to find out more information about a particular program? You can try the following command to locate the documentation which came with the package that owns that program:
    rpm -qdf /usr/bin/free
    The output would be like the following:
    /usr/share/doc/procps-2.0.11/BUGS
    /usr/share/doc/procps-2.0.11/NEWS
    /usr/share/doc/procps-2.0.11/TODO
    /usr/share/man/man1/free.1.gz
    /usr/share/man/man1/oldps.1.gz
    /usr/share/man/man1/pgrep.1.gz
    /usr/share/man/man1/pkill.1.gz
    /usr/share/man/man1/ps.1.gz
    /usr/share/man/man1/skill.1.gz
    /usr/share/man/man1/snice.1.gz
    /usr/share/man/man1/tload.1.gz
    /usr/share/man/man1/top.1.gz
    /usr/share/man/man1/uptime.1.gz
    /usr/share/man/man1/w.1.gz
    /usr/share/man/man1/watch.1.gz
    /usr/share/man/man5/sysctl.conf.5.gz
    /usr/share/man/man8/sysctl.8.gz
    /usr/share/man/man8/vmstat.8.gz
  • You may find a new RPM, but you do not know what it does. To find information about it, use the following command:
    rpm -qip crontabs-1.10-5.noarch.rpm
    The output would look like the following:
    Name        : crontabs                     Relocations: (not relocateable)
    Version     : 1.10                              Vendor: Red Hat, Inc.
    Release     : 5                             Build Date: Fri 07 Feb 2003 04:07:32 PM EST
    Install date: (not installed)               Build Host: porky.devel.redhat.com
    Group       : System Environment/Base       Source RPM: crontabs-1.10-5.src.rpm
    Size        : 1004                             License: Public Domain
    Signature   : DSA/SHA1, Tue 11 Feb 2003 01:46:46 PM EST, Key ID fd372689897da07a
    Packager    : Red Hat, Inc. 
    Summary     : Root crontab files used to schedule the execution of programs.
    Description :
    The crontabs package contains root crontab files. Crontab is the
    program used to install, uninstall, or list the tables used to drive the
    cron daemon. The cron daemon checks the crontab files to see when
    particular commands are scheduled to be executed. If commands are
    scheduled, then it executes them.
  • Perhaps you now want to see what files the crontabs RPM installs. You would enter the following:
    rpm -qlp crontabs-1.10-5.noarch.rpm
    The output is similar to the following:
    /etc/cron.d
    /etc/cron.daily
    /etc/cron.hourly
    /etc/cron.monthly
    /etc/cron.weekly
    /etc/crontab
    /usr/bin/run-parts

Exclude Certain Files When Creating A Tarball Using Tar Command




How can I keep certain files out when creating a tarball? For example:
/home/me/file1
/home/me/dir1
/home/me/dir2
/home/me/abc
/home/me/xyz
How do I exclude the xyz and abc files while using the tar command?

The GNU version of the tar archiving utility has --exclude and -X options. So to exclude the abc and xyz files you need to type the command as follows:
$ tar -zcvf /tmp/mybackup.tar.gz --exclude='abc' --exclude='xyz' /home/me
If you have more than two files to exclude, use the -X option to specify multiple file names. It reads the list of excluded file names from a text file. For example, create a file called exclude.txt:
$ vi exclude.txt
Append the file names:
abc
xyz
*.bak

Save and close the file. This lists the file patterns that need to be excluded. Now type the command:
$ tar -zcvf /tmp/mybackup.tar.gz -X exclude.txt /home/me
Where,
  • -X file.txt : exclude files matching the patterns listed in file.txt