Iptables: Limit Connections Per IP


How do I restrict the number of connections from a single IP address to my server on ports 80 and 25 using iptables?


Syntax

The syntax is as follows:
/sbin/iptables -A INPUT -p tcp --syn --dport $port -m connlimit --connlimit-above N -j REJECT --reject-with tcp-reset
# save the changes (see the iptables-save man page); this command is specific to Red Hat and friends
service iptables save
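The question also asks about port 25 (SMTP). A minimal sketch applying the same syntax, with the port and limit as example values (10 is arbitrary, not a recommendation); the rule is printed first so you can review it before applying it as root:

```shell
#!/bin/sh
# Example values only: port 25 (SMTP), limit of 10 parallel connections.
PORT=25
LIMIT=10
RULE="/sbin/iptables -A INPUT -p tcp --syn --dport ${PORT} -m connlimit --connlimit-above ${LIMIT} -j REJECT --reject-with tcp-reset"
# Dry run: print the rule; run it as root to actually apply it.
echo "$RULE"
```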

Example: Limit SSH Connections Per IP / Host

Only allow 3 ssh connections per client host:
/sbin/iptables  -A INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 3 -j REJECT
# save the changes (see the iptables-save man page); this command is specific to Red Hat and friends
service iptables save

Example: Limit HTTP Connections Per IP / Host

Only allow 20 HTTP connections per IP (MaxClients is set to 60 in httpd.conf).

WARNING! Large proxy servers may legitimately create a large number of connections to your server. You can exempt such IPs using the ! negation operator:

/sbin/iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
# save the changes (see the iptables-save man page); this command is specific to Red Hat and friends
service iptables save
 
To exempt the proxy server IP 1.2.3.4 from this limit:
/sbin/iptables -A INPUT -p tcp --syn --dport 80 ! -s 1.2.3.4 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset

Example: Class C Limitations

In this example, limit the parallel HTTP requests to 20 per class C sized network (24-bit netmask):
/sbin/iptables  -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 --connlimit-mask 24 -j REJECT --reject-with tcp-reset
# save the changes (see the iptables-save man page)
service iptables save
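With --connlimit-mask 24, every client in the same /24 network shares one connection counter. A quick illustration of which addresses get grouped together (hypothetical addresses, plain shell arithmetic, nothing iptables-specific):

```shell
#!/bin/sh
# Show which /24 network an address is counted under
# when --connlimit-mask 24 is in effect.
net24() {
  echo "$1" | awk -F. '{ printf "%s.%s.%s.0/24\n", $1, $2, $3 }'
}
net24 10.1.1.5      # -> 10.1.1.0/24
net24 10.1.1.200    # -> 10.1.1.0/24  (shares the counter above)
net24 10.1.2.7      # -> 10.1.2.0/24  (separate counter)
```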

Example: Limit Connections Per Second

The following example will drop incoming connections if an IP makes more than 10 connection attempts to port 80 within 100 seconds (add these rules to your iptables shell script):
#!/bin/bash
IPT=/sbin/iptables
# Time window in seconds (avoid the name SECONDS here: it is a bash
# builtin that counts script runtime and would silently change value)
INTERVAL=100
# Max connections per IP within that window
BLOCKCOUNT=10
# ....
# ..
# default action can be DROP or REJECT
DACTION="DROP"
$IPT -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
$IPT -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds ${INTERVAL} --hitcount ${BLOCKCOUNT} -j ${DACTION}
# ....
# ..
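Once these rules are live, the kernel's "recent" match exposes its tracking table under /proc. The commands below are root-only and assume the xt_recent module is loaded (on older kernels the path is /proc/net/ipt_recent/DEFAULT instead):

```shell
# List the IPs currently tracked by the recent match (default list name)
cat /proc/net/xt_recent/DEFAULT
# Manually remove one IP from the tracked list (note the leading minus)
echo -1.2.3.4 > /proc/net/xt_recent/DEFAULT
# Flush the entire list
echo / > /proc/net/xt_recent/DEFAULT
```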

How Do I Test Whether My Firewall Is Working?

Use the following shell script to connect to your web server hosted at 202.1.2.3:
#!/bin/bash
ip="202.1.2.3"
port="80"
for i in {1..100}
do
  # do nothing just connect and exit
  echo "exit" | nc ${ip} ${port};
done
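While the loop runs, you can watch per-client connection counts on the server side. This is a sketch assuming iproute2's "ss" is available; the awk field positions match ss's output when a state filter is given (netstat can be substituted with adjusted fields):

```shell
#!/bin/sh
# Count established TCP connections per remote IP on port 80.
count_per_ip() {
  # skip the header, take the peer "addr:port" column, keep just the addr
  awk 'NR > 1 { split($4, a, ":"); print a[1] }' | sort | uniq -c | sort -rn
}
if command -v ss >/dev/null 2>&1; then
  ss -tn state established '( sport = :80 )' | count_per_ip
fi
```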

Installing a CentOS GlusterFS distributed storage cluster

Here are the steps I took.
Build the RPM
# Install Fuse:
yum -y install fuse-devel byacc libtool
# Make sure you install the needed dependencies by looking at the spec file
# Build throws BDB errors, so skip it. It also has a weird default setup with apache 1.3, so just skip that too for now.
rpmbuild -ta glusterfs-2.0.0rc1.tar.gz --without bdb --without modglfs
Install the RPM on both client and server nodes
yum -y install fuse
rpm -Uvh glusterfs-2.0.0rc1-1.i386.rpm
Start the server on all of your nodes (see below for sample vol files)
mkdir /state/partition1/gluster_export # or whatever your export dir is
glusterfs -f /home/jordan/projects/gluster/glusterfs-server.vol
On client nodes, install mounting requirements
yum -y install fuse dkms-fuse # From rpmforge
modprobe fuse # Load fuse module
Mount gluster on the client nodes
mkdir /mnt/gluster
mount -t glusterfs /home/jordan/projects/gluster/glusterfs-client.vol /mnt/gluster/
Basic Configuration for Sample Server spec
# Configuration for each server to export storage
volume posix
type storage/posix
option directory /state/partition1/gluster_export
end-volume
volume locks
type features/locks
subvolumes posix
end-volume
volume brick
type performance/io-threads
option thread-count 4 # Each of my test servers is 4 core
subvolumes locks
end-volume
volume server
type protocol/server
option transport-type tcp
option auth.addr.brick.allow *
subvolumes brick
end-volume
Basic Configuration for Sample Client spec
# Sample client to aggregate 8 storage nodes
volume r1
type protocol/client
option transport-type tcp
option remote-host compute-0-1
option remote-subvolume brick
end-volume
volume r2
type protocol/client
option transport-type tcp
option remote-host compute-0-2
option remote-subvolume brick
end-volume
volume r3
type protocol/client
option transport-type tcp
option remote-host compute-0-3
option remote-subvolume brick
end-volume
volume r4
type protocol/client
option transport-type tcp
option remote-host compute-0-4
option remote-subvolume brick
end-volume
volume r5
type protocol/client
option transport-type tcp
option remote-host compute-0-5
option remote-subvolume brick
end-volume
volume r6
type protocol/client
option transport-type tcp
option remote-host compute-0-6
option remote-subvolume brick
end-volume
volume r7
type protocol/client
option transport-type tcp
option remote-host compute-0-7
option remote-subvolume brick
end-volume
volume r8
type protocol/client
option transport-type tcp
option remote-host compute-0-8
option remote-subvolume brick
end-volume
volume distribute
type cluster/distribute
subvolumes r1 r2 r3 r4 r5 r6 r7 r8
end-volume

Howto install GlusterFS on Centos/RHEL

1. Introduction
GlusterFS is a clustered filesystem capable of scaling to several petabytes. It aggregates various storage bricks over InfiniBand RDMA or TCP/IP interconnects into one large parallel network filesystem. Storage bricks can be built from any commodity hardware, such as x86-64 servers with SATA-II RAID and an InfiniBand HBA.
2. Installation
First, install the build prerequisites:
yum install make gcc gcc-c++
yum install flex bison byacc
We need a dirty trick to get the fuse-sshfs package:
vi /etc/yum.repos.d/CentOS-Base.repo
and add this at the end of the file:
[extras_fedora]
name=Fedora Core 6 Extras
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=extras-6&arch=$basearch
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-extras
gpgcheck=0
Next do:
yum install fuse-sshfs --enablerepo=extras_fedora
Now we will install GlusterFS:
Use the exact same version everywhere; otherwise there is a good chance it won't work. I tried mixing 2.0.0rc1 and 1.3.12 and ran into issues.
cd /root/
wget http://ftp.gluster.com/pub/gluster/glusterfs/2.0/LATEST/glusterfs-2.0.0rc2.tar.gz
tar -zxvf glusterfs-2.0.0rc2.tar.gz
cd /root/glusterfs-2.0.0rc2/
Take a minute break and compile:
./configure
make && make install
For some reason, the libraries end up in the wrong directory, so we need to copy them over (if someone has a clean fix for this, please post it!):
cp /usr/local/lib/* -R /usr/lib/
Now we create some folders that will be used later on:
mkdir /mnt/glusterfs
mkdir /data/
mkdir /data/export
mkdir /data/export-ns
mkdir /etc/glusterfs/
3. Server configuration
Before you go further, you need to know that GlusterFS works in a client/server model. What we will do is make our servers act as both GlusterFS clients and servers.
Let's start with the server configuration file ON ALL SERVERS:
vi /etc/glusterfs/glusterfs-server.vol
and make it look like this :
# file: /etc/glusterfs/glusterfs-server.vol

volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume posix-ns
  type storage/posix
  option directory /data/export-ns
end-volume

volume locks-ns
  type features/locks
  subvolumes posix-ns
end-volume

volume brick-ns
  type performance/io-threads
  option thread-count 8
  subvolumes locks-ns
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  option auth.addr.brick-ns.allow *
  subvolumes brick brick-ns
end-volume
Now do :
glusterfsd -f /etc/glusterfs/glusterfs-server.vol
to start the server daemon.
4. Client configuration
In these example files, I will use the following hosts:
server1 : 192.168.0.1
server2 : 192.168.0.2
server3 : 192.168.0.3
server4 : 192.168.0.4
[...]
Now we edit the client configuration file ON ALL SERVERS (because the servers are clients as well in this howto):
vi /etc/glusterfs/glusterfs-client.vol
2-server configuration
### Add client feature and attach to remote subvolume of server1
volume brick1
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.0.1                      # IP address of the remote brick
 option remote-subvolume brick                           # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server2
volume brick2
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.0.2                         # IP address of the remote brick
 option remote-subvolume brick                           # name of the remote volume
end-volume

### The file index on server1
volume brick1-ns
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.0.1    # IP address of the remote brick
 option remote-subvolume brick-ns        # name of the remote volume
end-volume

### The file index on server2
volume brick2-ns
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.0.2      # IP address of the remote brick
 option remote-subvolume brick-ns        # name of the remote volume
end-volume

#The replicated volume with data
volume afr1
 type cluster/afr
 subvolumes brick1 brick2
end-volume

#The replicated volume with indexes
volume afr-ns
 type cluster/afr
 subvolumes brick1-ns brick2-ns
end-volume

#The unification of all afr volumes (used for > 2 servers)
volume unify
  type cluster/unify
  option scheduler rr # round robin
  option namespace afr-ns
  subvolumes afr1
end-volume
4-server configuration
### Add client feature and attach to remote subvolume of server1
volume brick1
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.0.1    # IP address of the remote brick
 option remote-subvolume brick        # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server2
volume brick2
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.0.2      # IP address of the remote brick
 option remote-subvolume brick        # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server3
volume brick3
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.0.3                 # IP address of the remote brick
 option remote-subvolume brick                   # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server4
volume brick4
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.0.4                 # IP address of the remote brick
 option remote-subvolume brick                   # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server1
volume brick1-ns
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.0.1              #  IP address of the remote brick
 option remote-subvolume brick-ns                   # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server2
volume brick2-ns
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.0.2                 # IP address of the remote brick
 option remote-subvolume brick-ns                   # name of the remote volume
end-volume

volume afr1
 type cluster/afr
 subvolumes brick1 brick4
end-volume

volume afr2
 type cluster/afr
 subvolumes brick2 brick3
end-volume

volume afr-ns
 type cluster/afr
 subvolumes brick1-ns brick2-ns
end-volume

volume unify
  type cluster/unify
  option scheduler rr # round robin
  option namespace afr-ns
  subvolumes afr1 afr2
end-volume
And so on. For configurations beyond 4 servers, simply add brick volumes two by two, replicate each pair, and don't forget to add them to the "unify" volume.
Now mount GlusterFS on all servers in the cluster:
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
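After mounting, a couple of quick sanity checks on each server can confirm the FUSE mount is actually live (these only make sense on the cluster hosts themselves):

```shell
# Should list a glusterfs filesystem mounted on /mnt/glusterfs
df -h /mnt/glusterfs
# The fuse.glusterfs entry confirms the mount is active
grep glusterfs /proc/mounts
```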
------------------------------------------------------------------------------------
5. Testing
Once you have mounted GlusterFS at /mnt/glusterfs, you can start copying files and watch what happens. Below are my tests on 4 servers. Everything works as it should: each file in /data/export shows up on only 2 of the 4 servers, while everything is present under /mnt/glusterfs and in /data/export-ns:
server 1 (ls -la /data/export)
-rwxrwxrwx 1    marc marc 215663 2007-09-14         14:14   6-instructions2.pdf
-rwxrwxrwx 1    marc marc       2256 2008-12-18     11:54   budget.ods
-rwxr--r-- 1    marc marc 21281 2009-02-18          16:45   cv_nouveau.docx
-rwxrwxrwx 1    marc marc 13308 2009-01-26          10:49   cv.pdf
-rwxrwxrwx 1    marc marc 196375 2008-04-02         18:48   odometre.pdf
-rwxrwxrwx 1    marc marc       5632 2008-05-23     19:42   Thumbs.db

server 4 (ls -la /data/export)
-rwxrwxrwx 1    marc marc 215663 2007-09-14         14:14   6-instructions2.pdf
-rwxrwxrwx 1    marc marc       2256 2008-12-18     11:54   budget.ods
-rwxr--r-- 1    marc marc 21281 2009-02-18          16:45   cv_nouveau.docx
-rwxrwxrwx 1    marc marc 13308 2009-01-26          10:49   cv.pdf
-rwxrwxrwx 1    marc marc 196375 2008-04-02         18:48   odometre.pdf
-rwxrwxrwx 1    marc marc       5632 2008-05-23     19:42   Thumbs.db

server 2 (ls -la /data/export)
-rwxr--r-- 1    marc marc 135793 2009-02-02         15:26   bookmarks.html
-rwxrwxrwx 1    marc marc 112640 2008-11-17         21:41   cv.doc
-rwxrwxrwx 1    marc marc 13546 2007-09-11          15:43   cv.odt
-rwxrwxrwx 1    marc marc 25088 2006-07-03          17:07   menulaurentien.doc
-rwxr--r-- 1    marc marc 33734 2009-02-06          12:58   opera6.htm

server 3 (ls -la /data/export)
-rwxr--r-- 1    marc marc 135793 2009-02-02         15:26   bookmarks.html
-rwxrwxrwx 1    marc marc 112640 2008-11-17         21:41   cv.doc
-rwxrwxrwx 1    marc marc 13546 2007-09-11          15:43   cv.odt
-rwxrwxrwx 1    marc marc 25088 2006-07-03          17:07   menulaurentien.doc
-rwxr--r-- 1    marc marc 33734 2009-02-06          12:58   opera6.htm

server x (ls -la /mnt/glusterfs)
-rwxrwxrwx 1    marc marc 215663 2007-09-14         14:14   6-instructions2.pdf
-rwxr--r-- 1    marc marc 135793 2009-02-02         15:26   bookmarks.html
-rwxrwxrwx 1    marc marc       2256 2008-12-18     11:54   budget.ods
-rwxrwxrwx 1    marc marc 112640 2008-11-17         21:41   cv.doc
-rwxr--r-- 1    marc marc 21281 2009-02-18          16:45   cv_nouveau.docx
-rwxrwxrwx 1    marc marc 13546 2007-09-11          15:43   cv.odt
-rwxrwxrwx 1    marc marc 13308 2009-01-26          10:49   cv.pdf
-rwxrwxrwx 1    marc marc 25088 2006-07-03          17:07   menulaurentien.doc
-rwxrwxrwx 1    marc marc 196375 2008-04-02         18:48   odometre.pdf
-rwxr--r-- 1    marc marc 33734 2009-02-06          12:58   opera6.htm
-rwxrwxrwx 1    marc marc       5632 2008-05-23     19:42   Thumbs.db

server 1 (ls -la /data/export-ns)
-rwxrwxrwx 1 marc marc 0 2007-09-14 14:14 6-instructions2.pdf
-rwxr--r-- 1 marc marc 0 2009-02-02 15:26 bookmarks.html
-rwxrwxrwx 1 marc marc 0 2008-12-18 11:54 budget.ods
-rwxrwxrwx 1 marc marc 0 2008-11-17 21:41 cv.doc
-rwxr--r-- 1 marc marc 0 2009-02-18 16:45 cv_nouveau.docx
-rwxrwxrwx 1 marc marc 0 2007-09-11 15:43 cv.odt
-rwxrwxrwx 1 marc marc 0 2009-01-26 10:49 cv.pdf
-rwxrwxrwx 1 marc marc 0 2006-07-03 17:07 menulaurentien.doc
-rwxrwxrwx 1 marc marc 0 2008-04-02 18:48 odometre.pdf
-rwxr--r-- 1 marc marc 0 2009-02-06 12:58 opera6.htm
-rwxrwxrwx 1 marc marc 0 2008-05-23 19:42 Thumbs.db


server 2 (ls -la /data/export-ns)
-rwxrwxrwx 1    marc marc        0 2007-09-14     14:14  6-instructions2.pdf
-rwxr--r-- 1    marc marc        0 2009-02-02     15:26  bookmarks.html
-rwxrwxrwx 1    marc marc        0 2008-12-18     11:54  budget.ods
-rwxrwxrwx 1    marc marc        0 2008-11-17     21:41  cv.doc
-rwxr--r-- 1    marc marc        0 2009-02-18     16:45  cv_nouveau.docx
-rwxrwxrwx 1    marc marc        0 2007-09-11     15:43  cv.odt
-rwxrwxrwx 1    marc marc        0 2009-01-26     10:49  cv.pdf
-rwxrwxrwx 1    marc marc        0 2006-07-03     17:07  menulaurentien.doc
-rwxrwxrwx 1    marc marc        0 2008-04-02     18:48  odometre.pdf
-rwxr--r-- 1    marc marc        0 2009-02-06     12:58  opera6.htm
-rwxrwxrwx 1    marc marc        0 2008-05-23     19:42  Thumbs.db
Now let's test how redundant the setup is. Let's reboot server1 and create new files while it's down:
> /mnt/glusterfs/testfile
> /mnt/glusterfs/testfile2
> /mnt/glusterfs/testfile3
> /mnt/glusterfs/testfile4
Once server1 is back, let's check file consistency:
server 1 (ls -la /data/export)
-rwxrwxrwx 1 marc marc 215663 2007-09-14 14:14 6-instructions2.pdf
-rwxrwxrwx 1 marc marc 2256 2008-12-18 11:54 budget.ods
-rwxr--r-- 1 marc marc 21281 2009-02-18 16:45 cv_nouveau.docx
-rwxrwxrwx 1 marc marc 13308 2009-01-26 10:49 cv.pdf
-rwxrwxrwx 1 marc marc 196375 2008-04-02 18:48 odometre.pdf
-rwxrwxrwx 1 marc marc 5632 2008-05-23 19:42 Thumbs.db

server 4 (ls -la /data/export)
-rwxrwxrwx 1    marc marc 215663 2007-09-14          14:14 6-instructions2.pdf
-rwxrwxrwx 1    marc marc       2256 2008-12-18      11:54 budget.ods
-rwxr--r-- 1    marc marc 21281 2009-02-18           16:45 cv_nouveau.docx
-rwxrwxrwx 1    marc marc 13308 2009-01-26           10:49 cv.pdf
-rwxrwxrwx 1    marc marc 196375 2008-04-02          18:48 odometre.pdf
-rw-r--r-- 1    root root           0 2009-02-19     11:32 testfile
-rw-r--r-- 1    root root           0 2009-02-19     11:32 testfile3
-rwxrwxrwx 1    marc marc       5632 2008-05-23      19:42 Thumbs.db

server 1 (ls -la /data/export-ns)
-rwxrwxrwx 1 marc marc 0 2007-09-14 14:14 6-instructions2.pdf
-rwxr--r-- 1 marc marc 0 2009-02-02 15:26 bookmarks.html
-rwxrwxrwx 1 marc marc 0 2008-12-18 11:54 budget.ods
-rwxrwxrwx 1 marc marc 0 2008-11-17 21:41 cv.doc
-rwxr--r-- 1 marc marc 0 2009-02-18 16:45 cv_nouveau.docx
-rwxrwxrwx 1 marc marc 0 2007-09-11 15:43 cv.odt
-rwxrwxrwx 1 marc marc 0 2009-01-26 10:49 cv.pdf
-rwxrwxrwx 1 marc marc 0 2006-07-03 17:07 menulaurentien.doc
-rwxrwxrwx 1 marc marc 0 2008-04-02 18:48 odometre.pdf
-rwxr--r-- 1 marc marc 0 2009-02-06 12:58 opera6.htm
-rwxrwxrwx 1 marc marc 0 2008-05-23 19:42 Thumbs.db
Oops, we have an inconsistency here. To fix it, the Gluster documentation says the missing files have to be read. So let's use this simple command to read all files:
ls -lR /mnt/glusterfs/
Now, let's check what we have on server1:
server1 (ls -la /data/export)
-rwxrwxrwx 1 marc marc 215663 2007-09-14 14:14 6-instructions2.pdf
-rwxrwxrwx 1 marc marc            2256 2008-12-18 11:54 budget.ods
-rwxr--r-- 1 marc marc 21281 2009-02-18 16:45 cv_nouveau.docx
-rwxrwxrwx 1 marc marc 13308 2009-01-26 10:49 cv.pdf
-rwxrwxrwx 1 marc marc 196375 2008-04-02 18:48 odometre.pdf
-rw-r--r-- 1 root root                0 2009-02-19 11:32 testfile
-rw-r--r-- 1 root root                0 2009-02-19 11:32 testfile3
-rwxrwxrwx 1 marc marc            5632 2008-05-23 19:42 Thumbs.db

server1 (ls -la /data/export-ns)
-rwxrwxrwx 1 marc marc             0 2007-09-14 14:14 6-instructions2.pdf
-rwxr--r-- 1 marc marc             0 2009-02-02 15:26 bookmarks.html
-rwxrwxrwx 1 marc marc             0 2008-12-18 11:54 budget.ods
-rwxrwxrwx 1 marc marc             0 2008-11-17 21:41 cv.doc
-rwxr--r-- 1 marc marc             0 2009-02-18 16:45 cv_nouveau.docx
-rwxrwxrwx 1 marc marc             0 2007-09-11 15:43 cv.odt
-rwxrwxrwx 1 marc marc             0 2009-01-26 10:49 cv.pdf
-rwxrwxrwx 1 marc marc             0 2006-07-03 17:07 menulaurentien.doc
-rwxrwxrwx 1 marc marc             0 2008-04-02 18:48 odometre.pdf
-rwxr--r-- 1 marc marc             0 2009-02-06 12:58 opera6.htm
-rw-r--r-- 1 root root             0 2009-02-19 11:29 testfile
-rw-r--r-- 1 root root             0 2009-02-19 11:29 testfile2
-rw-r--r-- 1 root root             0 2009-02-19 11:29 testfile3
-rw-r--r-- 1 root root             0 2009-02-19 11:29 testfile4
-rwxrwxrwx 1 marc marc             0 2008-05-23 19:42 Thumbs.db
Now everything is as it should be.
------------------------------------------------------------------------------------
6. Conclusion
GlusterFS has a lot of potential. What you saw here is a small portion of what GlusterFS can do. As I said on the first page, this setup was not tested on a live web server, and very little testing was done. If you plan to put this on a live server and test this setup in depth, please share your experience in the forums or simply post a comment on this page. It would also be very interesting if someone could post benchmarks to see how well it scales.
Further reading : http://www.gluster.org

How to extract a table's contents from a MySQL backup file

Extract one table from mysql backup

Use this:

zcat mythconverg.sql.gz | grep "INSERT INTO \`channel\` VALUES" > channel-table.sql

Here "channel" is the table name.
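The grep one-liner only captures the INSERT rows. If you also want the table's CREATE statement, a small sketch that cuts out one table's whole section, relying on the "-- Table structure for table" comment lines that mysqldump writes (dumps produced without those comments won't work):

```shell
#!/bin/sh
# Print everything from a table's "Table structure" marker up to (but
# not including) the next table's marker, per mysqldump's comment layout.
extract_table() {
  awk -v t="$1" '
    /^-- Table structure for table/ { intable = ($0 ~ "`" t "`") }
    intable { print }
  '
}
# Usage: zcat mythconverg.sql.gz | extract_table channel > channel-table.sql
```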

Enabling and Disabling Promiscuous Mode

On both Linux and FreeBSD, we can enable or disable promiscuous mode on an interface using "ifconfig": the "promisc" parameter enables it and "-promisc" disables it. On Solaris, promiscuous mode is enabled by running "snoop".
Examples

Linux
Enable promiscuous mode of the interface eth0:
# ifconfig eth0 promisc

Disable promiscuous mode of the interface eth0
# ifconfig eth0 -promisc


FreeBSD
Enable promiscuous mode of the interface nxge0
# ifconfig nxge0 promisc

Disable promiscuous mode of the interface nxge0
# ifconfig nxge0 -promisc


Solaris
Enable promiscuous mode of the interface vxge0
# snoop -d vxge0

Quit snoop to disable promiscuous mode
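On modern Linux systems, ifconfig is deprecated in favor of iproute2, and the "ip" command offers an equivalent flag (root required; eth0 is the example interface name):

```shell
# iproute2 equivalents of the ifconfig promisc flags
ip link set dev eth0 promisc on     # enable promiscuous mode
ip link set dev eth0 promisc off    # disable promiscuous mode
ip link show dev eth0               # PROMISC appears in the flags when enabled
```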