Switch VLAN setting meanings for No, Tagged, Untagged, and Forbid.

When you add your VLAN, each port will need to be set to one of these:


Here are the definitions for each network switch VLAN setting:


Tagged: Allows the port to join multiple VLANs.


Untagged: Allows a VLAN connection to a device that is configured for an untagged VLAN instead of a tagged VLAN. A port can be an untagged member of only one port-based VLAN. A port can also be an untagged member of only one protocol-based VLAN for any given protocol type. For example, if the switch is configured with the default VLAN plus three protocol-based VLANs that include IPX, then port 1 can be an untagged member of the default VLAN and one of the protocol-based VLANs.


No: Appears when the switch is not GVRP-enabled; prevents the port from joining that VLAN.


Auto: Appears when GVRP is enabled on the switch; allows the port to dynamically join any advertised VLAN that has the same VID.


Forbid: Prevents the port from joining the VLAN, even if GVRP is enabled on the switch.
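For reference, here is a hypothetical sketch of how these assignments can look on a ProCurve-style switch CLI. The VLAN ID and port numbers are made-up examples, and the exact syntax varies by switch model and firmware:

```
; Sketch only: VLAN 20 carries untagged traffic on port 1 and tagged
; traffic on uplink port 24; port 2 is forbidden from joining the VLAN
; even if GVRP is enabled.
vlan 20
   untagged 1
   tagged 24
   forbid 2
   exit
```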

Manually changing an IP in Linux

It’s pretty easy actually.

Just go to this directory:


Then you will see a file for each network port, for example my server is:




I already have the ifcfg-eth0 set up from the install, but I want to use the second one for backups, so I just opened the file and added the IP to it. Then I made sure the rest of the settings matched the first one, other than the:


That line is specific to the network port.

So that is how you configure an eth0 or eth1 port.
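As a sketch of the kind of file described above (RHEL-style ifcfg; the addresses and MAC shown are placeholders, and HWADDR is presumably the port-specific line the author means), a static config for the second port might look like:

```
# /etc/sysconfig/network-scripts/ifcfg-eth1 -- placeholder values only.
# DEVICE and HWADDR are specific to each physical port; do not copy
# them over from ifcfg-eth0.
DEVICE=eth1
HWADDR=00:11:22:33:44:55
BOOTPROTO=static
IPADDR=192.168.1.20
NETMASK=255.255.255.0
ONBOOT=yes
```

After editing, restarting the network service (e.g. `service network restart` on older RHEL-style systems) applies the change.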


The skinny on RAID differences

We prefer RAID 5, but if you have remote backups of the server then RAID 0 will get you the best performance across several hard drives. Keep in mind that RAID can still fail even with the same data going to several hard drives, and a failed drive will slow your system down.

Hardware RAID is much faster, as it doesn’t steal RAM and CPU to be used.

Software RAID should be avoided like the plague; it will cause load issues 90% of the time.

A number of standard schemes have evolved which are referred to as levels. There were five RAID levels originally conceived, but many more variations have evolved, notably several nested levels and many non-standard levels (mostly proprietary). RAID levels and their associated data formats are standardised by SNIA in the Common RAID Disk Drive Format (DDF) standard.

Following is a brief textual summary of the most commonly used RAID levels.

RAID 0 (block-level striping without parity or mirroring) has no (or zero) redundancy. It provides improved performance and additional storage but no fault tolerance. Hence simple stripe sets are normally referred to as RAID 0. Any drive failure destroys the array, and the likelihood of failure increases with more drives in the array (at a minimum, catastrophic data loss is almost twice as likely compared to single drives without RAID). A single drive failure destroys the entire array because when data is written to a RAID 0 volume, the data is broken into fragments called blocks. The number of blocks is dictated by the stripe size, which is a configuration parameter of the array. The blocks are written to their respective drives simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking, so any error is uncorrectable. More drives in the array means higher bandwidth, but greater risk of data loss.
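The block-striping described above can be illustrated with a toy Python sketch (this is an illustration of the data layout only, not a real RAID implementation):

```python
# Toy illustration of RAID 0 block-level striping: data is split into
# fixed-size blocks and written round-robin across the drives, so a large
# read can pull from all drives in parallel. Not a real RAID driver.

def stripe(data: bytes, num_drives: int, block_size: int):
    """Distribute data blocks across drives round-robin, RAID 0 style."""
    drives = [bytearray() for _ in range(num_drives)]
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    for n, block in enumerate(blocks):
        drives[n % num_drives].extend(block)
    return drives

def unstripe(drives, block_size: int) -> bytes:
    """Reassemble the original data by reading blocks back in stripe order."""
    out = bytearray()
    offsets = [0] * len(drives)
    n = 0
    while offsets[n % len(drives)] < len(drives[n % len(drives)]):
        d = n % len(drives)
        out.extend(drives[d][offsets[d]:offsets[d] + block_size])
        offsets[d] += block_size
        n += 1
    return bytes(out)

# With a 4-byte stripe size over 2 drives, blocks alternate between drives;
# losing either drive makes the interleaved data unrecoverable.
drives = stripe(b"0123456789ABCDEF", num_drives=2, block_size=4)
```

Note that no block is ever duplicated, which is exactly why a single drive failure destroys the array.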

In RAID 1 (mirroring without parity or striping), data is written identically to multiple drives, thereby producing a “mirrored set”; at least 2 drives are required to constitute such an array. While more constituent drives may be employed, many implementations deal with a maximum of only 2; of course, it might be possible to use such a limited level 1 RAID itself as a constituent of a level 1 RAID, effectively masking the limitation. The array continues to operate as long as at least one drive is functioning. With appropriate operating system support, there can be increased read performance, and only a minimal write performance reduction; implementing RAID 1 with a separate controller for each drive in order to perform simultaneous reads (and writes) is sometimes called multiplexing (or duplexing when there are only 2 drives).

In RAID 2 (bit-level striping with dedicated Hamming-code parity), all disk spindle rotation is synchronized, and data is striped such that each sequential bit is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive.

In RAID 3 (byte-level striping with dedicated parity), all disk spindle rotation is synchronized, and data is striped so each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive.

RAID 4 (block-level striping with dedicated parity) is identical to RAID 5 (see below), but confines all parity data to a single drive. In this setup, files may be distributed between multiple drives. Each drive operates independently, allowing I/O requests to be performed in parallel. However, the use of a dedicated parity drive could create a performance bottleneck; because the parity data must be written to a single, dedicated parity drive for each block of non-parity data, the overall write performance may depend a great deal on the performance of this parity drive.

RAID 5 (block-level striping with distributed parity) distributes parity along with the data and requires all drives but one to be present to operate; the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. However, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced and the associated data rebuilt. Additionally, there is the potentially disastrous RAID 5 write hole.
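The parity reconstruction described above boils down to XOR. Here is a toy Python sketch (not a real RAID implementation, and ignoring the rotating parity layout) of how a lost block is recomputed from the survivors:

```python
# Toy illustration of RAID 5 parity: the parity block is the byte-wise XOR
# of the data blocks in a stripe, so any single missing block equals the
# XOR of the parity with the remaining blocks. Not a real RAID driver.

def xor_parity(blocks):
    """Compute the parity block as the byte-wise XOR of all given blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving_blocks, parity):
    """Recover a single missing block: XOR parity with the survivors."""
    return xor_parity(list(surviving_blocks) + [parity])

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data_blocks)

# Simulate losing the second drive and rebuilding its block on the fly,
# which is how the failure is masked from the end user.
rebuilt = reconstruct([data_blocks[0], data_blocks[2]], parity)
assert rebuilt == b"BBBB"
```

This is also why a degraded RAID 5 array is slow: every read of the failed drive's data requires reading all surviving drives and XORing the results.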

RAID 6 (block-level striping with double distributed parity) provides fault tolerance of two drive failures; the array continues to operate with up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems. This becomes increasingly important as large-capacity drives lengthen the time needed to recover from the failure of a single drive. Single-parity RAID levels are as vulnerable to data loss as a RAID 0 array until the failed drive is replaced and its data rebuilt; the larger the drive, the longer the rebuild takes. Double parity gives additional time to rebuild the array without the data being at risk if a single additional drive fails before the rebuild is complete.

Upgrading a Realtek LAN / NIC driver

The driver for the Realtek LAN / NIC card seems to be faulty when rsync is running several files through it. There is a newer driver that resolves this issue, available from the Realtek website.

Website Driver Download
Here is a direct link to the driver from our website:


Upload the driver to your root directory, then run these commands.

tar vjxf r8168-8.018.00.tar.bz2

cd r8168-8.018.00

./autorun.sh


Run this to check the driver details:

dmesg | grep 'Ethernet driver'

Readme file for the driver:

<Linux device driver for Realtek Ethernet controllers>

This is the Linux device driver released for RealTek RTL8168B/8111B, RTL8168C/8111C, RTL8168CP/8111CP, RTL8168D/8111D, RTL8168DP/8111DP, and RTL8168E/8111E Gigabit Ethernet controllers with PCI-Express interface.


<Requirement>
– Kernel source tree (supports Linux kernel 2.6.x and 2.4.x)
– For Linux kernel 2.4.x, this driver supports 2.4.20 and later.
– Compiler/binutils for kernel compilation

<Quick install with proper kernel settings>
Unpack the tarball:
# tar vjxf r8168-8.aaa.bb.tar.bz2

Change to the directory:
# cd r8168-8.aaa.bb

If you are running the target kernel, then you should be able to do :

# ./autorun.sh    (as root or with sudo)

You can check whether the driver is loaded by using the following commands.

# lsmod | grep r8168
# ifconfig -a

If there is a device name, ethX, shown on the monitor, the linux
driver is loaded. Then, you can use the following command to activate
the ethX.

# ifconfig ethX up

,where X=0,1,2,…

<Set the network related information>
1. Set manually
a. Set the IP address of your machine.

# ifconfig ethX "the IP address of your machine"

b. Set the IP address of DNS.

Insert the following configuration in /etc/resolv.conf.

nameserver "the IP address of DNS"

c. Set the IP address of gateway.

# route add default gw "the IP address of gateway"

2. Set by doing the configuration in /etc/sysconfig/network-scripts/ifcfg-ethX for Redhat and Fedora, or /etc/sysconfig/network/ifcfg-ethX for SuSE. There are two examples of setting the network information:

a. Fixed IP address:

b. DHCP:
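As a sketch of the two cases (the device name and addresses are placeholder assumptions; exact keys vary by distribution):

```
# a. Fixed IP address (placeholder values):
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes

# b. DHCP:
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
```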

<Modify the MAC address>
There are two ways to modify the MAC address of the NIC.
1. Use ifconfig:

# ifconfig ethX hw ether YY:YY:YY:YY:YY:YY

,where X is the device number assigned by Linux kernel, and
YY:YY:YY:YY:YY:YY is the MAC address assigned by the user.

2. Use ip:

# ip link set ethX address YY:YY:YY:YY:YY:YY

,where X is the device number assigned by Linux kernel, and
YY:YY:YY:YY:YY:YY is the MAC address assigned by the user.

<Force Link Status>

1. Force the link status when inserting the driver.

If the user is in the path ~/r8168, the link status can be forced
to one of the 5 modes with the following command.

# insmod ./src/r8168.ko speed=SPEED_MODE duplex=DUPLEX_MODE autoneg=NWAY_OPTION

SPEED_MODE   = 1000   for 1000Mbps
             = 100    for 100Mbps
             = 10     for 10Mbps
DUPLEX_MODE  = 0      for half-duplex
             = 1      for full-duplex
NWAY_OPTION  = 0      for auto-negotiation off (true force)
             = 1      for auto-negotiation on (nway force)
For example:

# insmod ./src/r8168.ko speed=100 duplex=0 autoneg=1

will force the PHY to operate in 100Mbps Half-duplex (nway force).

2. Force the link status by using ethtool.
a. Insert the driver first.
b. Make sure that ethtool exists in /sbin.
c. Force the link status with the following command.

# ethtool -s ethX speed SPEED_MODE duplex DUPLEX_MODE autoneg NWAY_OPTION

SPEED_MODE   = 1000   for 1000Mbps
             = 100    for 100Mbps
             = 10     for 10Mbps
DUPLEX_MODE  = half   for half-duplex
             = full   for full-duplex
NWAY_OPTION  = off    for auto-negotiation off (true force)
             = on     for auto-negotiation on (nway force)

For example:

# ethtool -s eth0 speed 100 duplex full autoneg on

will force the PHY to operate in 100Mbps Full-duplex (nway force).

<Jumbo Frame>
To transmit Jumbo Frames, whose packet size is bigger than 1500 bytes, change the MTU with the following command.

# ifconfig ethX mtu MTU

, where X=0,1,2,…, and MTU is configured by the user.

RTL8168B/8111B supports Jumbo Frame size up to 4 kBytes.
RTL8168C/8111C and RTL8168CP/8111CP support Jumbo Frame size up to 6 kBytes.
RTL8168D/8111D supports Jumbo Frame size up to 9 kBytes.