Installation of BackupPC on CentOS 5

Following are the packages to be installed:

yum install httpd

service httpd start

Via CPAN, install the following Perl modules:

perl -MCPAN -e shell

install Compress::Zlib

install Archive::Zip

install File::RsyncP

Then install the following package:

yum install perl-suidperl

Now we add a user for BackupPC, with apache as its group:

useradd -g apache backuppc

Now we download the BackupPC source:

cd /usr/local/src
wget http://nchc.dl.sourceforge.net/sourceforge/backuppc/BackupPC-3.1.0.tar.gz

tar -xzvf BackupPC-3.1.0.tar.gz
cd BackupPC-3.1.0
perl configure.pl

The configure script prompts for these locations:

location of the config file -----------> /etc/BackupPC/config.pl
location of the hosts file ( clients to be backed up ) ---------> /etc/BackupPC/hosts
location of the bin, doc and lib files ---------> /usr/local/BackupPC
location where the data will be backed up -----------> /home/backuppc
location of your CGI bin directory -----------------> /var/www/cgi-bin
location of the image directory -----------> /var/www/html/backuppc
URL for the same -----------> /backuppc

now we will copy the init script to the right location:

cp /usr/local/src/BackupPC-3.1.0/init.d/linux-backuppc /etc/rc.d/init.d/backuppc
chmod +x /etc/rc.d/init.d/backuppc

(A) Backing up from a Linux client to the Linux BackupPC server

Set up passwordless login between the Linux client and the Linux BackupPC server.

1. On the Linux BackupPC server:

su - backuppc
ssh-keygen -t rsa -------------> this will generate the id_rsa and id_rsa.pub keys in /home/backuppc/.ssh/

Once that is done, we scp the id_rsa.pub key to the Linux client machine:

scp ~/.ssh/id_rsa.pub root@linuxclientIP:/tmp

Now we log into the Linux client PC and copy /tmp/id_rsa.pub to ~/.ssh/authorized_keys:

cp /tmp/id_rsa.pub ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Now on the Linux BackupPC server ( su - backuppc ) try to log in ( ssh root@linuxclientIP ); it should log you in without a password. You can append as many public keys as you need.

2. On the Linux client PC:

ssh-keygen -t rsa -------------> this will generate the id_rsa and id_rsa.pub keys in /root/.ssh/

Once that is done, we scp the id_rsa.pub key to the Linux BackupPC server:

scp ~/.ssh/id_rsa.pub root@LinuxBackupPCIP:/tmp

Now we log into the Linux BackupPC server as the backuppc user and copy /tmp/id_rsa.pub to ~/.ssh/authorized_keys:

cp /tmp/id_rsa.pub ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Now on the Linux client PC try to log in ( ssh backuppc@linuxBackupPCIP ); it should log you in without a password.
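The copy-and-chmod steps above can be wrapped in a small helper run as the target user. The ~/.ssh paths and permission modes are the standard OpenSSH defaults; the function name is just for illustration.

```shell
# install_key: append a copied public key to authorized_keys with the
# permissions sshd requires. Run as the user that will accept the login.
install_key() {
    key="${1:?usage: install_key /path/to/id_rsa.pub}"
    mkdir -p "$HOME/.ssh"
    chmod 700 "$HOME/.ssh"
    # Append rather than overwrite, so several client keys can coexist
    cat "$key" >> "$HOME/.ssh/authorized_keys"
    chmod 600 "$HOME/.ssh/authorized_keys"
}

# Example: install_key /tmp/id_rsa.pub
```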
Now, to secure access to the CGI script, we create an htpasswd password file:

htpasswd -c /etc/BackupPC/htpasswd client1 ( a unix system user ) ---------> enter the password when prompted

now we will add our client to the hosts file located at /etc/BackupPC/hosts:

host            dhcp    user            moreUsers    # <--- do not edit this line
#farside        0       craig           jill,jeff    # <--- example static IP host entry
#larson         1       bill                         # <--- example DHCP host entry
192.168.0.244   0       linuxclient1                 # <--- only the linuxclient1 user can back up this system from the web interface
192.168.0.209   0       backuppc                     # <--- only the backuppc user can back up this system from the web interface

we will create/edit a config file at /etc/BackupPC/pc/linuxclientPCIP.pl:

$Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l root $hostIP $rsyncPath $argList+';
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $hostIP $rsyncPath $argList+';
$Conf{XferMethod} = 'rsync';
$Conf{RsyncShareName} = ['/home','/var/log'];

Now we will edit httpd.conf so that only the particular user assigned to a client gets access to the CGI script:

<Directory "/var/www/cgi-bin">
#    AllowOverride None
#    Options None
#    Order allow,deny
#    Allow from all
    Options ExecCGI FollowSymlinks
    AddHandler cgi-script .cgi
    DirectoryIndex index.cgi
    AuthGroupFile /etc/BackupPC/htgroup
    AuthUserFile /etc/BackupPC/htpasswd
    AuthType Basic
    AuthName "backuppc"
    require valid-user
</Directory>

now we restart httpd and backuppc

/etc/init.d/httpd restart
/etc/init.d/backuppc restart

now we will try to take a backup of the Linux client PC

enter the user name and password .... this should give you access to the particular client

(B) Backing up from a Windows client to the Linux BackupPC server

we download the rsync daemon package for Windows (e.g. cwRsync) and unzip it in C:/rsyncd/
then we edit the rsyncd.conf file:
use chroot = false
max connections = 4
pid file = c:/rsyncd/rsyncd.pid
lock file = c:/rsyncd/rsyncd.lock

#this is frm where all your data will be backed up.

[cDrive0]
    path = c:/rsyncd
    comment = agnello's documents
    auth users = agnello
    secrets file = c:/rsyncd/rsyncd.secrets
#   hosts allow = 172.16.0.17
    strict modes = false
    read only = false
    list = false

[cDrive1]
    path = c:/var
    comment = agnello's documents
    auth users = agnello
    secrets file = c:/rsyncd/rsyncd.secrets
#   hosts allow = 172.16.0.17
    strict modes = false
    read only = false
    list = false

[cDrive2]
    path = c:/Documents and Settings/All Users/Documents/My Music
    comment = agnello's documents
    auth users = agnello
    secrets file = c:/rsyncd/rsyncd.secrets
#   hosts allow = 172.16.0.17
    strict modes = false
    read only = false
    list = false

Now save the file and run the service.bat file. Make sure that your Windows PC is not blocking port 873.
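Before configuring the server side, you can check from the BackupPC server that the Windows rsync daemon is actually reachable. The IP below is a placeholder for your Windows client, and nc is assumed to be installed:

```shell
# Test that the Windows rsyncd answers on port 873
# (module listing is disabled above, so just test the connection)
nc -z -w 5 192.168.0.209 873 && echo "rsyncd reachable"
```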

Now edit c:/rsyncd/rsyncd.secrets and add the following line:

agnello:agnello123

now on the Linux BackupPC server we create a config file for the Windows client

eg : vi /etc/BackupPC/pc/windowsclientip.pl

$Conf{XferMethod} = 'rsyncd';
$Conf{RsyncdUserName} = 'agnello';
$Conf{RsyncdPasswd} = 'agnello123';
$Conf{RsyncShareName} = ['cDrive0','cDrive1','cDrive2'];
$Conf{ClientCharset} = 'cp1252';

make sure that the user and password are the same as in the rsyncd.secrets file

now we will add our Windows client to the hosts file located at /etc/BackupPC/hosts on the Linux BackupPC server:

host            dhcp    user            moreUsers    # <--- do not edit this line
#farside        0       craig           jill,jeff    # <--- example static IP host entry
#larson         1       bill                         # <--- example DHCP host entry
192.168.0.244   0       linuxclient1                 # <--- only the linuxclient1 user can back up this system from the web interface
192.168.0.209   0       winclient1                   # <--- only the winclient1 user can back up this system from the web interface

now we restart backuppc

/etc/init.d/backuppc restart

now browse to http://linuxBackupPCserverIP/cgi-bin/BackupPC_Admin

enter the user name and password .... this should give you access to the particular client ( ref the /etc/BackupPC/hosts file )

Now take your regular backups
--

installing pure ftpd with mysql virtual users & unix users

mysql virtual users with unix users

Now, in case we need to store the users' passwords in the mysql database, we add the following steps

step 1: we copy pureftpd-mysql.conf from /usr/local/src/pureftpd-mysql.conf ====> to /usr/local/pureftpd/etc/.

the pureftpd-mysql.conf has the following details: 


#MYSQLServer     localhost
#MYSQLPort       3306
MYSQLSocket     /tmp/mysql.sock
MYSQLUser       root
MYSQLPassword   agnello
MYSQLDatabase   pureftpd
MYSQLCrypt      MD5
MYSQLGetPW      SELECT Password FROM ftpd WHERE User="\L"
MYSQLGetUID     SELECT Uid FROM ftpd WHERE User="\L"
MYSQLGetGID     SELECT Gid FROM ftpd WHERE User="\L"
MYSQLGetDir     SELECT Dir FROM ftpd WHERE User="\L"

step 2 : we create a database in mysql ( pureftpd )

mysql -u root -p

create database pureftpd;

use pureftpd;

CREATE TABLE ftpd (
  User VARCHAR(16) BINARY NOT NULL,
  Password VARCHAR(64) BINARY NOT NULL,
  Uid VARCHAR(11) NOT NULL default '-1',
  Gid VARCHAR(11) NOT NULL default '-1',
  Dir VARCHAR(128) BINARY NOT NULL,
  PRIMARY KEY  (User)
);

quit;

step 3: we create a directory where all our domains will be stored, say /home/website
        the permissions will be as follows ( these are just the basics )


       [root@linux-test pure-ftpd-1.0.21]# ll /home/
       drwxr-xr-x 5 root     root     4096 May 20 17:30 website

step 4 : now suppose we have to create a user for a domain called silly.com

         1. let's create a unix user

         useradd -d /home/website/silly.com -s /sbin/nologin silly

         2. then we add the virtual user ( with password ) in the mysql database ( using phpMyAdmin )

         user: silly
         password ( MD5 ): silly123
         uid : silly
         gid : silly
         dir: /home/website/silly.com
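If you prefer the command line to phpMyAdmin, the value for the Password column (MYSQLCrypt MD5 above means a plain MD5 hex digest) can be generated like this. The user, uid, gid and dir values mirror the silly.com example; the INSERT statement is only a sketch:

```shell
# Generate the MD5 hex digest pure-ftpd expects in the Password column
HASH=$(printf '%s' 'silly123' | md5sum | cut -d' ' -f1)

# The resulting INSERT statement (pipe it into `mysql -u root -p pureftpd`)
echo "INSERT INTO ftpd (User, Password, Uid, Gid, Dir)
      VALUES ('silly', '$HASH', 'silly', 'silly', '/home/website/silly.com');"
```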
 
step 5: now we start the ftpd daemon

/usr/local/pureftpd/sbin/pure-ftpd -l mysql:/usr/local/pureftpd/etc/pureftpd-mysql.conf -l unix -j /home/websites &

you can view the log with: tail -f /var/log/messages

now try to log in to ftp://192.168.0.244 with the username and password

--

installation of Pure FTPD with virtual user and system user

Installation of pureftpd
 
cd  /usr/local/src
 

./configure --prefix /usr/local/pureftpd  --with-mysql=/usr/local/mysql --with-quotas --with-altlog=/var/log/pureftpd --with-puredb

make  && make install

###################################################################

We have enabled virtual users; this means we can have thousands of users without the /etc/passwd file being touched, each mapping to a system user.


mkdir /usr/local/pureftpd/etc
touch  /usr/local/pureftpd/etc/pureftpd.passwd

####################################################################

now let's create a user

step 1 : create a unix user

useradd -d /home/website/nokia.com -s /sbin/nologin nokia


step 2: now add an ftp user ( the password of this user will be kept in a separate file )

/usr/local/pureftpd/bin/pure-pw useradd nokia -f /usr/local/pureftpd/etc/pureftpd.passwd  -u nokia -d /home/website/nokia.com -m


step 3: now start the pure-ftpd daemon

/usr/local/pureftpd/sbin/pure-ftpd   -l puredb:/usr/local/pureftpd/etc/pureftpd.pdb -j /home/websites &


---------------
few extra tips
---------------
...bin/pure-pw passwd nokia -f /usr/local/pureftpd/etc/pureftpd.passwd ---> this will change the password for nokia !!

...bin/pure-pw list -f /usr/local/pureftpd/etc/pureftpd.passwd ----> this will list all the ftp users

After changing the passwd file, regenerate the puredb file the daemon actually reads:

...bin/pure-pw mkdb /usr/local/pureftpd/etc/pureftpd.pdb -f /usr/local/pureftpd/etc/pureftpd.passwd

Logging is configured in /etc/syslog.conf; add the following line:

ftp.*    /var/log/pureftpd

################################################################



--

set correct time stamp for mails on postfix mailserver

The problem is that Postfix doesn't know what timezone you are in.
It's compounded by the fact that Postfix, for security reasons,
doesn't want to read things outside of its directory. However, you can
fix this by copying the timezone files to a directory in
/var/spool/postfix/

cd /var/spool/postfix/
sudo mkdir etc

now, take a look at /etc/localtime:

ls -la /etc/localtime
lrwxr-xr-x 1 root wheel 36 6 Aug 20:05 /etc/localtime ->
/usr/share/zoneinfo/America/New_York

copy /usr/share/zoneinfo/country/state to /var/spool/postfix/etc/

sudo cp -p /usr/share/zoneinfo/America/New_York \
/var/spool/postfix/etc/localtime

postfix check
postfix reload

Now take a look at your mail.log and see if all the times line up.


--

Installation of system local users using virtualmin

you should have webmin already installed

Virtualmin can be downloaded in Webmin module format from:
http://download.webmin.com/download/virtualmin/virtual-server-3.55.gpl.wbm.gz
(764 kB)

The new Virtualmin framed theme in Webmin module format can be downloaded from:
http://download.webmin.com/download/virtualmin/virtual-server-theme-5.5.wbt.gz
(2.2 MB)

You can install it by going to the Webmin Configuration module,
clicking on Webmin Modules and using the first form on the page to
install the downloaded .wbm.gz file, or install it directly from the
above URL. After installation the module will show up in the Servers
category.

To install the theme,
go to the Webmin Configuration module,
click on Webmin Themes and install the downloaded .wbt.gz file.

Once this is done, you should use the Webmin Themes page to make the
new theme the default, if your system is to be primarily used for
virtual hosting.

The same theme file can be used with Usermin too, to provide a similar
user interface style and a better framed interface for reading email.
To install it, go to the Usermin Configuration module, click on Usermin
Themes and install from the .wbt.gz file.

yum install postfix ( make sure you have sasl enabled )

postfix configuration details !!

alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
broken_sasl_auth_clients = yes
command_directory = /usr/sbin
config_directory = /etc/postfix
daemon_directory = /usr/libexec/postfix
debug_peer_level = 2
home_mailbox = Maildir/
html_directory = no
mailq_path = /usr/bin/mailq.postfix
manpage_directory = /usr/share/man
mydestination = eshanews.com
mydomain = eshanews.com
myhostname = mail.eshanews.com
myorigin = $mydomain
newaliases_path = /usr/bin/newaliases.postfix
readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
recipient_delimiter = +
sample_directory = /usr/share/doc/postfix-2.3.3/samples
sendmail_path = /usr/sbin/sendmail.postfix
setgid_group = postdrop
smtpd_recipient_restrictions = permit_mynetworks,
    permit_sasl_authenticated, reject_unauth_destination,
    reject_unauth_pipelining, reject_invalid_hostname
smtpd_sasl_auth_enable = yes
smtpd_sasl_local_domain =
smtpd_sasl_security_options = noanonymous
unknown_local_recipient_reject_code = 550
virtual_alias_maps = hash:/etc/postfix/virtual
canonical_maps = hash:/etc/postfix/canonical
sender_canonical_maps = hash:/etc/postfix/canonical
recipient_canonical_maps = hash:/etc/postfix/canonical

make sure you create the corresponding .db files (with postmap) for:

virtual_alias_maps = hash:/etc/postfix/virtual
canonical_maps = hash:/etc/postfix/canonical
sender_canonical_maps = hash:/etc/postfix/canonical
recipient_canonical_maps = hash:/etc/postfix/canonical
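Postfix reads the compiled .db files, not the plain-text maps, so regenerate the hashes after editing either source file. This assumes the two map files listed above exist:

```shell
# Rebuild the hash databases for each lookup table
postmap hash:/etc/postfix/virtual
postmap hash:/etc/postfix/canonical

# Reload so postfix picks up the new maps
/etc/init.d/postfix reload
```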

then install SquirrelMail !!

making postfix scan incoming mails for spam

yum install spamassassin

groupadd -g 5001 spamd
useradd -u 5001 -g spamd -s /sbin/nologin -d /var/lib/spamassassin spamd
mkdir /var/lib/spamassassin
chown spamd:spamd /var/lib/spamassassin

local.cf sample

rewrite_header Subject [***** SPAM _SCORE_ *****]
required_score 2.0
# To be able to use _SCORE_ we need report_safe set to 0.
# If this option is set to 0, incoming spam is only modified by adding
# some "X-Spam-" headers and no changes will be made to the body.
report_safe 0

# Enable the Bayes system
use_bayes 1
use_bayes_rules 1
# Enable Bayes auto-learning
bayes_auto_learn 1

# Enable or disable network checks
skip_rbl_checks 0
use_razor2 0
use_dcc 0
use_pyzor 0


restart spamassassin:

/etc/init.d/spamassassin restart

Now, we need to tell postfix to use spamassassin. In our case,
spamassassin will be invoked only once postfix has finished with the
email.

To tell postfix to use spamassassin, we are going to edit
/etc/postfix/master.cf and change the line:

smtp      inet  n       -       -       -       -       smtpd

to:

smtp      inet  n       -       -       -       -       smtpd
  -o content_filter=spamassassin


and then, at the end of master.cf, let's add:

spamassassin unix -     n       n       -       -       pipe
  user=spamd argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}


we reload postfix:

/etc/init.d/postfix reload

that's it !!


--

set up postfix - only SMTP from source

groupadd -r postfix
useradd -r -g postfix -d /no/where -s /no/shell postfix
groupadd -r postdrop


make -f Makefile.init makefiles \
  'CCARGS=-DHAS_MYSQL -I/usr/local/mysql/include/mysql -DUSE_SASL_AUTH -DUSE_CYRUS_SASL -I/usr/include/sasl -DUSE_TLS -I/usr/include/openssl' \
  'AUXLIBS=-L/usr/local/mysql/lib/mysql -lssl -lmysqlclient -lz -lm -lsasl2 -lcrypto'

make

make install

netstat -tap


to start postfix

postfix start

OR

vi /etc/rc.d/init.d/postfix

#!/bin/bash
#
# postfix      This script controls the postfix daemon.
#
# chkconfig: 2345 80 30
# description: Postfix MTA
# processname: postfix

case "$1" in
start)
/usr/sbin/postfix start
;;
stop)
/usr/sbin/postfix stop
;;
reload)
/usr/sbin/postfix reload
;;
restart)
$0 stop
$0 start
;;
*)
echo "Usage: $0 {start|stop|reload|restart}"
exit 1
esac
exit 0
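To have the script start at boot on a Red Hat-style system, you can register it with chkconfig; this assumes the script carries a chkconfig header comment (e.g. `# chkconfig: 2345 80 30`), which chkconfig requires:

```shell
# Make the init script executable and register the service
chmod +x /etc/rc.d/init.d/postfix
chkconfig --add postfix
chkconfig postfix on
```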


--

cent OS repositories link

http://mirror.centos.org/centos/5.1/os/SRPMS/

--

Using CentOS 5 Repos in RHEL 5 Server

1. Remove "yum-rhn-plugin" package from RHEL, this is used to check
the activation in RHEL.

# rpm -e yum-rhn-plugin

2. Remove the "redhat-release" related packages; these are used to check
repository compatibility. Usually we can't remove these packages
because they are used by other packages of the system for proper
functioning, so we'll use the "--nodeps" parameter to forcibly remove
them from the system.

# rpm -e redhat-release-notes-5Server redhat-release-5Server --nodeps

3. Download & install the "centos-release" related packages, to fill
in the gap that we made by removing the "redhat-release" related
packages.

i386 (32 bit)
http://mirror.centos.org/centos-5/5/os/i386/CentOS/centos-release-5-1.0.el5.centos.1.i386.rpm
http://mirror.centos.org/centos-5/5/os/i386/CentOS/centos-release-notes-5.1.0-2.i386.rpm

x86_64 (64 bit)
http://mirror.centos.org/centos-5/5/os/x86_64/CentOS/centos-release-5-1.0.el5.centos.1.x86_64.rpm
http://mirror.centos.org/centos-5/5/os/x86_64/CentOS/centos-release-notes-5.1.0-2.x86_64.rpm

4. To automatically inform about the updates in GUI, Do the following.

# nano /etc/yum/yum-updatesd.conf

In the file, type as follows under the section "# how to send notifications"

dbus_listener = yes

5. To change the OS name in the CLI login, Do the following.

# nano /etc/issue

Since we have installed the "centos-release" related packages, the OS
name will come up as "CentOS release 5 (Final)", so delete it and type
Red Hat Enterprise Linux Server release 5 (Tikanga)

Or any name you like.

6. Now your system is ready.

7. Read my guide on "CentOS Repositories"
--

Implementing High Availability in MySQL

MySQL provides a built-in data replication functionality for maintaining identical copies of its data to one or more backend servers, thus providing a simple High Availability mechanism. On the other hand, the Open Source community has several projects to implement failover techniques, being one of them Heartbeat.

This article will show you how to implement a clustered, highly available and inexpensive solution based on GNU/Linux and combining MySQL as the database engine and Heartbeat as the failover mechanism. The configuration will consist of a 2-node active/passive cluster.

I assume you have MySQL up and running on both nodes and that you are working with MySQL 4.0.13 or above. If not, please refer to the MySQL manual and download a recent copy.

How replication works in MySQL

Replication in MySQL is very simple: one machine acts as the master server and one or more machines act as the backup servers (the replica servers). The master server keeps all changes made to its databases in binary log files, so the backup server(s) can read these files and apply the changes to its own copy of the data.

In more detail, the binary log file records all the changes (UPDATE, DELETE, INSERT…) made to the master's databases since the first time replication was configured and started. The master also creates and maintains an index file to keep track of the binary logs created. Upon connecting, the slave server(s) obtain new updates from the binary log and apply them to their copy of the data.

Note: As MySQL suggests, visit their website often to check the latest changes and improvements to its database replication implementation.

How Heartbeat works

Heartbeat is a piece of software that provides High Availability features such as monitoring the availability of the machines in the cluster, transferring the virtual IPs (more on this later) in case of failures and starting and stopping services.

The Heartbeat software running on the slave server periodically checks the health of the master server by listening to its heartbeats sent via null modem cable and/or a crossover ethernet cable. Note that in the best scenario slave's main task is nothing but to monitor the health of its master. In case of a crash the slave will not receive the heartbeats from the master and then it will take over the virtual IPs and the services offered by the master.

The overall picture

Next figure shows the picture of our cluster.

The cluster layout

As previously stated, our configuration will consist of a 2-node active/passive cluster: dbserv1, the master server and dbserv2, the slave server. Both machines are linked via serial COM port /dev/ttyS0 (null modem cable) and a crossover ethernet cable (eth0), through which they send its heartbeats to each other.

The 192.168.1.103 IP address at eth1:0 is the floating IP address, the virtual IP. This is the service IP where the master listens to and that will be transferred to the slave in case of a failure in the master. Requests from the application servers will be made through the virtual IP.

Both servers have another IP address that can be used to administer the machines: 192.168.1.101 and 192.168.1.102. Bear in mind that the virtual IP (192.168.1.103) is set up by Heartbeat, meaning that if it is not up and running in the active server there will be no access to the virtual service.

Setting up replication

1. Create a replication user on the master:

mysql -u root -p

At MySQL prompt type:

GRANT REPLICATION SLAVE ON *.* TO replica@"%" IDENTIFIED BY 'replica_passwd';

2. Stop MySQL on both the master server and the slave server. Take a snapshot of your databases from the master.

/etc/init.d/mysql stop
tar cvzf mysqldb.tgz /path/to/your/databases

In my configuration I would…

/etc/init.d/mysql stop
tar cvzf mysqldb.tgz /var/mysql-data/*

3. Copy the data to the slave

scp /path/to/mysqldb.tgz admin@dbserv2:/path/to/your/databases

If you are using InnoDB tables, copy your tablespace file(s) and associated log files to the slave. In my case, the tablespace is called ibdata and the log files are those ib_*. So:

scp /var/mysql-data/ibdata admin@dbserv2:/var/mysql-data
scp /var/log/mysql/ib_* admin@dbserv2:/var/log/mysql

4. Activate the binary log and assign a unique ID to the master:

vi /etc/my.cnf

Then add/change the following

[mysqld]
…..
# Enable binary logs. Path to bin log is optional
log-bin=/var/log/mysql/dbserv1
# If the binary log exceeds 10M, rotate the logs
max_binlog_size=10M
# Set master server ID
server-id=1
…..

Now you can start mysqld on the master. Watch the logs to see if there are problems.

/etc/init.d/mysql start

5. Log in on the slave.

vi /etc/my.cnf

Then add/change the following:

server-id=2
# This is eth0. Take a look at figure 1
master-host=192.168.100.1
master-user=replica
master-password=replica_passwd
# Port that master server is listening to
master-port=3306
# Number of seconds before retrying to connect to master. Defaults to 60 secs
#master-connect-retry

6. Uncompress the databases

cd /path/to/your/databases
tar xvzf mysqldb.tgz

chown -R mysql.mysql /path/to/your/databases

Make sure your tablespace file(s) and associated files are in place (/path/to/your/databases in our example).

7. Start mysqld on the slave. Watch the logs to see if there are problems.

/etc/init.d/mysql start

8. Check if replication is working. For example, log in on the master, create a database and see if it is replicated on the slave:

mysql -u root -p

create database replica_test;
show databases;


+--------------+
| Database     |
+--------------+
| replica_test |
| mysql        |
| test         |
| tmp          |
+--------------+

Log in on the slave server and make sure the database replica_test is created:

mysql -u root -p
show databases;


+--------------+
| Database     |
+--------------+
| replica_test |
| mysql        |
| test         |
| tmp          |
+--------------+

If you have problems, please refer to MySQL manual here.
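Besides the test database, you can ask the slave directly whether its replication threads are healthy:

```shell
# On the slave: both Slave_IO_Running and Slave_SQL_Running should say Yes
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running|Last_Error'
```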

Installing and setting up Heartbeat

Download a recent copy of Heartbeat from here and then as usual….

./configure
make
make install

or:

rpm -Uhv heartbeat-1.0.4-1.i386.rpm

if you downloaded the RPM based package.

Configuring heartbeat

There are three files involved in the configuration of heartbeat:

  • ha.cf: the main configuration file that describes the machines involved and how they behave.
  • haresources: this configuration file specifies virtual IP (VIP) and services handled by heartbeat.
  • authkeys: specifies authentication keys for the servers.

Sample /etc/ha.d/ha.cf

# Time between heartbeats in seconds
keepalive 1
# Node is pronounced dead after 15 seconds
deadtime 15
# Prevents the master node from re-acquiring cluster resources after a failover
nice_failback on
# Device for serial heartbeat
serial /dev/ttyS0
# Speed at which to run the serial line (bps)
baud 19200
# Port for udp (default)
udpport 694
# Use a udp heartbeat over the eth0 interface
udp eth0

debugfile /var/log/ha/ha.debug
logfile /var/log/ha/ha.log

# First node of the cluster (must match uname -n)
node dbserv1
# Second node of the cluster (must match uname -n)
node dbserv2

Sample /etc/ha.d/haresources

dbserv1 IPaddr::192.168.1.103/24/eth1

This tells Heartbeat to set up 192.168.1.103 as the virtual IP (VIP). See figure above.

Sample /etc/ha.d/authkeys

auth 1
1 crc
2 sha1 HI!
3 md5 Hello!

This file determines the authentication keys; it must be mode 600. As I assume that our network is relatively secure, I configure crc as the authentication method; md5 and sha1 are also available.

Now start heartbeat on dbserv1 and then on dbserv2, watch the logs, then stop heartbeat on the first node and see what happens on the second node. Start heartbeat again on the first node, stop it on the second and check the logs. If all is okay, you have a 2-node cluster up and running.

What we have

At this point we have a 2-node cluster with a certain degree of availability and fault tolerance. Although this could be a valid solution for non-critical environments, in really critical environments this configuration should be improved.

Advantages

  • The cluster is fault tolerant
  • The cluster is relatively secure
  • There is no single point of failure (comments?)
  • Automatic fail over mechanism
  • Proven and solid OpenSource software for production environment (my experience)
  • Simple and easy to install and configure
  • Easy to administer
  • Inexpensive

Disadvantages

Our cluster presents at least one serious problem in critical environments (i.e. 99.99% availability). As you know, when the master node fails, the standby node takes over the service and the virtual IP address. In this scenario, when the master comes back online again, it will act as the standby node (remember nice_failback on from /etc/ha.d/ha.cf?). As our configuration has not implemented a two-way replication mechanism, the actual master is not generating binary logs and the actual slave is not configured to act as such. There are ways to avoid this disadvantage, but that is your homework ;-). Let me know your progress.

As usual, comments are very welcome.


--

Debian: record boot messages

Debian allows you to record boot messages by means of the bootlogd
daemon. According to man pages:

Bootlogd runs in the background and copies all strings sent to the
/dev/console device to a logfile. If the logfile is not accessible,
the messages will be kept in memory until it is.

This feature is not enabled by default. Edit /etc/default/bootlogd and
modify it to enable recording of boot messages:


# Run bootlogd at startup ?
BOOTLOGD_ENABLE=Yes

Now bootlogd will start sending boot messages to /var/log/boot.

--
