Installation of system local users using Virtualmin

You should already have Webmin installed.

Virtualmin can be downloaded in Webmin module format from:
http://download.webmin.com/download/virtualmin/virtual-server-3.55.gpl.wbm.gz
(764 kB)

The new Virtualmin framed theme in Webmin module format can be downloaded from:
http://download.webmin.com/download/virtualmin/virtual-server-theme-5.5.wbt.gz
(2.2 MB)

You can install it by going to the Webmin Configuration module,
clicking on Webmin Modules and using the first form on the page to
install the downloaded .wbm.gz file. Alternatively, install it directly
from the above URL. After installation, the module will show up in the
Servers category.

To install the theme, go to the Webmin Configuration module, click on
Webmin Themes and install the downloaded .wbt.gz file.

Once this is done, you should use the Webmin Themes page to make the
new theme the default, if your system is to be primarily used for
virtual hosting.

The same theme file can be used with Usermin too, to provide a similar
user interface style and a better framed interface for reading email.
To install it, go to the Usermin Configuration module, click on Usermin
Themes and install from the .wbt.gz file.

yum install postfix    (make sure you have SASL enabled)

Postfix configuration details (/etc/postfix/main.cf):

alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
broken_sasl_auth_clients = yes
command_directory = /usr/sbin
config_directory = /etc/postfix
daemon_directory = /usr/libexec/postfix
debug_peer_level = 2
home_mailbox = Maildir/
html_directory = no
mailq_path = /usr/bin/mailq.postfix
manpage_directory = /usr/share/man
mydestination = eshanews.com
mydomain = eshanews.com
myhostname = mail.eshanews.com
myorigin = $mydomain
newaliases_path = /usr/bin/newaliases.postfix
readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
recipient_delimiter = +
sample_directory = /usr/share/doc/postfix-2.3.3/samples
sendmail_path = /usr/sbin/sendmail.postfix
setgid_group = postdrop
smtpd_recipient_restrictions = permit_mynetworks
    permit_sasl_authenticated
    reject_unauth_destination
    reject_unauth_pipelining
    reject_invalid_hostname
smtpd_sasl_auth_enable = yes
smtpd_sasl_local_domain =
smtpd_sasl_security_options = noanonymous
unknown_local_recipient_reject_code = 550
virtual_alias_maps = hash:/etc/postfix/virtual
canonical_maps = hash:/etc/postfix/canonical
sender_canonical_maps = hash:/etc/postfix/canonical
recipient_canonical_maps = hash:/etc/postfix/canonical

Make sure that you create the .db files for the following lookup tables:

virtual_alias_maps = hash:/etc/postfix/virtual
canonical_maps = hash:/etc/postfix/canonical
sender_canonical_maps = hash:/etc/postfix/canonical
recipient_canonical_maps = hash:/etc/postfix/canonical
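
The hash: maps above are only read from their compiled .db form, so
build them with postmap and reload Postfix (a minimal sketch, assuming
the source files already exist in /etc/postfix):

postmap /etc/postfix/virtual
postmap /etc/postfix/canonical
postfix reload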

Then install SquirrelMail.
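
On CentOS this is usually just (assuming the squirrelmail package is
available in one of your configured yum repositories):

yum install squirrelmail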

Making Postfix scan incoming mail for spam

yum install spamassassin

groupadd -g 5001 spamd
useradd -u 5001 -g spamd -s /sbin/nologin -d /var/lib/spamassassin spamd
mkdir /var/lib/spamassassin
chown spamd:spamd /var/lib/spamassassin

Sample local.cf (usually /etc/mail/spamassassin/local.cf):

rewrite_header Subject [***** SPAM _SCORE_ *****]
required_score 2.0
# To be able to use _SCORE_ we need report_safe set to 0.
# If this option is set to 0, incoming spam is only modified by adding
# some "X-Spam-" headers and no changes will be made to the body.
report_safe 0

# Enable the Bayes system
use_bayes 1
use_bayes_rules 1
# Enable Bayes auto-learning
bayes_auto_learn 1

# Enable or disable network checks
skip_rbl_checks 0
use_razor2 0
use_dcc 0
use_pyzor 0
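
Before restarting, the configuration can be sanity-checked (assuming
spamassassin is in the PATH):

spamassassin --lint
# exit status 0 and no output means local.cf parsed cleanly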


Restart SpamAssassin.

Now, we need to tell postfix to use spamassassin. In our case,
spamassassin will be invoked only once postfix has finished with the
email.

To tell Postfix to use SpamAssassin, we are going to edit
/etc/postfix/master.cf and change the smtp line so that it reads as
follows (the -o line must be indented, since it is a continuation line):

smtp      inet  n       -       -       -       -       smtpd
  -o content_filter=spamassassin


and then, at the end of master.cf, let's add:

spamassassin unix -     n       n       -       -       pipe
  user=spamd argv=/usr/bin/spamc -f -e
  /usr/sbin/sendmail -oi -f ${sender} ${recipient}


Then we reload Postfix:

/etc/init.d/postfix reload

That's it!


--

Set up Postfix (SMTP only) from source

groupadd -r postfix
useradd -r -g postfix -d /no/where -s /no/shell postfix
groupadd -r postdrop


make -f Makefile.init makefiles \
  'CCARGS=-DHAS_MYSQL -I/usr/local/mysql/include/mysql -DUSE_SASL_AUTH -DUSE_CYRUS_SASL -I/usr/include/sasl -DUSE_TLS -I/usr/include/openssl' \
  'AUXLIBS=-L/usr/local/mysql/lib/mysql -lssl -lmysqlclient -lz -lm -lsasl2 -lcrypto'

make

make install
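
A quick way to confirm that MySQL map support was actually compiled in
is to list the lookup table types the new binary understands:

postconf -m | grep mysql
# "mysql" should appear among the supported map types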

netstat -tap


To start Postfix:

postfix start

Or create an init script:

vi /etc/rc.d/init.d/postfix

#!/bin/bash
#
# postfix      This script controls the postfix daemon.
#
# description: Postfix MTA
# processname: postfix

case "$1" in
  start)
        /usr/sbin/postfix start
        ;;
  stop)
        /usr/sbin/postfix stop
        ;;
  reload)
        /usr/sbin/postfix reload
        ;;
  restart)
        $0 stop
        $0 start
        ;;
  *)
        echo "Usage: $0 {start|stop|reload|restart}"
        exit 1
esac
exit 0
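
To have init use this script, make it executable. If you also want
chkconfig to manage it, the script needs a chkconfig header comment near
the top (the runlevels and priorities below are only an example):

chmod 755 /etc/rc.d/init.d/postfix
# add a line such as "# chkconfig: 2345 80 30" near the top of the script, then:
chkconfig --add postfix
chkconfig postfix on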


--

CentOS repositories link

http://mirror.centos.org/centos/5.1/os/SRPMS/

--

Using CentOS 5 Repos in RHEL 5 Server

1. Remove "yum-rhn-plugin" package from RHEL, this is used to check
the activation in RHEL.

# rpm -e yum-rhn-plugin

2. Remove the "redhat-release" related packages, this is used to check
the repositories compatibility. usually we can't remove these packages
because they are used by other packages of the system for proper
fuctioning. so we'll use the "--nodeps" parameter to forcely remove
them from the system.

# rpm -e redhat-release-notes-5Server redhat-release-5Server --nodeps

3. Download & install the "centos-release" relates packages, to fill
in the gap that we made by removing the "redhat-release" related
packages.

i386 (32 bit)
http://mirror.centos.org/centos-5/5/os/i386/CentOS/centos-release-5-1.0.el5.centos.1.i386.rpm
http://mirror.centos.org/centos-5/5/os/i386/CentOS/centos-release-notes-5.1.0-2.i386.rpm

x86_64 (64 bit)
http://mirror.centos.org/centos-5/5/os/x86_64/CentOS/centos-release-5-1.0.el5.centos.1.x86_64.rpm
http://mirror.centos.org/centos-5/5/os/x86_64/CentOS/centos-release-notes-5.1.0-2.x86_64.rpm
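
Then install the downloaded packages (i386 shown; use the x86_64 files
on a 64-bit system):

# rpm -ivh centos-release-5-1.0.el5.centos.1.i386.rpm centos-release-notes-5.1.0-2.i386.rpm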

4. To be automatically notified about updates in the GUI, do the following.

# nano /etc/yum/yum-updatesd.conf

In the file, add the following under the section "# how to send notifications":

dbus_listener = yes

5. To change the OS name shown at the console login, do the following.

# nano /etc/issue

Since we have installed the "centos-release" related packages, the OS
name will now appear as "CentOS release 5 (Final)"; delete it and type:
Red Hat Enterprise Linux Server release 5 (Tikanga)

Or any name you like.

6. Now your system is ready.

7. Read my guide on "CentOS Repositories"
--

Implementing High Availability in MySQL

MySQL provides built-in data replication functionality for maintaining identical copies of its data on one or more backend servers, thus providing a simple High Availability mechanism. On the other hand, the Open Source community has several projects that implement failover techniques, one of them being Heartbeat.

This article will show you how to implement a clustered, highly available and inexpensive solution based on GNU/Linux and combining MySQL as the database engine and Heartbeat as the failover mechanism. The configuration will consist of a 2-node active/passive cluster.

I assume you have MySQL up and running on both nodes and that you are working with MySQL 4.0.13 or above. If not, please refer to the MySQL manual and download a recent copy.

How replication works in MySQL

Replication in MySQL is very simple: one machine acts as the master server and one or more machines act as the backup servers (the replica servers). The master server keeps all changes made to its databases in binary log files, so the backup server(s) can read these files and apply the changes to its own copy of the data.

In more detail, the binary log file records all the changes (UPDATE, DELETE, INSERT…) made to the master's databases since the first time replication was configured and started. The master also creates and maintains an index file to keep track of the binary logs created. Upon connecting, the slave server(s) obtain new updates from the binary log and apply them to their copy of the data.

Note: As MySQL suggests, visit their website often to check the latest changes and improvements to its database replication implementation.

How Heartbeat works

Heartbeat is a piece of software that provides High Availability features such as monitoring the availability of the machines in the cluster, transferring the virtual IPs (more on this later) in case of failures and starting and stopping services.

The Heartbeat software running on the slave server periodically checks the health of the master server by listening to its heartbeats, sent via a null modem cable and/or a crossover ethernet cable. Note that in the best scenario the slave's main task is nothing but monitoring the health of its master. In case of a crash the slave will not receive the heartbeats from the master, and it will then take over the virtual IPs and the services offered by the master.

The overall picture

The next figure shows the layout of our cluster.

[Figure: The cluster layout]

As previously stated, our configuration will consist of a 2-node active/passive cluster: dbserv1, the master server, and dbserv2, the slave server. Both machines are linked via serial COM port /dev/ttyS0 (null modem cable) and a crossover ethernet cable (eth0), through which they send their heartbeats to each other.

The 192.168.1.103 IP address at eth1:0 is the floating IP address, the virtual IP. This is the service IP the master listens on and that will be transferred to the slave in case of a failure in the master. Requests from the application servers will be made through the virtual IP.

Both servers have another IP address that can be used to administer the machines: 192.168.1.101 and 192.168.1.102. Bear in mind that the virtual IP (192.168.1.103) is set up by Heartbeat, meaning that if Heartbeat is not up and running on the active server there will be no access to the virtual service.

Setting up replication

1. Create a replication user on the master:

mysql -u root -p

At the MySQL prompt, type:

GRANT REPLICATION SLAVE ON *.* TO replica@"%" IDENTIFIED BY 'replica_passwd';

2. Stop MySQL on both the master server and the slave server. Take a snapshot of your databases from the master.

/etc/init.d/mysql stop
tar cvzf mysqldb.tgz /path/to/your/databases

In my configuration I would…

/etc/init.d/mysql stop
tar cvzf mysqldb.tgz /var/mysql-data/*

3. Copy the data to the slave

scp /path/to/mysqldb.tgz admin@dbserv2:/path/to/your/databases

If you are using InnoDB tables, copy your tablespace file(s) and associated log files to the slave. In my case, the tablespace is called ibdata and the log files are the ib_* files. So:

scp /var/mysql-data/ibdata admin@dbserv2:/var/mysql-data
scp /var/log/mysql/ib_* admin@dbserv2:/var/log/mysql

4. Activate the binary log and assign a unique ID to the master:

vi /etc/my.cnf

Then add/change the following

[mysqld]
…..
# Enable binary logs. Path to bin log is optional
log-bin=/var/log/mysql/dbserv1
# If the binary log exceeds 10M, rotate the logs
max_binlog_size=10M
# Set master server ID
server-id=1
…..

Now you can start mysqld on the master. Watch the logs to see if there are problems.

/etc/init.d/mysql start

5. Log in on the slave.

vi /etc/my.cnf

Then add/change the following:

server-id=2
# This is eth0. Take a look at figure 1
master-host=192.168.100.1
master-user=replica
master-password=replica_passwd
# Port that master server is listening to
master-port=3306
# Number of seconds before retrying to connect to master. Defaults to 60 secs
#master-connect-retry
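
Note: these master-* options are read by MySQL 4.0/4.1 but were removed
from later MySQL releases. If your version ignores them, the same
settings can be applied from the mysql client on the slave instead (a
sketch using the values above):

CHANGE MASTER TO MASTER_HOST='192.168.100.1', MASTER_USER='replica',
  MASTER_PASSWORD='replica_passwd', MASTER_PORT=3306;
START SLAVE;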

6. Uncompress the databases

cd /path/to/your/databases
tar xvzf mysqldb.tgz

chown -R mysql.mysql /path/to/your/databases

Make sure your tablespace file(s) and associated files are in place (/path/to/your/databases in our example).

7. Start mysqld on the slave. Watch the logs to see if there are problems.

/etc/init.d/mysql start

8. Check if replication is working. For example, log in on the master, create a database and see if it is replicated on the slave:

mysql -u root -p

create database replica_test;
show databases;


+--------------+
| Database     |
+--------------+
| replica_test |
| mysql        |
| test         |
| tmp          |
+--------------+

Log in on the slave server and make sure the database replica_test is created:

mysql -u root -p
show databases;


+--------------+
| Database     |
+--------------+
| replica_test |
| mysql        |
| test         |
| tmp          |
+--------------+
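
You can also check the replication threads directly on the slave (the
output columns vary slightly between MySQL versions):

mysql -u root -p -e 'SHOW SLAVE STATUS\G'
# Slave_IO_Running and Slave_SQL_Running should both say Yes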

If you have problems, please refer to the MySQL manual.

Installing and setting up Heartbeat

Download a recent copy of Heartbeat and then, as usual…

configure
make
make install

or:

rpm -Uhv heartbeat-1.0.4-1.i386.rpm

if you downloaded the RPM based package.

Configuring heartbeat

There are three files involved in the configuration of heartbeat:

  • ha.cf: the main configuration file that describes the machines involved and how they behave.
  • haresources: this configuration file specifies virtual IP (VIP) and services handled by heartbeat.
  • authkeys: specifies authentication keys for the servers.

Sample /etc/ha.d/ha.cf

# Time between heartbeats in seconds
keepalive 1
# Node is pronounced dead after 15 seconds
deadtime 15
# Prevents the master node from re-acquiring cluster resources after a failover
nice_failback on
# Device for serial heartbeat
serial /dev/ttyS0
# Speed at which to run the serial line (bps)
baud 19200
# Port for udp (default)
udpport 694
# Use a udp heartbeat over the eth0 interface
udp eth0

debugfile /var/log/ha/ha.debug
logfile /var/log/ha/ha.log

# First node of the cluster (must match the output of uname -n)
node dbserv1
# Second node of the cluster (must match the output of uname -n)
node dbserv2

Sample /etc/ha.d/haresources

dbserv1 Ipaddress::192.168.1.103::eth1

This tells Heartbeat to set up 192.168.1.103 as the virtual IP (VIP). See figure above.

Sample /etc/ha.d/authkeys

auth 1
1 crc
2 sha1 HI!
3 md5 Hello!

This file determines the authentication keys; it must be mode 600. Since I assume that our network is relatively secure, I configure crc as the authentication method. md5 and sha1 are also available.
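
For example, on both nodes:

chmod 600 /etc/ha.d/authkeys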

Now start heartbeat on dbserv1 and then on dbserv2 and watch the logs; then stop heartbeat on the first node and see what happens on the second node. Start heartbeat again on the first node, stop it on the second, and check the logs (see the sketch below). If all is okay, you have a 2-node cluster up and running.
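
A sketch of that test, assuming the heartbeat package installed the
usual init script:

# on dbserv1, then on dbserv2
/etc/init.d/heartbeat start
tail -f /var/log/ha/ha.log

# simulate a failure of the active node and watch the log on the other one
/etc/init.d/heartbeat stop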

What we have

At this point we have a 2-node cluster with a certain degree of availability and fault tolerance. Although this could be a valid solution for non-critical environments, in really critical environments this configuration should be improved.

Advantages

  • The cluster is fault tolerant
  • The cluster is relatively secure
  • There is no single point of failure (comments?)
  • Automatic failover mechanism
  • Proven and solid Open Source software for production environments (in my experience)
  • Simple and easy to install and configure
  • Easy to administer
  • Inexpensive

Disadvantages

Our cluster presents at least one serious problem in critical environments (i.e. 99.99% availability). As you know, when the master node fails, the standby node takes over the service and the virtual IP address. In this scenario, when the master comes back online again, it will act as the stand-by node (remember nice_failback on from /etc/ha.d/ha.cf?). As our configuration has not implemented a two-way replication mechanism, the current master is not generating binary logs and the current slave is not configured to act as a replication slave. There are ways to avoid this disadvantage, but that is your homework ;-). Let me know your progress.

As usual, comments are very welcome.


--

Debian: record boot messages

Debian allows you to record boot messages by means of the bootlogd
daemon. According to its man page:

Bootlogd runs in the background and copies all strings sent to the
/dev/console device to a logfile. If the logfile is not accessible,
the messages will be kept in memory until it is.

This feature is not enabled by default. Edit /etc/default/bootlogd and
modify it to enable recording of boot messages:


# Run bootlogd at startup ?
BOOTLOGD_ENABLE=Yes

Now bootlogd will start sending boot messages to /var/log/boot.

--

How to clear your cache on squid

Stop squid:

/etc/init.d/squid stop

Then check squid.conf for the location of cache_dir (normally
/var/spool/squid), which is where swap.state lives.
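
For example (assuming the default configuration file location):

grep ^cache_dir /etc/squid/squid.conf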

We need to flush it:

# echo "" >  /var/spool/squid/swap.state

restart squid

/etc/init.d/squid start

--

How to block Google Talk (Gmail chat) without blocking Gmail on port 443

eth2 is the private network

iptables -t nat -A PREROUTING -i eth2 -d chatenabled.mail.google.com -p tcp --dport 443 -j DROP
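
Note that iptables resolves the hostname to IP addresses at the moment
the rule is inserted, so the rule may need to be re-added if Google
changes those addresses. On CentOS/RHEL the rule can be saved so it
survives a restart (assuming the stock iptables init script):

/etc/init.d/iptables save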

Then restart iptables.

--

(13)Permission denied: access to /index.php denied

[Tue Apr 08 14:36:18 2008] [error] [client 121.xx.xx.xx]
(13)Permission denied: access to /index.php denied
[Tue Apr 08 14:36:25 2008] [error] [client 121.xx.xx.xx]
(13)Permission denied: access to /index.html denied
[Tue Apr 08 14:36:30 2008] [error] [client 121.xx.xx.xx]
(13)Permission denied: access to /index.html denied


This is what I did. Your permissions should look like this:

[root@v3 user1]# ll /home/
total 16
drwx------ 2 mysql mysql 4096 2008-04-08 09:53 mysql
drwxr-xr-x 3 user1 ftp 4096 2008-04-08 12:01 user1

[root@v3 user1]# ll
total 8
drw-r--r-- 2 root root 4096 2008-04-08 13:22 www

[root@v3 user1]# ll www/
total 16
-rw-r--r-- 1 root root 44 2008-04-08 13:22 index.html
-rw-r--r-- 1 root root 171 2008-04-08 12:05 info.php

Your httpd.conf vhost should look like this:

<VirtualHost *>
DocumentRoot "/home/user1/www"
ServerName v3.managedns.org
<Directory "/home/user1/www">
AllowOverride None
order allow,deny
allow from all
Options +Indexes
</Directory>
</VirtualHost>

If this does not work, it might be an SELinux issue.

To check whether SELinux is enabled and which mode it is in (enforcing
or permissive), run getenforce or sestatus; getsebool -a lists the
individual SELinux booleans.
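
If SELinux turns out to be the problem, the usual fix is to label the
document root with the httpd content type and/or allow home directory
access (a sketch; adjust the path to match your vhost):

chcon -R -t httpd_sys_content_t /home/user1/www
setsebool -P httpd_enable_homedirs 1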

--

-ERR chdir Maildir failed

If you are getting the following error in the mail log:
Jan 9 19:17:01 test courierpop3login: chdir Maildir: No such file or directory

or if you try the following and get this error:

test:~# telnet server.sbs.com 110
Trying 192.168.0.244...
Connected to test.sbs.com.sbs.com.
Escape character is '^]'.
+OK Hello there.
user user2@mega.com
+OK Password required.
passwd user2
-ERR Invalid command.
pass user2
-ERR chdir Maildir failed
Connection closed by foreign host.


Then the first thing to check is the /etc/courier/authmysqlrc file.
Check this option:

MYSQL_HOME_FIELD "/var/spool/mail/virtual"

Also check this option:
MYSQL_MAILDIR_FIELD concat(home,'/',maildir)

or
MYSQL_MAILDIR_FIELD CONCAT(maildir,"/")
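
If those fields are correct but the Maildir itself has never been
created, you may need to create it with Courier's maildirmake and chown
it to whatever user your virtual mail store runs as (the path below is
only an example matching the settings above):

maildirmake /var/spool/mail/virtual/user2@mega.com/Maildir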

/etc/init.d/courier-authdaemon restart
/etc/init.d/courier-imap restart
/etc/init.d/courier-imap-ssl restart
/etc/init.d/courier-pop start

Then try to telnet to port 110 again.

Hope this helps!

--
