Export and import gpg keys

1. list current keys

$ gpg --list-keys
/home/yyagol/.gnupg/pubring.gpg
-------------------------------
pub 1024D/5E92C97A 2010-04-13  yyagol <[email protected]>
sub 2048g/2752CC68 2010-04-13

2. export both the public and the private key

$ gpg --output mygpgkey_pub.gpg --armor --export 5E92C97A
$ gpg --output mygpgkey_sec.gpg --armor --export-secret-key 5E92C97A

3. copy the files to the other server and then import them

$ gpg --import mygpgkey_pub.gpg
$ gpg --allow-secret-key-import --import mygpgkey_sec.gpg
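
To verify the import on the new server, you can list the keys there (the key ID is the one from step 1):

$ gpg --list-keys 5E92C97A
$ gpg --list-secret-keys 5E92C97A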

Building RAID over network shares

This How-To explains how to use remote SMB shares to build a RAID device
that can be shared back over NFS/SMB. The idea is very simple: use all the storage resources
on our LAN to build a central NFS/SMB share (which can be used for private clouds like CloudStack).
This was tested on Debian 6 x86_64, but you will find the commands are similar on all Linux distributions.
Note that this configuration is experimental and should not be used for production (yet).
1. create a device file on each share; execute on all nodes.
Change [X] to match the node number. The size is bs*count, in this case 1024M.

~# dd if=/dev/zero of=/myshare/node[X].img bs=1M count=1024

2. mount these shares on the central server; run this only on the master server.
Make a directory to hold all the SMB mounts:

~# mkdir -p /nodeshares/node{1,2,3}

mount each share into its own directory. Note that the password is the one set with the smbpasswd command:

~# mount -t cifs -o username=myname,password=1 //192.168.100.11/myshare /nodeshares/node1
~# mount -t cifs -o username=myname,password=1 //192.168.100.12/myshare /nodeshares/node2
~# mount -t cifs -o username=myname,password=1 //192.168.100.13/myshare /nodeshares/node3
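
If you want these mounts to come back after the master reboots, you could also add them to /etc/fstab. A sketch, assuming a credentials file is used instead of an inline password:

//192.168.100.11/myshare /nodeshares/node1 cifs credentials=/root/.smbcred 0 0
//192.168.100.12/myshare /nodeshares/node2 cifs credentials=/root/.smbcred 0 0
//192.168.100.13/myshare /nodeshares/node3 cifs credentials=/root/.smbcred 0 0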

3. here comes the fun part: create loop devices from the files we created in step 1.
Note that the files actually reside on remote hosts.

~# losetup -v /dev/loop1 /nodeshares/node1/node1.img
~# losetup -v /dev/loop2 /nodeshares/node2/node2.img
~# losetup -v /dev/loop3 /nodeshares/node3/node3.img
4. build a RAID device out of all the shared device files. Create a RAID level 5 array using the loop devices; the device order does matter.

~# mdadm --create /dev/md0 --level=5 --raid-disks=3 /dev/loop1 /dev/loop2 /dev/loop3

watch the build process by viewing mdstat

~# cat /proc/mdstat
5. create a filesystem on md0, then mount and export it as NFS or SMB back to the network; a sketch follows.
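
For example, you could put the filesystem directly on md0 (skipping a partition table) and export it over NFS. A minimal sketch; the mount point and export options are only examples:

~# mkfs.ext4 /dev/md0
~# mkdir -p /export/raidshare
~# mount /dev/md0 /export/raidshare
~# echo "/export/raidshare 192.168.100.0/24(rw,sync,no_subtree_check)" >> /etc/exports
~# exportfs -ra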

— Persistence —
In order to keep the RAID once the master server reboots, we need to tell the kernel
to look for MD devices and then refresh mdadm. Edit /etc/mdadm/mdadm.conf and add the loop devices by
adding this line:

DEVICE /dev/loop*

then run this command to make the newly created RAID persistent:

~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
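
the appended ARRAY line will look something like this (the name and UUID shown here are placeholders; yours will differ):

ARRAY /dev/md0 metadata=1.2 name=master:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx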

— what happens when a node restarts or gets disconnected —
mdadm will see the loop device as faulty and mark it as failed, which can be seen in mdstat.
In this example I shut down Samba on node3:

~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 loop3[3](F) loop1[0] loop2[1]
      1047552 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
unused devices: <none>

To repair it you need to remove the faulty device:

~# mdadm --manage /dev/md0 --remove /dev/loop3
mdadm: hot removed /dev/loop3 from /dev/md0

after you remount the node, you can add the device back to the array:

~# mdadm --manage /dev/md0 --add /dev/loop3
mdadm: added /dev/loop3

file as raw device

Sometimes there is a need to have a file act as a raw device. Here is a simple trick
you can use to achieve that goal (all commands should be run as root):
1. create an empty file using the dd command,
with the required size (it can be changed later on)

~# dd if=/dev/zero of=1G.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.28652 s, 327 MB/s

2. create a file system on the file using mkfs with the -F flag (force is needed because the target is not a block device)

~# mkfs.ext4 -F 1G.img
mke2fs 1.42 (29-Nov-2011)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

3. mount the file as if it were a device

~# mkdir mymount
~# mount 1G.img mymount/
~# mount |grep mymount
/tmp/1G.img on /tmp/mymount type ext4 (rw)

And that is it, nothing more to do. Now let's say you want to extend this partition/file; you can do it
with 2 simple commands, but first you need to unmount the file.
1. check the fs and clean it before resizing

~# e2fsck -f 1G.img
e2fsck 1.42 (29-Nov-2011)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
1G.img: 11/65536 files (0.0% non-contiguous), 12635/262144 blocks

2. resize the file

~# resize2fs 1G.img 2G
resize2fs 1.42 (29-Nov-2011)
Resizing the filesystem on 1G.img to 524288 (4k) blocks.
The filesystem on 1G.img is now 524288 blocks long.
~# ls -lh 1G.img
-rw-r--r-- 1 root root 2.0G Jun 19 22:31 1G.img
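
As a quick check you can mount the grown file again and look at the new size:

~# mount 1G.img mymount/
~# df -h mymount/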

pfsense under KVM with isolated LAN

In this manual I will explain some bugs and tricks for installing the pfsense firewall.
This firewall is based on FreeBSD, so some of the settings can be used for installing FreeBSD hosts as well.
We will start with creating an isolated network; this network will be used for our LAN. This step
is not mandatory, but if you want to isolate your LAN from other networks/bridges it is recommended.
In this manual I will be using the virsh command-line tool because it is faster and easier to understand.
I am running KVM under Ubuntu 11.10 64bit, but the idea is the same under any host server such as CentOS.
creating a private network:
create a template file /root/privlan.xml
all we need is to tell virsh to take the next available bridge,
no need for any other settings.

<network>
<name>privlan</name>
<bridge name="virbr%d" />
</network>

Now let's go into virsh and create the network:

root~# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # net-create /root/privlan.xml
Network privlan created from /root/privlan.xml

virsh # net-list
Name                 State      Autostart
-----------------------------------------
default              active     yes
privlan              active     no

Due to some bug, virsh doesn't create all the files needed: the network made by net-create is transient,
and as the error below shows, a transient network cannot be autostarted.
To overcome this we edit the network with net-edit inside virsh, which saves a persistent definition;
the change itself can be very small, like a name or MAC change.

virsh # net-autostart privlan
error: failed to mark network privlan as autostarted
error: Requested operation is not valid: cannot set autostart for transient network

virsh # net-edit privlan
Network privlan XML configuration edited.

virsh # net-autostart privlan
Network privlan marked as autostarted

Now we are ready to fire up the install. Using virt-install we are going to create a domain called pfsense
that has 2G of memory, 2 virtual CPUs and 2 network interfaces of model e1000.
The first interface is the isolated network we just created (this becomes em0, the LAN) and the second one is the regular default.
Note that you can change these settings as needed;
for example, some may want to use a bridge interface instead of the default NAT network.
Unlike Linux, FreeBSD cannot work with virtual disk caching and it does not support virtio;
the best performance I tested was with the scsi bus.

virt-install --connect qemu:///system -n pfsense -r 2048 --vcpus=2 \
  --disk path=/var/lib/libvirt/images/pfsense.img,size=10,cache=none,bus=scsi \
  -c /root/CD_DVD/pfSense-2.0.1-RELEASE-amd64.iso --vnc --os-variant=freebsd8 \
  --network=network:privlan,model=e1000 \
  --network=network:default,model=e1000

The pfsense install window will come up; you can go on and install, just bear in mind that em0 is your LAN.
After the install pfsense allows connections only on the LAN interface, but we created an isolated network,
so the trick here is to allow connections on the WAN interface.
When pfsense comes up, go into the shell (option 8), then edit config.xml via "viconfig".
Look for the wan interface and remove, if present, these 2 lines:

<blockpriv>
<blockbogons>

then create a new filter rule just after the any/any rule of the LAN:

<rule>
<type>pass</type>
<descr><![CDATA[Default allow WAN to any rule]]></descr>
<interface>wan</interface>
<source>
<any/>
</source>
<destination>
<any/>
</destination>
</rule>

Now just reboot pfsense; the new config refreshes automatically after saving, but reboot just to be sure.
After it comes up you can connect to the new pfsense with a browser. Just make sure to remove or adjust the rule
we just added, so that only trusted networks are allowed into pfsense.

Simple Squid based CDN

Building a self-made CDN based on a Squid server is a very simple task:
all we need to do is tell squid which domains we want to serve, and where
to get their origin files.
The main idea of a CDN is to act as a reverse proxy for the origin servers;
most of the CDN providers out there work in the same way as we are about to.
The first step is to prepare the server. In this example I used a clean squid.conf
for simplicity. This was tested on squid version 3.1 but can work with older versions
as well with small changes. So install the squid server, then edit your squid.conf file.
We are going to proxy 2 domains: www.example.com and images.example.com.
In order to let squid know the origin IPs we are going to use 2 more domains:
orig-www.example.com and orig-images.example.com. Now let's explain how it works in general:
1. the user requests www.example.com
2. DNS resolves the CNAME cdn.www.example.com
3. squid gets the request for the domain www.example.com and looks up its origin peer orig-www.example.com
4. squid reverse-proxies www.example.com by delivering the files
from orig-www.example.com back to the user while caching them for the next request.
So now we need to edit squid.conf, make squid act as an accelerator (what used to be called a transparent proxy)
and listen on port 80:

http_port 80 accel ignore-cc defaultsite=www.example.com vhost
acl mysite1 dstdomain www.example.com
acl mysite2 dstdomain images.example.com
cache_peer orig-www.example.com parent 80 0 no-query originserver name=wwwip
cache_peer orig-images.example.com parent 80 0 no-query originserver name=imgip
cache_peer_access wwwip allow mysite1
cache_peer_access imgip allow mysite2
http_access allow mysite1
http_access allow mysite2
visible_hostname cdn.example.com
cache_mem 1 GB
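
A quick way to verify the setup is to send a request to the squid box and look at the response headers; squid normally adds an X-Cache header showing cache HITs and MISSes (this assumes cdn.example.com already resolves to this server):

curl -I -H "Host: www.example.com" http://cdn.example.com/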

Analyze Oracle Database schema

Why is it important to analyze, and what is analyze anyway?
Well, analyze is a method of gathering statistics on table objects so that the optimizer
can choose the best way to execute queries.
For example, the optimizer may choose to use a full table scan or to use a table index;
it does so by looking at the table statistics.
Oracle doesn't gather statistics on schemas all by itself, and the DBA must do it
as part of database maintenance. It is wise to analyze your schema on a regular basis,
depending on the data changes. I will show you a small script that can help you analyze your schema.

#!/bin/sh
#
# This script will call dbms_stats to Analyze SCHEMA_OWNER schema
ORACLE_SID=<sid name>
ORACLE_BASE=<path to oracle base>
ORACLE_HOME=<path to oracle home>
export ORACLE_SID ORACLE_BASE ORACLE_HOME
$ORACLE_HOME/bin/sqlplus -s " / as sysdba" <<eof1
spool /tmp/Analyzing.txt
exec dbms_stats.gather_schema_stats(ownname=>'SCHEMA_OWNER',estimate_percent=>5,cascade=>true);
exit;
eof1
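
To run the analysis on a regular basis you can schedule the script with cron. A sketch, assuming the script was saved as /usr/local/bin/analyze_schema.sh, made executable, and dropped into /etc/cron.d/analyze-schema (the path and schedule are just examples):

# gather schema statistics every Sunday at 02:00, as the oracle user
0 2 * * 0 oracle /usr/local/bin/analyze_schema.sh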

convert html to csv

There are many scripts using perl, php, python etc. that will do this for you,
but the way you are about to see will make you smile at the simplicity of it.
Instead of going over the file line by line and searching inside it, I am going to use
a tool that does that for me. This tool is lynx, the console browser,
and here is how it works:

lynx -dump file_name.html

now, let's say our table looks like this:

1 2 3 4
5 6 7 8
9 10 11 12

to create a csv file from it, one would do something like this:
use the 'tr' command to squeeze all repeated spaces

tr -s " "

now , lets use sed to do the rest of the work for us .
this sed command will remove leading spaces/tabs from the beginning of the lines

sed 's/^[ \t]*//'

this sed command will put a comma "," as the delimiter instead of the space delimiter

sed 's/ /,/g'

So in the end we end up with a simple one-line command that creates a csv from an html file:

lynx -dump file_name.html | tr -s " " | sed -e 's/^[ \t]*//' -e 's/ /,/g' > file_name.csv
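
Running this on the example table above should produce something like:

1,2,3,4
5,6,7,8
9,10,11,12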

* note: the method shown here works as long as there are no spaces in the cell data

tab completion for sqlplus

Everyone who has ever worked with bash tab auto-completion
knows how fast and convenient it is to use.
But when it comes to Oracle sqlplus under Linux there is no such thing as tab completion;
in most cases even the arrow keys don't work. In the following simple steps
we will fix all that by using a tool called rlwrap.
In this post I used CentOS because it is similar to RedHat,
but before we can compile we need to solve some dependencies first.
rlwrap depends on a GNU lib named readline; let's compile that one first

yum install ncurses-devel.i386 libtermcap-devel.i386
wget ftp://ftp.gnu.org/gnu/readline/readline-6.2.tar.gz
tar -xzvf readline-6.2.tar.gz
cd readline-6.2/
./configure
make && make install
echo "/usr/local/lib/" >>/etc/ld.so.conf
ldconfig

now we are ready to compile rlwrap,

wget http://utopia.knoware.nl/~hlub/rlwrap/rlwrap-0.37.tar.gz
tar -xzvf rlwrap-0.37.tar.gz
cd rlwrap-0.37/
./configure
make && make install

After rlwrap is installed we can start using it. Note that it saves the command
history under the user's home directory, so don't log in with the password on the command line.
Using rlwrap is as simple as starting it before running sqlplus:

rlwrap sqlplus myuser/

but there will be no tab auto-completion.... well, here is the trick:
rlwrap can take a list of auto-completion words as a file.
So just create a file containing all the words you wish to auto-complete and
run:

rlwrap -f ~/my_completions sqlplus myuser/

note:
you can put the whole Oracle dictionary in that file, all your schema objects
along with all the PL/SQL commands
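
One way to fill ~/my_completions is to spool object names straight out of the database. A sketch; myuser/mypass@mydb is a placeholder connection string, and any dictionary query will do:

sqlplus -s myuser/mypass@mydb <<EOF | sort -u > ~/my_completions
set pagesize 0 feedback off heading off
SELECT object_name FROM user_objects;
exit;
EOF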

startup ubuntu in text mode

This common task turns out to be a pain in the … if you don't know how to do it.
Ubuntu, unlike other Linux distributions, moved from the traditional sysvinit to Upstart.
I will not name all the differences, just one fact: setting a service to run in
a required runlevel is not done by stop/start links in /etc/rcX.d anymore, but by Upstart job files in /etc/init.
Let's start. I use lxde as my desktop and lxdm as my desktop manager;
I would like to have no Xserver on runlevel 3 (runlevel 2 is the Ubuntu default).
In order to do so I edit the file /etc/init/lxdm.conf and set the runlevels on which I wish lxdm
to start and stop. That is done by these stanzas:

start on runlevel [2]
stop on runlevel [0136]

The "start on runlevel" stanza was not in the file, so I added it.
Now lxdm will start only on runlevel 2 and will stop on runlevels 0, 1, 3 and 6.
Now boot into the grub entry and append the runlevel to the kernel parameters:

linux /boot/vmlinuz-2.6.38-8-generic root=/dev/sda1 ro quiet 3

to make it permanent you can create a new menu entry in /boot/grub/grub.cfg
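
A new entry could look like this (a sketch built around the kernel line above; the root device and initrd path come from this example and will differ on your machine):

menuentry "Ubuntu, text mode (runlevel 3)" {
        set root=(hd0,1)
        linux /boot/vmlinuz-2.6.38-8-generic root=/dev/sda1 ro quiet 3
        initrd /boot/initrd.img-2.6.38-8-generic
}
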
You can read more on upstart at http://upstart.ubuntu.com/index.html

Rotate Oracle logs

Oracle database logs don't rotate by themselves, and as time goes by your
server may hold logs that are too big to read and take too much storage space.
This can bring your server to maximum capacity and, in some cases, crash it.
The best thing I found is to use logrotate to handle these rotations.
There are 2 files that need to be rotated (depending on your infrastructure); these files
are the alert log and the listener log. Both can grow to unlimited size.
Create new logrotate rules by editing the files
/etc/logrotate.d/oracle-alert and /etc/logrotate.d/oracle-listener
The oracle-alert file should point to the alert log, usually located under
$ORACLE_HOME/diag/rdbms/<database>/<sid>/trace/alert_<sid>.log
Here is an example of oracle-alert that rotates weekly and keeps 4 files back;
it will also compress the backups and create a new file with the correct permissions.
* note that Oracle will create a new alert log, if the file is missing, upon the next event

/opt/app/DB/diag/rdbms/example/example1/trace/alert_example1.log {
compress
rotate 4
weekly
create 640 oracle oinstall
}
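
You can dry-run the new rule to make sure it parses; logrotate's -d flag only simulates the rotation and doesn't touch the files:

~# logrotate -d /etc/logrotate.d/oracle-alert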

The next thing to handle is the listener. Now, the listener log cannot simply be removed;
if you do that, the listener will stop logging into the file. We solve this with special
pre/post commands that restart just the listener's logger.
The location of the listener log is under $ORACLE_HOME/diag/tnslsnr/<database>/listener/trace/listener.log
This example shows how to rotate weekly and compress:

/opt/app/DB/diag/tnslsnr/example/listener/trace/listener.log {
compress
rotate 4
weekly
create 640 oracle oinstall
prerotate
su - oracle -c "lsnrctl set Log_status off"
endscript
postrotate
su - oracle -c "lsnrctl set Log_status on"
endscript
}