
Thursday, June 27, 2013

Install and Configure Zabbix - Server and Network Monitoring System

About HelpSL


We believe in connecting people to a great education so that anyone around the world can learn without limits.
HelpSL is an education organization that partners with top universities and organizations around the world to offer online courses that anyone can take, for free. Our technology enables our partners to teach millions of students rather than hundreds.

CEO’S MESSAGE

WELCOME TO HelpSL
I would like to take this opportunity to thank you for visiting the HelpSL website and for taking the time and interest to learn more about Linux administration. As you will see (or have seen already), we believe in creating a positive impact for our clients, their beneficiaries, and our staff. Planning to start something new this year? You can start learning online now; online education has been gaining popularity over the last few years.
Nuwan Vithanage (Data Center Junior System Administrator)
lk.linkedin.com/pub/nuwan-vithanage/30/930/ba5/

I am proud to announce that we have obtained Open Data Center Alliance membership.


The Story Behind Our Small Data Center Project

The project began in 2012 with two small computers, one running HTTP services and the other acting as the mail server. Everything was installed in the closet of an apartment. Two copper connections, each modem with its own static IP address, provided the bandwidth. It is fair to say that the installation was very basic, with very little bandwidth capacity.
New Servers Arrive at the Small Data Center
The project anticipates a lot of changes for 2014. First, the servers will be relocated in February; the exact date is yet to be scheduled. The move will also require a shutdown of all servers for a period of one hour.
Recent Data Center Updates
MySQL backups every three hours

So far we have backed up our customers' MySQL databases once per night. However, data in MySQL databases changes frequently, and for critical data (such as customer records, orders, and payment transactions) losing up to 24 hours of changes when restoring a database from backup is a big problem. We therefore now create a backup of your MySQL databases automatically every three hours. This ensures that, should a restore ever become necessary, you can keep working with reasonably current data.
At the same time we keep the MySQL backups for seven days, so that you can also jump back to an older state of the database if problems arise.
For the files and folders of client websites we continue to create one backup per day, since files change less frequently and are also easier to restore in an emergency (usually the site administrator can upload them again via FTP from their own computer). Again, we keep these backups for seven days.
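For readers curious how such a schedule might be implemented, here is a minimal sketch using cron and mysqldump (the paths, backup user, and password are hypothetical, not our actual setup):

# /etc/cron.d/mysql-backup -- hypothetical three-hourly dump with seven-day retention
0 */3 * * * root mysqldump --all-databases -u backup -pSECRET | gzip > /var/backups/mysql/all-$(date +\%Y\%m\%d\%H).sql.gz
# prune dumps older than seven days, once per day
30 2 * * * root find /var/backups/mysql -name '*.sql.gz' -mtime +7 -delete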
However, we would like to emphasize that it is essential that you, the customer, also create regular backups of your site and databases yourself, for the (admittedly very rare) case that our backup systems are themselves unavailable in an emergency (e.g. due to maintenance).

 Network Monitoring

  • We have off-site network ping tests, and technicians get paged with the notifications; a simple sketch of such a check appears below.
  • Monitors on the walls show live network statistics such as a live network weather map, graphs showing Mbps, PPS, switch health, DC temperature monitors, and more.
  • We also do threshold monitoring for early alerting before a problem affects service.
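As an illustration only (the address and email are made up), such an off-site ping test can be as simple as a cron-driven shell script:

#!/bin/bash
# Hypothetical off-site ping check: mail the on-call address when a host stops answering.
HOST=203.0.113.10        # example address of the monitored router
if ! ping -c 3 -W 2 "$HOST" > /dev/null 2>&1; then
    echo "ping check failed for $HOST at $(date)" | mail -s "ALERT: $HOST unreachable" oncall@example.com
fi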
Stay connected!







Wednesday, June 26, 2013

EJS and Open Source Physics: physics everywhere

Today's topic is a bit different. It is neither news nor tied exclusively to the GNU/Linux world, although it is tied to open source. Before getting straight to the point, let me open with a brief introduction. We know the advantages of having computers in our homes are enormous and that, furthermore, they can perform calculations much faster than we humans can. Modern video games are one example. You can see how the whole environment is modeled in real time so that it feels fluid, and not only that: the game also has to make our character, and every object around us, obey the laws of physics imposed on that imaginary space. All of that takes computation, to know what happens to everything in the scene at every instant.
(Sorry, I couldn't resist this image.)
Considering that we have such capacity in our hands invested in people's leisure, let us now move to an academic plane. What is physics? Broadly speaking, and in keeping with the aim of this article, we can define it as a discipline that explains what happens in nature through the language of mathematics. In other words, we can treat it as a set of equations from which we can obtain numbers, and those numbers manifest themselves in reality. As we said, computers have a great capacity for producing those numbers, processing them, and displaying them as images. We can therefore conclude that we are looking at a simulator of physical laws, which, deep down, is what video games are, except that games obey their own rules, not necessarily those of reality.
With all that said, I wanted to tell you about a site I found through an article in Linux Journal a few months ago, called Open Source Physics. Just as we were discussing, what it offers is a collection of Java applications for the various fields of physics, with the source code available, of course. What's more, you don't even have to be an expert programmer to build one of these simulations: it also provides a Java application (source available, of course) that simplifies the task of translating the physics problem into a programming language. That doesn't mean it's a walk in the park, though.
First things first: the available simulations cover many fields and problems of physics, from the very general classical mechanics you may have studied in secondary school, to relativity, quantum mechanics, and more. To start with an example, we can take one of the simplest cases: uniformly accelerated linear motion (that is, constant acceleration). By simple I mean that, once the physics problem has been posed and solved, the resulting mathematical model is not at all hard to understand or picture. If we look at the position of a moving object in this case, it turns out to be an equation whose value grows as the velocity grows, while the velocity grows as time passes. We therefore end up with an object that moves faster and faster.
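In symbols, the textbook constant-acceleration kinematics that the simulation animates (x0 being the initial position, v0 the initial velocity, and a the constant acceleration) are:

x(t) = x0 + v0·t + (1/2)·a·t²
v(t) = v0 + a·t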
One example could be a race between two vehicles. If you have run the simulation, you will have noticed that, in this case, there are no parameters to modify. Usually there are, but I wanted to use the simplest example. The game consists of guessing which of the two vehicles will reach the finish line first, given that one starts with a head start and moves at constant velocity, while the other starts at rest but has constant acceleration.
However, that isn't exactly thrilling. Things get more interesting as we look for more complex problems because, as we have seen, you don't need to know the mathematics behind them. Let's try something a bit more advanced: the slinky. The things this toy, so to call it, can do are quite interesting. There is a simulation that lets you stretch it as you like, hold it, or release it. A curious experiment is to leave it stretched and untick the 'hold' checkbox. Notice that the bottom end stays still while the top part falls. Unfortunately I don't have one to try it with, but it is a phenomenon that can be reproduced in reality.
Even so, that one we can still visualize. The problem begins when the mathematical model becomes very complicated and therefore harder to picture. One example is the Schrödinger equation, which even in its simplified version can be daunting at first. There is no need to go into detail, but it is worth mentioning, because you can manipulate the parameters that affect the plots without knowing the model perfectly, although in this case you do need some notion of what is happening (here, a wave entering a region of higher potential).
The icing on the cake is, as I mentioned earlier, the possibility of designing a simulation without being an expert programmer. That application is written in Java and is called EJS (Easy Java Simulations). I haven't tested it in depth, but from the examples provided you can see how it appears to work. First of all, there are PDF documents that explain how to find the example files (located in workspace/source/ModelingScience) and give a basic idea of how to get a program running. Building one of these simulations takes a series of steps.
First, design the model with its variables: how they vary with respect to one another (more technically, the values of their derivatives), how they are related, and a couple more options. Then you lay out an interface so the simulation is usable; it is simply a matter of dragging onto the window the elements that vary the parameters and let the user toggle options. Essentially, what building a simulation requires is knowing the model well and understanding how the application works. Although at first glance it looks somewhat complicated with so many options, it is much simpler than programming from scratch and, of course, much faster.
In conclusion, if you are interested in physics or study it, this site can be a great tool for seeing beyond the equations and better understanding what they are saying. Not having to work directly on solving the equation or equations means that people less versed in the subject can satisfy their curiosity and go beyond what they know or can do, without the mathematics getting in the way. Finally, whoever wants to go further can, for fun, use EJS to simulate some problem from secondary school or university. Although, if you know how to program, you have probably done that already ;).

Setup Local YUM Server in CentOS 6.x / RHEL 6.x / Scientific Linux 6.x


Yellowdog Updater, Modified (yum) is a software package manager that installs, updates, and removes packages on RPM-based Linux distributions. Yum makes it easier to maintain groups of machines without having to manually update each one using rpm.
Features:
  • Support for multiple repositories.
  • Simple configuration.
  • Automatic dependency calculation.
  • Fast operation.
  • RPM-consistent behavior.
  • Package group support, including multiple-repository groups.
  • Simple interface.
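To illustrate the day-to-day usage described above, the basic yum operations look like this (httpd is just an example package name):

yum install httpd     # install a package and its dependencies
yum update            # update all installed packages
yum remove httpd      # remove a package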
Yum uses an online repository by default, but you can also configure it to use a local repository of packages.
Let us set up a local yum server using CentOS 6.3. The steps provided here were tested on CentOS 6.3, but they will work fine on RHEL 6.x and Scientific Linux 6.x too.
Install CentOS 6.3 as a physical or virtual machine. I have already covered how to install Red Hat Enterprise Linux 6.0 earlier in this blog; those steps are very similar to the CentOS 6.x and Scientific Linux 6.x installation, so just follow them to install CentOS 6.3.
In this example the hostname of the server is myserver.linuxtechguru.com and IP Address is 192.168.56.101.
Log in to your system and mount the contents of your CentOS 6.3 DVD in the /mnt directory or wherever you want. In a terminal window, type the following command:
# mount /dev/cdrom1 /mnt/   (here cdrom1 is my local CD-ROM device)
Install the vsftpd package so that we can use this machine as an FTP server to share our repository with the client systems.
Change to the directory where you mounted CentOS DVD. In our example we have mounted the CentOS DVD in /mnt directory.
#cd /mnt/Packages
#rpm -ivh vsftpd-2.2.2-11.el6.i686.rpm
Start the FTP Service:
#service vsftpd start
Install the createrepo package if it is not already installed. This package is used to create our local repository.
#rpm -ivh createrepo-0.9.8-5.el6.noarch.rpm
Oops! It shows a dependency problem. Let us install the missing dependencies first:
# rpm -ivh deltarpm-3.5-0.5.20090913git.el6.i686.rpm
Then install the other one:
# rpm -ivh python-deltarpm-3.5-0.5.20090913git.el6.i686.rpm
Now install the createrepo package:
# rpm -ivh createrepo-0.9.8-5.el6.noarch.rpm
Create a folder called localyumserver (you can use your own name) in the /var/ftp/pub directory to hold all the packages from the CentOS DVD. Copy all the files in the Packages folder on the DVD to the /var/ftp/pub/localyumserver folder:
# mkdir /var/ftp/pub/localyumserver
# cp -ar * /var/ftp/pub/localyumserver
It will take a while to copy all the packages in the DVD. Please be patient. After all packages are copied, create a repo file called localyumserver.repo in /etc/yum.repos.d/ directory.
# nano /etc/yum.repos.d/localyumserver.repo
Type the following entries and save the file (CTRL+O to save and CTRL+X to exit):
[localyumserver]
comment="My Local Repository"
baseurl=file:///var/ftp/pub/localyumserver
gpgcheck=0
enabled=1
Where,
[localyumserver] ==> Name of the Local Repository.
comment ==> Information about the Repository.
baseurl ==> Path of the Repository (i.e where we had copied the contents from CentOS DVD)
gpgcheck ==> Authentication of the Repository, which is disabled in our case.
Now it is time to create our repository. Enter the following command in the Terminal:
# createrepo -v /var/ftp/pub/localyumserver
Now the local YUM repository creation process will begin.


Note: Delete or rename all the other repo files except the newly created one, i.e. localyumserver.repo in our example.
Next update the repository:
  • yum clean all
  • yum update
You’re done now.
Client side configuration:
Create a repo file in your client system in the /etc/yum.repos.d/ directory as shown above, and remove or rename the existing repositories. Then modify the baseurl as shown below:
[localyumserver]
comment ="My Local Repository"
baseurl=ftp://myserver.linuxtechguru.com/pub/localyumserver
gpgcheck=0
enabled=1
(or)
[localyumserver]
comment ="My Local Repository"
baseurl=ftp://192.168.56.101/pub/localyumserver
gpgcheck=0
enabled=1
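Once the client repo file is in place, a quick sanity check confirms that the client resolves packages from the local server (httpd is just an example package):

# yum clean all
# yum repolist
# yum install httpd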

How to Install and Configure RCP100 Routing Suite on Debian 7

Software-based routers have always played a role in the Internet, and are becoming increasingly important in data centers due to the convergence of video, mobile, and cloud services. Data traffic no longer moves simply from the subscriber into the network and then out again. Instead, most of the traffic is located inside the data center between various application servers within the network.
All this traffic can be routed easily using software-based routers running on commodity PC hardware. Such a router looks like just another server in the data center, and most of the time it is implemented using open-source software. The availability of the source code and the right to modify the software enables the unlimited tuning and optimization of the network traffic.
This article describes how to set up the RCP100 routing suite on a Debian 7 computer. RCP100 is a full OSPF/RIP router for Linux. It works on 64-bit computers, is licensed under the GPL, and is actively developed.
The computer I am setting up has two Ethernet interfaces, eth0 (192.168.20.20) and eth1 (10.1.10.1), and it is meant to connect a small private network segment (10.1.10.0/24) to the larger public network. To isolate the private network, I configure Network Address Translation on the router and enable the firewall. Computers on the private network are assigned IP addresses using DHCP. The router also provides NTP and DNS proxy services.
Network setup
Manual network configuration
Before going any further, we need to configure the network manually on our Debian box. In sharp contrast to servers and workstations, routers are configured with fixed IP addresses. In Debian the manual configuration is entered in the /etc/network/interfaces file as follows:
auto eth0
iface eth0 inet static
        address 192.168.20.20
        netmask 255.255.255.0
        gateway 192.168.20.1

auto eth1
iface eth1 inet static
        address 10.1.10.1
        netmask 255.255.255.0

192.168.20.1 is our default gateway address; all the traffic from our private network going outside will be forwarded to this IP address. To translate names to IP addresses we also need to define some DNS name servers in /etc/resolv.conf. For this example I've picked two well-known public DNS servers provided by Google; you might want to replace them with the DNS servers provided by your ISP.
nameserver 8.8.8.8
nameserver 8.8.4.4
After changing the configuration we need to restart the networking service:
$ sudo /etc/init.d/networking restart
RCP100 software installation
Download the RCP100 source code archive, then compile and install it as follows:
$ tar -xjvf rcp100-X.Y.Z.tar.bz2
$ cd rcp100-X.Y.Z
$ ./configure
$ make
$ sudo make install
The software is self-contained in the /opt/rcp directory; removing it is just a matter of deleting the directory. The router is started by running the start.sh script:
$ sudo /opt/rcp/bin/start.sh
The first time you start the software, the router detects the existing interface setup and imports it into its own configuration. You can modify it later directly in the router configuration.
Command Line Interface
RCP100 features a Cisco-style command line interface (CLI) accessible via telnet. Most commands have the same syntax as Cisco's; any differences can easily be figured out using the on-screen help system. Use rcp/rcp as the default user/password to log in.
$ telnet 127.0.0.1
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
User: rcp
Password: 
rcp> ?
  enable                      Administration mode
  exit                        Exit the current mode
  logout                      Exit the session
  no                          Negate a command or set its defaults
  ping                        Send echo messages
  show                        Show running system information
  telnet-client               Open a telnet session
  traceroute                  Trace route to destination
rcp>
rcp>enable
rcp#config
rcp(config)#
The CLI takes a little getting used to. It is, however, used by most commercial routers out there; if you can handle one of them, you can handle them all. Until then, you can rely on the documentation and the on-screen help.
In a CLI session, help can be accessed at any time using the ? key, and command completion is activated using TAB. It is not necessary to type the full command; most of the time a few letters will do.
The commands are hierarchically structured. When you log in you are in unprivileged mode, in which you cannot modify the configuration. From there you enter privileged mode using the enable command, and configuration mode using the config command. As you move from one mode to another, the prompt changes. Type exit to go back to the previous mode, and logout to end the telnet session.
CLI states
On first login it is advisable to change the default passwords for telnet and HTTP access:
rcp(config)#administrator rcp password mysupersecretpassword
rcp(config)#service http password mysupersecretpassword
The router modifies the running configuration as the commands are entered. To have the configuration stored on the hard disk and applied automatically if the computer is restarted, we need to execute the copy run start command. To display the current running configuration, the command is show configuration.
*** save configuration ***

rcp(config)#copy run start

*** display current running configuration ***

rcp(config)#show configuration
Interface configuration
Use the show interface command to display the current interface status. If you need to change the IP addresses, enter interface mode and use the ip address command. Don't forget to save the configuration using copy run start:
*** check interfaces ***

rcp#show interface 
Interface        Type         IP                      Status (admin/link)
eth0             ethernet     192.168.20.20/24        UP/UP
eth1             ethernet     10.1.10.1/24            UP/UP
lo               loopback     127.0.0.1/8             UP/UP
br0              bridge       0.0.0.0/0               DOWN/DOWN
br1              bridge       0.0.0.0/0               DOWN/DOWN

*** modify interface address ***

rcp(config)#interface ethernet eth1
rcp(config-if eth1)#ip address 10.1.10.1/24 
rcp(config-if eth1)#copy run start
rcp(config-if eth1)#exit
rcp(config)#
Static routes
Our default gateway was detected automatically and should be present in the routing table. If it is not, we can add it with the ip default-gateway command. Removing a default route is just a matter of adding no in front of the command we used to configure it; this is true for most CLI commands:
*** check routing table ***

rcp#show ip route
Codes: C - connected, S - static, R - RIP, B - blackhole, O - OSPF
IA - OSPF inter area, E1 - OSPF external type 1, E2 - OSPF external type 2

S    0.0.0.0/0[0/0] via 192.168.20.1, eth0
C    10.1.10.0/24 is directly connected, eth1
C    192.168.20.0/24 is directly connected, eth0

*** add and remove a default gateway ***

rcp(config)#ip default-gateway 192.168.20.1
rcp(config)#no ip default-gateway 192.168.20.1
To add static routes use ip route command. You will need to specify the network destination (1.2.3.0/24 in the example below) and the next hop address (192.168.20.1). Optionally, you can specify an administrative distance for this route (default 1). The smaller the administrative distance the higher the precedence of the route in the routing table.
rcp(config)#ip route 1.2.3.0/24 192.168.20.1 
rcp(config)#no ip route 1.2.3.0/24 192.168.20.1 
NAT and Firewall
The command format to enable network address translation in RCP100 is ip nat masquerade internal_network outside_interface. In our case the internal network is the private network 10.1.10.0/24, and the outside interface is eth0 (192.168.20.20):
rcp(config)#ip nat masquerade 10.1.10.0/24 eth0
Once NAT is enabled, all packets from the 10.1.10.0/24 network going outside will have their source IP address replaced with 192.168.20.20, with eth0 acting as a proxy for all computers on the internal network. None of the hosts on our internal network can be seen directly from the outside network; the only host visible from outside is the masquerading machine itself.
Even with NAT enabled, there are still cases in which our computers can be reached directly from the outside network. One such case is somebody sending packets to interface eth0 while pretending to be on the 10.1.10.0/24 network. Our router will forward these packets unless specifically told not to. This is implemented using Access Control Lists (ACLs) as follows:
rcp(config)#access-list 100 deny  10.1.10.0/24  any  
rcp(config)#access-list 100 deny  any  out-interface eth0  
rcp(config)#access-list 100 deny  any  any  new,invalid
rcp(config)#interface ethernet eth0
rcp(config-if eth0)#ip access-group 100 forward
rcp(config-if eth0)#exit
rcp(config)#
We also need to limit access to the router over telnet (port 23) and HTTP (port 80) from the outside network.
rcp(config)#access-list 101 deny tcp any  any 23 new,invalid
rcp(config)#access-list 101 deny tcp any  any 80 new,invalid
rcp(config)#interface ethernet eth0
rcp(config-if eth0)#ip access-group 101 in
rcp(config-if eth0)#exit
rcp(config)#
Services
The first service to be enabled is Network Time Protocol (NTP). We want computers on our private network to be able to synchronize the time with a local NTP server running on the router. The configuration is as follows:
ntp server nist1-nj.ustiming.org
ntp server nist1-pa.ustiming.org
ip ntp server
www.pool.ntp.org lists thousands of public NTP servers you can use for synchronization. Try to pick at least two servers close to you.
Next service on our list is Domain Name System (DNS). We will enable a DNS proxy on our router. The proxy forwards the requests to configured DNS servers (8.8.8.8 and 8.8.4.4) and maintains a cache entry for each resolved DNS query. The cached entries are used to speed up future queries. This reduces response time for DNS lookups for computers on our private network.
ip name-server 8.8.8.8
ip name-server 8.8.4.4
ip dns server
The last service to be enabled is Dynamic Host Configuration Protocol (DHCP).
rcp(config)#service dhcp
rcp(config)#ip dhcp server
rcp(dhcp server)#dns-server 10.1.10.1
rcp(dhcp server)#ntp-server 10.1.10.1
rcp(dhcp server)#network 10.1.10.0/24
rcp(dhcp 10.1.10.0/24)#range 10.1.10.50 10.1.10.250
rcp(dhcp 10.1.10.0/24)#default-router 10.1.10.1
rcp(dhcp 10.1.10.0/24)#lease 0 4 0
The lease time is set to 4 hours, and leases are assigned in the 10.1.10.50 to 10.1.10.250 range. Our interface eth1 (10.1.10.1) is advertised as the default route, NTP server, and DNS server.
Full configuration
This concludes our configuration. We need to make sure we save the configuration on hard disk in case we need to restart the router:
rcp(config)#copy run start
This is the configuration for our NAT router:
rcp#show configuration 
hostname rcp
ip name-server 8.8.8.8
ip name-server 8.8.4.4
ip dns server
!
service telnet
service http encrypted password HMNRYBDP$784691c70a0fa7af5f031d338d2b9725
administrator rcp encrypted password  URCPKGVR$AOt0VUFzM8m12f9C361Ro1
!
service dhcp
ip dhcp server
  dns-server 10.1.10.1
  ntp-server 10.1.10.1
  network 10.1.10.0/24
    range 10.1.10.50 10.1.10.250
    default-router 10.1.10.1
    lease 0 4 0
  !
!
ntp server nist1-nj.ustiming.org
ntp server nist1-pa.ustiming.org
ip ntp server
!
access-list 100 deny  10.1.10.0/24  any  
access-list 100 deny  any  out-interface eth0  
access-list 100 deny  any  any  new,invalid
access-list 101 deny tcp any  any 23 new,invalid
access-list 101 deny tcp any  any 80 new,invalid
!
ip nat masquerade 10.1.10.0/24 eth0
!
interface ethernet eth0
  ip address 192.168.20.20/24
  ip mtu 1500
  no shutdown
  ip access-group 101 in
  ip access-group 100 forward
!
interface ethernet eth1
  ip address 10.1.10.1/24
  ip mtu 1500
  no shutdown
!
interface loopback lo
  ip address 127.0.0.1/8
  ip mtu 16436
!
interface bridge br0
  ip mtu 1500
  shutdown
!
interface bridge br1
  ip mtu 1500
  shutdown
!
rcp#
HTTP access
RCP100 also provides an HTTP interface for configuration and statistics. You can access it by pointing your browser to eth1 interface address (10.1.10.1). Most of the configuration and statistics available in CLI are exposed in this interface.
HTML interface
Conclusion
The use of software-based routers has grown increasingly common. By reducing complexity, simplifying network management, eliminating vendor lock-in, and dramatically lowering the cost of the necessary hardware, software-based routers will play a critical role in scaling data center operations.
Building a router out of a regular Debian box is not exactly difficult. RCP100 is free software, and it is easy to integrate into the software stack. On a typical x86 computer today it can route packets from several 1 Gbps Ethernet interfaces at wire speed.

Install and Configure a Galera/InterWorx Cluster

This document describes the installation of a Galera/InterWorx cluster. The cluster provides the default load balancing available within InterWorx, plus MySQL load balancing through Galera clustering.
This guide assumes that you have installed your servers with CentOS 6.4 (or later), and that you have dual NICs, with one external connection (192.168.120.x) and the other in a private VLAN (172.20.0.x).

Install the Atomic & EPEL repositories

Since we need some additional packages, we have to add some repositories to the server installation.
yum -y install wget
wget http://ftp.nluug.nl/pub/os/Linux/distr/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm
wget -q -O - http://www.atomicorp.com/installers/atomic | sh
Add a line to the atomic repo, since we don't want to use it for mysql:
nano /etc/yum.repos.d/atomic.repo
[atomic]
exclude=mysql*

Monitoring tools

In this part we will set up the local monitoring tools:
yum -y install nano git iftop ntop htop mytop lynx screen gcc mutt innotop iotop mtr man perl-DBD-MySQL

Other packages

OK, now that we are able to monitor the server, we want to install the real software:
Webserver programs
yum -y install httpd clamav mysql mysql-server mysql-devel php-common php-dom php-pear php-soap php-pdo php-mysql php-devel php-gd php-ldap php-mbstring php-intl php-mcrypt phpmyadmin php-xmlrpc php-cli php-iconv php-ctype php-tokenizer aspell php-xcache xcache-admin
Common programs (especially needed if you use iscsi for Interworx later on)
yum -y install iscsi-initiator-utils lsscsi device-mapper-multipath dstat nfs-utils nfs-utils-lib

Set hostname

On all of the servers we will add the hostnames to the /etc/hosts file
echo 192.168.120.1 master.hosting.local master >> /etc/hosts
echo 192.168.120.2 slave1.hosting.local slave1 >> /etc/hosts
echo 192.168.120.3 slave2.hosting.local slave2 >> /etc/hosts
echo 192.168.120.4 slave3.hosting.local slave3 >> /etc/hosts
echo 172.20.0.1 master >> /etc/hosts
echo 172.20.0.2 slave1 >> /etc/hosts
echo 172.20.0.3 slave2 >> /etc/hosts
echo 172.20.0.4 slave3 >> /etc/hosts
Once this is done, make sure that hostname and hostname -f return the same value, and that hostname -i does not give you 127.0.0.1 or 127.0.1.1:
hostname && hostname -f

SELinux & IPTables

Disable these services by running:
service iptables stop
setenforce 0
Edit the file /etc/sysconfig/selinux so it reads:
SELINUX=disabled

Configure ntpd

Since cluster services need the correct time, we have to install ntpd (the time server daemon).
yum -y install ntp && chkconfig ntpd on
Let's put in a time server as well:
nano /etc/ntp.conf
server pool.ntp.org

Create ssh-keys (on each server)

Because all the servers must be able to communicate with each other, we have to create an ssh key on each server.
ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Now add the created ssh public keys (/root/.ssh/id_rsa.pub) of all the servers to the following file on each server:
nano ~/.ssh/authorized_keys
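Alternatively, if the ssh-copy-id utility (part of the openssh-clients package) is available, it can append the key for you instead of pasting it by hand; run it on every server, once per target host:

ssh-copy-id root@master
ssh-copy-id root@slave1
ssh-copy-id root@slave2
ssh-copy-id root@slave3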
Once this step is completed, try to log on to each server and accept storage of the host key.
ssh master  Yes
exit
ssh slave1 Yes
exit
ssh slave2 Yes
exit
ssh slave3  Yes
exit
Now restart the server:
shutdown -r now
We are ready to go a step further and start with the configuration of MySQL.
First, initialize MySQL for the first run:
mkdir /var/log/mysql
chown -R mysql:mysql /var/log/mysql
Now we can start mysql for the first time:
chkconfig mysqld on && service mysqld start

Create a MySQL restore set

To make it possible later on to install and configure InterWorx without messing up the Galera installation, we have to copy some files to a safe location (on all nodes):
service mysqld stop
mkdir /tmp/interworx /tmp/interworx/etc/
cp /etc/my.cnf /tmp/interworx/etc/my.cnf
mkdir /tmp/interworx/etc/init.d
cp /etc/init.d/mysqld /tmp/interworx/etc/init.d/mysqld
mkdir /tmp/interworx/usr /tmp/interworx/usr/bin
cp /usr/bin/mysqld_safe /tmp/interworx/usr/bin/mysqld_safe
mkdir /tmp/interworx/usr/libexec
cp /usr/libexec/mysqld /tmp/interworx/usr/libexec/mysqld
mkdir /tmp/interworx/var/ /tmp/interworx/var/lib
cp -r /var/lib/mysql /tmp/interworx/var/lib

Galera-Configuration

To configure the Galera cluster we go to the http://www.severalnines.com/galera-configurator/ website. We picked the following settings; change them according to your requirements:
Vendor:                      Codership (based on MySQL 5.5)
Infrastructure:              Other
Operating System:            RHEL6 - Redhat 6.3/Fedora/Centos 6.3/OLN 6.3/Amazon AMI
Platform:                    Linux 64-bit (x86_64)
Number of Galera  Servers:   3+1
MySQL PortNumber:            3333    (haproxy, interworx on 3306)
Galera PortNumber:           4567
Galera SST  PortNumber:      4444
SSH PortNumber:              22
OS User:                     root
MySQL Server  password:      pw4mydatabase
CMON DB password:            pw4myCMON
Firewall  (IPTables):        Disable
System Memory:               16 Gb
WAN:                         no
Skip DNS Resolve:            no
Database Size:               < 8GB
MySQL Usage:                 High write/High read
Number of cores:             16
Max connections per  server: 1500
Innodb_buffer_pool_size:     11319
Innodb_file_per_table:       yes
ClusterControl  Server:      172.20.0.1
Apache user:                 apache
WWWROOT:                     /var/www/html/
Config  directory:           /etc/
Server-ID 1:                 172.20.0.2
Datadir:                     /var/lib/mysql 
Server-ID 2:                 172.20.0.3
Server-ID  3:                172.20.0.4
Email address:               your@email.com
Click on Generate Deployment Scripts and retrieve them from your mailbox shortly after.

Install Galera Cluster

Now we are ready to install the Galera cluster software:
cd /usr/local/src/
wget http://www.severalnines.com/galera-configurator/tmp/bhhrgonbsft2e3o2vhlt0ip9v0/s9s-galera-2.2.0-rpm.tar.gz
tar xvfz s9s-galera-2.2.0-rpm.tar.gz
cd s9s-galera-2.2.0-rpm/mysql/scripts/install
Before running the installer, change the following:
echo "local_mysql_port=3333" >> ../../config/cmon.cnf.agent
echo "local_mysql_port=3333" >> ../../config/cmon.cnf.controller
echo "local_mysql_port=3333" >> ../../config/cmon_rrd.cnf
and add the following line to the files ../../config/my.cnf and ../../config/my.cnf.cmon:
[MYSQLD]
old-passwords=1
Now run the installer:
./deploy.sh  2>&1  | tee cc.log
First the setup will try to ping all nodes. If that succeeds the setup will start. Since we disabled SELinux and configured the SSH keys, we can answer the prompts as follows:
Do you want to set SELinux to Permissive mode? n
Can you SSH from this host to all other hosts without password? y
After successfully configuring the Galera cluster, there should be a ClusterMonitor (cmon) page you can connect to on the master node at:
http://192.168.120.1/cmon

JPGraph

Some pages require JPGraph; install it on the master:
cd /usr/local/src
wget http://jpgraph.net/download/download.php?p=5
mv download.php\?p\=5 jpgraph.tar.gz
tar xvfz jpgraph.tar.gz -C /var/www/html/cmon
cd /var/www/html/cmon
ln -s jpgraph-3.5.0b1 jpgraph
chown apache:apache -R jpgraph*

Change MySQL.cnf (configure UTF-8 etc.)

Since we want to use the UTF-8 character set in our databases, we add these options to the [MYSQLD] section of the file:
# Add some interworx settings
#symbolic-links=0
# define default character sets
collation-server = utf8_unicode_ci
init-connect='SET NAMES utf8'
character-set-server = utf8
# logging
slow_query_log = 1
slow-query-log-file=slow-queries.log
long_query_time = 10
# other things, buffers etc.
myisam-sort-buffer-size = 8M
skip-name-resolve
memlock=0
event_scheduler=1
Furthermore we change the environment variables to UTF-8:
echo "LANG=en_US.utf-8" >> /etc/environment
echo "LC_ALL=en_US.utf-8" >> /etc/environment
Now we can restart MySQL to make this setting active for the databases that will be created in the future.
service mysql restart
Now we have to correct some installer files. Since mysql-libs is removed by the Galera setup, we have to use the --excludepath option to get around the following error:
file /usr/share/mysql/russian/errmsg.sys from install of MySQL-server-5.5.28_wsrep_23.7-1.rhel5.x86_64 conflicts with file from package mysql-libs-5.1.67-1.el6_3.x86_64
Download the mysql-libs rpm file and install it with the --excludepath option (which isn't available when using yum):
cd /usr/local/src
wget ftp://mirror.switch.ch/pool/1/mirror/scientificlinux/6rolling/x86_64/os/Packages/mysql-libs-5.1.67-1.el6_3.x86_64.rpm
rpm -Uvh mysql-libs-5.1.67-1.el6_3.x86_64.rpm --excludepath=/usr/share/mysql/
Reinstall the packages that were removed because of the mysql-libs dependencies:
yum -y install nagios-plugins-all perl-DBD-MySQL innotop mytop

HAProxy setup

The next step is installing the HAProxy setup, which uses cmon of the Galera cluster to show its status information. On the master server we have to run the following commands:
# the installer needs to be extracted inside the galera installation folder!!!
cd /usr/local/src/s9s-galera-2.2.0-rpm/mysql/scripts/install
wget http://severalnines.com/downloads/s9s-haproxy.tar.gz
tar zxvf s9s-haproxy.tar.gz
cd haproxy
Before installing, let's change some settings inside the following files:
install-haproxy.sh:           HAPROXY_MYSQL_LISTEN_PORT="3306"
makecfg.sh:                  "\tserver ${SPLIT[0]}  ${SPLIT[0]}:3333 check $STAT"
mysqlchk.sh.galera           MYSQL_PORT="3333"
mysqlchk.sh.mysqlcluster     MYSQL_PORT="3333"
Now run the installer:
./install-haproxy.sh 172.20.0.1 rhel galera
When the setup has completed without errors, you have to grant the installer host rights to the mysql instances on each server:
mysql -uroot -ppw4mydatabase
GRANT ALL ON *.* TO 'root'@'172.20.0.1' IDENTIFIED BY 'pw4mydatabase';
FLUSH PRIVILEGES;
exit
There is something that will probably go wrong though, related to the Defaults requiretty setting. You may get an error message like:
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: sorry, you must have a tty to run sudo
If this happens edit the sudoers file (on all servers) by using:
visudo
and change the line
Defaults    requiretty
into
#Defaults    requiretty
Now it should be possible to connect to the load-balanced cluster at the following address:
mysql -h192.168.120.1 -P3306 -uroot -ppw4mydatabase
Check this from the slave nodes to make sure.
You can connect to the HAProxy status page at http://192.168.120.1:9600
(default admin/admin). You can change the username/password by editing the file:
nano /etc/haproxy/haproxy.cfg
userlist STATSUSERS
group admin users admin
user admin insecure-password pw4adminHA
user stats insecure-password pw4userHA
Now we are ready to install Interworx.

InterWorx-ControlPanel

Before installing, stop haproxy, mysql, and cmon on the nodes:
service mysql stop && service cmon stop && service haproxy stop
Now we start moving some files around to restore the default MySQL environment (we have /var/lib/mysql on separate disks; if you don't, make a copy of the directories and restore them after installing InterWorx):
umount /var/lib/mysql
cp -r /tmp/interworx/var/lib/mysql /var/lib
chown -R mysql:mysql /var/lib/mysql
cp /tmp/interworx/etc/init.d/mysqld /etc/init.d/mysqld
chmod 755 /etc/init.d/mysqld
cp /tmp/interworx/usr/libexec/mysqld /usr/libexec/mysqld
chmod 755 /usr/libexec/mysqld
mkdir /var/run/mysqld
chown mysql:mysql /var/run/mysqld
mv /usr/bin/mysqld_safe /usr/bin/mysqld_safe.galera
cp /tmp/interworx/usr/bin/mysqld_safe /usr/bin/mysqld_safe
chmod 755 /usr/bin/mysqld_safe
mv /etc/my.cnf /etc/galera.cnf
cp /tmp/interworx/etc/my.cnf /etc/my.cnf
mv /etc/init.d/mysql /etc/init.d/mysql.org
Edit the file
nano /etc/my.cnf
[mysqld]
old-passwords=1
Since InterWorx and Galera remove and install lots of packages, we have to take some precautions. By default it isn't possible to combine the two packages; the earlier steps (creating a backup set of the default MySQL files) work around this problem.
We first have to download the installer file and modify it:
cd /usr/local/src && wget -q http://updates.interworx.com/iworx/scripts/iworx-cp-install.sh
Now we have to comment out line 321 to prevent InterWorx from messing up the MySQL install:
nano iworx-cp-install.sh
# mysqlinstall
Then we can save the file and start the installer.
sh ./iworx-cp-install.sh
Press <enter> to begin the install…
After a while, you will get:
-=-=-=-=-= ALL DONE! THANK YOU! FOR USING InterWorx-CP =-=-=-=-=-
After installation it is good to check the logs at /home/interworx/var/log/error.log.
If everything is OK, you'll want to activate your InterWorx-CP license.
Register InterWorx now using the website, since the command line doesn't seem to work properly. Go to http://<public IP address>:2080/nodeworx and enter the registration information:

After a while, you will probably get a blank page. To verify that the setup completed, you can test it with the command-line method.

Scripted Activation Procedure

Run the goiworx.pex script with the following parameters (replace the placeholders with your license key, email address, and password):
# Note: Make sure the below is all one line. Replace LICENSE_KEY, EMAIL, and PASSWORD.
#~iworx/bin/goiworx.pex --key=INTERWORX_YOURKEY --email=your@email.com --password=pw4yourIworx --ignorechecks
Now go to http://<public IP address>/nodeworx and enter your login credentials. Accept the license agreement (sometimes you have to do this twice). Once logged on, you will be redirected to the page where you configure your DNS servers:

Enter the requested primary/secondary DNS servers and click Update. Click System Services on the left side, followed by MySQL Server and Overview. Here we want to configure our Galera MySQL root password.

Click Save again, log out, and empty the browser cache, including cookies.
If you have external nameservers, restore the original /etc/resolv.conf
echo "nameserver mynameserverip1" > /etc/resolv.conf
echo "nameserver mynameserverip2" >> /etc/resolv.conf
We have now configured the basic part of InterWorx and are ready to move accounts into the Galera environment. Repeat these steps for the other servers.
We should now be able to connect to http://<master public IP address>/nodeworx and log on to InterWorx. Of course we also have to log on to the slave servers later on.
After logging in to InterWorx on the master server, we want to configure the slave nodes:

Click on the setup button underneath Interworx-CP Cluster Manager.

Choose the Quorum IP Address (HeartBeat LAN) and click Setup. We are now going to add the slave nodes.
Since we need the API key of each slave server, we log on to the slave servers, click NodeWorx in the left menu, click API Key, and then click Generate in the right-side panel.

After you click on generate you will see an API-key being generated.

Copy this key and go back to the master server.

Add the slaves by their IP address, since the heartbeat node isn't separately available in DNS. Paste the API key, run the test first, and then add the API key. If all of this works and cmon is also green, we are ready to go further. We start by rolling back the MySQL environment to the Galera environment. First, stop the services on all servers!
service iworx stop && service mysqld stop
Now we roll back the Galera MySQL environment (on all servers). Again: we use separate disks for mysql; if you don't, use the copies you made before.
mv /etc/my.cnf /etc/interworx.cnf
mv /etc/galera.cnf /etc/my.cnf
mv /usr/bin/mysqld_safe /usr/bin/mysqld_safe.interworx
mv /usr/bin/mysqld_safe.galera /usr/bin/mysqld_safe
rm -rf /var/run/mysqld
rm -rf /usr/libexec/mysqld
rm -rf /etc/init.d/mysqld
mv /var/lib/mysql /var/lib/mysql.iworx
mkdir /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql/
mount -a
mv /etc/init.d/mysql.org /etc/init.d/mysql
Now start the services on all servers:
service mysql start && service cmon start && service iworx start && service haproxy start
Wait until cmon recovers again!
It is possible that MySQL won't start the first time; stop and start it until it works normally. There could be a leftover pid file or subsys file that you have to remove first. Once it works normally, we can remove some extra files:
rm -rf /var/lib/mysql.iworx /usr/bin/mysqld_safe.interworx /tmp/interworx
When all servers are done, we have to insert the iworx users into the Galera MySQL databases. On the master server and Galera cluster we run the following MySQL statements (these values are available from InterWorx if you install without the modified installer):
mysql --user=root  --password=pw4mydatabase
INSERT INTO  `mysql`.`user` VALUES ('localhost','iworx','[your iworx password in old password format]','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','','','','',0,0,0,0,'',NULL),
('127.0.0.1','iworx','[your iworx password in old password  format]','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','','','','',0,0,0,0,'',NULL),
('172.20.0.2','iworx','[your iworx password in old password  format]','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','','','','',0,0,0,0,'',NULL),
('172.20.0.3','iworx','[your iworx password in old password  format]','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','','','','',0,0,0,0,'',NULL),
('172.20.0.4','iworx','[your iworx password in old password  format]','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','','','','',0,0,0,0,'',NULL),
('172.20.0.1','iworx','[your iworx master password in old password  format]','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','','','','',0,0,0,0,'',NULL);
GRANT ALL ON *.* TO 'iworx'@'172.20.0.1' IDENTIFIED BY 'pw4mydatabase';
FLUSH PRIVILEGES;
exit
Wait until it is up and running and in sync on the web page http://<master IP address>/cmon (use Firefox, since it updates the page better than IE).
Let's start iworx again:

5 Steps to Secure your SSH Server

SSH is the standard method for admins to connect to Linux servers securely. But the default SSH server install is far from perfect and may allow attackers to hack your server. This guide shows you how to secure your SSH server in a few steps.

1. Use Strong SSH Passwords

Try to make all your passwords more secure by following these rules:
  • Use a minimum of 8 characters
  • Use upper and lower case letters
  • Also use numbers in your password
  • Use special characters like #$&*
Linux also has a password generator called pwgen. Install it with the following command:
apt-get install pwgen
The pwgen command will generate a list of 8-character passwords. You can use the man page to find more options.
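For example, to generate a single fully random 12-character password (the -s flag asks for secure, non-pronounceable output):

pwgen -s 12 1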

2. Disable SSH root logins

To disable root logins, edit the sshd_config file located in the /etc/ssh/ directory:
# Prevent root logins:
PermitRootLogin no
Then restart the SSH server:
/etc/init.d/ssh restart

3. Change the SSH Port on the server

By changing the default port you will make the SSH server more secure and reduce the number of brute-force attacks.
Open the sshd_config file again:
# What ports, IPs and protocols we listen for
Port 22333 (or any port you want)

4. Only Allow specific Users to connect over SSH

You can do this by adding the following line to sshd_config file:
AllowUsers debiantuts

5. Change SSH login grace time

By changing this you control how long unauthenticated connections are left open. In Debian this is set to 120 seconds by default.
# Authentication:
LoginGraceTime 30
NOTE: After any change you make to the sshd_config file, you need to restart your SSH server.
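Putting the five steps together, the relevant lines of a hardened /etc/ssh/sshd_config would look roughly like this (the port number and user name are the examples used above):

Port 22333
PermitRootLogin no
AllowUsers debiantuts
LoginGraceTime 30

Then restart the SSH server once more:
/etc/init.d/ssh restart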

Hacking Web Services with Burp

WSDL (Web Services Description Language) files are XML formatted descriptions about the operations of web services between clients and servers. They contain possible requests along with the parameters an application uses to communicate with a web service. This is great for penetration testers because we can test and manipulate web services all we want using the information from WSDL files.
One of the best tools for working with HTTP requests and responses for applications is Burp. The only downside with Burp is that it does not natively support parsing of WSDL files into requests that can be sent to a web service. A common workaround has been to use a tool such as SoapUI and proxy the requests to Burp for further manipulation. I've written a plugin for Burp that takes a WSDL request, parses out the operations associated with the targeted web service, and creates SOAP requests which can then be sent to the web service. This plugin builds upon the work done by Tom Bujok and his soap-ws project, which is essentially the WSDL parsing portion of SoapUI without the UI.
The Wsdler plugin along with all the source is located at the Github repository here: https://github.com/NetSPI/Wsdler.

Wsdler Requirements

  1. Burp 1.5.01 or later
  2. Must be run from the command line

Starting Wsdler

The command to start Burp with the Wsdler plugin is as follows:
java -classpath Wsdler.jar;burp.jar burp.StartBurp
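Note that the semicolon classpath separator above is the Windows form; on Linux or Mac OS X, Java expects a colon instead:

java -classpath Wsdler.jar:burp.jar burp.StartBurp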

Sample Usage

Here we will intercept the request for a WSDL file belonging to an online store in Burp.
Burp WSDL intercept request
After the request for the WSDL has been intercepted, right click on the request and select Parse WSDL.
Burp WSDL Parse
A new Wsdler tab will open with the parsed operations for the WSDL, along with the bindings and ports for each of the operations. Operations are synonymous with the requests that the application supports. There are two operations in this WSDL file, OrderItem and CheckStatus. Each of these operations has two bindings, for simplicity’s sake, bindings describe the format and protocol for each of the operations. The bindings for both of the operations are InstantOrderSoap and InstantOrderSoap12. The reason there are two bindings for each of the operations is because the WSDL file supports the creation of SOAP 1.1 and 1.2 requests. Finally, the ”Port” for each of the operations is essentially just the URL the request will be sent to. The full specification for each of the Objects in WSDL files can be read here: http://www.w3.org/TR/wsdl.
Burp SOAP Operations Request
The SOAP requests for the operations will be in the lower part of the Burp window. The parsing functionality will also automatically fill in the data type for each of the parameters in the WSDL operation. In this example, strings are filled in with parts of the Aeneid and integers are filled in with numbers.
The request that Wsdler creates is a standard Burp request, so it can be sent to any other Burp function that accepts requests (intruder, repeater, etc.).
Burp Intruder Request
Here the request is sent to intruder for further testing. Because the request is XML, Burp automatically identifies the parameters for intruder to use.
Burp Payload Positioning

Conclusion

Currently, the plugin only supports WSDL specification 1.1, but work on supporting 1.2 / 2.0 is underway. I will also be adding the option to specify your own strings and integers when the plugin automatically fills in the appropriate data type for each of the parameters in the parsed operations. If there are any bugs or features that you would like to see added, send me an email or create a ticket on GitHub.

How Do Google Drive, Dropbox, and SkyDrive Work?

I had a question in mind: how do Google Drive, Dropbox, and SkyDrive work? So I searched for methods of combining storage. A friend of mine told me about some technology used by services like Amazon S3, and pointed me to GlusterFS.


GlusterFS is an open source, distributed file system capable of scaling to several petabytes (actually, 72 brontobytes!) and handling thousands of clients. GlusterFS clusters together storage building blocks over Infiniband RDMA or TCP/IP interconnect, aggregating disk and memory resources and managing data in a single global namespace. GlusterFS is based on a stackable user space design and can deliver exceptional performance for diverse workloads.

Figure 1. GlusterFS – One Common Mount Point
GlusterFS supports standard clients running standard applications over any standard IP network. Figure 1, above, illustrates how users can access application data and files in a Global namespace using a variety of standard protocols.
No longer are users locked into costly, monolithic, legacy storage platforms. GlusterFS gives users the ability to deploy scale-out, virtualized storage – scaling from terabytes to petabytes in a centrally managed and commoditized pool of storage.
Attributes of GlusterFS include:
  • Scalability and Performance
  • High Availability
  • Global Namespace
  • Elastic Hash Algorithm
  • Elastic Volume Manager
  • Gluster Console Manager
  • Standards-based
This tutorial shows how to combine four single storage servers (running CentOS 6.3) into one large storage server (distributed storage) with GlusterFS. The client system (CentOS 6.3 as well) will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered file system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware, such as x86_64 servers with SATA-II RAID and Infiniband HBA.
Please note that this kind of storage (distributed storage) doesn’t provide any high-availability features, as would be the case with replicated storage.

1 Preliminary Note

In this tutorial I use five systems, four servers and a client:
  • server1.example.com: IP address 192.168.0.100 (server)
  • server2.example.com: IP address 192.168.0.101 (server)
  • server3.example.com: IP address 192.168.0.102 (server)
  • server4.example.com: IP address 192.168.0.103 (server)
  • client1.example.com: IP address 192.168.0.104 (client)
All five systems should be able to resolve the other systems’ hostnames. If this cannot be done through DNS, you should edit the /etc/hosts file so that it looks as follows on all five systems:
vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.0.100   server1.example.com     server1
192.168.0.101   server2.example.com     server2
192.168.0.102   server3.example.com     server3
192.168.0.103   server4.example.com     server4
192.168.0.104   client1.example.com     client1

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

(It is also possible to use IP addresses instead of hostnames in the following setup. If you prefer to use IP addresses, you don't have to care about whether the hostnames can be resolved or not.)

2 Enable Additional Repositories

server1.example.com/server2.example.com/server3.example.com/server4.example.com/client1.example.com:
First we import the GPG keys for software packages:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
Then we enable the EPEL6 repository on our CentOS systems:
rpm --import https://fedoraproject.org/static/0608B895.txt
cd /tmp
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
rpm -ivh epel-release-6-7.noarch.rpm
yum install yum-priorities
Edit /etc/yum.repos.d/epel.repo…
vi /etc/yum.repos.d/epel.repo
… and add the line priority=10 to the [epel] section:
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
priority=10
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[...]

3 Setting Up The GlusterFS Servers

server1.example.com/server2.example.com/server3.example.com/server4.example.com:
GlusterFS is available as a package for EPEL, therefore we can install it as follows:
yum install glusterfs-server
Create the system startup links for the Gluster daemon and start it:
chkconfig --levels 235 glusterd on
/etc/init.d/glusterd start
The command
glusterfsd --version
should now show the GlusterFS version that you’ve just installed (3.2.7 in this case):
[root@server1 ~]# glusterfsd --version
glusterfs 3.2.7 built on Jun 11 2012 13:22:28
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@server1 ~]#
If you use a firewall, ensure that TCP ports 111, 24007, 24008, 24009-(24009 + number of bricks across all volumes) are open on server1.example.com, server2.example.com,server3.example.com, and server4.example.com.
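As a sketch, with iptables the rules for the four single-brick volumes used in this tutorial could look like this (adjust the upper end of the port range to 24009 plus your brick count):

iptables -A INPUT -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --dport 24007:24012 -j ACCEPT
service iptables save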
Next we must add server2.example.com, server3.example.com, and server4.example.com to the trusted storage pool. (Please note that I'm running all GlusterFS configuration commands from server1.example.com, but you can just as well run them from server2.example.com, server3.example.com, or server4.example.com, because the configuration is replicated between the GlusterFS nodes; just make sure you use the correct hostnames or IP addresses.)
server1.example.com:
On server1.example.com, run
gluster peer probe server2.example.com
gluster peer probe server3.example.com
gluster peer probe server4.example.com
Output should be as follows:
[root@server1 ~]# gluster peer probe server2.example.com
Probe successful
[root@server1 ~]#
The status of the trusted storage pool should now be similar to this:
gluster peer status
[root@server1 ~]# gluster peer status
Number of Peers: 3
Hostname: server2.example.com
Uuid: da79c994-eaf1-4c1c-a136-f8b273fb0c98
State: Peer in Cluster (Connected)
Hostname: server3.example.com
Uuid: 3e79bd9f-a4d5-4373-88e1-40f12861dcdd
State: Peer in Cluster (Connected)
Hostname: server4.example.com
Uuid: c6215943-00f3-492f-9b69-3aa534c1d8f3
State: Peer in Cluster (Connected)
[root@server1 ~]#
Next we create the distributed share named testvol on server1.example.com, server2.example.com, server3.example.com, and server4.example.com in the /data directory (this will be created if it doesn’t exist):
gluster volume create testvol transport tcp server1.example.com:/data server2.example.com:/data server3.example.com:/data server4.example.com:/data
[root@server1 ~]# gluster volume create testvol transport tcp server1.example.com:/data server2.example.com:/data server3.example.com:/data server4.example.com:/data
Creation of volume testvol has been successful. Please start the volume to access data.
[root@server1 ~]#
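Please note that this creates a purely distributed volume: each file is stored on exactly one brick, so there is no redundancy. As an aside, GlusterFS can also create replicated volumes; a hypothetical two-way mirror across the first two servers (not part of this setup) would look like this:
gluster volume create testvol replica 2 transport tcp server1.example.com:/data server2.example.com:/data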
Start the volume:
gluster volume start testvol
It is possible that the above command tells you that the action was not successful:
[root@server1 ~]# gluster volume start testvol
Starting volume testvol has been unsuccessful
[root@server1 ~]#
In this case you should check the output of…
server1.example.com/server2.example.com/server3.example.com/server4.example.com:
netstat -tap | grep glusterfsd
on all four servers.
If you get output like this…
[root@server1 ~]# netstat -tap | grep glusterfsd
tcp        0      0 *:24009                     *:*                         LISTEN      1365/glusterfsd
tcp        0      0 localhost:1023              localhost:24007             ESTABLISHED 1365/glusterfsd
tcp        0      0 server1.example.com:24009   server1.example.com:1023    ESTABLISHED 1365/glusterfsd
[root@server1 ~]#
… everything is fine, but if you don’t get any output…
[root@server2 ~]# netstat -tap | grep glusterfsd
[root@server2 ~]#
[root@server3 ~]# netstat -tap | grep glusterfsd
[root@server3 ~]#
[root@server4 ~]# netstat -tap | grep glusterfsd
[root@server4 ~]#
… restart the GlusterFS daemon on the corresponding server (server2.example.com, server3.example.com, and server4.example.com in this case):
server2.example.com/server3.example.com/server4.example.com:
/etc/init.d/glusterfsd restart
Then check the output of…
netstat -tap | grep glusterfsd
… again on these servers – it should now look like this:
[root@server2 ~]# netstat -tap | grep glusterfsd
tcp        0      0 *:24009                 *:*                     LISTEN      1152/glusterfsd
tcp        0      0 localhost.localdom:1018 localhost.localdo:24007 ESTABLISHED 1152/glusterfsd
[root@server2 ~]#
[root@server3 ~]# netstat -tap | grep glusterfsd
tcp        0      0 *:24009                 *:*                     LISTEN      1311/glusterfsd
tcp        0      0 localhost.localdom:1018 localhost.localdo:24007 ESTABLISHED 1311/glusterfsd
[root@server3 ~]#
[root@server4 ~]# netstat -tap | grep glusterfsd
tcp        0      0 *:24009                 *:*                     LISTEN      1297/glusterfsd
tcp        0      0 localhost.localdom:1019 localhost.localdo:24007 ESTABLISHED 1297/glusterfsd
[root@server4 ~]#
Now back to server1.example.com:
server1.example.com:
You can check the status of the volume with the command
gluster volume info
[root@server1 ~]# gluster volume info
Volume Name: testvol
Type: Distribute
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: server1.example.com:/data
Brick2: server2.example.com:/data
Brick3: server3.example.com:/data
Brick4: server4.example.com:/data
[root@server1 ~]#
By default, all clients can connect to the volume. If you want to grant access to client1.example.com (= 192.168.0.104) only, run:
gluster volume set testvol auth.allow 192.168.0.104
Please note that it is possible to use wildcards for the IP addresses (like 192.168.*) and that you can specify multiple IP addresses separated by commas (e.g. 192.168.0.104,192.168.0.105).
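For example, to allow two specific clients, or to reset the volume so that all clients may connect again (the second command is an assumption based on the documented default value of auth.allow), you could run:
gluster volume set testvol auth.allow 192.168.0.104,192.168.0.105
gluster volume set testvol auth.allow '*'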
The volume info should now show the updated status:
gluster volume info
[root@server1 ~]# gluster volume info
Volume Name: testvol
Type: Distribute
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: server1.example.com:/data
Brick2: server2.example.com:/data
Brick3: server3.example.com:/data
Brick4: server4.example.com:/data
Options Reconfigured:
auth.allow: 192.168.0.104
[root@server1 ~]#

4 Setting Up The GlusterFS Client

client1.example.com:
On the client, we can install the GlusterFS client as follows:
yum install glusterfs-client
Then we create the following directory:
mkdir /mnt/glusterfs
That’s it! Now we can mount the GlusterFS filesystem to /mnt/glusterfs with the following command:
mount.glusterfs server1.example.com:/testvol /mnt/glusterfs
(Instead of server1.example.com you can as well use server2.example.com or server3.example.com or server4.example.com in the above command!)
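Note that if the server named in the mount command is down at mount time, the mount fails even though the volume is still served by the other nodes. Newer GlusterFS clients accept a backupvolfile-server mount option as a fallback (an assumption here – verify that your client version supports it):
mount -t glusterfs -o backupvolfile-server=server2.example.com server1.example.com:/testvol /mnt/glusterfs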
You should now see the new share in the outputs of…
mount
[root@client1 ~]# mount
/dev/mapper/vg_client1-LogVol00 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
server1.example.com:/testvol on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
[root@client1 ~]#
… and…
df -h
[root@client1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_client1-LogVol00
9.7G  1.7G  7.5G  19% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/sda1             504M   39M  440M   9% /boot
server1.example.com:/testvol
116G  4.2G  106G  4% /mnt/glusterfs
[root@client1 ~]#
Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.
Open /etc/fstab and append the following line:
vi /etc/fstab
[...]
server1.example.com:/testvol /mnt/glusterfs glusterfs defaults,_netdev 0 0
(Again, instead of server1.example.com you can just as well use server2.example.com, server3.example.com, or server4.example.com!)
To test if your modified /etc/fstab is working, reboot the client:
reboot
After the reboot, you should find the share in the outputs of…
df -h
… and…
mount
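Alternatively, you can test the fstab entry without rebooting: unmount the share and then mount everything listed in /etc/fstab again, which will surface any syntax errors immediately:
umount /mnt/glusterfs
mount -a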

5 Testing

Now let’s create some test files on the GlusterFS share:
client1.example.com:
touch /mnt/glusterfs/test1
touch /mnt/glusterfs/test2
touch /mnt/glusterfs/test3
touch /mnt/glusterfs/test4
touch /mnt/glusterfs/test5
touch /mnt/glusterfs/test6
Now let’s check the /data directory on server1.example.com, server2.example.com, server3.example.com, and server4.example.com. You will notice that each storage node holds only a part of the files/directories that make up the GlusterFS share on the client:
server1.example.com:
ls -l /data
[root@server1 ~]# ls -l /data
total 0
-rw-r--r-- 1 root root 0 2012-12-17 14:26 test1
-rw-r--r-- 1 root root 0 2012-12-17 14:26 test2
-rw-r--r-- 1 root root 0 2012-12-17 14:26 test5
[root@server1 ~]#
server2.example.com:
ls -l /data
[root@server2 ~]# ls -l /data
total 0
-rw-r--r-- 1 root root 0 2012-12-17 14:26 test4
[root@server2 ~]#
server3.example.com:
ls -l /data
[root@server3 ~]# ls -l /data
total 0
-rw-r--r-- 1 root root 0 2012-12-17 14:26 test6
[root@server3 ~]#
server4.example.com:
ls -l /data
[root@server4 ~]# ls -l /data
total 0
-rw-r--r-- 1 root root 0 2012-12-17 14:26 test3
[root@server4 ~]#
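Back on the client, the distributed volume hides this layout – listing the mount point should show all six test files together:
client1.example.com:
ls -l /mnt/glusterfs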

How Skype Works

Skype is a software application that allows you to make free phone calls to more than 75 million people worldwide, and remarkably cheap calls to practically everywhere else on Earth. As a result, Skype has become the fastest-growing service in the history of the Internet. Recently, the company was acquired by eBay, another step towards the goal of making Skype the world’s largest communication company. Skype is easy to install and use. It allows its users to make crystal-clear calls regardless of their location, send instant messages, switch seamlessly between text and voice communication, make video calls, hold conference calls, transfer files, and call landlines and cell phones for a fraction of the cost of a traditional call. Skype is truly revolutionizing the way we communicate.

But how does it actually work? This article focuses on describing the Skype network and the technology behind it.
Skype is a peer-to-peer Voice over IP client based on the Kazaa file-sharing program. The developers of Skype claim that it provides better voice quality than similar applications like MSN and Yahoo Messenger. It also encrypts calls end-to-end.
There are two types of machines in the Skype network: ordinary hosts (Skype clients) and Super Nodes (SNs). An ordinary host is the computer of a regular user who has the application installed and connects to the network in order to communicate with other users. Super Nodes are the entry points for ordinary hosts: ordinary hosts connect to Super Nodes, and any computer with a public IP address and adequate hardware can become an SN. For a successful login, an ordinary host must connect to a Super Node and register itself with the Skype login server. The login server is the only central unit in the whole network: it stores your Skype Name, your e-mail address, and an encrypted representation of your password for every user. All Super Nodes connect to the login server to verify the client’s username and password. Nslookup queries have shown that this server is located in Denmark.
If you are a regular Skype user, your computer is considered an ‘ordinary host’ that connects to a Super Node. The Super Nodes are servers located in different parts of the world, but your Skype client must know which SN to connect to. Therefore, every Skype client (SC) maintains a local table containing the IPs and corresponding ports of Super Nodes. This is called the host cache, and it is stored in the Windows Registry of the given SC. So basically, every time you start Skype, it reads the data from the host cache, takes the first IP and port listed, and tries to connect to that SN. If the connection fails for some reason (the SN is offline, no longer part of the network, etc.), it reads the next line from the table. If it fails to connect to any of the IPs listed, Skype returns a login error on start-up. Hence, the host cache must contain at least one valid entry – meaning an IP address and port number of an online Super Node – for the application to connect to the network and work properly. The path to the table in the Windows Registry is HKEY_CURRENT_USER / SOFTWARE / SKYPE / PHONE / LIB / CONNECTION / HOSTCACHE. You can verify that on your computer by opening the Start menu, clicking Run, and entering ‘regedit’ (without the quotes). Of course, the exact path could be different in future versions of the application.
As a concept, Super Nodes were introduced in third-generation P2P networks. They allow improved search performance, reduced file-transfer latency, network scalability, and the ability to resume interrupted downloads and simultaneously download segments of one file from multiple peers. Basically, they help ordinary hosts connect to each other and route the encrypted network traffic efficiently.
Super Nodes are also responsible for the ‘Global Indexing’. This technology enables you to search for other users in the network. The company guarantees that it will find a user if he has registered and has logged in during the last 72 hours.
An interesting aspect of the Skype network is that it is ‘self-modifying’. If you have the application installed, your computer may turn into a Super Node without you even knowing it, because those capabilities don’t have a noticeable impact on a computer’s performance. SNs basically store the addresses of up to several hundred Skype users, without carrying any voice, text, or file-transfer data. In that manner, the more Skype users come online, the more Super Nodes become available to expand the capacity of the network.
Skype routes the traffic intelligently by choosing the optimum data transfer path. Since it uses either TCP or UDP protocol, it breaks the whole data stream into separate packets, which can take different paths to the end destination. The final arrangement is done at the receiving end.
As far as safety and privacy are concerned, Skype uses the Advanced Encryption Standard, known as Rijndael, which is also used by U.S. government organizations to protect sensitive data. Skype uses 256-bit encryption.
The programmers of Skype have implemented wideband codecs, which allow it to maintain good sound quality at a bandwidth of 32 kb/s and pass frequencies between 50 and 8,000 Hz.
The application stores your list of contacts in the Windows Registry. This is called the Buddy list and, once again, it is digitally encrypted. The list is local to every machine; in other words, it is not downloaded from the central server.
Let’s briefly describe the tasks of the Skype client. First it connects to the network. It then listens on particular ports for incoming calls, refreshes the host cache table, uses wideband codecs, maintains the buddy list, encrypts messages, and determines whether it is behind a firewall.
The login process: This is the most important process, and it consists of several phases. As mentioned, the SC must connect to a valid SN in order to authenticate the username and password with the central server.
Skype gets the first IP from the host cache, sends it a UDP packet, and waits for a response. If there is no response after 5 seconds, it sends a TCP packet to the same IP, trying to establish a TCP connection to the HC IP address on port 80 (the HTTP port). If still unsuccessful, it tries to connect to the same IP address on port 443 (the HTTPS port). If this does not work either, it reads the next address in the HC. If Skype is unable to connect to any SN, it reports a login failure.
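The TCP part of this fallback strategy is easy to illustrate with a small shell sketch (illustrative only, not Skype’s actual code; the address and port are hypothetical placeholders for a host cache entry):
#!/bin/bash
# Try the host cache entry's own port first, then fall back to
# port 80 (HTTP) and port 443 (HTTPS), mimicking the client's strategy.
HC_IP=198.51.100.10   # hypothetical super node address
HC_PORT=33033         # hypothetical super node port
for port in $HC_PORT 80 443; do
  if timeout 5 bash -c "exec 3<>/dev/tcp/$HC_IP/$port" 2>/dev/null; then
    echo "connected to $HC_IP:$port"
    break
  fi
  echo "$HC_IP:$port unreachable, trying next"
done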
The application also ships with several built-in addresses of different nodes, called bootstrap super nodes.
If the connection attempt is successful, the client must authenticate the user name and password with the Skype login server, which holds all user names and passwords and makes sure they are unique across the whole network. When the application connects to an SN, it receives an up-to-date list of other active SNs, so it has the most current information.
The Media Transfer process: Video and voice communication through Skype is established over UDP. The trick here is that quite often one of the users is behind a firewall or a router and therefore does not have a public IP address. But if both Skype clients are on public IPs, the media traffic flows directly between them over UDP. The size of a voice packet is 67 bytes, which is the size of the UDP payload. One second of conversation results in roughly 140 voice packets being exchanged both ways (140 packets × 67 bytes ≈ 9.2 kilobytes), or about 3-16 kilobytes/s in total.
If one or both of the parties do not have a public IP, they send voice traffic to another online Skype node over UDP or TCP. The developers of Skype have preferred to use UDP for voice transmission as much as possible.
An interesting fact is that even if both sides are not speaking, voice packets still flow between them. The purpose of these so-called ‘silent packets’ is to keep the connection alive.
Conclusion: There are several factors behind Skype’s success. First of all, the voice quality is better than that of comparable applications. It works without problems on computers behind firewalls. It is very easy to install and use. Skype’s security is also a big advantage: everything transferred across the network is encrypted to ensure privacy, so even if hackers intercept the data in transit, they won’t be able to decode it.
The Skype application does not include any adware or spyware. However, there have been cases where third parties managed to add such functionality (and not only to Skype), so it is important that you download it from a trusted source.