If anybody is trying to compile Sangoma's WANPIPE for their U100 FXO device, it is a must to issue the make command in the DAHDI source directory before compiling WANPIPE: just make, without make install.
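A minimal sketch of the build order (the source paths below are assumptions; adjust them to wherever your tarballs are unpacked):

cd /usr/src/dahdi-linux-complete
make                 # build DAHDI only; do not make install at this point
cd /usr/src/wanpipe
./Setup install      # WANPIPE's installer can then be pointed at the built DAHDI tree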
When I install a fresh CentOS VM, usually I select MINIMAL for packages. After installation, I issue the following commands:
rpm -Uvh http://ftp.riken.jp/Linux/fedora/epel/6/i386/epel-release-6-8.noarch.rpm
to install the EPEL repository package
yum -y install mailx openssh-clients fail2ban perl wget
to install utilities I need
yum -y update
to update the system. Finally,
shutdown -r now
to restart.
Network Time Protocol (NTP) is an application-layer protocol used to synchronize time across networked devices so they share a consistent, unified time. It runs over UDP port 123 and is designed to resist the variable latency of packet-switched networks. Synchronization is done in UTC (no time zone or daylight-saving information is carried), and each device applies its own time zone and daylight-saving offset on top of it.
Simple Network Time Protocol (SNTP) is a less complex time-sync protocol: it does not store state about previous exchanges, nor does it require high-accuracy timing.
NTP is a hierarchical system in terms of servers and clock sources. A stratum (level) indicates a server's distance from the real clock source (GPS, atomic clock, etc…). Stratum 0 is the clock source itself, stratum 1 is an NTP server directly connected to a stratum 0 source, stratum 2 servers query time directly from stratum 1 servers, and so forth.
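As a quick way to see this hierarchy in practice, ntpdate in query-only mode reports a server's stratum and your clock's offset from it without actually setting anything:

# ntpdate -q 0.de.pool.ntp.org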
Linux has native support for NTP. To enable it, just make sure the NTP package is installed (on RH, CentOS, and others) or check that the /etc/ntp.conf file exists. Shut down the ntpd service first (if it is running) by typing (for RH, CentOS):
# /etc/init.d/ntpd stop
Then edit the /etc/ntp.conf file, adding/replacing these lines (based on where you are located):
server 0.de.pool.ntp.org
server 1.de.pool.ntp.org
server 2.de.pool.ntp.org
You can substitute your own 2-letter ISO country code (de, uk, fr, etc…). Please refer to http://www.pool.ntp.org/ for a list of available servers in your area.
Before starting NTPd service execute the following:
# ntpdate 0.de.pool.ntp.org
to sync the time initially and keep the time difference minimal. Then enable the service for runlevels 3, 4, and 5:
# chkconfig --level 345 ntpd on
Start the service:
# /etc/init.d/ntpd start
Leave it for 1 hour and then issue this command:
# ntpq -p
and the result should be similar to this:
remote refid st t when poll reach delay offset jitter
==============================================================================
+skywiley.com 173.14.47.149 2 u 851 1024 377 346.070 8.366 118.239
-mirror 128.105.39.11 3 u 763 1024 377 266.614 -9.626 3.613
+ntp2.csl.tjhsst 192.5.41.40 2 u 203 1024 377 266.569 -2.793 0.430
*barium.vps.bitf 193.190.230.66 2 u 869 1024 377 189.283 -2.613 0.287
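A note on the first column of this output: * marks the peer currently selected for synchronization, + marks good candidates kept by the selection algorithm, and - marks peers discarded as outliers.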
One of the requirements for having a domain name is (typically) a static IP, one that does not change frequently. Ordinary people face a problem here: most (if not all) residential Internet services hand out dynamic IP addresses that change as often as once a day (as mine does).
Companies like dyndns.com, changeip.com, and no-ip.com, to name a few, offer a service that gives you a domain name, either dedicated or shared, that can follow this kind of frequent change. All that is needed is either a router that supports DDNS or an application installed on a PC.
Most, if not all, DDNS companies expose an API for their service, so this article is about using that API from a script.
Steps:
cd /etc
mkdir cron.2min
chmod 755 cron.2min
Add this line to /etc/crontab:
0-59/2 * * * * root run-parts /etc/cron.2min
instructing cron to run everything in /etc/cron.2min every 2 minutes, continuously, every day.
touch /root/externalip.txt
echo "2" > /root/externalip.txt
chmod 644 /root/externalip.txt
This file will store the external IP address discovered by the script below; seeding it with a dummy value ("2") guarantees that the first run sees a mismatch and sends an update.
Now create /etc/cron.2min/dynip.sh with the following content (the update URLs are quoted so the shell does not interpret the & characters in them):
#!/bin/bash
# last known external IP
a=`cat /root/externalip.txt`
# current external IP as reported by changeip's IP-echo service
b=`wget -q -O - http://ip.changeip.com:8245 | cut -f 2 -d "=" | cut -f 1 -d "-" -s | grep -m 1 ^`
if [ "$a" != "$b" ]
then
# dyndns
wget --delete-after "https://user:pass@members.dyndns.org/nic/update?hostname=yourhost" >/dev/null 2>&1
# no-ip
wget --delete-after "https://user:pass@dynupdate.no-ip.com/nic/update?hostname=yourhost" >/dev/null 2>&1
# changeip
wget --delete-after "https://nic.changeip.com/nic/update?hostname=*1&u=user&p=pass" >/dev/null 2>&1
# opendns
wget --delete-after "https://user:pass@updates.opendns.com/nic/update" >/dev/null 2>&1
# update externalip.txt file
echo "$b" > /root/externalip.txt
fi
chmod 755 /etc/cron.2min/dynip.sh
to make it executable.
The script does the following: it reads the last stored external IP from /root/externalip.txt, asks ip.changeip.com for the current external IP, and only if the two differ does it push an update to each DDNS provider and write the new address back to the file.
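To test the script before cron takes over, run it once by hand (using the paths above) and confirm the stored address changed from the dummy "2" value:

/etc/cron.2min/dynip.sh
cat /root/externalip.txt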
If more than one cache server is available at the same place, there is a way for any server to query the others for cached content. For example, a company with two buildings, each with its own cache (building1-cache and building2-cache), can configure both as cache siblings, a means to share cached content.
As RFC 2186 indicates:
“ICP is a lightweight message format used for communicating among Web caches. ICP is used to exchange hints about the existence of URLs in neighbor caches. Caches exchange ICP queries and replies to gather information to use in selecting the most appropriate location from which to retrieve an object.”
and
“ICP is a message format used for communicating between Web caches. Although Web caches use HTTP for the transfer of object data, caches benefit from a simpler, lighter communication protocol. ICP is primarily used in a cache mesh to locate specific Web objects in neighboring caches. One cache sends an ICP query to its neighbors. The neighbors send back ICP replies indicating a (HIT) or a (MISS)”
So, if the content is available at one of the cache siblings (HIT), that sibling serves the requester directly instead of the cache going to the origin server on the Internet to download it.
Make sure that the following line exists in /etc/squid/squid.conf and is not commented:
icp_port 3130
This line enables Squid's ICP listener so it can answer other cache servers.
Next, add the following line:
cache_peer x.x.x.x sibling yyyy 3130 proxy-only
Where x.x.x.x is the other cache's IP address, yyyy is its HTTP port (the Squid default is 3128), 3130 is its ICP port, and proxy-only tells Squid not to keep a local copy of objects fetched from that sibling. (cache_peer is the current name of the older cache_host directive.) If you have more servers, just list them accordingly, as shown below.
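For example, a cache with two siblings (hypothetical addresses, default ports) would carry one cache_peer line per sibling in /etc/squid/squid.conf:

cache_peer 192.168.1.11 sibling 3128 3130 proxy-only
cache_peer 192.168.1.12 sibling 3128 3130 proxy-only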
Issue the Squid reload command to apply the updated config:
#/etc/init.d/squid reload
Repeat the same steps on the rest of the servers.
After the service reloads, a new entry will appear in /var/log/squid/cache.log:
Configuring Sibling x.x.x.x/3128/3130
If, after a while (the ICP timeout), an entry like:
Detected DEAD Sibling: x.x.x.x
appears, double-check for any network-related problem, as Squid failed to contact the other cache server; otherwise, cache exchange is working.
The basic network is operational now, with a gateway and a cache/proxy configured transparently with WCCP. Blocking some sites (mainly ads) is the next step.
First, build an ACL that contains the URLs/domains to be blocked; edit /etc/squid/squid.conf to add the following at the right place:
acl blocked_domains dstdomain .clicksor.com
acl blocked_domains dstdomain .paypopup.com
acl blocked_domains dstdomain .bidvertiser.com
acl blocked_domains dstdomain .zedo.com
acl blocked_domains dstdomain .quantserve.com
acl blocked_domains dstdomain .quantcast.com
acl blocked_domains dstdomain .dmoglobal.net
acl blocked_domains dstdomain ads.mininova.org
acl blocked_domains dstdomain .yieldmanager.com
acl blocked_domains dstdomain .bluelithium.com
acl blocked_domains dstdomain .pubmatic.com
acl blocked_domains dstdomain .adbrite.com
acl blocked_domains dstdomain .advertising.com
acl blocked_domains dstdomain .imvu.com
acl blocked_domains dstdomain .games888.com
acl blocked_domains dstdomain .firstperson.nl
acl blocked_domains dstdomain .mario-sonic.com
acl blocked_domains dstdomain .yahwroom.org
acl blocked_domains dstdomain .yieldmanager.edgesuite.net
acl blocked_domains dstdomain .z5x.net
Here blocked_domains is the ACL name, and .domain.com (notice the dot at the beginning) matches the domain and all of its sub-domains.
Next, tell Squid what to do with the ACL just created; the following line has to be at the right place (before any broader http_access allow rules, since rules are evaluated in order):
http_access deny blocked_domains
Here, Squid is told to deny access to any domain contained in the ACL blocked_domains.
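As the list grows, a tidier alternative is to keep the domains in an external file, one entry per line, and load it from a single acl line; the file path here is only an assumption:

acl blocked_domains dstdomain "/etc/squid/blocked_domains.txt"
http_access deny blocked_domains

Squid reads the quoted file when it parses its configuration, so new domains can be added to the file without touching squid.conf itself (a reload is still needed to pick them up).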
With the steps indicated above, access to any domain can be denied. Finally, the next command has to be issued every time after finishing:
#/etc/init.d/squid reload
Here Squid is instructed to reload its config file without restarting the full service.
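One extra precaution worth taking before any reload: Squid can syntax-check the edited configuration without touching the running service:

#squid -k parse

If it prints errors, fix them before reloading; a broken config can take the service down.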
When I was planning my home network, I wanted to have basic components available, e.g. local DNS, local proxy/cache, etc… I started by having a Cisco 1750 router as my home ADSL device, as it has a wide range of configuration capabilities.
One item on my home networking to-do list was a proxy/cache service. Having such a service in any multiuser environment is a must, at least for common Internet activities (e.g. Windows updates, antivirus updates, etc…) that download the same files again and again for each and every PC you have connected. Also, from time to time I bring several PCs/laptops home for maintenance or Windows reinstallation, so the need is obvious: having these files locally saves both Internet bandwidth (download once, serve locally thereafter) and time.
Besides saving bandwidth, tricks can be done with scheduled downloads. Most of my family members read newspapers online (PDF versions); with a scheduled task that downloads the PDF files from all the newspapers we read, I have them all saved in the central cache, ready for local access from every PC/laptop on my network.
Any PC with at least 128MB of RAM can do the required job efficiently, with Linux as the operating system (Squid runs mainly under Linux).
I am not going into the details of installing the Linux OS. One thing I would like to mention: I usually install the command-line-only setup, as I never use the graphical interface.
After installing Linux (by the way, I use the latest CentOS distro), I use yum to install Squid and any dependencies automatically. After installing Squid, I configure it directly with the following (search the /etc/squid/squid.conf file and edit accordingly):
http_port xxx.xxx.xxx.xxx:3128 transparent
icp_port 0
maximum_object_size 71680 KB
cache_replacement_policy heap GDSF
memory_replacement_policy heap GDSF
cache_dir diskd /dir ssss 16 256
acl localnet src nnn.nnn.nnn.0/24
wccp2_router rrr.rrr.rrr.rrr
Where xxx.xxx.xxx.xxx is the IP address Squid binds to, nnn.nnn.nnn.0/24 is the local network, rrr.rrr.rrr.rrr is the WCCP router's IP address, /dir is the cache directory, and ssss is the cache size in megabytes.
I use a general rule for the total cache size based on the link speed you have. For example, I have a 1Mbps ADSL line to the Internet; multiply that by 60 and then by 60 again to get the theoretical maximum downloaded per hour (still in megabits): 1*60*60 = 3600Mb per hour. Multiply by 24 for a full day: 3600*24 = 86400Mb per day. Then divide by 8 (remember, it is still in bits, not bytes): 86400/8 = 10800MB per day. So you have around 10GB of Internet traffic per day if the line is 100% utilized for 24 hours (neglecting the effect of TCP and other headers to simplify the calculation). Then, based on the total disk size you have (nowadays disks are cheap, so getting a 160GB one is easy), you can decide how many days' worth of cache you want (at 100% utilization). My choice was 40GB of cache, so ssss in my case is 40960.
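The same arithmetic as a small shell sketch, so you can plug in your own numbers (the two input values are the ones assumed in this article):

LINK_MBPS=1      # downstream link speed in megabits per second
CACHE_MB=40960   # the ssss value from cache_dir (40GB)
# Mbps * seconds per day / 8 bits-per-byte = megabytes per day at 100% utilization
MB_PER_DAY=$((LINK_MBPS * 60 * 60 * 24 / 8))
echo "Link can fill $MB_PER_DAY MB/day; cache covers roughly $((CACHE_MB / MB_PER_DAY)) day(s)"

For the values above this prints 10800 MB/day and about 3 days of cache.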
A GRE network interface plus some firewall rules are needed on the Linux box. First, the iptables rules that push HTTP traffic arriving over the tunnel into Squid:
iptables -F; iptables -t nat -F; iptables -t mangle -F
iptables -P INPUT ACCEPT
iptables -t nat -A PREROUTING -s nnn.nnn.nnn.0/255.255.255.0 -d ! nnn.nnn.nnn.0/255.255.255.0 -i gre0 -p tcp -m tcp --dport 80 -j DNAT --to-destination xxx.xxx.xxx.xxx:3128
Where nnn.nnn.nnn.0 is the network number you are using and xxx.xxx.xxx.xxx is the IP address of the Linux box that Squid is bound to. What the last rule does is redirect any port 80 (HTTP) request coming from the router through the gre0 interface to port 3128 (Squid), so Squid processes it.
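Later on, a quick way to confirm the DNAT rule is actually matching traffic is to list the NAT table with packet counters; the pkts column of the rule should keep growing while clients browse:

#iptables -t nat -L PREROUTING -n -v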
Next, the GRE interface itself. Enable the GRE module by adding this line to /etc/modprobe.conf:
alias gre0 ip_gre
Then create /etc/sysconfig/network-scripts/ifcfg-gre0 containing:
DEVICE=gre0
BOOTPROTO=static
IPADDR=10.190.19.19
NETMASK=255.255.255.252
ONBOOT=yes
The first addition enables the GRE interface module in the Linux kernel; the second configures the interface with a static IP address (any private IP will do the trick, just make sure it is not in the range used locally).
Now you can bring the GRE interface up using:
#ifup gre0
If everything goes smoothly, the interface should come up without any error. To check, issue this command:
#ifconfig gre0
and you should have an output similar to this:
gre0      Link encap:UNSPEC  HWaddr 00-00-05-08-60-FC-00-00-00-00-00-00-00-00
          inet addr:10.190.19.19  Mask:255.255.255.252
          UP RUNNING NOARP  MTU:1476  Metric:1
          RX packets:14168479 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1707102933 (1.5 GiB)  TX bytes:3611 (3.5 KiB)
Here, our Linux settings are finished.
Configuring the router is a straightforward job; do the following:
router1#conf t
router1(config)#ip wccp web-cache
router1(config)#int f0
router1(config-if)#ip wccp web-cache redirect in
router1(config-if)#exit
router1(config)#exit
Here, router configuration is finished.
A simple way to find out whether cache redirection is working is to issue this command on the router:
router1#show ip wccp
The output should be similar to this:
Global WCCP information:
    Router information:
        Router Identifier:               xxx.xxx.xxx.xxx
        Protocol Version:                2.0

    Service Identifier: web-cache
        Number of Cache Engines:         1
        Number of routers:               1
        Total Packets Redirected:        2967084
        Redirect access-list:            -none-
        Total Packets Denied Redirect:   0
        Total Packets Unassigned:        22
        Group access-list:               -none-
        Total Messages Denied to Group:  0
        Total Authentication failures:   0
Where xxx.xxx.xxx.xxx is your router IP address. Another thing: you will notice the following in the router console output if you stop Squid (#/etc/init.d/squid stop):
.Dec 16 2008 12:54:17: %WCCP-1-CACHELOST: Web Cache ccc.ccc.ccc.ccc lost
And when you start Squid (#/etc/init.d/squid start):
.Dec 16 2008 12:55:14: %WCCP-5-CACHEFOUND: Web Cache ccc.ccc.ccc.ccc acquired
Where ccc.ccc.ccc.ccc is your Squid IP address.
Now browse the Internet for a while, then issue this command on the Linux box:
#tail /var/log/squid/access.log
If the output shows your PC's IP address and some of the sites you visited, your cache and router redirection are working perfectly.