Monday, December 21, 2009

Startup error with Thunderbird 3.0

Updated to Thunderbird 3.0 (released a month back) from its beta version, with Lightning installed, but was greeted with two error messages on starting the client.

Error dialog:
An error was encountered preparing the calendar located at
moz-profile-calendar:// for use. It will not be available.

The issue was logged in Bugzilla at Red Hat as a schema issue with the database used for the calendar, which had to be updated.

So I finally followed the steps below, the issue was solved, and the calendar was back in action.
1. Closed Thunderbird.
2. Switched to the profile directory of thunderbird.

# /home/sawrub/.thunderbird/*********.default
3. Located the directory holding the calendar data.
ll calendar-data/
4. There was a local.sqlite file, which is actually an SQLite database.
5. Entered the sqlite console by executing the following command, and was greeted with the corresponding prompt:
[sawrub@mybox calendar-data]$ sqlite3 local.sqlite
SQLite version 3.6.12
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite>
6. Updated the database.
sqlite>
sqlite> ALTER TABLE cal_relations ADD recurrence_id INTEGER;
sqlite> ALTER TABLE cal_relations ADD recurrence_id_tz TEXT;
sqlite> ALTER TABLE cal_attachments ADD recurrence_id INTEGER;
sqlite> ALTER TABLE cal_attachments ADD recurrence_id_tz TEXT;
sqlite>
7. Quit the sqlite console (a quick way to verify the new columns first is sketched after these steps).
sqlite> .exit
8. Restarted Thunderbird and the issue was gone; no error pop-ups were seen and the local calendar data was also visible.
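For the verification mentioned in step 7, the .schema command of sqlite3 prints the CREATE statement of a table, so the newly added recurrence_id columns should show up in its output:
sqlite> .schema cal_relations
sqlite> .schema cal_attachments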

Sunday, December 20, 2009

Dock the Applications

The day can be called a successful one again, as I found an application that works with any kind of window manager, having been tested with GNOME, KDE and XFCE as mentioned by the developer(s). So I decided to give it a try.
1. Installation was quick, as it's just a 95k package.

# yum install -y kdocker.i586

2. Using it to Dock.
The KDocker application was found under Applications > Accessories > KDocker. Selecting it turned my pointer into a kind of gun-sight target. I decided to dock Thunderbird, so I clicked on its application window, and it was done (a command-line alternative is sketched after this list).

3. The icon looked a little rough, but considering the work it does that is acceptable. Here is a screenshot of how the docked Thunderbird looked.



4. Once docked, the application can be undocked by a single click on its icon in the tray.

5. As seen in the above screenshot, there is a terminal docked alongside Thunderbird.
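As mentioned in point 2, KDocker can also be invoked from a terminal. If I read the usage correctly, passing an application name makes it launch and dock that application directly (treat the exact syntax as an assumption and check kdocker --help):
$ kdocker thunderbird    # launch Thunderbird and send its window to the system tray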

Great thanks to the development team. Here is a bit of info about the application, as read under "yum info kdocker.i586".

"KDocker will help you dock any application in the system tray. This means you

can dock openoffice, xmms, firefox, thunderbolt, eclipse, anything! Just point
and click. Works for both KDE and GNOME (In fact it should work for most modern
window managers that support NET WM Specification. I believe it works for XFCE,
for instance)

All you need to do is start KDocker and select an application using the mouse
and lo! the application gets docked into the system tray. The application can
also be made to disappear from the task bar.

KDocker supports the KDE System Tray Protocol and the System Tray Protocol from freedesktop.org

Very few apps have docking capabilities (e.g. Yahoo! and XMMS don't have any).
Even if they do, sometimes they are specific to desktops (working on KDE but
not on GNOME, and vice versa). KDocker will help you dock any application in
the system tray. This means you can dock OpenOffice.org, XMMS, Firefox,
Thunderbird, etc. Just point and click. It works for KDE, GNOME, XFCE, and
probably many more."


Saturday, December 19, 2009

Bash Scripting Cookies

- Passing arguments to a Bash script.

`basename $0` in the script prints the name of the script without the path.
`dirname $0` in the script prints the directory portion of the path used to invoke the script, without the script name.
`$0` holds the path used to invoke the script.
`$#` holds the count of the arguments passed to the script.
`$*` expands to all the arguments passed to the script.
`$1` is the first argument passed to the script.
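A minimal sketch pulling these together (the script name args.sh is just an example; run it as ./args.sh one two):

#!/bin/bash
# args.sh - print the argument-related variables described above
echo "Script name : `basename $0`"
echo "Script dir  : `dirname $0`"
echo "Invoked as  : $0"
echo "Arg count   : $#"
echo "All args    : $*"
echo "First arg   : $1"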

- If conditional statements

Condition : Meaning
[ -a FILE ] : True if FILE exists.
[ -b FILE ] : True if FILE exists and is a block-special file.
[ -c FILE ] : True if FILE exists and is a character-special file.
[ -d FILE ] : True if FILE exists and is a directory.
[ -e FILE ] : True if FILE exists.
[ -f FILE ] : True if FILE exists and is a regular file.
[ -g FILE ] : True if FILE exists and its SGID bit is set.
[ -h FILE ] : True if FILE exists and is a symbolic link.
[ -k FILE ] : True if FILE exists and its sticky bit is set.
[ -p FILE ] : True if FILE exists and is a named pipe (FIFO).
[ -r FILE ] : True if FILE exists and is readable.
[ -s FILE ] : True if FILE exists and has a size greater than zero.
[ -t FD ] : True if file descriptor FD is open and refers to a terminal.
[ -u FILE ] : True if FILE exists and its SUID (set user ID) bit is set.
[ -w FILE ] : True if FILE exists and is writable.
[ -x FILE ] : True if FILE exists and is executable.
[ -O FILE ] : True if FILE exists and is owned by the effective user ID.
[ -G FILE ] : True if FILE exists and is owned by the effective group ID.
[ -L FILE ] : True if FILE exists and is a symbolic link.
[ -N FILE ] : True if FILE exists and has been modified since it was last read.
[ -S FILE ] : True if FILE exists and is a socket.
[ FILE1 -nt FILE2 ] : True if FILE1 has been changed more recently than FILE2, or if FILE1 exists and FILE2 does not.
[ FILE1 -ot FILE2 ] : True if FILE1 is older than FILE2, or if FILE2 exists and FILE1 does not.
[ FILE1 -ef FILE2 ] : True if FILE1 and FILE2 refer to the same device and inode numbers.
[ -o OPTIONNAME ] : True if shell option "OPTIONNAME" is enabled.
[ -z STRING ] : True if the length of "STRING" is zero.
[ -n STRING ] or [ STRING ] : True if the length of "STRING" is non-zero.
[ STRING1 == STRING2 ] : True if the strings are equal. "=" may be used instead of "==" for strict POSIX compliance.
[ STRING1 != STRING2 ] : True if the strings are not equal.
[ STRING1 < STRING2 ] : True if "STRING1" sorts before "STRING2" lexicographically in the current locale.
[ STRING1 > STRING2 ] : True if "STRING1" sorts after "STRING2" lexicographically in the current locale.
[ ARG1 OP ARG2 ] : "OP" is one of -eq, -ne, -lt, -le, -gt or -ge. These binary operators return true if ARG1 is equal to, not equal to, less than, less than or equal to, greater than, or greater than or equal to ARG2 respectively, with ARG1 and ARG2 being integers.
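As a quick illustration of the file tests inside an if statement (a sketch; the path /etc/myapp.conf is just a placeholder):

#!/bin/bash
CONF=/etc/myapp.conf
if [ -f "$CONF" ] && [ -r "$CONF" ]; then
    echo "$CONF exists and is readable"
else
    echo "$CONF is missing or not readable"
fi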

Sunday, December 13, 2009

Script to take backup of Mails

The script was written to take a backup of the mails on a weekly basis using cron (a sample cron entry is sketched after the script). The script kills Thunderbird in my case and then starts taking the backup, so that no new mail comes in while the backup is in progress; a kind of cold backup.

#!/bin/bash
# Script to take backup of Mails
PID=`pidof thunderbird-bin`
LOC="/home/sawrub/.thunderbird/8vqt6zno.default/Mail/"
DAY=`date +%F`
REPORT="/tmp/Mail_REPORT"
echo -e "`date`: Starting Mail Backup" > ${REPORT}
echo -e "`date`: Going to Kill Thunderbird, PID : ${PID}" >> ${REPORT}
kill -9 $PID >> /dev/null
echo -e "`date`: Thunderbird Killed Successfuly" >> ${REPORT}
cd $LOC
FILE="Mail_${DAY}.tar.bz2"
echo -e "`date`: Backup Starts at `date +%R`" >> ${REPORT}
echo -e "`date`: The backup will be saved as ${FILE}" >> ${REPORT}
tar -cjpf /data/mails_backup/${FILE} *
su - sawrub -c thunderbird &   # start Thunderbird again in the background so the script can continue
echo -e "`date`: Backup Completes" >> ${REPORT}
sleep 5
echo -e "`date`: Thunderbird starts of successfully as PID :`pidof thunderbird-bin`" >> ${REPORT}
SIZE=`du -h /data/mails_backup/${FILE}|awk '{print $1}'`
echo -e "`date`: Backup size is ${SIZE}" >> ${REPORT}
mail sawrub -s"Mails Backed Up" < ${REPORT} sleep 2 rm -rf ${REPORT} #Delete the stale Mail backup Data

Wednesday, December 9, 2009

Google released Beta version of Chrome for Linux

Google is out with the 'Beta' build of its browser for Linux, just a couple of weeks after the unstable build was made available to the public.

The Beta version of the browser, like its Windows build, is fast, secure, stable, simple, extensible, and embraces open standards like HTML5. All users who have installed the unstable version will have to uninstall it and then install the new 'Beta' version, available from Google's Linux repository as mentioned in the earlier post.
The process of installation is just the same, with minor changes.
1. Search for the available packages for Google Chrome

[root@mybox ~]# yum search chrome
Look in the search results for packages named 'google-chrome'; packages similar to the following will be listed.
google-chrome-beta.i386 : Google Chrome
google-chrome-unstable.i386 : Google Chrome
2. Check if the unstable version is installed.
[root@mybox ~]# yum list installed |grep google-chrome
If a row showing the unstable version as installed is there, then we need to remove it.
3. In order to remove the unstable version, just fire the command:
[root@mybox ~]# yum erase google-chrome-unstable
4. Once the unstable version has been uninstalled, the beta can be installed:
[root@mybox ~]# yum -y install google-chrome-beta
5. It can be run in the same way as before.

Source : http://blog.chromium.org/2009/12/google-chrome-for-linux-goes-beta.html

Saturday, December 5, 2009

Google Public DNS.....step towards dictatorship

Google Public DNS is a free, global Domain Name System (DNS) resolution service that you can use as an alternative to your current DNS provider.
To try it out:

  • Configure your network settings to use the IP addresses 8.8.8.8 and 8.8.4.4 as your DNS servers or
  • Read our configuration instructions.
If you decide to try Google Public DNS, your client programs will perform all DNS lookups using Google Public DNS.
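On a Linux box the quickest way to try it is to point /etc/resolv.conf at the Google servers (a minimal sketch; note that NetworkManager or dhclient may rewrite this file on the next lease renewal):

# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4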
Source :http://code.google.com/speed/public-dns/

Try out OPEN DNS  [http://www.opendns.com/]

http://blog.opendns.com/2009/12/03/opendns-google-dns/

The Gift from Traffic


Today was the ....

Here is the latest, a fully off-topic thing: my farewell mail sent to the people I worked with/interacted with at XXXXXX India.

[Mail]

From: Saurabh Sharma
Reply-to: sawrub@yahoo.co.in
To:
Bcc:
Subject: Today was the ....
Date: Fri, 04 Dec 2009 15:30:38 +0530

Hi All,

By god's grace, your well wishes and efforts from my side, finally the day has come when I can subject a mail like this, and be in a situation to bid you all a special bye. And this is how I'll like to orate out the whole thing.

- Today was the last morning when I had to get up early at 5:30 AM get ready and then run past the stray dogs barking at me for disturbing them at that hour, and also fearing a call from office that the cab has left from the pick-up point, though i had been a victim of the same a couple of times.

- Today was the last morning when I was supposed to log-in to my Linux box and do some maths for getting the sum up at 8+ Hrs for the activities done yesterday and then stamp the same under effortUpdate.jsp, a bit of which used to be fake.

- Today was the last morning, heading towards the coffee dispenser, with a fear of finding a Cockroach floating happily. Though by luck got none, compared to what I saw in other cups.

- Today was the last morning when I waited impatiently for Rattan to come in, and inquire about the Lunch menu and then wait for the clock to click 1 o'clock.

- Today was the last afternoon when I would have heard a word coming in from the next cubical "Every one does work.....do some MAGIC" by Mr MA**J. The most motivational line of my life :-).

- Today was the last afternoon when I would have heard from my colleague of a girl's visit to the floor, and then seeing the trail of fellow men moving towards the reception one after the other peeping in the waiting room.

- Today was the last afternoon when I had the chance to rush out at the reception after having lunch to find my turn at the Carrom, making my way through the people always present there like Mr. SU**L,Mr. PU**T,Mr.TR***K,Mr. TA**N,Mr. AR***D,Mr. GA***V.

- Today was the last afternoon when the wait for the long awaited 'Company Party' finally ended up in vain, dreaming the same enjoyment that we used to have.

- Today was the last day of receiving the 'Click Drop Alarm' from M1, and getting the whole NOC/QA/Dev people down to work fearing a bug in their rolled out CC.

- Today was the last day when leaving office at 4:30 PM [Non - DST] even after spending out extra half an hour of the shift timing, was like going back home on half day. With words like 'Aaj Jaldi' [early today] coming from friends.

- Today was the last day I could have got a mail from my manager for enjoying a Pizza Party late night, just for the sake of meeting the deadline.

- Today is the last day of mine at the 2nd Boys Hostel apart from the one in my college. With both being quite memorable in different respects.

From the very day I joined XXXXXX I came to know that this is the place where i had to be in order to face the technical world ahead. I entered this world of  web and comparison shopping as a child.Working here along with Mr M*****j B*****a and Mr R*******h G*****a was really a great experience, which helped me in carving my path. The challenging, competitive and dynamic environment that we worked in really helped me in knowing the hidden potentials and hence helped in proving myself and getting to the place where I'm.

Working over here at XXXXXX with such great minds specially the people in SEM group, instil a feeling of pride. Getting the opportunity to work with such experts at
this very early stage of my life clearly helped me gain a lot.

I wish you all a very best of luck.

Please feel free to contact me at following networks.
# Google Wave : luckysharma11@googlewave.com
# Google Mail : luckysharma11@gmail.com
# Yahoo : sawrub@yahoo.co.in
# Freenode : sawrub
# Facebook : SAWRUB
# Twitter : saw_rub
# Linkedin : http://in.linkedin.com/in/saurabh11

Oops I missed the Subject :  Today was the last day at XXXXXX.

--
Thanks
Saurabh Sharma
http://sawrub-blog.blogspot.com
Open your doors.......It's time to look beyond Windows

[/Mail]

Friday, November 27, 2009

Google Chrome released for Linux [Unstable]

The Google Chrome browser is finally out for Linux distributions. Presently Google has marked it as unstable.
The process of installing Chrome on my Fedora 11 box went as below:

1. As root, added a file called google.repo in /etc/yum.repos.d/ with the new repository information.

[google]
name=Google - i386
baseurl=http://dl.google.com/linux/rpm/stable/i386
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

2. Searched for chrome in the available repositories.
[root@mybox yum.repos.d]# yum search chrome
Loaded plugins: presto, refresh-packagekit
=========================================================================== Matched: chrome ===========================================================================
links.i586 : Web browser running in both graphics and text mode
bleachbit.noarch : Remove unnecessary files, free space, and maintain privacy
google-chrome-unstable.i386 : Google Chrome
qfaxreader.i586 : A multipage monochrome/color TIFF/FAX viewer
wordpress-mu-plugin-add-to-any.noarch : Add to Any: Share/Bookmark/Email Button plugin for WordPress MU
wordpress-plugin-add-to-any.noarch : Add to Any: Share/Bookmark/Email Button plugin for WordPress
xloadimage.i586 : Image viewer and processor
xorg-x11-drv-openchrome.i586 : Xorg X11 openchrome video driver
xorg-x11-drv-openchrome-devel.i586 : Xorg X11 openchrome video driver XvMC development package

3. Queried the details of the available Google Chrome package.
[root@mybox yum.repos.d]# yum info google-chrome-unstable.i386
Loaded plugins: presto, refresh-packagekit
Available Packages
Name : google-chrome-unstable
Arch : i386
Version : 4.0.249.11
Release : 32790
Size : 18 M
Repo : google
Summary : Google Chrome
URL : http://chrome.google.com/
License : Multiple, see http://chrome.google.com/
Description: The web browser from Google
:
: Google Chrome is a browser that combines a minimal design with sophisticated technology to make the web faster, safer, and easier.

4. Finally fired the command for installation.
[root@mybox yum.repos.d]# yum install google-chrome-unstable.i386

5. Located Google Chrome under Applications > Internet > Google Chrome.


6. And it was running.

Configuration for other distributions can be found at Google Application Repositories for Linux based OS

Saturday, November 21, 2009

Getting a website offline through the CLI

Mirroring a site was never this easy. No extra software required; a single one-line command does it all. Let me explain how I did it.
1. Found a site that had a good AWK manual.
2. Went through the wget man page and located a few good options to the command that worked just fine, though I had a few wrong attempts earlier.
-m : Mirrors the web site.
[man] Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf --no-remove-listing.[/man]

-k : Makes the local copy of the site browsable, by converting all of the links to point to the local copies.
[man]After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.[/man]

-w : Introduces a delay of x seconds between each hit to the server, and hence helps prevent our IP from being flagged as a crawler.
[man]Wait the specified number of seconds between the retrievals. Use of this option is recommended, as it lightens the server load by making the requests less frequent. Instead of in seconds, the time can be specified in minutes using the "m" suffix, in hours using "h" suffix, or in days using "d" suffix.[/man]

So finally the command for mirroring the whole website looks like this.
#wget -mk -w 5 http://www.server.com

The '5' mentioned here along with -w means the number of seconds to wait.

Wednesday, November 4, 2009

Celebrating 5yrs of Firefox [9 Nov, 2009]


Wednesday, October 14, 2009

Securing a machine from any kind of SSH access

First check whether your machine is set to accept all SSH connections under the iptables rules.

# iptables -L INPUT --line-numbers|grep ssh
[will list all the rules applied to the incoming traffic over SSH]
Try to locate the following entry [if it exists in the list shown]
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
Remove the entry, as it is defined to accept all the incoming traffic over SSH; this is how to do it.
Just pick the row number of the entry [the first column, say it's the 4th], and then list it using
# iptables -L INPUT 4
and then delete it using
# iptables -D INPUT 4
Now to stop SSH access to your machine, just fire the following commands.
# iptables -A INPUT -p UDP --dport 22 -j REJECT
# iptables -A INPUT -p TCP --dport 22 -j REJECT
Save the rules.
# service iptables save
Restart the service
# service iptables restart
All this could also have been done by just shutting down the SSH service, but the idea was to try my hands at iptables.

Saturday, October 10, 2009

SAWRUB behind the BARS


Want one for yourself click

Monday, October 5, 2009

CLI Utils

- List contents of tar.bz2

# tar -jtvf etc.tar.bz2 |less
- Extract Single file from within tar.bz2 [path specified]
# tar -jxvf etc.tar.bz2 etc/yum.conf
- Extract Single file from within tar.bz2 [w/o path specified]
# tar -jxvf etc.tar.bz2 --no-anchored yum.conf
- Adding a panel item to run as different user.
Assume that we want to run a terminal as a user who is not logged into the X environment.
Adding the following line to the Command section of a new launcher and selecting 'Run in terminal' will open a new terminal window as that user.
# su - username
An ssh invocation can also be added to the command:
# su - username -c "ssh remoteserver"
- Rebuilding RPM database indices from the installed package headers.
# rpm --rebuilddb
If rebuilding the database gives errors, the DB is corrupted; in that case we need to delete the DB [a Berkeley DB] and recreate it once the bad one is gone.

- Deleting the RPM Database.
# cd /var/lib/rpm
Locate the DB files in there.
# ll __db*
Remove all of these DB files.
# rm -f __db*
Recheck if none of the files are left back.
# ll __db*

Try rebuilding the DB,as above



New will be coming in...

Address book configuration [LDAP] on Evolution

Select new address book creation using LDAP from the Contacts tab.

General Tab -
Name : My LDAP [Any Name]
Server :
Port : 389
Encryption : No Encryption
Login method : Using DN
Login :

Details tab -
DC=xxxx,DC=xxxx,DC=xxxx [This can be searched using the Evolution search button there]
Search scope : Sub
Search Filter :(objectclass=*)


Edit > Prefs > Auto-completion -
Select the check box for: Always show addresses of the autocompleted contact
Enable the check box for LDAP server [with the name defined at the time of creation as 'My LDAP' under general tab].

Sunday, October 4, 2009

Yum Utils

1] Check the available packages for update

# yum check-update
2] Exclude a package from getting updated
Exclusion can be done in 2 ways
i] Runtime
# yum check-update --exclude package-name
e.g.
# yum check-update --exclude firefox
Or using a wildcard, by which all the packages starting with this name will be excluded from the update:
# yum check-update --exclude openoffice*
ii] Defining in configuration
# vi /etc/yum.conf
Append the following line in there:
exclude= package1 package2 package3
Wildcards can also be used here, as above.

Will be updating the same from time to time...

Saturday, October 3, 2009

VPN Setup using PPTP

Configuring the VPN connection to my office n/w :
Steps :
1] Install pptp and NetworkManager-pptp, using yum.

# yum install pptp NetworkManager-pptp
2] Once the installation is done, the next step is to configure the VPN connection using the above-mentioned packages. For that we need to do the settings under the Network Manager applet, which can be accessed by clicking the network icon in the notification area.
3] Right click the Network Manager Applet. Select Edit Connections > VPN tab > Add.
4] Select Point-to-Point Tunnelling Protocol from the drop-down there, if it is not selected by default.
5] In the new connection set-up window.


- Give a name to the new connection [optional,but recommended].
- Select the Connect Automatically check-box.
- Define the :
Gateway | User name | Password
You can even verify the entered password by clicking 'Show Password'.
- All is done now, just click Apply.
6] Restart the machine.
7] Connecting to VPN Server.
- Left click the Network Manager applet.
- From the list select VPN Connection,
- Select VPN connection from under it, that you configured under step 5.
- Selecting the radio button will try to enable the VPN connection; during this time an animation with a lock sign will be playing over the Network Manager applet.
- Once the animation goes off without any error, try accessing the local network [office n/w] using a browser or shell.
- In case of error re-check the connection strings.
8] Disconnecting from the VPN Server.
- Left click the Network Manager applet.
- From the list select VPN Connection.
- Next go for the VPN connection that you are on.
- From the list select Disconnect VPN.

Friday, October 2, 2009

XZ takes over Gzip in RPM

As per the Fedora 12 feature list, RPM packages will be compressed with xz instead of gzip, reducing the ISO size by around 30%, and making it about 15% smaller than it would be with bzip2 compression, which was also an option. bzip2 gives greater compression than gzip, but at the cost of more memory and CPU time; XZ, on the other hand, allows better compression without those penalties. Thanks to Tukaani, the developer channel [presently a one-man army] behind the making of XZ Utils.

The core of the XZ Utils compression code is based on the LZMA SDK, which is still in rapid development; hence Fedora will just be using XZ instead of the not-yet-finalized LZMA.

XZ Utils consist of several components:
- liblzma is a compression library with API similar to that of zlib.
- xz is a command line tool with syntax similar to that of gzip.
- xzdec is a decompression-only tool smaller than the full-featured xz tool.
- A set of shell scripts (xzgrep, xzdiff, etc.) have been adapted from gzip to ease viewing, grepping, and comparing compressed files.
- Emulation of command line tools of LZMA Utils eases transition from LZMA Utils to XZ Utils.
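For anyone who wants to try the tools, basic usage mirrors gzip; a quick sketch with a placeholder file name:

$ xz -k somefile           # compress to somefile.xz; -k keeps the original file
$ xz -d somefile.xz        # decompress back to somefile
$ xzdec somefile.xz > out  # decompress to stdout using the smaller xzdec tool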

RPM [rpm-4.7.1-1] will be capable of compressing using xz.

Sources :
- XZRpmPayloads
- XZ Utils

Thursday, October 1, 2009

Making of Google Chrome

Funny way of bringing in all the colours of the Windows in a browser.



The real story goes like this : Making of Google Chrome

Sunday, September 27, 2009

Disabling IPv6 lookup for the time being

Just had a chance to open up my hosts file under F11, and saw an entry that got my attention; it was the second row as shown below.

[root@sawrub-xbox ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
[root@sawrub-xbox ~]#
Moving around the web got me to the conclusion that it is related to IPv6. The IPv6 test site also shouted back at me with my IPv4 address.
Thinking it over, questions came up for every answer that I was getting, so let me collect all the questions together first and make the answers follow them:
Q1] If my machine is still on IPv4, why is the IPv6 entry there under /etc/hosts?
Q2] Why is this enabled by default?
Q3] How do I control IPv6 lookups from my applications?
Q4] How do I disable IPv6 on my machine?
Q5] How do I get an IPv6 address?

The answers are here.
A1] Kernel 2.2+ come with IPv6 built in.

A2] IPv4 is now ~30 years old from its birth in January 1980; what a great job it has done, but the world is running out of IP addresses now due to the hundreds of computers coming online every day. The solution to this is a wider address size, with IPv6 addresses being 128 bits long, four times the size of the present IPv4 addresses.
So enabling this by default ensures that whenever there is a transition from IPv4 to IPv6, no configuration is needed.
The drawback of having IPv6 enabled by default is that application software like Firefox, which accesses the Internet and is able to use IPv6, tries to get the domain name resolved into an IPv6 address by the DNS server; if the DNS server is not capable of returning an IPv6 address, that exchange between the client and the DNS server is just a waste of time, and Firefox has to re-query the DNS server, this time for the IPv4 address. So until IPv6 connectivity actually exists, it is better to switch off the use of IPv6 at the application and OS level to speed up address resolution, which is why you read "Disabling IPv6 increases browsing speed".

A3] Disabling IPv6 in application software can be done if the application provides such an option. There is one in the most used application, Firefox. But this needs to be undone when you get an IPv6 address in the future and your DNS server is capable of returning IPv6 addresses.

Following are the steps :
1] Type in "about:config" in the address bar.
2] In the Filter field, search for "ipv6".
network.dns.disableIPv6 will show up.
3] Right-click on the row listing network.dns.disableIPv6 and click Toggle from the pop-up; the value will be set to 'true'. If it is not, just edit the value column and set it to 'true' (I am using Firefox 3.5.3).

A4] Disabling IPv6, and the services and firewalls involved, takes a couple of steps:
[root@sawrub-xbox ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=sawrub-xbox
[root@sawrub-xbox ~]#
Check the /etc/sysconfig/network file for the following entries; if the settings are there, just set them to 'no' as in:
NETWORKING_IPV6=no
IPV6INIT=no
if the lines as above were not there, then just execute the following.
[root@sawrub-xbox ~]# echo -e "NETWORKING_IPV6=no\nIPV6INIT=no" >> /etc/sysconfig/network
[root@sawrub-xbox ~]# cat /etc/sysconfig/network
--- snipped ---
NETWORKING_IPV6=no
IPV6INIT=no
--- snipped ---
Followed by few changes to ip-tables and services.
Stopping the IPv6 Iptables.
[root@sawrub-xbox ~]# service ip6tables stop
ip6tables: Flushing firewall rules: [ OK ]
ip6tables: Setting chains to policy ACCEPT: filter [ OK ]
ip6tables: Unloading modules: [ OK ]
[root@sawrub-xbox ~]#
Setting the value to off in chkconfig ensures that the service will not come up on system restart; finally, restart the network service.
[root@sawrub-xbox ~]# chkconfig ip6tables off
[root@sawrub-xbox ~]# service network restart
Shutting down interface eth0: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: [ OK ]
[root@sawrub-xbox ~]#
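To cross-check that the interfaces no longer carry IPv6 addresses, something like the following can be used (just a verification sketch; an empty result, apart from possibly the ::1 loopback entry, means no IPv6 addresses are configured):
[root@sawrub-xbox ~]# ip -6 addr show
[root@sawrub-xbox ~]# ifconfig | grep inet6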

A5] Lastly, I'll have to live with IPv4, as getting an IPv6 address solely depends on the ISP.

Tuesday, September 22, 2009

Reading a txt file in a Zip

A problem came up: to read a text file without extracting it from a zip archive.
This is how I went about it:

1] The zip file was present in my home dir by the name 'archive.zip', so listing the file in the home directory was done by:

[sawrubt@sawrub ~]#
[sawrub@sawrub ~]# ll archive.zip
-- output omitted --
[sawrub@sawrub ~]#
2] List the files present in that archive.
[sawrub@sawrub ~]# unzip -l archive.zip|tail
-- output omitted --
9999770 09-02-09 12:09 00010b32.jdb
9999963 09-02-09 12:09 00010b33.jdb
6371421 09-11-09 11:36 00010b34.jdb
71 09-11-09 11:49 version.txt
-------- -------
628281884 504 files
[sawrub@sawrub ~]#
3] Now my task was to read version.txt, so I went ahead and fired the command:
[sawrub@sawrub ~]# unzip -p archive.zip version.txt
-- output omitted --
[sawrub@sawrub ~]#
and all was done.
Definition of the '-p' option per the unzip manual:
-p extract files to pipe (stdout). Nothing but the file data is sent to stdout, and the files are
always extracted in binary format, just as they are stored (no conversions).


Cheers.

Sunday, September 20, 2009

Preventing Inactive SSH Snapping

Sometimes we face issues like very frequent connection breaks in idle SSH sessions, and it's mostly because of "a packet filter or NAT device timing out your TCP connection due to inactivity."
http://www.openssh.com/faq.html#2.12

So here is the work around for the same.

Linux :
1] Open the file /etc/ssh/ssh_config (as the root user).
2] Append the following line at the bottom:
3] ServerAliveInterval 60
4] Save the file.
5] No daemon restart is needed: /etc/ssh/ssh_config is the client-side configuration, so newly opened ssh sessions will pick up the change.
6] The line entered has the following definition:
ServerAliveInterval
Sets a timeout interval in seconds after which if no data has been received from the server, ssh
will send a message through the encrypted channel to request a response from the server. The
default is 0, indicating that these messages will not be sent to the server. This option applies
to protocol version 2 only.
Source :man page of ssh_config [man 5 ssh_config]
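The same option can also be set per-user instead of system-wide, in ~/.ssh/config (a minimal sketch):

# ~/.ssh/config
Host *
    ServerAliveInterval 60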


Windows [Putty] :
1] Open Putty.
2] Select a session and load its settings using 'Load'.
3] Set the protocol to SSH and the port number to 22 [if not default].
4] Under the 'Connection' option in the left-hand side tree,
set the value of 'Seconds between keepalives' to 60 [default 0].
5] Also set the preferred SSH protocol version to '2' under Connection->SSH.
6] Save the settings under Session.
7] Open up a new 'Break free' session now.
8] The 'keepalives' are a special kind of message sent to the server over SSH, as described in the OpenSSH FAQ quoted above.

Cheers...

Tuesday, September 8, 2009

SSH : Secure Shell

Using SSH [Secure SHell] is a very good mechanism to work on a remote system with fully secured communication between client and server: the traffic is encrypted, so no one sitting on the wire can read the data being transferred.

SSH came up as a replacement for old, unsecured protocols like FTP and telnet.
What makes the difference:
- Security
OpenSSH supports 3DES, Blowfish, AES and arcfour as encryption algorithms. These are patent free. Encryption is started before authentication, and no passwords or other information is transmitted in the clear. Encryption is also used to protect against spoofed packets.

- Compression
Requests compression of all data (including stdin, stdout, stderr, and data for forwarded X11 and TCP connections). The compression algorithm is the same one used by gzip. Compression is desirable on modem lines and other slow connections, but will only slow things down on fast networks.

- Key based authentication [RSA / DSA]
Strong authentication protects against several security problems, e.g., IP spoofing, fake routes, and DNS spoofing. The authentication methods are: .rhosts together with RSA-based host authentication, pure RSA authentication, one-time passwords with s/key, and finally authentication using Kerberos.

- Secure file transfer [scp/sftp]
@ scp
File transfer is carried out over port 22, much like BSD rcp, but here data is encrypted while in transit over the wire, using the authentication and confidentiality of SSH.
Similar to SSH, scp requests any passwords required to connect to a remote host, something rcp is not capable of (see the short example after this list).
@ sftp
SFTP is not FTP run over SSH, but rather a new protocol designed from scratch. The role SSH plays here is providing the authentication and security for the communication. sftp is somewhat sluggish at transferring files when compared to scp.

- X11 Communication
GNOME's Nautilus has support for accessing remote machines securely over SSH. In the location bar just type ssh://user:password@hostname, and in a matter of seconds you will be connected to the remote machine in the GUI, where drag and drop can be done.

A slightly better approach is not to pass the password in the URI, but to type it in when the system asks for it. So we can simply do ssh://user@hostname
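A quick illustration of the scp/sftp items above (the host and file names are only placeholders):

$ scp report.txt sawrub@remotehost:/tmp/    # copy a local file to the remote machine over SSH
$ scp -r sawrub@remotehost:/var/log ./logs  # recursively copy a remote directory to the local machine
$ sftp sawrub@remotehost                    # interactive session; get/put/ls work much like FTP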

References :
- openssh Best Practices
- The gr8 Wikipedia

Saturday, August 29, 2009

Backdoor entry as Root

The login into runlevel 1 is, by default, a ROOT login granted to the user, even without any password. Any unauthorised person can boot the system into runlevel 1 and just gain control of your system.
But don't take this as an open hole; securing it is just one file and a couple of keystrokes away. All that is needed is to edit the 'inittab' file present under /etc and add a special line in there.

In steps it will be :
1. Be the root user.
2. Open the file in vi editor
vi /etc/inittab
3. Enter 'Insert' mode by pressing the Insert key.
4. Append the following line at the end of the file:
su:S:wait:/sbin/sulogin
5. Save the file and exit.

The back-door is now secured by the root password.

Saturday, August 22, 2009

Fedora DVD == Yum Repository

Last night I corrupted my Xorg and was left in the 'black world' of runlevel 3. While trying to re-install it through yum, the network went out [everything against me]. So I had no other option than to install the Xorg server through the Fedora DVD that I had.

Trying my hand at configuring the DVD as the source for yum was a great experience, though it was a fairly simple task.

Better to go step by step [everything as the super user]:
1. All of the repository configuration files under /etc/yum.repos.d/ were backed up and placed under a newly created directory /etc/yum.repos.d/back-up/
#mv /etc/yum.repos.d/* /etc/yum.repos.d/back-up/
2. A new file 'dvd.repo' was created under /etc/yum.repos.d/ that will act as the configuration file for the new repository, with the Fedora DVD as its source.
3. The file 'dvd.repo' was opened up in the vi editor and the following lines were added in there:
[dvd]
name=DVD
baseurl=file:/media/
enabled=1
gpgcheck=0

and the file was saved.
PS : make sure that there is no other file under /etc/yum.repos.d/ except the 'dvd.repo'
4. The DVD was inserted in the dvd rom.
5. Fedora does mount the DVD or any inserted media in runlevel 5 [GUI mode], and such media are available under the /media/ directory, but that was not the case here in runlevel 3.
6. I had to locate the DVD device, which was found at /dev/cdrom, a soft link to
/dev/cdrom -> sr0
7. The DVD had to be mounted manually under /media/ in this case, in order to make the repo file's baseurl match the location of the DVD. This was done using:
#mount /dev/cdrom /media/
8. All was done now; the presence of the DVD contents under /media/ was verified using a long listing of the directory.
9. Tried out yum with the following command, and the packages were located fine:
#yum install xorg* -y
10. Once the installation was done, the system was rebooted by running:
# init 6
11. The system booted up under Xorg, with the GUI back...

Saturday, August 8, 2009

Chrome @ Fedora

Installing Chromium on Fedora isn't really all that much harder than it is on a Debian-based system. But with Fedora you have a couple of different options: you can either install from the command line or install using the yum package management tool. The benefit of installing via yum is that you will be able to update the package later through yum. Either way the installation is simple.

Let's take a look at the yum method.

You have to follow the following steps:

  1. Gain 'root' access and open your vi editor.
  2. Create the '/etc/yum.repos.d/chromium.repo' file.
  3. Add the following contents to the file:
    [chromium]
    name=Chromium Test Packages
    baseurl=http://spot.fedorapeople.org/chromium/F11/
    enabled=1
    gpgcheck=0

  4. Save the file.
  5. Update the yum database using : 'yum update'
  6. Finally fire the install : 'yum install chromium -y'
Doing all this will install Chromium on your Fedora 11 system, which can be accessed under Applications -> Internet -> Chromium Web Browser.



The steps can also be used to install Chromium on Fedora 10, with just a small change in the '/etc/yum.repos.d/chromium.repo' file: the baseurl should be replaced with
baseurl=http://spot.fedorapeople.org/chromium/F10/
Source : http://linux.com/news/software/applications/31870-get-your-chrome-experience-on-in-linux

Thursday, August 6, 2009

Some of the good links found over the net

- Gedit Plugins
- Mirroring Fedora
- Official Mirrors for North India
- Online Distro Store
- Third Party Plug-ins for Pidgin
- An open source VoIP and video conferencing application for Gnome
- Plug-ins for Evolution,mail client for Gnome
- Linux installation:No CD,DVD,USB drive,N/W needed
- The Open Web Foundation
- cynin
- Google Wave
- Elgg
- GnomeShell
- Software forge for Free Software
- Creative Commons India
- Secure system from SSH Attacks
- Add Python / C++ Auto-complete Support
- Scripts for Nautilus
- Ubuntu 9.10 with netbooks


The list is increasing day by day, so do come back to find new ones. And if you would be so kind, please add a comment for any link you think is missing here, which maybe I have not knocked at yet...

Friday, July 31, 2009

Clearing Cache

The CentOS machine was just running out of memory, and I was running out of access to it. Thanks to the system monitor applet, which was going all green [read that as RED here, telling me to stop].
Only then did I google for some way to bring it back to normal, and found many, but only a few with an explanation of the command that was to be run.
So finally, after reading all that, I went with:
echo 3 > /proc/sys/vm/drop_caches

The whole game is in the value passed to the file 'drop_caches'.
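For reference, the value written selects what gets dropped (per the kernel's Documentation/sysctl/vm.txt), and it is good practice to run sync first so dirty pages are written out:

# sync                                  # flush dirty pages to disk first
# echo 1 > /proc/sys/vm/drop_caches     # free the page cache only
# echo 2 > /proc/sys/vm/drop_caches     # free dentries and inodes
# echo 3 > /proc/sys/vm/drop_caches     # free page cache, dentries and inodes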

Monday, June 22, 2009

Error while Expunging folder

I had just started experimenting with Evolution, and the 'Trash' was not able to empty itself when asked to 'Empty Trash'. An error similar to the following occurred, for the folder 'bad-folder' that was created by mistake and was moved to the trash.

-------------------------------------------------------------------------------------------------------------------------------------------------------------
Error storing `~/.evolution/mail/local/bad-folder (mbox)':
Summary and folder mismatch, even after a sync
-------------------------------------------------------------------------------------------------------------------------------------------------------------

Tried a lot, and after googling a lot, finally got the idea to try deleting the entries for the folder 'bad-folder' from the Evolution profile itself.
So first listed all of the entries for it in the profile directory.

[ssharma@l_ssharma ~]$ ll ~/.evolution/mail/local/z-bad*
-rw-rw-r-- 1 ssharma ssharma 20426153 Jun 19 09:29 /home/ssharma/.evolution/mail/local/z-bad-folder
-rw-rw-r-- 1 ssharma ssharma 177 Jun 23 08:59 /home/ssharma/.evolution/mail/local/z-bad-folder.cmeta
-rw------- 1 ssharma ssharma 214595 Jun 19 09:29 /home/ssharma/.evolution/mail/local/z-bad-folder.ev-summary
-rw------- 1 ssharma ssharma 25644 Jun 19 09:29 /home/ssharma/.evolution/mail/local/z-bad-folder.ev-summary-meta

and then the Evolution client was shut down. Once it was down, all the files listed were cleaned up.

[ssharma@l_ssharma ~]$ rm -rf ~/.evolution/mail/local/z-bad*

Restarted Evolution and the folder was gone. :-)

Saturday, May 30, 2009

Brasero : CD / DVD Burning Application

Brasero is a revolutionary application for burning CDs/DVDs for GNOME users. It's simple to use and has some unique features that enable users to create their discs easily and quickly.

Few of the good features
- Supports multi session
- CD/DVD image creation and dumping to hard drive.
- Check for file integrity.
- Editing of silences between tracks on audio CDs
- Erase CD/DVD
- Can save/load projects
- A customisable GUI
- Drag and Drop / Cut'n'Paste from nautilus.

For more info:
http://projects.gnome.org/brasero/

Wednesday, May 20, 2009

Cron and Crontab commands

The cron command starts a process that schedules command execution at specified dates and times.

crontab - maintain crontab files for individual users. Like other commands, this one too has options:
-u It specifies the name of the user whose crontab is to be tweaked.
-l The current crontab will be displayed on standard output.
-r The current crontab will be removed.
-e This option is used to edit the current crontab using the editor specified by the VISUAL or EDITOR environment variables.

Usage :
#crontab -u [user_name] -e
An entry under crontab for user specified by the user_name will be made and saved automatically on exiting the editor.
#crontab -u [user_name] -l
Shows the cron jobs for the user specified.

A cron job declaration starts with 5 time fields, followed by the command to run. The time fields are:

  1. minute (0-59)
  2. hour (0-23)
  3. day of the month (1-31)
  4. month of the year (1-12)
  5. day of the week (0-6 with 0=Sunday)
Taking the example that the 'root' user wants to set up a cron job for the user 'jack' which echoes the message "Hello Jack" at 01:45 AM every day (note that cron mails the command's output to the user unless it is redirected), the process goes like this:

#crontab -u jack -e
45 01 * * * /bin/echo "Hello Jack"
Save and exit using ':wq' if using the vi editor.

Viewing the cron jobs for the user:
#crontab -u jack -l

Deleting the cron jobs:
#crontab -u jack -r

Shorthand at the Linux CLI

Some of the very useful shortcuts of the CLI are

  • / :- root directory
  • ./ :- current directory
  • ./command_name :- run a command in the current directory when the current directory is not on the path
  • ../ :- parent directory
  • ~ :- home directory
  • $ :- typical prompt when logged in as ordinary user
  • # :- typical prompt when logged in as root or superuser
  • ! :- repeat specified command
  • !! :- repeat previous command
  • ^old^new :- repeat the previous command, replacing 'old' with 'new'
  • & :- run a program in background mode
  • [Tab][Tab] :- prints a list of all available commands. This is just an example of autocomplete with no restriction on the first letter.
  • x[Tab][Tab] :- prints a list of all available completions for a command, where the beginning is ``x''
  • [Alt][Ctrl][F1] :- switch to the first virtual text console
  • [Alt][Ctrl][Fn] :- switch to the nth virtual text console. Typically, there are six on a Linux PC system.
  • [Alt][Ctrl][F7] :- switch to the first GUI console, if there is one running. If the graphical console freezes, one can switch to a nongraphical console, kill the process that is giving problems, and switch back to the graphical console using this shortcut.
  • [ArrowUp] :- scroll through the command history (in bash)
  • [Shift][PageUp] :- scroll terminal output up. This also works at the login prompt, so you can scroll through your boot messages.
  • [Shift][PageDown] :- scroll terminal output down
  • [Ctrl][Alt][+] :- switch to next X server resolution (if the server is set up for more than one resolution)
  • [Ctrl][Alt][-] :- change to previous X server resolution
  • [Ctrl][Alt][BkSpc] :- kill the current X server. Used when normal exit is not possible.
  • [Ctrl][Alt][Del] :- shut down the system and reboot
  • [Ctrl]c :- kill the current process
  • [Ctrl]d :- logout from the current terminal
  • [Ctrl]s :- stop transfer to current terminal
  • [Ctrl]q :- resume transfer to current terminal. This should be tried if the terminal stops responding.
  • [Ctrl]z :- send current process to the background
  • reset :- restore a terminal to its default settings
  • [Leftmousebutton] :- Hold down left mouse button and drag to highlight text. Releasing the button copies the region to the text buffer under X and (if gpm is installed) in console mode.
  • [Middlemousebutton] :- Copies text from the text buffer and inserts it at the cursor location. With a two-button mouse, click on both buttons simultaneously. It is necessary for three-button emulation to be enabled, either under gpm or in XF86Config.

Saturday, April 11, 2009

Adobe Flash Plugin

All of the following steps should be done as the root user in a terminal.
1] Download the package for yum repository configuration on your machine, using:

wget http://linuxdownload.adobe.com/adobe-release/adobe-release-i386-1.0-1.noarch.rpm
You can get the latest one from here.
2] Once the download is complete, run:
# rpm -ivh adobe-release-i386-1.0-1.noarch.rpm
3] Now your box is configured to use the Flash plugins available from Adobe.
4] To install Flash, just type in:
# yum install flash-plugin -y
5] Restart Firefox.
6] To verify that Flash has installed, close the browser instances, reopen them, and browse the following link.

Root Login Not Possible

A few wrong settings that can prevent the 'root' user from logging in to the Linux box are:
1) The Shell
Check for the login shell that the system is offering to the 'Root' user.
Steps for checking this
i) Log in to the system in single user mode. Help
ii) Check the login shell under /etc/passwd using the following:
# grep root /etc/passwd
In the results, look for the line starting with 'root' and check the last field, considering ':' as the delimiter.
iii) If the entry is '/sbin/nologin', that means the root user is not being provided a shell that allows logging in to the system and performing tasks. We need to change this shell to /bin/bash so that the user is allowed to log in.
iv) To change this we need to fire a single command.
# usermod -s /bin/bash root
This command will change the shell for the root user to /bin/bash as desired.
v) The new shell can be cross-checked by running the command used previously in step ii.
# grep root /etc/passwd
This time the last field should be /bin/bash.

2) Permissions of /etc/securetty file
For the root user to log in to the machine, the file /etc/securetty should have either 600 or 644 set as its permissions. So first we need to check the present permissions of the file; this is as simple as viewing the file listing.
# ls -l /etc/securetty
The first column of the listing should be something like -rw-------, which means the value is set to 600.
In case this value is not 600 or 644, then we need to change it using the following command:
# chmod 600 /etc/securetty
A cross-check can be done to see if the file permissions have been modified, by viewing the permissions again as above.

3) No terminal entry in /etc/securetty should be commented.
Open the file /etc/securetty using the vi editor and check to ensure that no line in there is commented; if any is, uncomment it, save the file, and exit.
For editing the contents of the file, working with the vi editor should be known.
PS: A very good post at The Linux Documentation Project explains the use of vi.

4) Check the account details.
The next check to be done is of the account details for the root user. First of all we will check the present account details of the user:
# chage -l root
The command will list account information about the root user. Check the dates and ensure that none of them are offending; the default and good settings are:
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7

If we take as an example that 'Account expires' has a date earlier than today, it's very clear that the account has expired. But since the root user account should not be expired, we need to change the value of this parameter back to its default value of never.
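If 'Account expires' does show a past date, it can be reset with chage; per the chage man page, passing -1 as the expiry date removes the account expiration:
# chage -E -1 root
# chage -l root
The second command re-lists the details; 'Account expires' should now read 'never'.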

single user mode

Steps for entering single user mode:
i) Reboot the machine; at GRUB [the boot loader] there will be a list of kernels to select from.
ii) Select the version of the kernel that you wish to boot and type e for edit. A list of items in the configuration file for the kernel you just selected will show up.
iii) Select the line that starts with kernel and type e to edit the line.
iv) At the end of this line, add a space and type 1. Press [Enter] to exit edit mode.
v) Once back at the GRUB screen, type b to boot into single user mode.

Saturday, February 28, 2009

Changing Panel Color Scheme

After a lot of fighting to locate a simple hack to change the panel colour scheme, instead of installing another application for this, I finally got the answer: a '.gtkrc-2.0' file with colour definitions for the panel.

All that is needed is to create a simple file named .gtkrc-2.0. The definition fg[NORMAL] = "#ffffff" sets the foreground colour [font colour] to white [#ffffff]; this can be set to any value per your needs. A list of hex values for different colours is available at http://www.free-webmaster-tools.com/colorpicker.htm.

Steps to create this file and change the colour (in this example the font colour will be changed):

1. Go to your home directory, using the terminal. [ cd ~ ]
2. vi .gtkrc-2.0
3. Go to the Insert mode of the vi editor by pressing the INSERT key;
4. -- INSERT -- should appear at the bottom left corner of the terminal screen.
5. Copy the contents mentioned below and paste them into the file.
6. Press the Esc key followed by [:wq] to save the changes made to the file.
7. Type the following command at the prompt; this will reload the panel UI with the new settings:
8. killall gnome-panel
9. Once done, you now have your new colour scheme.


Contents of file [.gtkrc-2.0] :
style "panel"
{
fg[NORMAL] = "#ffffff"
# fg[PRELIGHT] = "#000000"
# fg[ACTIVE] = "#ffffff"
# fg[SELECTED] = "#000000"
# fg[INSENSITIVE] = "#8A857C"
# bg[NORMAL] = "#000000"
# bg[PRELIGHT] = "#dfdfdf"
# bg[ACTIVE] = "#D0D0D0"
# bg[SELECTED] = "#D8BB75"
# bg[INSENSITIVE] = "#EFEFEF"
# base[NORMAL] = "#ffffff"
# base[PRELIGHT] = "#EFEFEF"
# base[ACTIVE] = "#D0D0D0"
# base[SELECTED] = "#DAB566"
# base[INSENSITIVE] = "#E8E8E8"
# text[NORMAL] = "#161616"
# text[PRELIGHT] = "#000000"
# text[ACTIVE] = "#000000"
# text[SELECTED] = "#ffffff"
# text[INSENSITIVE] = "#8A857C"
}
widget "*PanelWidget*" style "panel"
widget "*PanelApplet*" style "panel"
class "*Panel*" style "panel"
widget_class "*Mail*" style "panel"
class "*notif*" style "panel"
class "*Notif*" style "panel"
class "*Tray*" style "panel"
class "*tray*" style "panel"

VLC Segmentation Fault

Yet another day for me: after updating my FC9 overnight, I found the VLC media player had stopped doing its daily task of playing tracks for me.
The UI just stopped showing up; it would come up for a flash and then go off. So now the digging started, along with googling side by side.
A bit later, running 'vlc' at the shell gave me the cause of the problem: a 'Segmentation fault' was coming up.

A lot of googling finally landed me in the #videolan IRC channel at irc.videolan.org, and over there I finally got my problem resolved.

The guy over there who helped out did not mention the cause of the error in particular, but from the chat it seems to have been due to the skins2 interface that I was using.

I was asked to :

1. Delete '~/.config/vlc/*' as a normal user. [rm -rf ~/.config/vlc/*]
2. Re-run 'vlc' at the prompt.
3. A pop-up with a 'privacy and network' warning came up, which was normal per him, so I acknowledged it with the default settings.
4. VLC was there in a second, as good as new.

Hoping that this helps you out if you face a similar problem with VLC.