Archive for December, 2011


The Internet has become an endless reality. People can now talk to their friends and relatives through video chat, something that seemed impossible a few decades ago. The Internet has greatly revolutionized the world. From payments to social networking, it has had its impact on most individuals. Without the Internet, life is hard.

The Internet consists of an infrastructure laid down by hardware devices like cables, routers, switches (and, earlier, hubs), transmission towers, satellites, etc. These form the backbone of the Internet.

The various components include nodes, clients, and servers. Some are end points — the computer, smartphone or other device you’re using to read this may count as one. We call those end points clients. Machines that store the information we seek on the Internet are servers. Other elements are nodes, which serve as connecting points along a route of traffic. Connections can be physical or virtual. Moreover, we can categorize Internet connections as wired or wireless as well.

Now come the software components. Protocols are the sets of rules that nodes and machines in a network follow. Without protocols, communication is nearly impossible. They lay down the standards and policies that the nodes in the network must follow.

Commonly used protocols on the Internet include TCP, UDP, IP, HTTP, and FTP.

Now let's concentrate on how packets flow across the Internet.

First, a connection to the Internet is established, and we then make use of a web browser for viewing web pages. Your computer sends an electronic request over your connection to your Internet service provider (ISP), the company that provides your Internet access, for example Verizon, Airtel, or BSNL. The ISP routes the request to a server further up the chain on the Internet. Eventually, the request will hit a domain name server (DNS).

The DNS is an important feature of the Internet: it maps human-readable domain names to IP addresses, and it’s used in redirection lookups among many other tasks. This server will search for a match for the domain name you’ve typed in, for example www.google.com. If a match is found, it redirects the request to the corresponding IP address; for example, http://www.google.com will redirect to 216.239.51.99. If it doesn’t find a match, it will send the request further up the chain to a server that has more information.
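As a toy illustration of that lookup chain, here is a sketch in Python. The zone data and server roles below are invented for illustration; a real resolver speaks the DNS protocol over the network rather than scanning dictionaries.

```python
# Toy sketch of a DNS lookup chain: the request climbs from one server
# to the next until a match for the name is found. All zone data here
# is invented; real resolvers query servers over UDP port 53.

ROOT = {"com": "tld-server"}                          # knows who handles .com
TLD = {"google.com": "authoritative-server"}          # knows who handles google.com
AUTHORITATIVE = {"www.google.com": "216.239.51.99"}   # knows the actual address

def resolve(name):
    """Ask each server in the chain until one has a record for the name."""
    for zone in (ROOT, TLD, AUTHORITATIVE):
        if name in zone:
            return zone[name]
    raise LookupError("no match for " + name)

print(resolve("www.google.com"))  # 216.239.51.99
```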

The request will finally reach the web server we’re after. The Internet makes use of packets: data is divided into several small packets that are transmitted and received over the Internet. Each protocol defines its own header and footer formats for the information that each packet carries, and the routing protocol is specified as well. Hence, depending on the protocol and the addresses, the packets reach the destination node.

That’s an important feature. Because packets can travel multiple paths to get to their destination, it’s possible for information to route around congested areas on the Internet. In fact, as long as some connections remain, entire sections of the Internet could go down and information could still travel from one section to another — though it might take longer than normal.
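The packet behaviour described above can be sketched in a few lines of Python. The 4-byte packet size is invented for illustration; real packets are far larger and carry full protocol headers.

```python
# Sketch of packetization: data is split into small numbered packets
# that may travel different routes and arrive out of order, then get
# reassembled by sequence number at the destination.

def packetize(data, size=4):
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets):
    # Packets may arrive in any order, so sort by sequence number first.
    return b"".join(chunk for _, chunk in sorted(packets))

pkts = packetize(b"hello internet")
pkts.reverse()  # simulate out-of-order arrival over different routes
assert reassemble(pkts) == b"hello internet"
```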

Routing is essential, as there are several ways to send and receive packets over the Internet, and it's important to follow the best path and provide alternate paths when necessary.

HTH

 

I was always amazed at the way torrents work, which is why I felt I must write an article about torrents on my blog. Torrents are the most widely used mechanism for downloading files over the Internet. Even though they are an innovation, they are also responsible for much of the piracy that happens over the Internet.

Now let's get to the point: "How do torrents work?"

Torrents come under the category called peer-to-peer (P2P) sharing. P2P file sharing is different from regular downloading. In peer-to-peer sharing, we use a software program to find and connect to computers that have the file you want to download. Because these are ordinary computers like yours, as opposed to servers, they are called peers.

A few definitions:

  • “Swarming” is about splitting large files into smaller “bits”, and then sharing those bits across a “swarm” of dozens of linked users.
  • “Tracking” is when specific servers help swarm users find each other.
  • Swarm members use special Torrent client software to upload, download, and reconstruct the many file bits into complete usable files.
  • Special .torrent text files act as pointers during this whole process, helping users find other users to swarm with, and enforcing quality control on all shared files.
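The "swarming" idea above can be sketched in Python. The 8-byte piece size and sample data are made up for illustration; real torrents commonly hash pieces of 256 KB or more.

```python
import hashlib

# Sketch of swarming: the file is split into fixed-size pieces and each
# piece is hashed, as in a .torrent file, so a client can verify every
# bit it receives from untrusted peers before accepting it.

def piece_hashes(data, piece_size=8):
    pieces = [data[i:i + piece_size] for i in range(0, len(data), piece_size)]
    return [hashlib.sha1(p).hexdigest() for p in pieces]

def verify(piece, expected):
    return hashlib.sha1(piece).hexdigest() == expected

data = b"some large file contents"
hashes = piece_hashes(data)
assert verify(data[0:8], hashes[0])        # an intact piece checks out
assert not verify(b"tampered", hashes[0])  # a corrupted piece is rejected
```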

A torrent primarily makes use of two concepts: “seeds” and “peers”.

Every torrent client contacts a tracker to find other computers running a torrent client. Those that have the complete file are called “seeds”, and those with only a portion of the file downloaded are called “peers”.

The tracker in the network manages the swarm, i.e., it identifies which computers are seeds and which are peers.

Torrents make use of simultaneous upload and download, i.e., a torrent client downloads a part of the file and at the same time uploads the parts it already has for use by other peers in the network. The upload and download rates can be specified in the torrent client.

Download speed is influenced by torrent tracking servers, which monitor all swarm users. I have come across a few articles saying that most torrent clients make use of a strategy called “tit for tat”, which means the more you upload, the better your download speed. I'm from India and I have never seen my download speeds cross the 250 Kbit/s mark, so I have little to comment about it.
A quote from netforbeginners.about.com:
If you share, tracker servers will reward you by increasing your allotted swarm bandwidth. Similarly, if you leech and limit your upload sharing, tracking servers will choke your download speeds, sometimes to as slow as 1 kilobit per second. Indeed, the “Pay It Forward” philosophy is digitally enforced! Leeches are not welcome in a bittorrent swarm.
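A rough sketch of the "tit for tat" idea in Python. The peer names and upload rates are hypothetical, and real clients add periodic "optimistic unchoking" of a random peer on top of this to discover better partners.

```python
# Sketch of "tit for tat" choking: the client uploads to (unchokes) the
# peers that have recently uploaded the most to it, and chokes the rest,
# so sharers are rewarded with bandwidth and leeches are starved.

def unchoke(upload_rates, slots=2):
    """Return the peers with the highest upload rates toward us."""
    ranked = sorted(upload_rates, key=upload_rates.get, reverse=True)
    return set(ranked[:slots])

rates = {"peer_a": 120, "peer_c": 80, "peer_b": 5, "leech": 0}
assert unchoke(rates) == {"peer_a", "peer_c"}  # sharers get rewarded
assert "leech" not in unchoke(rates)           # leeches get choked
```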

For further reading: http://en.wikipedia.org/wiki/BitTorrent_(protocol)

With the increase in traffic, it is getting difficult to provide high-speed access to resources like data and network bandwidth. Companies are striving to provide high-speed access to their clients and customers, but growing traffic proves to be a hindrance. Content Delivery Networks (CDNs) provide a solution: by placing edge servers at various locations around the globe, companies can now provide high-speed access by directing the users of a specific region to their corresponding servers.

A quote from Wikipedia:

The capacity sum of strategically placed servers can be higher than the network backbone capacity. This can result in an impressive increase in the number of concurrent users. For instance, when there is a 10 Gbit/s network backbone and 200 Gbit/s central server capacity, only 10 Gbit/s can be delivered. But when 10 servers are moved to 10 edge locations, total capacity can be 10×10 Gbit/s.
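The arithmetic in the quote can be checked directly:

```python
# The arithmetic from the quote above: a central server can deliver no
# more than the backbone link it sits behind, while each edge server
# uses its own local link.

backbone_gbps = 10
central_server_gbps = 200
edge_servers, per_edge_gbps = 10, 10

central_deliverable = min(backbone_gbps, central_server_gbps)
edge_deliverable = edge_servers * per_edge_gbps

assert central_deliverable == 10   # capped by the backbone
assert edge_deliverable == 100     # 10 x 10 Gbit/s at the edges
```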

CDNs are dynamic in nature and serve content with the help of TCP and UDP. CDN technologies place a lot of importance on delivering resources dynamically, and this also plays a major role when a particular server fails: a CDN can provide high availability by using other edge servers, so there is no lag in the transmission of data.
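A minimal sketch of how a CDN might direct a user to a healthy edge server, with failover. The regions, distances, and health flags below are invented for illustration; real CDNs use DNS tricks and network measurements rather than a static list.

```python
# Sketch of edge-server selection with failover: send the user to the
# nearest edge server that is up; if the closest one is down, fall back
# to the next nearest healthy one.

def pick_edge(edges):
    """edges: list of (region, distance_to_user, healthy) tuples."""
    healthy = [e for e in edges if e[2]]
    if not healthy:
        raise RuntimeError("no edge server available")
    return min(healthy, key=lambda e: e[1])[0]

edges = [("mumbai", 5, False),      # nearest, but currently down
         ("singapore", 30, True),
         ("london", 80, True)]
assert pick_edge(edges) == "singapore"  # nearest healthy edge wins
```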

Some of the popular CDNs include Akamai, Amazon's CloudFront, and CloudFlare.

It's very handy to use Hybridfox for handling images in Eucalyptus; this article demonstrates how to manage instances on the cloud using the command line.

A key pair needs to be created before logging into a virtual machine on the cloud:

euca-add-keypair mykey > mykey.private

Start the VM. Here n is the number of instances that you want to start, and emi is the image that you want to run on the cloud:

euca-run-instances -k mykey -n (number of instances) (emi)

To query the system for the running instances and their status, use:

euca-describe-instances

It is a common issue that some download mirrors may be down or hit by heavy traffic; this method suggests a way to continue a download that you had paused from an alternate mirror (or URL). I use Internet Download Manager (IDM), and it's one of the best in the business. To continue your paused download from a different mirror, just get the address of your alternate mirror from the source (AFAIK, most websites these days provide links to alternate mirrors, especially for massive files). Go to IDM, right-click the file that was paused, click Properties, and change the address to the new mirror URL; you can now resume your download at a higher speed. Choose a mirror that is relatively close to you for better speeds.
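The reason resuming from a different mirror works at all is that HTTP downloads can be resumed with a Range header, assuming both mirrors serve byte-identical files. A minimal sketch (the byte offset below is hypothetical):

```python
# Why a paused download can continue from another mirror: the client
# asks for only the missing bytes via an HTTP Range request header,
# so any mirror serving the identical file can supply the remainder.

def resume_headers(bytes_downloaded):
    """Request everything from the given offset to the end of the file."""
    return {"Range": "bytes=%d-" % bytes_downloaded}

assert resume_headers(1048576) == {"Range": "bytes=1048576-"}  # resume at 1 MB
```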

HTH

Adding Another Logical Volume

Adding another LV is a straightforward process.

  1. Add the new hard drive.
  2. Configure the new hard drive with partitions using a command tool such as fdisk. The partition type code for Linux LVM is 8e within fdisk.
  3. If you’ve created separate partitions, you can dedicate the space of a specific partition to a physical volume (PV). If you don’t already have an empty partition available, you’ll need to create one first. For example, for the first partition, /dev/sda1, you can do this with the following command:
    # pvcreate /dev/sda1
  4. Next, you’ll want to create a Volume Group (VG) from one or more empty, properly configured partitions (or drives). One way to do this is with the following command:
    # vgcreate Volume01 /dev/sda1
  5. Before proceeding, you should inspect the VG with the vgdisplay command.
  6. You should now be able to add another LV with the lvcreate command.
    # lvcreate -l 20 Volume01 -n LogVol01
  7. You’ve added a new logical volume. Naturally, you’ll need to format the LV and mount a directory on it before you can use it. For the example shown, you would use the following commands:
    # mkfs -j /dev/Volume01/LogVol01
    # mount -t ext3 /dev/Volume01/LogVol01 /tmp
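As a quick sanity check of the lvcreate numbers above: -l 20 allocates logical extents, so the LV size is the extent count times the VG's physical extent (PE) size. A 4 MB PE size (the LVM default) is assumed here; check yours with vgdisplay.

```python
# LV size arithmetic for "lvcreate -l 20": number of logical extents
# times the physical extent size of the volume group. The 4 MB PE size
# is the LVM default and is an assumption for this example.

pe_size_mb = 4
extents = 20
lv_size_mb = extents * pe_size_mb
assert lv_size_mb == 80  # the example LV is 80 MB
```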

Removing Logical Volumes

Removing an existing LV requires a straightforward command. The basic command is lvremove. If you’ve created an LV in the previous section and want to remove it, the basic steps are simple:

  1. Save any data in directories that are mounted on the LV.
  2. Unmount any directories associated with the LV. Based on the example in the previous section, you would use the following command:
    # umount /dev/Volume01/LogVol01
  3. Apply the lvremove command to the LV with a command such as:
    # lvremove /dev/Volume01/LogVol01

Resizing Logical Volumes

If you have an existing LV, you can add a newly created PV to extend the space available on your system. All it takes is appropriate use of the vgextend and lvextend commands. For example, if you want to add PEs to the VG associated with the aforementioned /home directory, you could take the following basic steps:

Just be careful while resizing home directories, and do take sufficient precautions while handling resize requests, as they are prone to causing system crashes.

  1. Back up any data existing on the /home directory.
  2. Unmount the /home directory from the current logical volume.
  3. Extend the VG to include the new hard drive or partitions that you’ve created. For example, if you wanted to add /dev/sdd1 to the /home VG, you would run the following command:
    # vgextend Volume00 /dev/sdd1
  4. Make sure the new partitions are included in the VG with the following vgdisplay command:
    # vgdisplay Volume00
  5. Extend the current LV to include the space you need. For example, if you wanted to extend the LV to 2000MB, you’d run the following command:
    # lvextend -L2000M /dev/Volume00/LogVol00

    The lvextend command can help you configure LVs in KB, MB, GB, or even TB. For example, you could get a nearly identical result (2G is 2048MB) with the following command:

    # lvextend -L2G /dev/Volume00/LogVol00
  6. Reformat and remount the LV, using the commands described earlier (alternatively, use resize2fs to grow the existing filesystem in place if you need to preserve its contents):

    # mkfs -j /dev/Volume00/LogVol00
    # mount -t ext3 /dev/Volume00/LogVol00 /home

The following example explains the SGID bit – a special permissions feature in Linux operating systems.

  1. Add users called test1, test2, and test3.
    # useradd test1; passwd test1
    # useradd test2; passwd test2
    # useradd test3; passwd test3
  2. Edit the /etc/group file and add a group called group1. Make the test1 and test2 accounts a member of this group. You could add the following line to /etc/group directly or use the Red Hat User Manager:
    group1::9999:test1,test2

    Before you proceed, make sure the group ID you assign to group group1 is not already in use.

  3. Create a shared directory for the group1 group:
    # mkdir /home/testshared
  4. Change the user and group ownership of the shared directory:
    # chown nobody.group1 /home/testshared
  5. Log in as test1 and test2 separately. Change the directory to the testshared directory and try to create a file. Two ways to do so are with the following commands.
    $ date >> test.txt
    $ touch abcd
  6. Now as the root user, set group write permissions on the testshared directory.
    # chmod 770 /home/testshared
  7. Log in again as user test1, and then try to create a file in the new directory. So far, so good.
    $ cd /home/testshared
    $ date >> test.txt
    $ ls -l test.txt
  8. Now check the ownership on the new file. Do you think other users in the group1 group can access this file?
    $ ls -l
  9. From the root account, set the SGID bit on the directory:
    # chmod g+s  /home/testshared
  10. Switch back to the test1 account and create another file. Check the ownership on this file. Do you think that user test2 can now access this file?
    $ date >> testb.txt
    $ ls  -l
  11. Now log in as the test2 account. Go into the /home/testshared directory, create a different file, and use ls -l to check permissions and ownership again.
  12. Switch to the test3 account and check whether you can or cannot create files in this directory, and whether you can or cannot view the files in this directory.
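The steps above can be summarized with a toy model of what SGID changes about group ownership. The group and user names mirror the exercise; this is only an illustration, not how the kernel is implemented.

```python
# Toy model of what the SGID bit changes: without it, a new file gets
# the creating user's primary group; with it, the file inherits the
# directory's group, so everyone in group1 can share the files.

def new_file_group(dir_group, dir_has_sgid, user_primary_group):
    return dir_group if dir_has_sgid else user_primary_group

assert new_file_group("group1", False, "test1") == "test1"   # steps 7-8
assert new_file_group("group1", True, "test1") == "group1"   # steps 9-10
```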

There are several methods to create VMs in CentOS; some of the popular methods include virt-install, xend, and xm.

I'll keep updating the post as and when I remember more ways.

1. The first method is using virt-install:

virt-install -n rhel5PV -r 500 -f /var/lib/xen/images/rhel5PV.dsk -s 3 --vnc \
-p -l ftp://10.1.1.1/trees/RHEL5-B2-Server-i386/

2. The second method is using xm

kernel = "/boot/vmlinuz-xen-install"
ramdisk = "/boot/initrd-xen-install"
extra = "text ks=http://localserver/minimal-ks.cfg"
name = "mailserver"
memory = "256"
disk = [ 'tap:aio:/srv/xen/mailserver.img,xvda,w', ]
vif = [ 'bridge=xenbr0', ]
vcpus = 1
on_reboot = 'destroy'
on_crash = 'destroy'
The above is a sample configuration file for the xm command; one can then run xm create <cfg file> to start the VM.


recomp

How_to_add_system_call


These were some of the resources that I used while learning the MINIX operating system. Andrew Tanenbaum has written a book exclusively dealing with in-depth concepts of MINIX system calls and its file system.

SUMO ROBOT ARENA

This robot was designed by me and my project mate Manikandan Eshwar as a part of our Embedded Systems Laboratory. The basic functionality of the robot is as follows:

It is similar to an obstacle-detection robot, but it senses an object and pushes it out of the ring.

The program is coded in such a way that it detects the white and black colors. Once within the white ring, the robot moves about freely, searching for an opponent; once it senses the black, it begins to rotate about its axis, as crossing the white means it's out of the ring and is declared out. It was on this basis that the robot was designed. I'll upload a copy of the ring and the design soon.

HTH

Code and circuit diagram are available on request.