Saturday, February 25, 2023

The History of the Internet

 The internet is an incredible tool that has revolutionized the way we live, including how we work, learn, socialize, consume entertainment, shop, and more. It provides a virtually endless source of information and has completely changed the way we communicate. Although it now plays a significant role in our daily lives, the internet is a relatively new invention. Most people over the age of 35, or even 30, will remember a time when the internet didn’t feature in everyday life.

The invention of the internet didn’t happen overnight, and like most major breakthroughs, it was developed as a sequence of expansions on an original idea.

In this article, we explore the origins of the internet and the key players involved. Let’s take a step back in time and see how the world wide web came to be.

Early work by ARPA

We have to go back to the moon race to understand how the internet first started to percolate in the minds of computer scientists and engineers.

It was the year 1957 when the Soviet Union launched its satellite Sputnik. The event caught US strategic and military planners completely off guard—they had no idea that their Cold War adversary possessed this level of sophisticated communications and space navigation equipment.

The US government immediately sprang into action. President Dwight D. Eisenhower created the Advanced Research Projects Agency (ARPA) with the explicit aim of regaining technological superiority in arms and space exploration.

ARPA was funded by the US Department of Defense, hence the common perception that the internet was created by the Pentagon. In fact, ARPA was largely an independent body that had the funds and mandate to do whatever it took to succeed.

It did exactly that, directly employing hundreds of scientists and subcontracting many more. ARPA worked with professors at MIT, Stanford, and Berkeley, initially to develop advanced technology for space, ballistic missiles, and nuclear testing. A focus on communication via computers came later, after the organization realized that data transfer would be easier to protect by breaking information up into tiny pieces and reassembling it at the other end. That’s essentially what the internet does today.

The “Galactic Network” concept

The theory of a global interlocking network of computers was first put forward by MIT scientist J.C.R. Licklider, who spoke of a “Galactic Network” concept. Under this vision, computers across the globe would be connected to enable quick and seamless data transfer. This would be accessible to everyone, allowing for instant communication and access to an infinite amount of information.

Licklider joined ARPA in 1962 to head the project. His colleagues Leonard Kleinrock and Lawrence Roberts came up with the theory of packet transfer, whereby messages are broken up into ‘packets’, sent separately to their destination, and then reassembled at the other end. This was far more secure than sending messages via a solitary line: the complexity of the transfer made eavesdropping difficult, and a single intercepted packet gave away barely any discernible information.

BBN IMP connected computers to Arpanet. Source: Don DeBold licensed under CC by 2.0
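
To make the idea concrete, here is a minimal sketch (an assumed modern illustration in Python, not code from the era) showing a message being split into numbered packets, arriving out of order, and being reassembled at the destination:

```python
# Simplified illustration of packet transfer: split a message into small,
# numbered packets, shuffle them to mimic independent delivery, then
# reassemble them in order at the destination.
import random

def to_packets(message: str, size: int = 8) -> list[tuple[int, str]]:
    # Each packet carries its offset so the receiver can restore the order.
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets: list[tuple[int, str]]) -> str:
    # Sort by offset and stitch the chunks back together.
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("Packets travel separately and are put back together on arrival.")
random.shuffle(packets)        # packets may arrive in any order
print(reassemble(packets))     # the original message is reconstructed
```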

Development of a wide area network

The first result of this experiment came in 1965 when ARPA connected a computer in Massachusetts with another in California via conventional telephone lines. The outcome was mixed—it confirmed the viability of packet switching theory and data transfer via a wide area network, but not everything went according to plan. The experiment indicated that circuit switching via telephone lines was inadequate.

At the same time, the US Department of Defense also contracted the RAND Corporation to conduct a study on how it could maintain its command and control facilities in the case of a devastating nuclear attack. The RAND Corporation began work on a decentralized facility that could not be knocked out in such a scenario—ensuring that the military would still have control over its nuclear assets for a second-strike capability.

By the late 1960s, both the RAND Corporation and ARPA’s researchers came to the same conclusion independently of each other: the best way forward was to develop a wide area network with the range and redundancy to survive disruption. Their recommendations were incorporated into a computer network system called ARPANET.

In the initial phase, ARPANET linked computers installed at both UCLA and Stanford. Students at UCLA logged in to Stanford’s network and accessed its database. After successful testing, the system gradually expanded to host a total of 23 computers across California, and the number was almost 50 by 1974.

ARPANET network map 1974. Source: Wikimedia

The first public demonstration of this new technology came during the International Computer Communication Conference in October 1972. At the same time, a version of email was introduced—written by Ray Tomlinson to facilitate easy communication for ARPANET developers. This program was further refined to list, archive, forward, and respond to messages.

Left: Ray Tomlinson; right: Andreu Vea. Source: Andreu Vea via Wikimedia

Progression to multiple networks

In 1973, work started on a much more ambitious program: to develop internetworking, or the connection of multiple networks. This was different from the architecture of ARPANET, which was a closed network and didn’t allow computers from other networks to communicate with it.

The idea was to facilitate an open architecture that wouldn’t be restricted by geography or specific protocols. This would later come to be known as the Transmission Control Protocol/Internet Protocol (TCP/IP). The utility of TCP/IP is that it allows each individual network to operate independently—if one network was brought down it wouldn’t affect the stability of the entire web.

TCP/IP involved no overall global watchdog or manager. Networks were connected together via routers and gateways. And it was from the initial code of the TCP/IP protocol that the internet would later emerge. The key underlying idea of the internet was that of open-architecture networking. That’s what set it apart from earlier versions like ARPANET which were critical in validating things like packet switching, but weren’t designed to be as inclusive as the internet we see today.

Here are some of the salient features of the TCP/IP protocol that we see embedded in the internet (a minimal socket sketch follows the list):

  • Each network should be able to work on its own and require no modifications to connect to the internet.
  • Within each network there would be a gateway to connect it to the outside world.
  • The gateways would not restrict the information passing through them.
  • Information would be transmitted via the fastest possible route.
  • If one computer was blocked, traffic would be routed via an alternative path.
  • The gateways would always remain open and they would route the traffic without prejudice.
  • All development would be open source and design information would be freely available to all.

Early renditions of this idea involved a packet radio system that used satellite networks and ground-based radio networks to maintain effective communication. In 1975, researcher Robert Metcalfe developed a system that replaced radio transmission of data with a cable capable of providing far greater bandwidth.

Left: Andreu Vea; right: Bob Metcalfe. Source: Andreu Vea via Wikimedia

Metcalfe called it the Alto Aloha Network, and it laid the foundations for what would later be known as Ethernet. The system was a significant upgrade, as it allowed the transfer of millions of bits of data per second, compared with the thousands of bits per second possible via radio channels.


Personal computers and the information superhighway

Until the mid-1970s there weren’t any computers available for personal use. The internet was designed with the view that it would encapsulate networks maintained on large mainframe computers—usually owned by large corporations, governments, and academic institutions.

TCP/IP was incorporated during this time but it took several years of painstaking research and fine-tuning before it was able to serve the needs of the globe.

The Altair 8800, widely regarded as the world’s first personal computer, was introduced in 1975 by designer Ed Roberts as a build-it-yourself kit. It could address a maximum of 64 KB of RAM. Apple was founded the next year (in 1976), and its Apple II personal computer was released in 1977. Computers were now slowly becoming part of the mainstream and finding their way into people’s homes.

Altair 8800, Smithsonian Museum. Source: Ed Uthman via Wikimedia

In 1979, researchers Tom Truscott and Jim Ellis developed USENET, a system based on UNIX architecture that allowed files to be transferred between computers on the same network. It is widely considered the first rendition of internet forums and discussion groups. In its first iteration, USENET allowed people to transfer electronic newsletters between Duke University and the University of North Carolina. The name is derived from ‘users network’.

Early versions of USENET only allowed for one or two file transfers every day. But discussions could be threaded via topics and the system didn’t need a centralized database to function. The program gave birth to the concept of the free flow of information, later known as the ‘information superhighway’.

The internet takes off

It was in the early 1980s that the commercial version of the internet that we see today really started to take shape. ARPANET was in the process of switching to inclusive TCP/IP protocols but still lacked the backbone of the entire system—meaning it wasn’t capable of morphing into the global interlocking behemoth that the internet has become today. It needed an upgrade.

The possibility of connecting to ARPANET via traditional dial-up phone lines was raised during a meeting of computer scientists in 1981. The inadequacies of radio and satellite connections were discussed, and the aim was to expand access to ARPANET.

Email was a powerful driver of early innovation on the internet, and its rapid popularity began to choke the system. Users were constantly sending messages back and forth, and researchers understood that this behavior needed to be encouraged. To do that, however, the system needed to be expanded and upgraded to the point where geography would no longer be an issue.

Funding for this idea came from the National Science Foundation in 1982. Computer scientists built Telenet—the world’s first commercial packet-switched network—and allowed access to it via a complementary dial-up solution called PhoneNet. This resulted in a faster, cheaper way of connecting to the internet and allowed for email communication between the US, Germany, France, Japan, Korea, Finland, Sweden, Australia, Israel, and the UK. Development of the web started to accelerate.

The DNS system is born

The concept of the Domain Name System (DNS) came about in 1984 (coincidentally, the year in which George Orwell’s famous novel is set). Before DNS, each host computer on a network was simply assigned a name. These names were added to a centralized database that was easily accessible by all but didn’t have the functionality to tier content according to purpose, such as education, healthcare, or news.

The DNS system attached a suffix to each host name and gave birth to the tiered web addresses we see today, such as .edu, .gov, and .org. It could also differentiate locations by country.

An illustration of the DNS system.
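
As a simple illustration (an assumed Python sketch, not part of the original history), resolving a tiered host name to an IP address via DNS is a one-line call on most systems:

```python
# Ask the system's configured DNS resolver to translate a host name,
# complete with its tiered suffix, into an IP address.
import socket

hostname = "example.org"                      # a reserved documentation domain
ip_address = socket.gethostbyname(hostname)   # performs the DNS lookup
print(f"{hostname} resolves to {ip_address}")
```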

At the same time, Western governments began to see the internet for what it is today—a powerful medium for the dissemination of information—and started to encourage its use in areas like education and healthcare. The British government announced JANET (Joint Academic Network)—a centralized system to facilitate information sharing between universities in the UK.

The US wasn’t far behind—the National Science Foundation established NSFNet to foster collaboration in research and design within higher education across America.

NSFNet had the mandate to work on federal policy initiatives as well. Its most crucial breakthrough was the agreement to provide the backbone for the internet in the US. It did so by plugging in five supercomputer centers, immediately expanding the capacity of the system and leading to a surge in use. For the first time, matters of national security and defense were excluded, marking the final tilt of the internet away from military purposes and toward research, education, and development.

NSFNet’s new backbone resulted in a dramatic surge of computers on the network. In 1986, there were only 5,000 hosts. The number rose to 28,000 just a year later. By deliberately excluding government and commercial entities, the new infrastructure was now exclusively available for research, news, education, and entertainment. This further encouraged private companies to step in and start laying the foundations for internet connectivity in US households.

Technological advances boost growth

ARPANET was formally disbanded in the early 1990s as both personal computers and the internet started to become ubiquitous throughout the US. The infrastructure of this early precursor to the internet was unwieldy and unsuitable for mass public use, hence the decision.

The development of the World Wide Web that we see today was accelerated by scientists at the European Organization for Nuclear Research (CERN), who brought into existence hypertext markup language (HTML). They also established the uniform resource locator (URL) as a standard address format that identifies both the host computer and the information sought.

Tim Berners-Lee, who is widely credited as the inventor of the World Wide Web, was an independent contractor at CERN working on this specific project. He wrote the proposal to link hypertext with TCP/IP and domain name systems. The proposal was accepted by his manager, Mike Sendall, and the rest, as they say, is history.

Sir Tim Berners-Lee. Source: Paul Clarke via Wikipedia

Both URL and HTML proved to be fundamental to the explosive growth of the web. HTML became the standard language for creating websites and web applications, while URLs make it much easier for users and networks to interact across the internet. They also allow for caching, storing, and indexing web pages.
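
As a small, assumed illustration of the two building blocks working together, fetching a URL returns an HTML document that a browser (or a crawler) can then render or index:

```python
# Fetch a web page by URL: the URL names the host (example.org) and the
# resource (the root path "/"); the response body is an HTML document.
from urllib.request import urlopen

with urlopen("https://example.org/") as response:   # requires network access
    html = response.read().decode("utf-8")

print(html[:60])   # the document begins with standard HTML markup
```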

It came as little surprise that Archie, the world’s first search engine, was developed around the same time, in 1990. The code for this program was written by Alan Emtage and Bill Heelan, both associated with McGill University. Its first version was simply a way for people to log in and search collected information via the Telnet protocol. Web crawlers at the time were not terribly efficient.

The internet becomes increasingly accessible

In 1991, then-Senator Al Gore helped pass the High Performance Computing Act, propelling the future US vice president into internet superstardom. Gore had retained a keen interest in computing since his early years as a Congressman in the 1970s. The passage of this law created a US$600 million research fund for high performance computing, which resulted in the National Research and Education Network (NREN) and the National Information Infrastructure (NII). The latter was popularized by the term ‘information superhighway’.

Some of the most important features of the act were the promise to build widespread internet infrastructure and make it accessible to all. Federal agencies were required to build websites and make information public by putting it on the web, and a pledge was made to connect all US classrooms to the internet by the year 2000.

The year 1993 saw the public release of the world’s first popular graphical internet browser, known as Mosaic. It was developed by Marc Andreessen, who went on to establish Netscape Communications—best known for its wildly successful Netscape Navigator line of browsers.

The Mosaic browser. Source: Wikipedia

The proliferation of Mosaic, HTML, and URLs was a triple whammy. Web browsers were now customizable and capable of displaying rich images directly on screens. Additionally, they came backed by 24-hour customer support. Trial versions of Mosaic were made available for free—this decision produced powerful network effects, and users couldn’t get enough.

By the time Netscape Navigator came online, there were already tens of thousands of users accessing Mosaic on their PCs. Netscape was a significant upgrade that allowed for faster browsing, quicker load times, and richer images.

Up until now, the internet had largely served the science and research communities by allowing for file transfers and democratizing access to information, as well as geeky early adopters who were fascinated with things like email and discussion forums. Graphic web browsers, search engines, HTML, and rapid growth of internet infrastructure changed the entire paradigm.

Another important development at the time was the invention of JavaScript (not to be confused with Java) by Brendan Eich. Along with HTML and CSS, it is now ranked as one of the three core technologies powering the web and web applications.

JavaScript was pushed by Netscape’s Marc Andreessen, who understood that the key to the ubiquity of the web was to make it more dynamic and vivid. The language helped web designers and programmers embed things like plugins, images, and videos directly into website source code. Andreessen referred to it as a “glue language”.

Businesses arrive (and form) online

The web was now starting to seriously take off. Businesses were scrambling to get online, social networks and interest groups were forming, people scoured the internet for news, media, entertainment, gaming, and information. Personal computers had never been cheaper. In 1994, the first business plan for Amazon was written by Jeff Bezos.

Also in 1994, it was estimated that there were about 3.2 million host computers connected to the internet and approximately 3,000 websites. In two years that number grew to approximately 13 million host computers and roughly 2.5 million websites. The pace of growth was off the charts.

Bill Gates penned a famous memo in 1995 titled “The Internet Tidal Wave”. It was a verbose and fairly prescient vision of the technology that was about to fundamentally change the way humans interact, engage, explore, and seek information.

Gates accurately predicted that the internet would significantly decrease costs of communication and that cheap, broadband connections in households would lead to increased demand for video and video-based applications. He pointed to the growth of Yahoo as an equalizer, noting that it was easier to search for information on the web than within Microsoft’s internal networks.

“The internet is the most important single development to come along since the IBM PC was introduced in 1981,” Gates wrote.

1995 also came to be known as a seminal year in other respects. It saw the launch of Windows 95 and the bundling of Internet Explorer for free—Microsoft’s first effort to challenge the dominance of Netscape. Amazon also officially launched that year as a bookstore, along with other early internet icons such as Yahoo and eBay.

Sun Microsystems released Java—allowing for animation on websites and a new level of internet interactivity. PHP, a server-side scripting language, was developed this year too. It made it easier to generate HTML dynamically and introduced web template systems, content management systems, and frameworks.

Development of rules and regulations

But as the internet well and truly became a permanent part of people’s lives, the first signs of clamping down on freedom of expression and net neutrality started to manifest. The United States Congress introduced the Communications Decency Act—observers saw the bill for what it was: an attempt to censor free speech and information.

The official rationale behind the Communications Decency Act revolved around concerns stemming from hate speech and pornography, which had started to permeate the web. The US government explained that it wanted to extend oversight of the internet similar to the regulations applied to television and radio.

A rebuke to these laws was penned by John Perry Barlow—a founder of the Electronic Frontier Foundation and an impassioned advocate for the neutral, stateless internet. Barlow argued that the internet wasn’t designed to be subjected to a singular authority and that governments have no “moral right” to rule it.

He wove the web (pun intended) of an alternative reality, a space where information cannot be suppressed and tyranny is unheard of.

“We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth. We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity,” Barlow wrote in his Declaration of the Independence of Cyberspace.


1998 was the year Sergey Brin and Larry Page co-founded Google. Napster spread its wings in 1999—music and video streaming would never be the same again. It eventually shut down due to the combined efforts of record labels and movie production houses, but not before it had introduced a completely new approach.

By now, businesses were completely sold on the idea of an internet-only future. Adding a .com to the end of a company’s name could increase its valuation ten-fold. Venture capitalists didn’t help matters, backing each and every webstore and hare-brained idea. That eventually led to the dotcom crash of 2000.

But for the internet, there was just no looking back.

Web 2.0: the mobile revolution

The proliferation of mobile data networks helped bring vast swathes of the global population online for the first time. Internet service in the late 90s and early 2000s was prohibitively expensive and could only be accessed via clunky devices such as PCs and laptops, keeping it out of reach for most.

2G networks had been around since the early 90s, with data extensions like GPRS and EDGE following later, but the introduction of 3G in 2001 was a true gamechanger. It took a little while for the tech to spread across the globe, but the mass adoption of smartphones completely changed the paradigm. Starting with BlackBerry, and then followed by the iPhone and Android OS, consumers now had a small portable device that could access the web, check email, and stay connected via social networking.

A first-generation iPhone. Source: Carl Berkeley via Wikimedia

Consumers demanded snazzier websites and greater utility from the internet. Despite the dotcom crash of 2000, fundamentals remained unchanged: the web was a growing phenomenon and people were using it to buy things, search, and access information at an unprecedented rate.

Amazon, which had its IPO in 1997, and Yahoo, which IPO’d in 1996, were continuing to grow and raise the bar. The term “Web 2.0” first started making the rounds in the mid-2000s as websites shifted from static pages to dynamic HTML, high-speed internet started to become the norm, and user-generated content began to spread its wings.

This led to websites like YouTube and social networking apps like Orkut, Myspace, Facebook, and Twitter.

In 2011, more than 90 percent of the world’s population lived in areas with at least 2G coverage with 45 percent living in areas that had both 2G and 3G. The next year it was estimated that nearly 1.5 billion people accessed the web via mobile broadband networks, with the number expected to grow to 6.5 billion in 2018.

What will the future hold for the internet?

It’s hard to imagine life without the internet now. And moving forward, it’s likely that the world will become even more connected.

Data volumes are growing at a rate of 40 percent year-on-year, according to the World Economic Forum, with some estimates suggesting that 90 percent of all data in existence was created in the last two years.

It’s predicted that the number of internet-connected devices such as smart TVs, cars, and industrial machines will reach a staggering 75 billion by 2025. This means wifi networks will have to be upgraded accordingly—there’s already development of light fidelity (Li-Fi) tech, which allows for data transfer of up to 224 gigabits per second and could eventually replace wifi.

But at the same time there are challenges. More than one-third of the world’s population still doesn’t have access to the internet. The United Nations has included universal affordable internet access in its Global Goals—recognizing that it has the power to reduce poverty, advance healthcare and education, and build communities.

Types of Computer Networks

 

What Is a Computer Network?

A computer network is a connection between two or more network devices, such as computers, routers, and switches, that allows them to share network resources.


The establishment of a computer network depends on the requirements of the communication channel, i.e., the network can be wired or wireless.

Next, let’s look into the types of networks available.

Types of Networks

According to the communication requirements, multiple types of network connections are available. The most basic type of network classification depends on the network's geographical coverage.


Listed below are the different types of networks:

  • PAN (Personal Area Network)
  • LAN (Local Area Network)
  • MAN (Metropolitan Area Network)
  • WAN (Wide Area Network)

Let’s look into each of the network types in detail.

What Is a Local Area Network (LAN)?


A Local Area Network (LAN) is designed to connect multiple network devices and systems within a limited geographical distance. The devices are connected using multiple protocols to exchange data and services properly and efficiently.

Attributes of a LAN:

  • The data transmission speed in a LAN is relatively higher than in the other network types, MAN and WAN.
  • A LAN uses private network addresses for data and service exchange, and it uses cables for connectivity, decreasing errors and maintaining data security (see the sketch after this list).
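
As a small illustrative sketch (assumed, not from the original article), the standard private address ranges that LANs use can be checked with Python's ipaddress module:

```python
# Check whether addresses fall in the private ranges reserved for LANs
# (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
import ipaddress

for addr in ["192.168.1.25", "10.0.0.7", "8.8.8.8"]:   # sample addresses
    ip = ipaddress.ip_address(addr)
    scope = "private (LAN)" if ip.is_private else "public (internet-routable)"
    print(f"{addr}: {scope}")
```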

Advantages and Disadvantages of a LAN

Advantages:

  • Transmission of data and services is relatively faster than in other network types.
  • The network server acts as a central unit for the whole network.

Disadvantages:

  • It needs constant administration by experienced engineers to keep functioning.
  • There is a risk of sensitive data being leaked by the LAN administration.

What Is a Metropolitan Area Network (MAN)?


A Metropolitan Area Network (MAN) is a network type that covers an entire city or a similarly small area. The area covered by the network is connected using a wired network, such as data cables.

Attributes of a MAN:

  • The network covers an entire town or a portion of a city.
  • Data transmission speed is relatively high due to the installation of optical cables and wired connections.

Advantages and Disadvantages of a MAN

Advantages:

  • It provides full-duplex data transmission in the network channel for devices.
  • The network connection area covers an entire city, or parts of it, using optic cables.

Disadvantages:

  • There is a high probability of attack from hackers and cybercriminals because of the large network.
  • The need for good-quality hardware means the installation cost is very high.

What Is a Wide Area Network (WAN)?


A Wide Area Network (WAN) is designed to connect devices over large distances, such as between states or countries. The connection is wireless in most cases and uses radio towers for communication.

A WAN can be made up of multiple LAN and MAN networks.

Attributes of a WAN:

  • WAN data transfer speeds are lower than those of LAN and MAN networks because of the large distances covered.
  • A WAN can use satellite links to transmit data between multiple locations and network towers.

Advantages and Disadvantages of a WAN

Advantages:

  • The network covers a large geographical area and is used for long-distance connections.
  • It can also use radio towers and connected channels to reach users.

Disadvantages:

  • The cost of setting up the network is high, and the support of experienced technicians is needed to maintain it.
  • It is difficult to prevent hacking and to debug such a large network.


Different Types of Computer Networks

There are various types of computer networking options available. Computer networks can be classified according to their size as well as their purpose.

The size of a network can be expressed by the geographic area it covers and the number of computers that are part of it. Networks range from devices housed in a single room to millions of devices spread across the world. The following are the most popular types of computer network:


Some of the most popular computer network types are:

  • PAN (Personal Area Network)
  • LAN (Local Area Network)
  • MAN (Metropolitan Area Network)
  • WAN (Wide Area Network)

Let’s study all of these types of networking in detail.

What is PAN (Personal Area Network)?

PAN (Personal Area Network) is a computer network formed around a person. It generally consists of a computer, mobile, or personal digital assistant. PAN can be used for establishing communication among these personal devices for connecting to a digital network and the internet.

Characteristics of PAN

Below are the main characteristics of PAN:

  • It is mostly a network of personal devices equipped within a limited area.
  • It allows you to handle the interconnection of IT devices in the surroundings of a single user.
  • A PAN includes mobile devices, tablets, and laptops.
  • When it connects to the internet wirelessly, it is called a WPAN.
  • Appliances used in a PAN include cordless mice, keyboards, and Bluetooth systems.

Advantages of PAN

Here are the important pros/benefits of PAN network:

  • PAN networks are relatively secure and safe.
  • It offers a short-range solution of up to ten meters.
  • It is strictly restricted to a small area.

Disadvantages of PAN

Here are the cons/drawbacks of using PAN network:

  • It may establish a bad connection with other networks operating on the same radio bands.
  • It has strict distance limits.

What is a LAN (Local Area Network)?

A Local Area Network (LAN) is a group of computers and peripheral devices connected within a limited area such as a school, laboratory, home, or office building. It is a widely useful network for sharing resources like files, printers, games, and other applications. The simplest LAN connects computers and a printer in someone’s home or office. In general, a LAN uses a single type of transmission medium. It is a network that consists of fewer than 5,000 interconnected devices across several buildings.


Local Area Network (LAN)
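
As an assumed, minimal example of resource sharing on a LAN, Python's built-in HTTP server can make the files in a directory available to every other machine on the same local network:

```python
# Serve the current directory over the LAN; other hosts on the same network
# can browse to this machine's private IP address on port 8000.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)  # listen on all interfaces
print("Sharing the current directory on port 8000...")
server.serve_forever()
```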

Characteristics of LAN

Here are the important characteristics of a LAN network:

  • It is a private network, so an outside regulatory body never controls it.
  • A LAN operates at a relatively higher speed compared to WAN systems.
  • There are various kinds of media access control methods, such as Token Ring and Ethernet.


Advantages of LAN


Here are the pros/benefits of LAN:

  • Computer resources like hard disks, DVD-ROM drives, and printers can be shared over a local area network. This significantly reduces the cost of hardware purchases.
  • You can use the same software over the network instead of purchasing the licensed software for each client in the network.
  • Data of all network users can be stored on a single hard disk of the server computer.
  • You can easily transfer data and messages over networked computers.
  • It will be easy to manage data at only one place, which makes data more secure.
  • Local Area Network offers the facility to share a single internet connection among all the LAN users.

Disadvantages of LAN

Here are the cons/drawbacks of LAN:

  • LAN will indeed save cost because of shared computer resources, but the initial cost of installing Local Area Networks is quite high.
  • The LAN admin can check personal data files of every LAN user, so it does not offer good privacy.
  • Unauthorized users can access critical data of an organization in case LAN admin is not able to secure centralized data repository.
  • A local area network requires constant administration, as there are issues related to software setup and hardware failures.

What is WAN (Wide Area Network)?

A WAN (Wide Area Network) is another important computer network that is spread across a large geographical area. A WAN could be a collection of LANs that connect with other LANs using telephone lines and radio waves. It is mostly limited to an enterprise or an organization.


Wide Area Network (WAN)

Characteristics of WAN

Below are the characteristics of WAN:

  • Software files are shared among all the users, so everyone can access the latest files.
  • Any organization can form its global integrated network using WAN.

Advantages of WAN

Here are the benefits/pros of WAN:

  • A WAN helps you cover a larger geographical area, so business offices situated at long distances can easily communicate.
  • It can contain devices like mobile phones, laptops, tablets, computers, gaming consoles, etc.
  • Wireless WAN connections work using radio transmitters and receivers built into client devices.

Disadvantages of WAN

Here are the drawbacks/cons of WAN network:

  • The initial setup and investment cost is very high.
  • It is difficult to maintain a WAN; you need skilled technicians and network administrators.
  • There are more errors and issues because of the wide coverage and the use of different technologies.
  • It takes more time to resolve issues because of the involvement of multiple wired and wireless technologies.
  • It offers lower security compared to other types of computer networks.



What is MAN (Metropolitan Area Network)?

A Metropolitan Area Network (MAN) consists of a computer network across an entire city, college campus, or a small region. This type of network is larger than a LAN, which is mostly limited to a single building or site. Depending on the configuration, this type of network can cover an area from several miles to tens of miles.


Metropolitan Area Network (MAN)

Characteristics of MAN

Here are important characteristics of the MAN network:

  • It mostly covers towns and cities within a maximum range of about 50 km.
  • The most commonly used media are optical fiber and cables.
  • Data rates are adequate for distributed computing applications.

Advantages of MAN

Here are the pros/benefits of MAN network:

  • It offers fast communication using high-speed carriers, like fiber optic cables.
  • It provides excellent support for extensive networks and greater access to WANs.
  • The dual bus in a MAN supports data transmission in both directions concurrently.
  • A MAN mostly covers some areas of a city or an entire city.

Disadvantages of MAN

Here are drawbacks/cons of using the MAN network:

  • You need more cable to establish a MAN connection from one place to another.
  • In a MAN, it is tough to make the system secure from hackers.

Other Types of Computer Networks

Apart from the above-mentioned computer networks, here are some other important types of networks:

  • WLAN (Wireless Local Area Network)
  • Storage Area Network
  • System Area Network
  • Home Area Network
  • POLAN (Passive Optical LAN)
  • Enterprise private network
  • Campus Area Network
  • Virtual Private Network

Let’s see all these different types of networks in detail:

1) WLAN

WLAN (Wireless Local Area Network) helps you link single or multiple devices using wireless communication within a limited area like a home, school, or office building. It gives users the ability to move around within a local coverage area while remaining connected to the network. Most modern WLAN systems are based on IEEE 802.11 standards.

2) Storage-Area Network (SAN)

A Storage Area Network is a type of network that allows consolidated, block-level data storage. It is mainly used to make storage devices, like disk arrays, optical jukeboxes, and tape libraries, accessible to servers.

3) System-Area Network

A System Area Network is used as a local network. It offers a high-speed connection in server-to-server and processor-to-processor applications. The computers connected on a SAN operate as a single system at very high speeds.

4) Passive Optical Local Area Network

POLAN is a networking technology that integrates into structured cabling and helps resolve the issues of supporting Ethernet protocols and network applications.

POLAN uses an optical splitter to separate an optical signal carried over single-mode optical fiber, converting a single signal into multiple signals.

5) Home Area Network (HAN):

A Home Area Network (HAN) is built from two or more interconnected computers to form a local area network (LAN) within the home. For example, in the United States, about 15 million homes have more than one computer.

These types of network connections help computer owners to interconnect with multiple computers. This network allows sharing files, programs, printers, and other peripherals.

6) Enterprise Private Network (EPN):

Enterprise private networks (EPNs) are built and owned by businesses that want to securely connect numerous locations in order to share various computer resources.

7) Campus Area Network (CAN):

A Campus Area Network is made up of an interconnection of LANs within a specific geographical area. For example, a university campus can be linked with a variety of campus buildings to connect all the academic departments.

8) Virtual Private Network:

A VPN is a private network which uses a public network to connect remote sites or users together. The VPN network uses “virtual” connections routed through the internet from the enterprise’s private network or a third-party VPN service to the remote site.

It is a free or paid service that keeps your web browsing secure and private over public WiFi hotspots.

Summary

  • Types of connections in computer networks can be categorized according to their size as well as their purpose
  • PAN is a computer network which generally consists of a computer, mobile, or personal digital assistant
  • LAN (Local Area Network) is a group of computer and peripheral devices which are connected in a limited area
  • WAN (Wide Area Network) is another important computer network that is spread across a large geographical area
  • A metropolitan area network (MAN) consists of a computer network across an entire city, college campus, or a small region
  • WLAN is a wireless local area network that links single or multiple devices using wireless communication within a limited area like a home, school, or office building
  • SAN (storage area network) is a type of network that allows consolidated, block-level data storage
  • System area network offers high-speed connection in server-to-server applications, storage area networks, and processor-to-processor applications
  • POLAN is a networking technology which helps you to integrate into structured cabling
  • A home area network (HAN) is built from two or more interconnected computers to form a local area network (LAN) within the home
  • Enterprise private networks (EPNs) are built and owned by businesses that want to securely connect various locations
  • Campus area network (CAN) is made up of an interconnection of LANs in a specific geographical area
  • A VPN is a private network which uses a public network to connect remote sites or users together
  • What does LAN stand for? – LAN stands for Local Area Network.
  • What is the difference between LAN and WAN? – LAN is a computer network that covers a small geographic area, like a home, office, or group of buildings, while WAN is a computer network that covers a broader area.

 1) The network of physical devices which are built-in with sensors, hardware and software? a) AI b) VR c) IOT d) Cloud computing 2) Which o...