A computer network is a set of computers sharing resources located on or provided by network nodes. The computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies, based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies.
The nodes of a computer network can include personal computers, servers, networking hardware, or other specialized or general-purpose hosts. They are identified by network addresses, and may have hostnames. Hostnames serve as memorable labels for the nodes, rarely changed after initial assignment. Network addresses serve for locating and identifying the nodes by communication protocols such as the Internet Protocol.
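As a small, hedged illustration of the relationship between hostnames and network addresses, the Python sketch below uses the standard socket module to translate a memorable hostname into the numeric address the Internet Protocol uses to locate the node; the hostname shown is only a placeholder.

```python
import socket

# Translate a memorable hostname into the numeric IP address that the
# Internet Protocol uses to locate and identify the node.
hostname = "example.com"                   # placeholder hostname
address = socket.gethostbyname(hostname)   # requires a working DNS resolver
print(f"{hostname} resolves to {address}")
```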
Computer networks may be classified by many criteria, including the transmission medium used to carry signals, bandwidth, communications protocols to organize network traffic, the network size, the topology, traffic control mechanism, and organizational intent.
Computer networks support many applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers, printers, and fax machines, and use of email and instant messaging applications.
History of Computer Networks
Computer networking may be considered a branch of computer science, computer engineering, and telecommunications, since it relies on the theoretical and practical application of the related disciplines. Computer networking was influenced by a wide array of technology developments and historical milestones.
- In the late 1950s, a network of computers was built for the U.S. military Semi-Automatic Ground Environment (SAGE) radar system using the Bell 101 modem. It was the first commercial modem for computers, released by AT&T Corporation in 1958. The modem allowed digital data to be transmitted over regular unconditioned telephone lines at a speed of 110 bits per second (bit/s).
- In 1959, Christopher Strachey filed a patent application for time-sharing and John McCarthy initiated the first project to implement time-sharing of user programs at MIT. Strachey passed the concept on to J. C. R. Licklider at the inaugural UNESCO Information Processing Conference in Paris that year. McCarthy was instrumental in the creation of three of the earliest time-sharing systems (Compatible Time-Sharing System in 1961, BBN Time-Sharing System in 1962, and Dartmouth Time Sharing System in 1963).
- In 1959, Anatoly Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organization of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centers. Kitov’s proposal was rejected, as later was the 1962 OGAS economy management network project.
- In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes.
- In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the “Intergalactic Computer Network”, a computer network intended to allow general communications among computer users.
- Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept. The NPL network, a local area network at the National Physical Laboratory (United Kingdom), used a line speed of 768 kbit/s and later high-speed T1 links (1.544 Mbit/s line rate).
- In 1965, Western Electric introduced the first widely used telephone switch that implemented computer control in the switching fabric.
- In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah. In the early 1970s, Leonard Kleinrock carried out mathematical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.
- In 1972, commercial services were first deployed on public data networks in Europe, which began using X.25 in the late 1970s and spread across the globe. The underlying infrastructure was used for expanding TCP/IP networks in the 1980s.
- In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself.
- In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system that was based on the Aloha network, developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper “Ethernet: Distributed Packet Switching for Local Computer Networks” and collaborated on several patents received in 1977 and 1978.
- In 1974, Vint Cerf, Yogen Dalal, and Carl Sunshine published the Transmission Control Protocol (TCP) specification, RFC 675, coining the term Internet as a shorthand for internetworking.
- In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices.
- In 1977, the first long-distance fiber network was deployed by GTE in Long Beach, California.
- In 1977, Xerox Network Systems (XNS) was developed by Robert Metcalfe and Yogen Dalal at Xerox.
- In 1979, Robert Metcalfe pursued making Ethernet an open standard.
- In 1980, Ethernet was upgraded from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was developed by Ron Crane, Bob Garner, Roy Ogus, and Yogen Dalal.
- In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added (as of 2018). The scaling of Ethernet has been a contributing factor to its continued use.
Use
A computer network extends interpersonal communications by electronic means with various technologies, such as email, instant messaging, online chat, voice and video telephone calls, and video conferencing. A network allows sharing of network and computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or use of a shared storage device. A network allows sharing of files, data, and other types of information giving authorized users the ability to access information stored on other computers on the network. Distributed computing uses computing resources across a network to accomplish tasks.
Network packet
Most modern computer networks use protocols based on packet-mode transmission. A network packet is a formatted unit of data carried by a packet-switched network.
Packets consist of two types of data: control information and user data (payload). The control information provides data the network needs to deliver the user data, for example, source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between.
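As a rough sketch of this layout, a packet can be modelled as a small record holding control information and a payload. The field names and the toy checksum below are illustrative assumptions, not the format of any particular protocol.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    # Control information, typically carried in the header and trailer
    source: str        # source network address
    destination: str   # destination network address
    sequence: int      # sequencing information used for reassembly
    checksum: int      # simple error-detection code (illustrative only)
    # User data
    payload: bytes

def make_packet(source: str, destination: str, sequence: int, payload: bytes) -> Packet:
    # A toy checksum: the sum of the payload bytes modulo 65536.
    return Packet(source, destination, sequence, sum(payload) % 65536, payload)
```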
With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link isn’t overused. Often the route a packet needs to take through a network is not immediately available. In that case, the packet is queued and waits until a link is free.
The physical link technologies of packet networks typically limit the size of packets to a certain maximum transmission unit (MTU). A longer message may be fragmented before it is transferred, and once the packets arrive, they are reassembled to construct the original message.
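A minimal sketch of that fragmentation and reassembly, assuming a fixed MTU value and representing each fragment as a (sequence number, chunk) pair rather than a real IP fragment:

```python
MTU = 1500  # maximum payload size in bytes (an illustrative value)

def fragment(message: bytes) -> list[tuple[int, bytes]]:
    # Split a long message into numbered chunks of at most MTU bytes each.
    return [(seq, message[i:i + MTU])
            for seq, i in enumerate(range(0, len(message), MTU))]

def reassemble(fragments: list[tuple[int, bytes]]) -> bytes:
    # Put the chunks back in sequence order and rebuild the original message.
    return b"".join(chunk for _, chunk in sorted(fragments))

message = b"x" * 4000                          # longer than a single MTU
assert reassemble(fragment(message)) == message
```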
Network topology
The physical or geographic locations of network nodes and links generally have relatively little effect on a network, but the topology of interconnections of a network can significantly affect its throughput and reliability. With many technologies, such as bus or star networks, a single failure can cause the network to fail entirely. In general, the more interconnections there are, the more robust the network is; but the more expensive it is to install. Therefore, most network diagrams are arranged by their network topology which is the map of logical interconnections of network hosts.
Common layouts are:
- Bus network: all nodes are connected to a common medium and share it along its length. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2. This is still a common topology on the data link layer, although modern physical layer variants use point-to-point links instead, forming a star or a tree.
- Star network: all nodes are connected to a special central node. This is the typical layout found in a small switched Ethernet LAN, where each client connects to a central network switch, and logically in a wireless LAN, where each wireless client associates with the central wireless access point.
- Ring network: each node is connected to its left and right neighbor node, such that all nodes are connected and that each node can reach each other node by traversing nodes left- or rightwards. Token ring networks, and the Fiber Distributed Data Interface (FDDI), made use of such a topology.
- Mesh network: each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other.
- Fully connected network: each node is connected to every other node in the network.
- Tree network: nodes are arranged hierarchically. This is the natural topology for a larger Ethernet network with multiple switches and without redundant meshing.
The physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring, but the physical topology is often a star, because all neighboring connections can be routed via a central physical location. Physical layout is not completely irrelevant, however, as common ducting and equipment locations can represent single points of failure due to issues like fires, power failures and flooding.
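To make the robustness point concrete, here is a hedged sketch that models a star and a fully connected topology as adjacency sets and checks whether the surviving nodes can still reach one another after a single node fails. The helper functions are illustrative only, not part of any networking library.

```python
def star(n: int) -> dict[int, set[int]]:
    # Node 0 is the central hub; every other node connects only to it.
    adj = {0: set(range(1, n))}
    adj.update({leaf: {0} for leaf in range(1, n)})
    return adj

def full_mesh(n: int) -> dict[int, set[int]]:
    # Every node connects to every other node.
    return {i: set(range(n)) - {i} for i in range(n)}

def still_connected(topology: dict[int, set[int]], failed: int) -> bool:
    # Breadth-first search over the nodes that remain after one failure.
    nodes = set(topology) - {failed}
    seen, frontier = set(), [next(iter(nodes))]
    while frontier:
        current = frontier.pop()
        if current in seen:
            continue
        seen.add(current)
        frontier.extend(topology[current] - {failed} - seen)
    return seen == nodes

print(still_connected(star(6), failed=0))       # False: losing the hub splits the star
print(still_connected(full_mesh(6), failed=0))  # True: the mesh routes around the failure
```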
Star Topology
In a star topology, a central node connects to each computer in the network with its own cable. Each computer has an independent connection to the center of the network, so if one connection breaks, it won’t affect the rest of the network. However, one downside is that many cables are required to form this kind of network.
Bus Topology
In a bus topology, a single cable connects the computers. Information destined for the last node on the network must pass through each connected computer along the way. Less cabling is required, but if the cable breaks, none of the computers can reach the network.
Ring Topology
A ring topology is like a bus topology because it uses a single cable, but the end nodes are connected to each other so the signal can circle through the network to find its recipient. The signal may make several passes around the ring to find its destination, even when a network node is not working properly.
A collapsed ring topology has a central node, which is a hub, router, or switch. These devices have an internal ring topology and ports for cables to plug into. Every computer in the network has its own cable to plug into the device. In an office, this usually means a cabling closet to which every computer connects and in which the switch resides.
Network Protocols
Network protocols are the languages that computer devices use to communicate. The protocols that computer networks support offer another way to define and group them. Networks can have more than one protocol, and each can support different applications. Protocols that are often used include TCP/IP, which is most common on the internet and in home networks.
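As a small illustration of TCP/IP in practice, the sketch below uses Python's standard socket module to send one message over a TCP connection on the local machine and echo it back; the port number and message are arbitrary placeholders.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # loopback address and an arbitrary unprivileged port

# Server side: bind to an address, listen, and echo back what one client sends.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)

def handle_one_client() -> None:
    conn, _addr = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))     # echo the received bytes

threading.Thread(target=handle_one_client, daemon=True).start()

# Client side: open a TCP connection, send a message, and read the echo.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024))              # b'hello over TCP/IP'

server.close()
```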
Overlay network
An overlay network is a virtual network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network.
The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet.
Overlay networks have been around since the invention of networking, when computer systems were connected over telephone lines using modems, before any data network existed.
The most striking example of an overlay network is the Internet itself. The Internet itself was initially built as an overlay on the telephone network. Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network.
Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys.
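A heavily simplified sketch of that key-to-node mapping, using a hash of the key modulo the number of nodes rather than a real DHT protocol such as Chord or Kademlia; the node addresses are placeholders.

```python
import hashlib

# Placeholder addresses of the nodes participating in the overlay.
nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]

def node_for_key(key: str) -> str:
    # Hash the key and map the digest onto one of the participating nodes.
    digest = hashlib.sha1(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

print(node_for_key("song.mp3"))   # the node responsible for storing this key
```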
Overlay networks have also been proposed as a way to improve Internet routing, such as through quality-of-service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance largely because they require modification of all routers in the network.
On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination.
For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast, resilient routing and quality of service studies, among others.
Network links
(a) Wired Network: “Wired” refers to any physical medium made up of cables, such as copper wire, twisted pair, or fiber optic cables. A wired network uses cables to link devices such as laptops or desktop PCs to the Internet or to another network.
(b) Wireless Network: “Wireless” means without wires; the medium is made up of electromagnetic (EM) waves or infrared waves. All wireless devices have antennas or sensors. Cellular phones, wireless sensors, TV remotes, satellite dish receivers, and laptops with WLAN cards are all examples of wireless devices. For data or voice communication, a wireless network uses radio-frequency waves rather than wires.
Types of Networks
(a) Wi-Fi
Wi-Fi is the industry-standard wireless local area network (WLAN) technology for linking computers and other electronic devices to one another and to the Internet. It is the wireless counterpart of a wired Ethernet network and is frequently used in conjunction with one (see Ethernet).
Wi-Fi is a type of wireless networking that uses radio frequencies to send and receive data. Wi-Fi allows users to connect to the Internet at high speeds without the need for cables.
Wi-Fi is popularly expanded as “wireless fidelity”, and the term is often used to refer to wireless networking technology in general. A wireless router is used to connect to the Internet: when you connect to Wi-Fi, you are connecting to a wireless router that links your Wi-Fi-enabled devices to the Internet.
How does Wi-Fi Work?
The IEEE 802.11 standard defines the protocols that allow existing Wi-Fi-enabled wireless devices, such as wireless routers and access points, to communicate with one another. Different IEEE standards are supported by wireless access points.
Each standard is the result of a series of amendments that have been ratified over time. The standards operate at different frequencies, have different bandwidths, and support varied channel counts.
(b) Bluetooth
Bluetooth is a telecommunication industry standard that outlines how mobile devices, PCs, and other equipment can communicate wirelessly across short distances. This wireless technology allows Bluetooth-enabled devices to communicate with one another. It connects desktop and laptop computers, PDAs (such as the Palm Pilot or Handspring Visor), digital cameras, scanners, cellular phones, and printers over short distances.
Infrared used to serve the same purpose as Bluetooth, but it had a few disadvantages. If an object were to be placed between the two communication devices, for example, the communication would be disrupted. (If you’ve ever used a television remote control, you’ve probably observed this limitation.) The infrared transmission was very slow, and devices were frequently incompatible with one another.
Because Bluetooth technology is based on radio waves, items or even walls can be placed between communication devices without disrupting the connection. Bluetooth also employs a common 2.4 GHz frequency, ensuring that all Bluetooth-enabled devices are interoperable. The main disadvantage of Bluetooth is that its range is restricted to roughly 30 feet (about 10 meters) because of its low transmission power.
Bluetooth is a computer and telecommunications industry standard that defines how devices connect with one another. Computers, computer keyboards and mice, personal digital assistants, and cellphones are all Bluetooth-enabled devices. Bluetooth consumes less energy and is less expensive to set up than Wi-Fi. Because of its lower power, it is less likely to suffer from or cause interference with other wireless devices operating in the same 2.4GHz radio band.
Cloud Computing
A physical site called a data center houses a common pool of computer resources (such as hardware, software, and services like servers and internet storage). Your cloud service providers have data centers all around the world.
Cloud computing is a methodology for providing on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Cloud computing is an internet-based computing model in which several network connections and computer systems are used to provide online services. Users at a distance who have access to the internet can readily access the cloud and its services, and these services and information can be shared among several computers and users if they are all linked.
Types of Clouds
The four types of access to the cloud are public, private, hybrid, and community:
(a) Public Cloud: The public cloud makes it possible for anybody to access systems and services. Because of its openness, the public cloud may be less secure. The public cloud is one in which cloud infrastructure services are made available through the internet to the public or large industrial groups. The infrastructure in this cloud model is owned by the company that delivers the cloud service, not by the consumer.
Example: Microsoft Azure, Google App Engine
(b) Private Cloud: A private cloud is one in which cloud infrastructure is set aside for a single organization’s exclusive use. It may be owned, managed, and operated by the organization, a third party, or a combination of both, and the infrastructure may be provisioned on the organization’s premises or hosted in a third-party data center. Organizations benefit from a private cloud over a public cloud because it gives them more flexibility and control over cloud resources. Example: eBay
(c) Hybrid Cloud: Hybrid cloud, as the name implies, is a blend of different cloud models, such as public cloud, private cloud, and community cloud. This model utilizes all the models that are a component of it. As a result, it will combine scalability, economic efficiency, and data security into a single model. The complexity of creating such a storage solution is a downside of this strategy.
(d) Community Cloud: The community cloud model distributes cloud infrastructure among numerous organizations to support a specific community with shared issues. Cloud infrastructure is delivered on-premises or at a third-party data center in this manner. Participating organizations or a third party manage this.