Albi computers


Join this educational and informative Facebook/YouTube channel, become part of the family, learn something worthwhile, and be the first to know when new videos are released.

27/09/2025

Network Security – Firewalls
Almost every medium and large-scale organization has a presence on the Internet and has an organizational network connected to it. Network partitioning at the boundary between the outside Internet and the internal network is essential for network security. Sometimes the inside network (intranet) is referred to as the “trusted” side and the external Internet as the “un-trusted” side.

Types of Firewall
A firewall is a network device that isolates an organization's internal network from the larger outside network/Internet. It can be a hardware, software, or combined system that prevents unauthorized access to or from the internal network.

All data packets entering or leaving the internal network pass through the firewall, which examines each packet and blocks those that do not meet the specified security criteria.

Firewall

Deploying a firewall at the network boundary aggregates security at a single point. It is analogous to locking an apartment at the entrance rather than at each individual door.

A firewall is considered an essential element of network security for the following reasons −

Internal networks and hosts are unlikely to be properly secured.

The Internet is a dangerous place with criminals, users from competing companies, disgruntled ex-employees, spies from unfriendly countries, vandals, etc.

It prevents an attacker from launching denial-of-service attacks on network resources.

It prevents illegal modification of, or access to, internal data by an outside attacker.

Firewalls are categorized into three basic types −

Packet filter (Stateless & Stateful)
Application-level gateway
Circuit-level gateway
These three categories, however, are not mutually exclusive. Modern firewalls have a mix of abilities that may place them in more than one of the three categories.

Firewall Types

Stateless & Stateful Packet Filtering Firewall
In this type of firewall deployment, the internal network is connected to the external network/Internet via a router firewall. The firewall inspects and filters data packet-by-packet.

Packet-filtering firewalls allow or block the packets mostly based on criteria such as source and/or destination IP addresses, protocol, source and/or destination port numbers, and various other parameters within the IP header.

The decision can be based on factors other than IP header fields such as ICMP message type, TCP SYN and ACK bits, etc.

Packet filter rule has two parts −

Selection criteria − This is the condition and pattern-matching part used for decision making.

Action field − This part specifies the action to be taken if an IP packet meets the selection criteria. The action can be either to block (deny) or permit (allow) the packet across the firewall.

Packet filtering is generally accomplished by configuring Access Control Lists (ACLs) on routers or switches. An ACL is a table of packet filter rules.

As traffic enters or exits an interface, the firewall applies the ACL from top to bottom to each packet, finds the first matching rule, and either permits or denies the packet accordingly.
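As a concrete illustration, the short Python sketch below mimics this top-to-bottom ACL evaluation. The rule fields, addresses, and function names are illustrative only and do not follow any vendor's configuration syntax.

# Minimal sketch of top-to-bottom ACL evaluation for a stateless packet filter.
# The rule fields and names are illustrative, not any vendor's configuration syntax.

ACL = [
    # (src_ip, dst_ip, protocol, dst_port, action) -- "any"/None means "match anything"
    ("any", "203.0.113.10", "tcp", 80,   "permit"),   # allow web traffic to the server
    ("any", "203.0.113.10", "tcp", 443,  "permit"),   # allow HTTPS to the server
    ("any", "any",          "any", None, "deny"),     # implicit deny-all at the end
]

def matches(field, value):
    return field in ("any", None) or field == value

def filter_packet(src_ip, dst_ip, protocol, dst_port):
    """Apply the ACL top to bottom; the first matching rule decides."""
    for rule_src, rule_dst, rule_proto, rule_port, action in ACL:
        if (matches(rule_src, src_ip) and matches(rule_dst, dst_ip)
                and matches(rule_proto, protocol) and matches(rule_port, dst_port)):
            return action
    return "deny"  # no rule matched

print(filter_packet("198.51.100.7", "203.0.113.10", "tcp", 80))   # permit
print(filter_packet("198.51.100.7", "203.0.113.10", "tcp", 25))   # deny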

Stateless Packet Firewall

A stateless firewall is a rigid tool. It looks at each packet in isolation and allows it if it meets the criteria, even if the packet is not part of any established, ongoing communication.

Hence, such firewalls have largely been replaced by stateful firewalls in modern networks, which offer a more in-depth inspection method than the purely ACL-based packet inspection of stateless firewalls.

A stateful firewall monitors the connection setup and teardown process to keep a check on connections at the TCP/IP level. This allows it to keep track of connection state and determine which hosts have open, authorized connections at any given point in time.

They reference the rule base only when a new connection is requested. Packets belonging to existing connections are compared against the firewall's state table of open connections, and the decision to allow or block is taken accordingly. This process saves time and provides added security: no packet is allowed to cross the firewall unless it belongs to an already established connection. The firewall can also time out inactive connections, after which it no longer admits packets for that connection.
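The following minimal Python sketch illustrates the idea of a state table, greatly simplified: a packet is admitted only if it belongs to a connection already recorded in the table, or if it is a new connection explicitly allowed by the rule base, and idle entries are timed out. The addresses, timeout value, and helper names are assumptions made for the example.

# Minimal sketch of stateful connection tracking (illustrative, greatly simplified):
# only packets that belong to a connection already recorded in the state table,
# or explicitly permitted new connections, are allowed through.
import time

RULE_BASE_ALLOWS_NEW = {("10.0.0.5", "203.0.113.10", 443)}  # (src, dst, dst_port)
IDLE_TIMEOUT = 300   # seconds before an inactive connection is dropped from the table

state_table = {}     # (src, src_port, dst, dst_port) -> last_seen timestamp

def handle_packet(src, src_port, dst, dst_port, syn=False):
    key = (src, src_port, dst, dst_port)
    now = time.time()

    # Expire idle connections.
    for k, last_seen in list(state_table.items()):
        if now - last_seen > IDLE_TIMEOUT:
            del state_table[k]

    if key in state_table:                # part of an established connection
        state_table[key] = now
        return "permit"
    if syn and (src, dst, dst_port) in RULE_BASE_ALLOWS_NEW:
        state_table[key] = now            # record the new, authorized connection
        return "permit"
    return "deny"                         # neither established nor an allowed new connection

print(handle_packet("10.0.0.5", 51000, "203.0.113.10", 443, syn=True))  # permit (new)
print(handle_packet("10.0.0.5", 51000, "203.0.113.10", 443))            # permit (established)
print(handle_packet("198.51.100.7", 4444, "10.0.0.5", 22, syn=True))    # deny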

Application Gateways
An application-level gateway acts as a relay node for the application-level traffic. They intercept incoming and outgoing packets, run proxies that copy and forward information across the gateway, and function as a proxy server, preventing any direct connection between a trusted server or client and an untrusted host.

The proxies are application specific. They can filter packets at the application layer of the OSI model.

Application-specific Proxies

An application-specific proxy accepts only packets generated by the application it is designed to copy, forward, and filter. For example, only a Telnet proxy can copy, forward, and filter Telnet traffic.

If a network relies only on an application-level gateway, incoming and outgoing packets cannot access services that have no proxies configured. For example, if a gateway runs FTP and Telnet proxies, only packets generated by these services can pass through the firewall. All other services are blocked.

Application-level Filtering
An application-level proxy gateway examines and filters individual packets, rather than simply copying them and blindly forwarding them across the gateway. Application-specific proxies check each packet that passes through the gateway, verifying the contents of the packet up through the application layer. These proxies can filter particular kinds of commands or information in the application protocols.

Application gateways can restrict specific actions from being performed. For example, the gateway could be configured to prevent users from performing the ‘FTP put’ command. This can prevent modification of the information stored on the server by an attacker.
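The fragment below sketches how such command filtering might look inside an FTP proxy: the control-channel command is inspected and upload/modify commands (STOR, the command behind "FTP put", among others) are refused. It is an illustrative fragment, not a complete proxy.

# Illustrative sketch of application-level command filtering in an FTP proxy:
# the proxy inspects each control-channel command and refuses 'STOR' (the command
# behind an "FTP put"), so clients can download but not modify files on the server.
# This is a simplified fragment, not a complete proxy.

BLOCKED_COMMANDS = {"STOR", "STOU", "APPE", "DELE"}   # upload/modify commands

def filter_ftp_command(line: str) -> bool:
    """Return True if the command may be relayed to the real FTP server."""
    command = line.strip().split(" ", 1)[0].upper()
    return command not in BLOCKED_COMMANDS

for cmd in ("RETR report.pdf", "STOR malware.exe", "LIST"):
    verdict = "relay" if filter_ftp_command(cmd) else "block"
    print(f"{cmd!r}: {verdict}")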

Transparent
Although application-level gateways can be transparent, many implementations require user authentication before users can access an untrusted network, which reduces true transparency. Authentication may differ depending on whether the user is on the internal network or on the Internet. For the internal network, a simple list of IP addresses allowed to connect to external applications may suffice, but from the Internet side, strong authentication should be implemented.

An application gateway actually relays TCP segments between the two TCP connections in the two directions (Client ↔ Proxy ↔ Server).

For outbound packets, the gateway may replace the source IP address by its own IP address. The process is referred to as Network Address Translation (NAT). It ensures that internal IP addresses are not exposed to the Internet.
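A rough sketch of the bookkeeping behind source NAT is shown below: outbound packets receive the gateway's public address and a fresh public port, and the translation table maps replies back to the internal host. The addresses and port range are made up for the example.

# Minimal sketch of source NAT bookkeeping: outbound packets get the gateway's
# public address and a fresh public port; the translation table lets replies be
# mapped back to the original internal host. Addresses and ports are illustrative.

PUBLIC_IP = "203.0.113.1"
nat_table = {}           # (private_ip, private_port) -> public_port
reverse_table = {}       # public_port -> (private_ip, private_port)
next_public_port = 40000

def translate_outbound(private_ip, private_port):
    """Rewrite the source of an outbound packet and remember the mapping."""
    global next_public_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        reverse_table[next_public_port] = key
        next_public_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port):
    """Map a reply arriving at the public address back to the internal host."""
    return reverse_table.get(public_port)

print(translate_outbound("192.168.1.20", 51515))    # ('203.0.113.1', 40000)
print(translate_inbound(40000))                     # ('192.168.1.20', 51515)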

Circuit-Level Gateway
The circuit-level gateway is an intermediate solution between the packet filter and the application gateway. It runs at the transport layer and hence can act as proxy for any application.

Similar to an application gateway, the circuit-level gateway does not permit an end-to-end TCP connection across the gateway. It sets up two TCP connections and relays the TCP segments from one network to the other. But it does not examine the application data as an application gateway does. Hence, it is sometimes called a "pipe proxy".

SOCKS
SOCKS (RFC 1928) refers to a circuit-level gateway. It is a networking proxy mechanism that enables hosts on one side of a SOCKS server to gain full access to hosts on the other side without requiring direct IP reachability. The client connects to the SOCKS server at the firewall. Then the client enters a negotiation for the authentication method to be used, and authenticates with the chosen method.

The client sends a connection relay request to the SOCKS server, containing the desired destination IP address and transport port. The server accepts the request after checking that the client meets the basic filtering criteria. Then, on behalf of the client, the gateway opens a connection to the requested untrusted host and then closely monitors the TCP handshaking that follows.

The SOCKS server informs the client and, in case of success, starts relaying the data between the two connections. Circuit-level gateways are used when the organization trusts the internal users and does not want to inspect the contents or application data sent to the Internet.
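For illustration, the sketch below performs the client side of this SOCKS5 exchange with a raw TCP socket, following the byte layout in RFC 1928: method negotiation with "no authentication", then a CONNECT request for a destination given by domain name. The proxy address is a placeholder.

# Sketch of the client side of a SOCKS5 (RFC 1928) exchange using a raw TCP socket.
# It offers "no authentication", then asks the SOCKS server to open a relay to a
# destination given as a domain name. The proxy address below is an assumption.
import socket
import struct

PROXY = ("192.0.2.1", 1080)          # hypothetical SOCKS server at the firewall
DEST_HOST, DEST_PORT = "example.com", 80

with socket.create_connection(PROXY) as s:
    # Method negotiation: version 5, one method offered, 0x00 = no authentication.
    s.sendall(b"\x05\x01\x00")
    version, method = s.recv(2)
    assert version == 5 and method == 0x00, "server refused our authentication method"

    # CONNECT request: VER=5, CMD=1 (connect), RSV=0, ATYP=3 (domain name).
    request = (b"\x05\x01\x00\x03"
               + bytes([len(DEST_HOST)]) + DEST_HOST.encode()
               + struct.pack("!H", DEST_PORT))
    s.sendall(request)

    reply = s.recv(10)               # VER, REP, RSV, ATYP, BND.ADDR, BND.PORT
    if reply[1] == 0x00:
        print("relay established; data sent on this socket is now forwarded")
    else:
        print(f"SOCKS server rejected the request (code {reply[1]})")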

Firewall Deployment with DMZ
A firewall is a mechanism used to control network traffic ‘into’ and ‘out’ of an organizational internal network. In most cases these systems have two network interfaces, one for the external network such as the Internet and the other for the internal side.

The firewall process can tightly control what is allowed to traverse from one side to the other. An organization that wishes to provide external access to its web server can restrict all traffic arriving at the firewall except for port 80 (the standard HTTP port). All other traffic, such as mail, FTP, SNMP, etc., is not allowed across the firewall into the internal network. An example of a simple firewall is shown in the following diagram.

Firewall Deployment with DMZ

In the above simple deployment, although all other access from outside is blocked, it is possible for an attacker to contact not only the web server but any other host on the internal network that has left port 80 open, by accident or otherwise.

Hence, the problem most organizations face is how to enable legitimate access to public services such as web, FTP, and e-mail while maintaining tight security of the internal network. The typical approach is deploying firewalls to provide a Demilitarized Zone (DMZ) in the network.

In this setup (illustrated in following diagram), two firewalls are deployed; one between the external network and the DMZ, and another between the DMZ and the internal network. All public servers are placed in the DMZ.

With this setup, it is possible to have firewall rules that allow public access to the public servers, while the interior firewall restricts all incoming connections. By having the DMZ, the public servers are given adequate protection instead of being placed directly on the external network.

Dual Firewall Deployment

Intrusion Detection / Prevention System
The packet filtering firewalls operate based on rules involving TCP/UDP/IP headers only. They do not attempt to establish correlation checks among different sessions.

Intrusion Detection/Prevention Systems (IDS/IPS) carry out Deep Packet Inspection (DPI) by looking at the packet contents, for example checking character strings in a packet against a database of known virus or attack signatures.

Application gateways do look at the packet contents, but only for specific applications, and they do not look for suspicious data in the packet. IDS/IPS looks for suspicious data contained in packets and tries to examine correlations among multiple packets to identify attacks such as port scanning, network mapping, denial of service, and so on.

Difference between IDS and IPS
IDS and IPS are similar in that both detect anomalies in the network. The IDS is a "visibility" tool, whereas the IPS is considered a "control" tool.

Intrusion Detection Systems sit off to the side of the network, monitoring traffic at many different points, and provide visibility into the security state of the network. When an IDS reports an anomaly, the corrective action is initiated by the network administrator or another device on the network.

Intrusion Prevention Systems are like firewalls: they sit in-line between two networks and control the traffic passing through them. An IPS enforces a specified policy on detection of an anomaly in the network traffic. Generally, it drops packets and blocks the network traffic on noticing an anomaly, until the anomaly is addressed by the administrator.

IDS Vs IPS

Types of IDS
There are two basic types of IDS.

Signature-based IDS

It needs a database of known attacks with their signatures.

A signature is defined by the types and order of packets characterizing a particular attack.

The limitation of this type of IDS is that only known attacks can be detected. It can also raise a false alarm, which can happen when a normal packet stream matches the signature of an attack.

A well-known public open-source IDS is Snort.
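The toy sketch below shows the core idea of signature matching: packet payloads are scanned against a small database of byte patterns. Real systems such as Snort use a far richer rule language (protocol fields, offsets, regular expressions) rather than plain substrings; the signatures here are only examples.

# Toy sketch of signature-based detection: payloads are scanned against a small
# database of byte-pattern signatures. Real systems such as Snort use far richer
# rule languages (protocol fields, offsets, regular expressions), not plain substrings.

SIGNATURES = {
    b"/etc/passwd":        "attempted sensitive-file access",
    b"' OR '1'='1":        "SQL injection probe",
    b"\x90\x90\x90\x90":   "NOP sled (possible shellcode)",
}

def inspect(payload: bytes):
    """Return a list of alerts for every signature found in the payload."""
    return [desc for pattern, desc in SIGNATURES.items() if pattern in payload]

packet = b"GET /index.php?id=1' OR '1'='1 HTTP/1.1\r\n"
for alert in inspect(packet):
    print("ALERT:", alert)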

Anomaly-based IDS

This type of IDS builds a model of the traffic pattern of normal network operation.

During detection, it looks for traffic patterns that are statistically unusual, for example an unusually heavy ICMP load or exponential growth in port scans.

Detection of any unusual traffic pattern generates an alarm.

The major challenge in this type of IDS deployment is the difficulty of distinguishing between normal and unusual traffic.
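A minimal sketch of the statistical idea is given below: a baseline of "normal" per-minute ICMP counts is summarized by its mean and standard deviation, and an observation far above that range raises an alarm. The data and threshold are invented for the example.

# Toy sketch of anomaly-based detection: a baseline of "normal" per-minute ICMP
# packet counts is summarized by mean and standard deviation, and a new observation
# far outside that range raises an alarm. Thresholds and data are illustrative.
import statistics

baseline_icmp_per_min = [12, 9, 15, 11, 13, 10, 14, 12, 11, 13]   # training data
mean = statistics.mean(baseline_icmp_per_min)
stdev = statistics.stdev(baseline_icmp_per_min)

def is_anomalous(observed, threshold=3.0):
    """Flag values more than `threshold` standard deviations above the baseline mean."""
    return observed > mean + threshold * stdev

for observed in (14, 20, 450):
    print(observed, "->", "ALARM" if is_anomalous(observed) else "normal")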

Summary
In this chapter, we discussed the various mechanisms employed for network access control. The approach to network security through access control is technically different from implementing security controls at different network layers, as discussed in the earlier chapters of this tutorial. However, although the implementation approaches differ, they are complementary to each other.

Network access control comprises two main components: user authentication and network boundary protection. RADIUS is a popular mechanism for providing central authentication in the network.

A firewall provides network boundary protection by separating an internal network from the public Internet. Firewalls can function at different layers of the network protocol stack. IDS/IPS makes it possible to monitor anomalies in network traffic in order to detect attacks and take preventive action against them.

20/09/2025

What is the Application Layer?
The "application layer" is one of the seven layers of the Open Systems Interconnection (OSI) model. The OSI model is a conceptual framework that divides networked communication into seven interconnected layers, which developers often use to reason about where security measures and threats apply in their software:

The physical layer
The data link layer
The network layer
The transport layer
The session layer
The presentation layer
The application layer
Three of these layers relate to media, and four relate to hosts. They run from the most basic hardware and infrastructure level, the physical layer, to the layer closest to the way users actually interact with software, the application layer. Each layer builds upon the last, and viewing software this way allows developers to isolate security threats at the level of hardware or software to which they are most relevant.

For most software developers, layers 4 to 7 are the most important to watch for vulnerabilities. Layers 1 through 3 are typically administered in-house or on-premises, or taken care of by leading web hosting providers. The most common security vulnerabilities that businesses face usually appear in the later, more user-facing layers. Of these, the application layer is by far the most liable to be attacked, yet it is often overlooked.

Threats to the Application Layer
The application layer is the most vulnerable layer in the OSI model for two reasons. First, since it is closest to the end user, it offers a larger attack surface than any of the layers beneath it. Second, the layers "below" the application layer typically see interactions only from users who are more conscious of security.

The types of threats that the application layer is exposed to will be familiar to any developer who has worked with web security:

DDoS attacks, which require applications to be shielded.
HTTP floods that aim to lock up applications and deny access to legitimate users.
SQL injections on applications that do not properly validate user input (a defensive sketch follows this list).
Cross-site scripting and parameter tampering.
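As a small illustration of the application-layer defence against the SQL injection case above, the sketch below contrasts building SQL by string concatenation with passing user input as a bound parameter; it uses the standard-library sqlite3 module purely for demonstration.

# Sketch of the application-layer defense against SQL injection: never build SQL
# by string concatenation from user input; pass the input as a bound parameter.
# Uses the standard-library sqlite3 module purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Vulnerable pattern (do NOT do this): the input becomes part of the SQL text.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())           # returns every row

# Safe pattern: the driver treats the input strictly as data, never as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns no rows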
Application Layer Protocols
There are several protocols that serve users at the Application Layer. Application layer protocols can be broadly divided into two categories:

Protocols that are used directly by users, for example email.

Protocols that help and support the protocols used by users, for example DNS.

A few application layer protocols are described below:

Domain Name System
The Domain Name System (DNS) works on the client-server model. It uses the UDP protocol for transport layer communication. DNS uses a hierarchical, domain-based naming scheme. A DNS server is configured with Fully Qualified Domain Names (FQDNs) and email addresses mapped to their respective Internet Protocol addresses.

A DNS server is queried with an FQDN and responds with the IP address mapped to it. DNS uses UDP port 53.
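The sketch below builds such a query by hand from the RFC 1035 message format and sends it over UDP port 53 with a plain socket; the resolver address used (8.8.8.8) is just one example of a reachable recursive resolver.

# Minimal sketch of a DNS "A"-record query sent directly over UDP port 53,
# built by hand from the message format in RFC 1035. The resolver address is
# an assumption; any recursive resolver reachable from the host would do.
import socket
import struct

def build_query(name: str, query_id: int = 0x1234) -> bytes:
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)  # RD set, 1 question
    question = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    question += b"\x00" + struct.pack("!HH", 1, 1)                 # QTYPE=A, QCLASS=IN
    return header + question

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(3)
    sock.sendto(build_query("example.com"), ("8.8.8.8", 53))       # resolver is illustrative
    response, _ = sock.recvfrom(512)

answer_count = struct.unpack("!H", response[6:8])[0]
print(f"received {len(response)} bytes, {answer_count} answer record(s)")
# In a simple single-answer response, the last four bytes hold the IPv4 address.
if answer_count:
    print("first address:", socket.inet_ntoa(response[-4:]))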

Simple Mail Transfer Protocol
The Simple Mail Transfer Protocol (SMTP) is used to transfer electronic mail from one user to another. This task is done by the email client software (User Agent) the user is using. User Agents help the user to type and format the email and store it until an Internet connection is available. When an email is submitted for sending, the sending process is handled by a Message Transfer Agent, which normally comes built into the email client software.

The Message Transfer Agent uses SMTP to forward the email to another Message Transfer Agent (on the server side). While SMTP is used by the end user only to send emails, servers normally use SMTP to both send and receive emails. SMTP uses TCP port numbers 25 and 587.

Client software uses the Internet Message Access Protocol (IMAP) or the POP protocol to receive emails.
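A minimal submission example using Python's standard smtplib module is sketched below; the server name, credentials, and addresses are placeholders.

# Sketch of a mail submission over SMTP using Python's standard smtplib.
# The server name, credentials, and addresses are placeholders, not real accounts.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "SMTP demonstration"
msg.set_content("Sent by a user agent handing the message to a message transfer agent.")

# Port 587 is the submission port; STARTTLS upgrades the connection before login.
with smtplib.SMTP("mail.example.com", 587) as server:
    server.starttls()
    server.login("alice@example.com", "app-password")   # placeholder credentials
    server.send_message(msg)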

File Transfer Protocol
The File Transfer Protocol (FTP) is the most widely used protocol for file transfer over the network. FTP uses TCP/IP for communication and works on TCP port 21. FTP works on the client/server model, where a client requests a file from the server and the server sends the requested resource back to the client.

FTP uses out-of-band control, i.e. FTP uses TCP port 21 for exchanging control information while the actual data is sent over TCP port 20.

The client requests a file from the server. When the server receives a request for a file, it opens a TCP connection to the client and transfers the file. After the transfer is complete, the server closes the connection. For a second file, the client requests it again and the server opens a new TCP connection.
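The sketch below uses the standard-library ftplib module, which talks to the control connection on port 21 and lets the library manage the per-transfer data connections; the host and file names are placeholders.

# Sketch of an FTP client session using the standard-library ftplib module.
# ftplib talks to the server's control connection on port 21 and opens a separate
# data connection for each transfer. Host and file names are placeholders.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:      # control connection on TCP port 21
    ftp.login()                          # anonymous login
    ftp.retrlines("LIST")                # directory listing arrives on a data connection
    with open("readme.txt", "wb") as f:
        ftp.retrbinary("RETR readme.txt", f.write)   # each file uses its own data connection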

Post Office Protocol (POP)
The Post Office Protocol version 3 (POP3) is a simple mail retrieval protocol used by User Agents (client email software) to retrieve mail from a mail server.

When a client needs to retrieve mail from the server, it opens a connection with the server on TCP port 110. The user can then access his or her mail and download it to the local computer. POP3 works in two modes. The most common mode, the delete mode, deletes the emails from the remote server after they are downloaded to the local machine. The second mode, the keep mode, does not delete the email from the mail server and gives the user the option to access the mail later on the server.
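A minimal retrieval sketch using the standard-library poplib module is shown below; the server and credentials are placeholders, and the delete-mode call is left commented out.

# Sketch of mail retrieval with the standard-library poplib module (POP3, TCP port 110).
# Server name and credentials are placeholders. Note that plain POP3 sends the
# password in cleartext; POP3S (port 995) via poplib.POP3_SSL is preferable in practice.
import poplib

mailbox = poplib.POP3("mail.example.com", 110)
mailbox.user("alice")
mailbox.pass_("app-password")

count, total_bytes = mailbox.stat()
print(f"{count} message(s), {total_bytes} bytes on the server")

if count:
    response, lines, octets = mailbox.retr(1)        # download message 1
    print(b"\r\n".join(lines)[:200])                 # show the first part of it
    # mailbox.dele(1)   # "delete mode": remove it from the server after download

mailbox.quit()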

Hyper Text Transfer Protocol (HTTP)
The Hyper Text Transfer Protocol (HTTP) is the foundation of the World Wide Web. Hypertext is a well-organized documentation system that uses hyperlinks to link pages within text documents. HTTP works on the client-server model. When a user wants to access an HTTP page on the Internet, the client machine at the user's end initiates a TCP connection to the server on port 80. When the server accepts the client's request, the client is authorized to access web pages.

To access the web pages, a client normally uses a web browser, which is responsible for initiating, maintaining, and closing TCP connections. HTTP is a stateless protocol, which means the server maintains no information about earlier requests by clients.

HTTP versions

HTTP 1.0 uses non-persistent connections: at most one object can be sent over a single TCP connection.

HTTP 1.1 uses persistent connections: multiple objects can be sent over a single TCP connection.
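The sketch below sends a bare HTTP/1.1 request over a plain TCP socket to port 80; the "Connection: close" header requests the non-persistent, one-object-per-connection behaviour described for HTTP 1.0, while omitting it would leave the connection open for further requests.

# Sketch of a plain HTTP/1.1 request written directly on a TCP socket to port 80.
# "Connection: close" asks for non-persistent behaviour (one object per connection),
# as in HTTP 1.0; omitting it keeps the connection open for further requests.
import socket

HOST = "example.com"
request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode()

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request)
    response = b""
    while chunk := sock.recv(4096):      # server closes the connection when done
        response += chunk

status_line = response.split(b"\r\n", 1)[0]
print(status_line.decode())              # e.g. "HTTP/1.1 200 OK"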

15/09/2025

Data Link Layer
The Data Link Layer provides the functional and procedural means to transfer data between network entities and to detect and possibly correct errors that may occur in the physical layer. Originally, this layer was intended for point-to-point and point-to-multipoint media, characteristic of wide area media in the telephone system. Local area network architecture, which included broadcast-capable multi-access media, was developed independently of the ISO work in IEEE Project 802. The IEEE work assumed sub-layering and management functions not required for WAN use.

In modern practice, only error detection, not flow control using sliding windows, is present in data link protocols such as the Point-to-Point Protocol (PPP); on local area networks, the IEEE 802.2 LLC layer is not used for most protocols on Ethernet, and on other local area networks its flow control and acknowledgment mechanisms are rarely used. Sliding window flow control and acknowledgment are instead used at the transport layer by protocols such as TCP, but are still used at the data link layer in niches where X.25 offers performance advantages. The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines, and coaxial cables), includes a complete data link layer that provides both error correction and flow control by means of a selective repeat sliding window protocol.

Both WAN and LAN services arrange bits from the physical layer into logical sequences called frames. Not all physical layer bits necessarily go into frames, as some of these bits are purely intended for physical layer functions. For example, every fifth bit of the FDDI bit stream is not used by the layer.

Services provided by Data Link Layer
The Data Link Layer is the second layer of the seven-layer Open Systems Interconnection (OSI) reference model of computer networking and lies just above the Physical Layer.

This layer provides data reliability and offers various tools to establish, maintain, and release data link connections between network nodes. It is responsible for receiving data bits from the Physical Layer and converting these bits into groups, known as data link frames, so that they can be transmitted further. It is also responsible for handling errors that might arise during the transmission of bits.

Service Provided to Network Layer :
An essential function of the Data Link Layer is to provide an interface to the Network Layer. The Network Layer is the third layer of the seven-layer OSI reference model and is present just above the Data Link Layer.

The main aim of the Data Link Layer is to transmit the data frames it has received to the destination machine so that these data frames can be handed over to the network layer of the destination machine. At the network layer, these data frames are addressed and routed.

1. Actual Communication :
In this communication, a physical medium is present through which the Data Link Layer transmits data frames. The actual path is Network Layer -> Data Link Layer -> Physical Layer on the sending machine, then to the physical medium, and after that to Physical Layer -> Data Link Layer -> Network Layer on the receiving machine.

2. Virtual Communication :
In this communication, no physical medium is present for the Data Link Layer to transmit data. It can only be visualized and imagined that the two Data Link Layers are communicating with each other using a data link protocol.

Types of Services provided by Data Link Layer :

The Data Link Layer generally provides three types of services, as given below :

1. Unacknowledged Connectionless Service
2. Acknowledged Connectionless Service
3. Acknowledged Connection-Oriented Service
Unacknowledged Connectionless Service :
Unacknowledged connectionless service simply provides datagram-style delivery without any error control or flow control. In this service, the source machine transmits independent frames to the destination machine without having the destination machine acknowledge these frames.

This service is called a connectionless service because no connection is established between the sending (source) machine and the receiving (destination) machine before data transfer, nor is one released after data transfer.

In the Data Link Layer, if a frame is lost due to noise, no attempt is made to detect the loss or recover from it. This simply means that there is no error or flow control. An example is Ethernet.

Acknowledged Connectionless Service :
This service provides acknowledged connectionless delivery, i.e. packet delivery is acknowledged, with the help of the stop-and-wait protocol. In this service, each frame transmitted by the Data Link Layer is acknowledged individually, so the sender knows whether or not the transmitted data frames were received safely. There is no logical connection established, and each frame that is transmitted is acknowledged individually.

This mode provides a means by which the user of the data link can send data and request the return of data at the same time. It also uses a timer: if the time period passes without an acknowledgment for a frame, the sender retransmits that frame. A minimal stop-and-wait sketch follows below.

This service is more reliable than unacknowledged connectionless service. It is generally useful over unreliable channels, such as wireless systems, Wi-Fi services, etc.
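The sketch mentioned above is a toy stop-and-wait simulation: one frame is sent at a time, the sender waits for its acknowledgment, and a lost delivery triggers a retransmission. The "channel" is simulated deterministically so that a retransmission shows up in the output.

# Toy simulation of the stop-and-wait idea behind acknowledged connectionless
# service: send one frame, wait for its acknowledgment, and retransmit when the
# timer would have expired. The "channel" below deterministically loses every
# third delivery so that a retransmission is visible in the output.

deliveries = {"count": 0}

def unreliable_channel(frame):
    """Return an ACK for the frame, except that every third delivery is lost."""
    deliveries["count"] += 1
    return None if deliveries["count"] % 3 == 0 else "ACK"

def send_with_stop_and_wait(frames, max_retries=5):
    for seq, frame in enumerate(frames):
        for attempt in range(1, max_retries + 1):
            print(f"sending frame {seq} (attempt {attempt})")
            if unreliable_channel(frame) == "ACK":
                print(f"  frame {seq} acknowledged")
                break
            print(f"  timeout, retransmitting frame {seq}")
        else:
            raise RuntimeError(f"frame {seq} not delivered after {max_retries} attempts")

send_with_stop_and_wait(["frame-A", "frame-B", "frame-C"])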

Acknowledged Connection-Oriented Service :
In this type of service, a connection is established between the sender and receiver (source and destination) before any data is transferred.
Data is then transmitted over this established connection. In this service, each transmitted frame is given an individual number, to confirm and guarantee that each frame is received exactly once and in the proper order and sequence.

14/09/2025

Transport Layer
The transport layer is the fourth layer from the top.
The main role of the transport layer is to provide communication services directly to the application processes running on different hosts.
The transport layer provides logical communication between application processes running on different hosts. Although the application processes on different hosts are not physically connected, they use the logical communication provided by the transport layer to send messages to each other.
Transport layer protocols are implemented in the end systems but not in the network routers.
A computer network provides more than one protocol to the network applications. For example, TCP and UDP are two transport layer protocols that provide different sets of services to the application layer.
All transport layer protocols provide a multiplexing/demultiplexing service. They may also provide other services such as reliable data transfer, bandwidth guarantees, and delay guarantees.
Each application in the application layer can send a message by using TCP or UDP. The application communicates by using either of these two protocols. Both TCP and UDP then communicate with the Internet Protocol in the internet layer. The applications can read from and write to the transport layer, so communication is a two-way process.
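The sketch below illustrates port-based demultiplexing with two UDP sockets on the same host: each incoming datagram is handed to the socket bound to the destination port carried in its header. The port numbers are arbitrary examples.

# Sketch of transport-layer demultiplexing: two UDP sockets on the same host are
# bound to different port numbers, and the transport layer hands each incoming
# datagram to the socket whose port matches the destination port in the header.
import socket

first_app = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
first_app.bind(("127.0.0.1", 5300))          # one application process

second_app = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
second_app.bind(("127.0.0.1", 5400))         # a second application process

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"query", ("127.0.0.1", 5300))     # destination port 5300 -> first socket
sender.sendto(b"log line", ("127.0.0.1", 5400))  # destination port 5400 -> second socket

print(first_app.recvfrom(1024)[0])       # b'query'
print(second_app.recvfrom(1024)[0])      # b'log line'

for s in (first_app, second_app, sender):
    s.close()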
Services provided by the Transport Layer
The services provided by the transport layer are similar to those of the data link layer. The data link layer provides its services within a single network, while the transport layer provides services across an internetwork made up of many networks. The data link layer controls the physical layer, while the transport layer controls all the lower layers.

The services provided by the transport layer protocols can be divided into five categories:

End-to-end delivery
Addressing
Reliable delivery
Flow control
Multiplexing
End-to-end delivery:
The transport layer transmits the entire message to the destination. Therefore, it ensures the end-to-end delivery of an entire message from a source to the destination.

Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and damaged packets.

The reliable delivery has four aspects:

Error control
Sequence control
Loss control
Duplication control
Error Control

The primary role of reliability is error control. In reality, no transmission is 100 percent error-free. Therefore, transport layer protocols are designed to provide error-free transmission.
The data link layer also provides an error handling mechanism, but it ensures only node-to-node error-free delivery. Node-to-node reliability does not ensure end-to-end reliability.
The data link layer checks for errors on each link. If an error is introduced inside one of the routers, it will not be caught by the data link layer, which only detects errors introduced between the beginning and end of a link. Therefore, the transport layer performs error checking end to end to ensure that the packet has arrived correctly.
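For a concrete flavour of end-to-end error control, the sketch below computes the 16-bit ones'-complement checksum (RFC 1071) that TCP and UDP carry in their headers; the payload is arbitrary example data.

# Sketch of the 16-bit ones'-complement checksum (RFC 1071) that TCP and UDP carry
# for end-to-end error detection: the receiver recomputes it over the received
# segment and discards the segment on a mismatch. The payload below is arbitrary.
import struct

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

segment = b"example payload!"                      # 16 bytes, kept even for simplicity
checksum = internet_checksum(segment)
print(hex(checksum))

# A segment followed by its own checksum sums to 0xFFFF, so the check returns 0
# when nothing was corrupted in transit.
print(internet_checksum(segment + struct.pack("!H", checksum)))   # 0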
Sequence Control

The second aspect of reliability is sequence control, which is implemented at the transport layer.
On the sending end, the transport layer is responsible for ensuring that the data units received from the upper layers can be used by the lower layers. On the receiving end, it ensures that the various segments of a transmission can be correctly reassembled.
Loss Control

Loss control is the third aspect of reliability. The transport layer ensures that all the fragments of a transmission arrive at the destination, not just some of them. On the sending end, all the fragments of a transmission are given sequence numbers by the transport layer. These sequence numbers allow the receiver's transport layer to identify any missing segment.

Duplication Control

Duplication control is the fourth aspect of reliability. The transport layer guarantees that no duplicate data arrives at the destination. Just as sequence numbers are used to identify lost packets, they also allow the receiver to identify and discard duplicate segments.

Flow Control
Flow control is used to prevent the sender from overwhelming the receiver. If the receiver is overloaded with too much data, it discards packets and asks for their retransmission, which increases network congestion and thus reduces system performance. The transport layer is responsible for flow control. It uses the sliding window protocol, which makes data transmission more efficient and controls the flow of data so that the receiver does not become overwhelmed. The sliding window protocol is byte-oriented rather than frame-oriented.
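A toy illustration of the sliding-window idea follows: the sender may have at most a window's worth of unacknowledged segments outstanding, and the window slides forward as acknowledgments arrive. The window size and segment names are invented, and real TCP windows are byte-oriented as noted above.

# Toy illustration of sliding-window flow control: the sender may have at most
# WINDOW_SIZE unacknowledged segments outstanding, and the window slides forward
# as acknowledgments arrive. Segment contents and window size are illustrative.
WINDOW_SIZE = 4
segments = [f"seg-{i}" for i in range(10)]

base = 0          # oldest unacknowledged segment
next_to_send = 0  # next segment the sender may transmit

while base < len(segments):
    # Send while the window is not full.
    while next_to_send < len(segments) and next_to_send < base + WINDOW_SIZE:
        print(f"send {segments[next_to_send]} (window {base}..{base + WINDOW_SIZE - 1})")
        next_to_send += 1

    # Simulate a cumulative acknowledgment for the oldest outstanding segment,
    # which lets the window slide forward by one.
    print(f"ack for {segments[base]} received")
    base += 1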

Multiplexing
The transport layer uses multiplexing to improve transmission efficiency.

Multiplexing can occur in two ways:

Upward multiplexing: Upward multiplexing means multiple transport layer connections use the same network connection. To make transmission more cost-effective, the transport layer sends several transmissions bound for the same destination along the same path; this is achieved through upward multiplexing.
Downward multiplexing: Downward multiplexing means one transport layer connection uses multiple network connections. Downward multiplexing allows the transport layer to split a connection among several paths to improve throughput. This type of multiplexing is used when networks have a low or slow capacity.
Addressing
According to the layered model, the transport layer interacts with the functions of the session layer. Many protocols combine session, presentation, and application layer protocols into a single layer known as the application layer. In these cases, delivery to the session layer means the delivery to the application layer. Data generated by an application on one machine must be transmitted to the correct application on another machine. In this case, addressing is provided by the transport layer.
The transport layer provides the user address which is specified as a station or port. The port variable represents a particular TS user of a specified station known as a Transport Service access point (TSAP). Each station has only one transport entity.
The transport layer protocols need to know which upper-layer protocols are communicating.
Transport Layer Security Protocols
The following sections describe the security protocols that operate over TCP/IP or some other reliable but insecure transport. They are categorized as Transport layer security protocols because their intent is to secure the Transport layer as well as to provide methods for implementing privacy, authentication, and integrity above the Transport layer.

The Secure Socket Layer Protocol
The Secure Sockets Layer (SSL) is an open protocol designed by Netscape; it specifies a mechanism for providing data security layered between application protocols (such as HTTP, Telnet, NNTP, or FTP) and TCP/IP. It provides data encryption, server authentication, message integrity, and optional client authentication for a TCP/IP connection.

The primary goal of SSL is to provide privacy and reliability between two communicating applications. This process is accomplished with the following three elements:

• The handshake protocol. This protocol negotiates the cryptographic parameters to be used between the client and the server session. When an SSL client and server first start communicating, they agree on a protocol version, select cryptographic algorithms, optionally authenticate each other, and use public-key encryption techniques to generate shared secrets.

• The record protocol. This protocol is used to exchange Application layer data. Application messages are fragmented into manageable blocks, optionally compressed, and a MAC (message authentication code) is applied; the result is encrypted and transmitted. The recipient takes the received data and decrypts it, verifies the MAC, decompresses and reassembles it, and delivers the result to the application protocol.

• The alert protocol. This protocol is used to indicate when errors have occurred or when a session between two hosts is being terminated.

Let's look at an example using a Web client and server. The Web client initiates an SSL session by connecting to an SSL-capable server. A typical SSL-capable Web server accepts SSL connection requests on a different port (port 443 by default) than standard HTTP requests (port 80 by default). When the client connects to this port, it initiates a handshake that establishes the SSL session. After the handshake finishes, communication is encrypted and message integrity checks are performed until the SSL session expires. SSL creates a session during which the handshake must happen only once.
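A minimal client-side sketch using Python's standard ssl module is shown below: the library carries out the handshake (certificate verification, cipher negotiation, key exchange) when the socket is wrapped, after which application data flows through the record layer transparently. The host name is a placeholder.

# Sketch of an SSL/TLS-protected HTTP request using Python's standard ssl module:
# the library performs the handshake (certificate verification, cipher negotiation,
# key exchange) when the socket is wrapped, then application data flows through
# the record layer transparently.
import socket
import ssl

HOST = "example.com"
context = ssl.create_default_context()            # trusted CAs, sensible defaults

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated protocol:", tls_sock.version())   # e.g. 'TLSv1.3'
        print("cipher suite:", tls_sock.cipher()[0])
        tls_sock.sendall(f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode())
        print(tls_sock.recv(200).decode(errors="replace"))   # first part of the response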

The SSL handshake process is shown in Figure 2-20. (Refer to "Public Key Infrastructure and Distribution Models," later in this chapter, for more information about digital certificates.) The steps in the process are as follows:

Step 1 The SSL client connects to the SSL server and requests the server to authenticate itself.

Step 2 The server proves its identity by sending its digital certificate. This exchange may optionally include an entire certificate chain, up to some root certificate authority (CA). Certificates are verified by checking validity dates and verifying that the certificate bears the signature of a trusted CA.

Step 3 The server may then initiate a request for client-side certificate authentication. However, because of the lack of a public key infrastructure, most servers today do not do client-side authentication.

Step 4 The message encryption algorithm and the hash function for integrity are negotiated. Usually the client presents a list of all the algorithms it supports, and the server selects the strongest cipher available.

Step 5 The client and server generate the session keys by following these steps:

(a) The client generates a random number that it sends to the server, encrypted with the server's public key (obtained from the server's certificate).

(b) The server responds with more random data (encrypted with the client's public key, if available; otherwise, it sends the data in cleartext).

(c) The encryption keys are generated from this random data using hash functions.

The advantage of the SSL protocol is that it provides connection security that has three basic properties:
• The connection is private. Encryption is used after an initial handshake to define a secret key. Symmetric cryptography is used for data encryption (for example, DES and RC4).

• The peer's identity can be authenticated using asymmetric, or public key, cryptography (for example, RSA and DSS).

• The connection is reliable. Message transport includes a message integrity check using a keyed MAC. Secure hash functions (such as SHA and MD5) are used for MAC computations.
