Tuesday, 6 February 2024

Database Management System (DBMS) Unit 1

 

                                  UNIT ONE:

INTRODUCTION:

Database System

A database is a collection of data, typically describing the activities of one or more related organizations. For example, a university database might contain information about the following:

·         Entities such as students, faculty, courses, and classrooms.

·         Relationships between entities, such as students' enrollment in courses, faculty teaching courses, and the use of rooms for courses.

A database management system (DBMS) is a collection of interrelated data and a set of programs to access those data. The collection of data, usually referred to as the database, contains information relevant to an enterprise. The primary goal of a DBMS is to provide a way to store and retrieve database information that is both convenient and efficient.

Database systems are designed to manage large bodies of information. Management of data involves both defining structures for storage of information and providing mechanisms for the manipulation of information. In addition, the database system must ensure the safety of the information stored, despite system crashes or attempts at unauthorized access. If data are to be shared among several users, the system must avoid possible anomalous results.
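The "convenient and efficient store and retrieve" idea can be sketched with Python's built-in sqlite3 module. The table name and data below are purely illustrative, and an in-memory database stands in for a real DBMS server:

```python
import sqlite3

# Minimal sketch: an in-memory database stands in for a real DBMS server.
conn = sqlite3.connect(":memory:")

# Define a structure for storing information (a hypothetical students table) ...
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, major TEXT)")

# ... then manipulate it through the DBMS rather than through raw files.
conn.execute("INSERT INTO students VALUES (1, 'Asha', 'CS')")
conn.execute("INSERT INTO students VALUES (2, 'Ravi', 'Math')")
conn.commit()

rows = conn.execute("SELECT name FROM students WHERE major = 'CS'").fetchall()
print(rows)  # [('Asha',)]
```

The application never touches the storage files directly; it states what it wants and the DBMS handles how the data is stored and found.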

Key terms related to database.

Data: Raw facts and figures that are unprocessed and have no context until they are processed. Data is the raw material from which useful information is derived.

It takes a variety of forms, including numbers, text, voice, and images. Data is a collection of facts that is unorganized but can be organized into useful information.

 Example: Weights, prices, costs, number of items sold etc.

Information: Processed and organized data that provides context and meaning, making it useful for decision-making. The terms data and information are closely related: data are the raw material resources that are processed into finished information products. Information is data that has been processed in such a way that it can increase the knowledge of the person who uses it.

Data Processing: The manipulation and transformation of raw data into meaningful information through various operations is called data processing.

Meta data: Data about data, describing the characteristics, structure, and context of data within a database. It provides context with details such as the source, type, owner, and relationships to other data sets.
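Metadata can be made concrete with sqlite3: SQLite keeps its data dictionary in a catalog table called sqlite_master, and PRAGMA table_info exposes column-level metadata. The accounts table here is a made-up example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (acc_no INTEGER PRIMARY KEY, owner TEXT, balance REAL)")

# sqlite_master is SQLite's data dictionary: it stores data about the data.
meta = conn.execute("SELECT name, type FROM sqlite_master").fetchone()
print(meta)  # ('accounts', 'table')

# Column-level metadata: the name and declared type of each attribute.
cols = [(c[1], c[2]) for c in conn.execute("PRAGMA table_info(accounts)")]
print(cols)  # [('acc_no', 'INTEGER'), ('owner', 'TEXT'), ('balance', 'REAL')]
```

Note that none of this is the stored data itself; it is all description of the data's structure, which is exactly what metadata means.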

Application of database:

Databases are widely used. Here are some representative applications:

1. Banking: For customer information, accounts, loans, and banking transactions.

2. Airlines: For reservations and schedule information. Airlines were among the first to use databases in a geographically distributed manner - terminals situated around the world accessed the central database system through phone lines and other data networks.

3. Universities: For student information, course registrations, and grades.

4. Credit card transactions: For purchases on credit cards and generation of monthly statements.

5. Telecommunication: For keeping records of calls made, generating monthly bills, maintaining balances on prepaid calling cards, and storing information about the communication networks.

6. Finance: For storing information about holdings, sales, and purchases of financial instruments such as stocks and bonds.

7. Sales: For customer, product, and purchase information.

8. Manufacturing: For management of supply chain and for tracking production of items in factories, inventories of items in warehouses / stores, and orders for items.

9. Human resources: For information about employees, salaries, payroll taxes and benefits, and for generation of paychecks.

Characteristics (File system vs DBMS)

The differences between a file system and a DBMS are as follows:

1. DBMS: A DBMS is a collection of interrelated data; the user is not required to write procedures to manage it.
   File system: A file system is a collection of files; the user has to write the procedures for managing the data.

2. DBMS: Due to the centralized approach, data sharing is easy.
   File system: Data is distributed across many files, possibly in different formats, so it is not easy to share data.

3. DBMS: Gives an abstract view of the data that hides the storage details.
   File system: Exposes the details of data representation and storage.

4. DBMS: Provides a good protection mechanism.
   File system: It is not easy to protect a file under the file system.

5. DBMS: Provides a crash-recovery mechanism, i.e., it protects the user from the effects of system failures.
   File system: Has no crash-recovery mechanism; if the system crashes while data is being entered, the contents of the file may be lost.

6. DBMS: Contains a wide variety of sophisticated techniques to store and retrieve data.
   File system: Cannot store and retrieve data efficiently.

7. DBMS: Takes care of concurrent access to data using some form of locking.
   File system: Concurrent access causes many problems, such as one user reading a file while another is deleting or updating information in it.

Advantages and Disadvantages of DBMS

A DBMS is a piece of software designed to make these tasks easier. By storing data in a DBMS, rather than as a collection of operating system files, we can use the DBMS's features to manage the data in a robust and efficient manner.

Advantages of DBMS

The following are the major advantages of using a Database Management System (DBMS):

Data independence: Application programs should be as independent as possible from details of data representation and storage. The DBMS can provide an abstract view of the data to insulate application code from such details.

Efficient data access: A DBMS utilizes a variety of sophisticated techniques to store and retrieve data efficiently. This feature is especially important if the data is stored on external storage devices.

Data integrity and security: The DBMS can enforce integrity constraints on the data. The DBMS can enforce access controls that govern what data is visible to different classes of users.
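Integrity enforcement can be sketched with a CHECK constraint in sqlite3; the accounts table and the "no negative balances" rule are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The CHECK clause makes the DBMS itself enforce the rule "no negative balances".
conn.execute(
    "CREATE TABLE accounts (acc_no INTEGER PRIMARY KEY,"
    " balance REAL CHECK (balance >= 0))"
)

conn.execute("INSERT INTO accounts VALUES (101, 500.0)")  # accepted

rejected = False
try:
    conn.execute("INSERT INTO accounts VALUES (102, -50.0)")  # violates the rule
except sqlite3.IntegrityError:
    rejected = True  # the DBMS refused the bad row; no application-side check needed

print(rejected)  # True
```

Because the rule lives in the schema, every application that writes to the table is bound by it, not just the ones that remember to check.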

Data administration: When several users share the data, centralizing the administration of data can offer significant improvements. It can be used for organizing the data representation to minimize redundancy and for fine-tuning the storage of the data to make retrieval efficient.

Concurrent access and crash recovery: A DBMS schedules concurrent accesses to the data in such a manner that users can think of the data as being accessed by only one user at a time. Further, the DBMS protects users from the effects of system failures.
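Crash recovery rests on transactions: either all steps of an operation take effect, or none do. A minimal sketch with sqlite3, using a made-up transfer between two accounts and a deliberately raised exception to simulate a failure:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (acc_no INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 100.0), (2, 0.0)")
conn.commit()

# Transfer 100 from account 1 to account 2, with a simulated failure mid-way.
try:
    conn.execute("UPDATE accounts SET balance = balance - 100 WHERE acc_no = 1")
    raise RuntimeError("simulated crash before the credit step")
except RuntimeError:
    conn.rollback()  # recovery: the half-finished transfer is undone

balances = conn.execute("SELECT balance FROM accounts ORDER BY acc_no").fetchall()
print(balances)  # [(100.0,), (0.0,)] -- no money was lost
```

A real DBMS achieves the same effect after an actual crash by replaying or undoing its log during restart; the rollback here is a stand-in for that mechanism.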

Reduced application development time: The DBMS supports many important functions that are common to applications accessing data stored in the DBMS, so developers do not have to re-implement them, which shortens application development time.

Disadvantages of DBMS

The following are the major disadvantages of using a Database Management System (DBMS):

Increased cost:

High cost is one of the main disadvantages of a DBMS; the cost can be of many types, such as hardware and software costs, data storage costs, etc. The major cost factors are:

  • Hardware and software costs: A database management system requires considerable processing power, which calls for high-speed processors; this hardware is expensive and increases the cost of the overall system.

Database management systems also need a lot of storage and expensive software for storing data, and this storage must be fast to give fast output, so storage adds to the overall cost. Hence, setting up and maintaining a database management system requires considerable expense.

  • Staff training and expense: A large amount of money is also required for training and educating the staff who maintain the database. Hiring new staff and training them also increases the overall expense.
  • Cost of data conversion: Existing data has to be converted into the database management system, and skilled database designers are required to design the entire database. Their salaries, and the software required to design the database, all add up to increased costs.

Complexity

A database management system is complex to use, and ordinary users cannot operate its software without proper training. So, skilled engineers, developers, and database administrators are required for the proper design and management of the database.

Database Failure

Database failure is one of the biggest disadvantages of a database management system. It requires a lot of maintenance and constant power. Data stored in a DBMS is centralized in nature; if the database server fails, the whole system fails and the organization is affected.

Performance

A database management system works very fast when there is little data to work on, but as the organization's data grows, the system becomes heavier and the performance of the DBMS decreases, so sometimes a file management system is preferred over a database management system.

Frequent Updates/Upgrades

Nowadays DBMS software is regularly updated by DBMS vendors, and updating the software often also requires updating the hardware it runs on, which adds unnecessary expense for the organization. New software updates also introduce new commands for operating the DBMS, so the staff maintaining the database have to be retrained accordingly, which is a lot of hassle.

Huge Size

As the data acquired by the organization grows, more storage space must be set up. But increasing the storage makes the database heavier, so searching and storing data become slow and the DBMS takes more time to answer queries, which makes it inefficient.

Database users (Actors on the scene, workers behind the scene)

Database users refer to individuals or entities that interact with a database system, performing various roles based on their responsibilities and requirements. There are two main categories of database users:

1.      Actors on the scene

2.      Workers behind the scene

Actors on the scene: Those who actually use and control the database content, and those who design, develop, and maintain database applications, are called actors on the scene. These include:

1. Database Administrator (DBA): This is the chief administrator, who oversees and manages the database system (including the data and software). Duties include authorizing users to access the database, coordinating/monitoring its use, acquiring hardware/software for upgrades, etc. In large organizations, the DBA might have a support staff.

2. Database Designers: They are responsible for identifying the data to be stored and for choosing an appropriate way to organize it. They also define views for different categories of users. The final design must be able to support the requirements of all the user sub-groups.

3. End Users: These are persons who access the database for querying, updating, and report generation. They are the main reason for the database's existence, and they are the ones who actually reap the benefits of having a DBMS. End users range from casual users, who occasionally look up information such as logs or market rates, to sophisticated users such as business analysts.

Workers behind the scene: These are those who design and develop the DBMS software and related tools, and the computer systems operators.

I. DBMS system designers/ implementers:  They design and provide the DBMS software that is at the foundation of all.

II. Tool developers: They design and implement software tools facilitating database system design, performance monitoring, creation of graphical user interfaces, prototyping, etc.

III. Operators and maintenance personnel: These personnel are responsible for the day-to-day operation of the system.

 

Brief Introduction to different Data Models:

Underlying the structure of a database is the data model. It is a collection of conceptual tools for describing data, data relationships, data semantics, and consistency constraints.

A data model provides a way to describe the design of a database at the physical, logical and view level.

The following are the four data models used for understanding the structure of the database:

  • Relational model
  • Entity-relationship (E-R) model
  • Object-based data model
  • Semistructured data model
Relational Model: The relational model uses a collection of tables to represent both data and the relationships among those data. Each table has multiple columns, and each column has a unique name.

The table is also called a relation. The relational model is an example of a record-based model. Record-based models are so named because the database is structured in fixed-format records of several types.

Each table contains records of a particular type. Each record type defines a fixed number of fields, or attributes. The columns of the table correspond to the attributes of the record type.
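A small sqlite3 sketch of the relational model, using invented students and enrollments tables: every row is a fixed-format record of the declared attributes, and relationships between tables are expressed through matching column values rather than pointers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Each table (relation) has a fixed set of named columns (attributes).
conn.execute("CREATE TABLE students (sid INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE enrollments (sid INTEGER, course TEXT)")

conn.execute("INSERT INTO students VALUES (1, 'Asha'), (2, 'Ravi')")
conn.execute("INSERT INTO enrollments VALUES (1, 'DBMS'), (2, 'DBMS'), (1, 'Networks')")

# The relationship "students enroll in courses" is expressed purely through
# matching column values (sid), not through links stored in the records.
rows = conn.execute(
    "SELECT s.name, e.course FROM students s"
    " JOIN enrollments e ON s.sid = e.sid"
    " WHERE e.course = 'DBMS' ORDER BY s.name"
).fetchall()
print(rows)  # [('Asha', 'DBMS'), ('Ravi', 'DBMS')]
```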

The Entity-Relationship Model: The entity-relationship (E-R) data model is based on a perception of a real world that consists of a collection of a basic objects, called entities, and of relationships among these objects.

An entity is a "thing" or "object" in the real world that is distinguishable from other objects.

Object-based data model: An extension of the E-R model with notions of functions, encapsulation, and object identity. This model supports a rich type system that includes structured and collection types, and it combines the features of the object-oriented data model and the relational data model.

Semistructured Data Model: This data model differs from the other three data models explained above. The semistructured data model allows data specifications in which individual data items of the same type may have different sets of attributes. The Extensible Markup Language (XML) is widely used for representing semistructured data. Although XML was initially designed for adding markup information to text documents, it gained importance because of its application in the exchange of data.

Concepts of Schemas, Instance and Data independence

Schemas: A database schema is the skeleton structure that represents the logical view of the entire database. It defines how the data is organized and how the relations among them are associated. It formulates all the constraints that are to be applied on the data.

A database schema defines its entities and the relationship among them. It contains a descriptive detail of the database, which can be depicted by means of schema diagrams. It’s the database designers who design the schema to help programmers understand the database and make it useful.

A database Schema can be divided broadly into three categories:

1.      Logical or conceptual schema

2.      External Schema

3.      Physical Schema

Logical or Conceptual Schema: In database management systems (DBMS), a conceptual schema is a high-level representation of the data model that provides a global view of the database. It defines the structure and organization of the entire database without going into the implementation details or physical storage considerations. The conceptual schema is primarily concerned with capturing the essential entities, relationships, and constraints within the domain of the application.

 

This schema hides information about the physical storage structures and focuses on describing data types, entities, relationships, etc.

This logical level lies between the user level and the physical storage view. However, there is only a single conceptual view of a single database.

Facts about Conceptual schema:

  • Defines all database entities, their attributes, and their relationships
  • Contains security and integrity information
  • At the conceptual level, the data available to a user must be contained in, or derivable from, the physical level

External Schema: In the context of a database management system (DBMS), an external schema, also known as a user schema or a view, is a representation of the database that is tailored to the specific needs and requirements of a particular group of users or applications.

Unlike the conceptual schema, which provides a global view of the entire database, an external schema focuses on a subset of the data that is relevant to a specific user or group of users.

Facts about external schema:

  • The external level is only concerned with the data viewed by specific end users.
  • This level may include several external schemas.
  • The external schema level is nearest to the user.
  • The external schema describes the segment of the database that is needed by a certain user group and hides the remaining details of the database from that user group.
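In SQL, an external schema is typically realized as a view. A minimal sqlite3 sketch, with an invented employees table where one user group must not see salaries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Conceptual schema: the full employees relation, including salaries.
conn.execute("CREATE TABLE employees (eid INTEGER, name TEXT, dept TEXT, salary REAL)")
conn.execute("INSERT INTO employees VALUES (1, 'Asha', 'HR', 50000), (2, 'Ravi', 'IT', 60000)")

# External schema: a view for a user group that must not see salaries.
conn.execute("CREATE VIEW staff_directory AS SELECT name, dept FROM employees")

rows = conn.execute("SELECT * FROM staff_directory ORDER BY name").fetchall()
print(rows)  # [('Asha', 'HR'), ('Ravi', 'IT')] -- the salary column is hidden
```

Different user groups can each be given their own view over the same conceptual schema, which is exactly the external level of the architecture.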

Physical Schema: The physical schema, also referred to as the internal schema, is a level of database architecture that describes how data is stored, organized, and represented at the physical storage level.

 It defines the way data is stored on the storage media such as disks and how the data is accessed by the database management system (DBMS) at the lowest level. The physical schema is concerned with optimizing performance, storage efficiency, and retrieval speed.

Facts about Internal schema:

  • The internal schema is the lowest level of data abstraction.
  • It keeps information about the actual representation of the entire database, such as the actual storage of the data on disk in the form of records.
  • The internal view tells us what data is stored in the database and how.
  • It does not deal with physical devices directly; instead, the internal schema views a physical device as a collection of physical pages.

How are these different schema layers related to the concepts of logical and physical data independence?

Data independence in the context of a Database Management System (DBMS) refers to the ability to make changes to the database without affecting the applications that interact with the data.

The concepts of logical and physical data independence are closely related to the different schema layers in a database management system (DBMS). The various data independence in DBMS are:

Logical Data Independence:

Logical data independence refers to the ability to change the logical schema (or conceptual schema) without affecting the application programs that access the data. It allows modifications to the way data is organized, without impacting how applications interact with the data.

Physical Data Independence:

Physical data independence refers to the ability to change the physical schema (or internal schema) without affecting the conceptual or logical schema. It allows modifications to the storage and retrieval mechanisms, indexes, and other physical storage details without impacting the way users perceive or interact with the data.

The logical and physical data independence are related to the various schema layers by the following ways:

Relationship with Conceptual Schema (Logical Schema): Logical data independence is closely tied to the conceptual schema. Changes to the conceptual schema, such as adding or modifying entities, relationships, or constraints, should not require changes to application programs or queries. This separation allows for flexibility in adapting the database to evolving business requirements without disrupting the application layer.

Relationship with Physical Schema (internal Schema): Physical data independence is associated with the internal schema. Changes to the physical storage structures, indexing mechanisms, or other storage-related details should not affect the conceptual or logical schema. This separation allows for optimizations at the physical level without impacting the way data is modeled or perceived by users and applications.

Instance:

The concept of an "instance" in a database refers to a specific occurrence or representation of the database at a given point in time. It can refer to the current state of the data in the database or to a running copy of the database management system. There are two main aspects to consider when discussing the concept of an instance in the context of a database:

Database Instance (Data):

At the most basic level, an instance represents the current state of the data stored in a database. It includes all the records, rows, and information that are currently present in the database. The data instance reflects the content of the database at a specific moment.

For example, if you have a relational database, an instance of that database represents the current set of rows and columns in each table. It's the actual data that users interact with and query.

Database Instance (Running):

In the context of a running database server, an instance refers to a specific, independent copy of the database management system (DBMS) that is active and accessible. A running instance includes both the database schema (structure) and the data. Multiple instances of a DBMS can run concurrently on the same server, each serving different databases.

Each running instance has its own set of memory structures, processes, and resources dedicated to managing the associated database. This concept is often used in relational database systems like Oracle, SQL Server, and MySQL.

 

 

 

Data Independence:

Data independence means the ability to change the schema at one level of the database without having to change the schema at the next higher level. In simple words, data independence is a property of a database that allows the user or database administrator to change the schema at one level without affecting the data or schema at another level. It allows flexibility and adaptability in the evolution of a database system, enhances the security of the system, and saves time and reduces costs when the information has to be changed or altered.

There are two main types of data independence: logical data independence and physical data independence.

1. Physical Level Data Independence

Physical data independence can be defined as the ability to change the physical level without affecting the logical or conceptual level. It gives us the freedom to modify the storage devices, file structures, location of the database, etc., without changing the definition of the conceptual or view level.

Example: In the database of a banking system, if we want to scale up the database by changing the storage size or the file structure, we can do so without affecting any functionality of the logical schema.

The following changes can be made at the physical layer without affecting the conceptual layer:

  • Changing the storage devices, such as SSDs, hard disks, magnetic tapes, etc.
  • Changing the access techniques and modifying indexes.
  • Changing the compression techniques or hashing algorithms.
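Physical data independence can be demonstrated with sqlite3: adding an index is a purely physical-level change, and the unchanged query still returns the same answer. The accounts table is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (acc_no INTEGER, branch TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 'North', 100.0), (2, 'South', 200.0)")

query = "SELECT balance FROM accounts WHERE branch = 'South'"
before = conn.execute(query).fetchall()

# A physical-level change: add an index. The query text is not touched.
conn.execute("CREATE INDEX idx_branch ON accounts(branch)")

after = conn.execute(query).fetchall()
print(before == after)  # True -- same logical answer, different access path
```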

2. Logical Level Data Independence

Logical data independence is the ability to change the conceptual (logical) level without affecting the other layers of the database. It is usually required for changing the conceptual schema without having to change the external schemas or application programs. It allows us to make changes to the conceptual structure, such as adding, modifying, or deleting an attribute in the database.

Example: In the database of a banking system, if we add a new attribute to the customer record, or merge or split a record type, the logical level changes, but the external views and application programs that do not use the changed parts are not affected, and the physical structure of the database need not change.

These changes can be made at the logical level without affecting the application programs or the external layer:

  • Adding, deleting, or modifying an entity, attribute, or relationship.
  • Merging or splitting record types in the database schema.
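Logical data independence can likewise be sketched in sqlite3: the conceptual schema gains a new attribute via ALTER TABLE, yet an existing query written before the change keeps working. The customers table is an invented example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (cid INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Asha')")

old_query = "SELECT name FROM customers WHERE cid = 1"
before = conn.execute(old_query).fetchone()

# A conceptual-level change: the customers relation gains a new attribute.
conn.execute("ALTER TABLE customers ADD COLUMN email TEXT")

# The existing "application" query keeps working without modification.
after = conn.execute(old_query).fetchone()
print(before == after)  # True
```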

Three tier schema architecture for data independence:

A three-tier schema architecture is often employed in database systems to achieve data independence and separation of concerns. This architecture divides the database into three layers, or tiers, which create a separation between the physical database and the user application, each with its specific role and level of abstraction. In simple terms, this architecture hides the details of physical storage from the user.

The database administrator (DBA) is responsible for changing the structure of database storage without affecting the user's view. The architecture deals with the data, the relationships between them, and the different access methods implemented on the database. The logical design of the database is called a schema.

This architecture contains three layers of database management system, which are as follows −

  • External level
  • Logical or conceptual level
  • Physical or internal level

Logical or Conceptual Schema level: As described earlier, the conceptual schema is a high-level representation of the data model that provides a global view of the entire database. It captures the essential entities, relationships, and constraints of the application domain while hiding the physical storage details, and there is only a single conceptual view of a database.

External Schema level: An external schema (user schema, or view) tailors the database to the specific needs of a particular group of users or applications, presenting only the subset of the data that is relevant to that group.

Physical Schema level: The physical (internal) schema describes how data is stored, organized, and represented at the physical storage level; it is concerned with optimizing performance, storage efficiency, and retrieval speed.

 

 

DATABASE SYSTEM STRUCTURE, ENVIRONMENT, CENTRALIZED AND CLIENT SERVER ARCHITECTURE FOR THE DATABASE

Database System Structure: A database system structure typically involves multiple components that work together to manage and organize data efficiently. The main components of a typical database system structure include:

1.      Query Processor

2.      Storage Manager

3.      Disk Storage

QUERY PROCESSOR: The query processor interprets the requests (queries) received from end users via an application program into instructions. It also executes the user request received from the DML compiler.

In this way, the query processor helps the database system make data access simple and easy. Its primary duty is to execute the query successfully: it transforms (or interprets) the requests supplied by the user's application program into instructions that a computer can understand.

Components of the Query Processor:

DDL Interpreter:

DDL stands for Data Definition Language. As implied by the name, the DDL interpreter interprets DDL statements, such as those used in schema definitions (CREATE, DROP, etc.).

This interpretation yields a set of tables containing the metadata (data about data), which is kept in the data dictionary.

DML Compiler:

DML stands for Data Manipulation Language. In keeping with its name, the DML compiler converts DML statements, such as SELECT, UPDATE, and DELETE, into low-level instructions (machine-readable object code) to enable execution.

The DML compiler also performs query optimization. Since a single query can typically be translated into a number of evaluation plans, some optimization is needed to select the evaluation plan with the lowest cost among all the options. This process, known as query optimization, is carried out by the DML compiler. Simply put, query optimization determines the most effective way to carry out a query.
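The idea of choosing among evaluation plans can be observed directly in SQLite via EXPLAIN QUERY PLAN. The calls table below is invented, and the exact wording of the plan text varies between SQLite versions, so the output comments are indicative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (caller TEXT, minutes INTEGER)")
conn.execute("INSERT INTO calls VALUES ('alice', 5), ('bob', 12)")

query = "SELECT minutes FROM calls WHERE caller = 'bob'"

# Without an index, the only available plan is a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]
print(plan_before)  # e.g. 'SCAN calls'

# Once an index exists, the optimizer picks a cheaper evaluation plan.
conn.execute("CREATE INDEX idx_caller ON calls(caller)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]
print(plan_after)  # e.g. 'SEARCH calls USING INDEX idx_caller (caller=?)'
```

The query text never changes; only the plan the optimizer selects does.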

Embedded DML Pre-compiler:

Before the query evaluation, the embedded DML commands in the application program (such as SELECT, FROM, etc., in SQL) must be pre-compiled into standard procedural calls (program instructions that the host language can understand). Therefore, the DML statements which are embedded in an application program must be converted into routine calls by the Embedded DML Pre-compiler.

Query Evaluation Engine:

It takes the evaluation plan for the query, runs it, and returns the result. Simply put, the query evaluation engine executes the commands used to access the database's contents and returns the result of the query.

It is in charge of running the object code that the DML compiler produces. Apache Drill and Presto are examples of systems built around query evaluation engines.

STORAGE MANAGER:

Storage Manager is a program that provides an interface between the data stored in the database and the queries received. It is also known as Database Control System.

 It maintains the consistency and integrity of the database by applying the constraints and executing the DCL statements. It is responsible for updating, storing, deleting, and retrieving data in the database. 

Following are the components of Storage Manager:

Integrity Manager: Whenever there is any change in the database, the Integrity manager will manage the integrity constraints.

Authorization Manager: Authorization manager verifies the user that he is valid and authenticated for the specific query or request.

File Manager: All the files and data structure of the database are managed by this component.

Transaction Manager: It is responsible for making the database consistent before and after the transactions. Concurrent processes are generally controlled by this component.

Buffer Manager: The buffer manager handles the transfer of data between disk storage and main memory, and manages the cache memory.

 

DISK STORAGE:

Disk storage is a critical component in the structure of a database system, as it plays a key role in persistently storing data even when the system is not actively running. In a typical database system, data is stored on disk in a structured manner to ensure efficient retrieval, management, and durability.

It contains the following components –

Data Files: Store the data itself.

Data Dictionary: Contains information about the structure of every database object; it is the repository of metadata.

Indices: Provide faster retrieval of data items.

 

DATABASE ENVIRONMENT:

A database environment is the collective system of components that comprise and regulate the collection, management, and use of data. It consists of hardware, software, people, procedures for handling the database, and the data itself.

 

1. Hardware

The hardware component of the database system environment includes all the physical devices that comprise the database system. It includes storage devices, processors, input and output devices, printers, network devices and many more.

2. Software

The software component of the database environment includes all the software required to access, store, and regulate the database, such as the operating system, the DBMS, and application programs and utilities.

The operating system drives the computer hardware and lets other software run. The DBMS software controls and regulates the database. Application programs and utilities access the database and, where required, manipulate it.

3. People

It includes all the people who interact with the database. Some access the database only to answer their queries (end users); others design the database itself (database designers).

Some design the applications whose interfaces make data entry possible (database programmers and analysts), and some monitor and manage the database (database administrators).

4. Procedures

The procedure component of the database environment consists of the instructions and rules that govern the design and use of the database.

5. Data

The data component is a collection of related data: known facts that can be recorded and that have implicit meaning.

CENTRALIZED AND CLIENT SERVER ARCHITECTURE FOR THE DATABASE

Database system architecture refers to the organization and arrangement of components in a database system that collectively allow for the storage, retrieval, and management of data. Two common architectures for database systems are centralized architecture and client-server architecture.

Centralized Architecture: In a centralized architecture, the entire database system is hosted and managed on a single computer or server. All components, including the DBMS software, data storage, and application interfaces, are centralized in one location.

This location is most often a central computer or database system, for example a desktop or server CPU, or a mainframe computer.

It is maintained and modified from that location only, and is usually accessed over a network connection such as a LAN or WAN. Centralized databases are used by organizations such as colleges, companies, and banks.


Advantages:

·         Since all data is stored in a single location, it is easier to access and coordinate.

·         A centralized database has very little data redundancy, since all data is stored in one place.

·         It is cheaper to set up and run than distributed alternatives.

Disadvantages:

·         Data traffic to the central site is heavy.

·         If the centralized system fails, the entire database may become unavailable or be lost.

Client/Server Architecture: A client-server architecture for DBMS is one in which data is stored on a central server, but clients connect to that server in order to access and manipulate the data. This type of architecture is more complex than a centralized architecture, but it offers several advantages over the latter.

 The system includes three main components: Clients, Servers and Communication Middleware.


Client: The client is any computer process that requests service from the server.

Server: The server is any computer process providing services to the clients.

Communication Middleware: The communication middleware is any computer process through which clients and servers communicate; it is also known as the communication layer.

There are basically two types of client/server architectures:

Two-Tier Architecture: Two-tier architecture in DBMS refers to a client-server architecture in which the user interface and application logic are separated from data management. The client component typically provides the user interface, while the server component handles the data and business logic. In this architecture, the client communicates directly with the server to request data and perform actions.

In a two-tier architecture, there are two main components: the client and the server. The client is responsible for the user interface and application logic, while the server manages the database and processes requests from the client.

  • Client Tier: This tier includes the user interface and application logic, often running on end-user devices such as desktops or laptops. The client directly communicates with the database server for data retrieval and manipulation.
  • Server Tier: The server tier hosts the database management system (DBMS) and is responsible for managing the database. It processes SQL queries and updates, returning the results to the client.
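The defining feature of two-tier is that the client issues SQL directly to the DBMS with no middle layer. As a rough sketch of that shape, the example below uses Python's built-in sqlite3 module; SQLite is embedded and in-memory here rather than a networked server, but the interaction pattern (client code talking straight to the DBMS) is the same. The table and data are made up:

```python
# Sketch of the two-tier shape: the client tier issues SQL directly to the
# DBMS, with no application server in between. An in-memory SQLite database
# stands in for the server tier.
import sqlite3

# "Server tier": the DBMS managing the data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (roll_no INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO students VALUES (101, 'Asha'), (205, 'Ravi')")

# "Client tier": user interface and application logic, talking straight
# to the DBMS with parameterized SQL.
def lookup_student(roll_no):
    row = conn.execute(
        "SELECT name FROM students WHERE roll_no = ?", (roll_no,)
    ).fetchone()
    return row[0] if row else None

print(lookup_student(205))   # Ravi
```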


Advantages of 2 Tier Architecture in DBMS

Below are a few advantages of two-tier architecture in DBMS:

  • Simplicity: Two-tier architecture is simple and easy to understand, as it involves only two components: the client and the server.
  • Cost Effective: Two-tier architecture is less expensive to implement and maintain compared to three-tier or multi-tier architecture.
  • Ease of deployment: The client software can be deployed on individual workstations, making it easier to manage and update.
  • Direct Access to Database: The client has direct access to the database, allowing for fast data retrieval and update.

Three-Tier Architecture: In a three-tier architecture, the system is divided into three main components or layers: the presentation tier, the application tier, and the data tier. The client cannot communicate directly with the server with this design.

On the client side, the program communicates with an application server, which in turn communicates with the database system. The end user has no knowledge of the database beyond the application server, and the database is aware only of the application server, not of individual users. Large web applications typically use the 3-tier design.

  • Presentation Tier (Client): This tier is responsible for the user interface and presentation logic. It communicates with the application server for data processing.
  • Application Tier (Middle Tier): The application server hosts the application logic, processing user requests, and interacting with the database server for data retrieval and updates.
  • Data Tier (Server): The database server manages the database and processes SQL queries. It interacts with the application server to handle data-related operations.
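The layering above can be sketched as three components where the presentation tier calls only the application tier, and only the application tier touches the data tier. All class and field names below are illustrative, not from any real framework:

```python
# Sketch of the three-tier shape: presentation -> application -> data.
# The presentation tier never touches the data tier directly.

# Data tier: manages storage and answers data requests.
class DataTier:
    def __init__(self):
        self._students = {101: "Asha", 205: "Ravi"}   # stand-in for a database
    def fetch_name(self, roll_no):
        return self._students.get(roll_no)

# Application tier: business logic; the only component that sees the data tier.
class ApplicationTier:
    def __init__(self, data_tier):
        self._data = data_tier
    def student_report(self, roll_no):
        name = self._data.fetch_name(roll_no)
        return f"Student {roll_no}: {name}" if name else f"No student {roll_no}"

# Presentation tier: user interface; knows nothing about the database.
def show(app, roll_no):
    print(app.student_report(roll_no))

app = ApplicationTier(DataTier())
show(app, 205)    # Student 205: Ravi
show(app, 999)    # No student 999
```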


Advantages of 3-Tier Architecture

Below are a few advantages of 3-tier architecture:

  • Scalability: The architecture can be easily scaled by adding more servers or upgrading existing servers, improving performance, and ensuring high availability.
  • Increased Security: The three-tier architecture provides a clear separation of responsibilities, which makes it easier to secure data and control access to the database.
  • Improved Maintenance: The architecture makes it easier to maintain and upgrade the system, as changes can be made to one tier without affecting the others.
  • Reusability: The business logic can be centralized in the application tier, making it easier to reuse and share across different applications.


Centralized vs Client/Server model: The centralized model and the client-server model are two contrasting approaches to organizing and distributing computing resources in a networked environment. The concepts of each are as follows:

 

Centralized Model: In a centralized model, all computing resources and data processing tasks are concentrated in a single, central location or server. Clients, which can be user terminals or devices, are generally responsible for input and output operations, but the majority of computational tasks and data storage occur on the central server.

Characteristics:

Single Point of Control: The central server is the hub of the system, controlling all processing and data management.

Simplicity: Easier to set up and manage, especially for small-scale systems.

Limited Scalability: Scaling the system involves upgrading the central server, and there are limitations to handling larger workloads.

Advantages:

Simplicity: Straightforward to manage and maintain.

Low Network Overhead: As processing is centralized, there is less communication overhead.

Disadvantages:

Limited Scalability: May struggle to handle increasing data or user demands.

Single Point of Failure: The central server is a single point of failure, and any issues with it can lead to system-wide outages.

Client-Server Model: In the client-server model, computing resources are distributed between clients and servers. Clients are end-user devices or applications responsible for user interfaces and input, while servers handle data processing, storage, and management. Communication occurs between clients and servers over a network.

Characteristics:

Distributed Processing: Processing tasks are distributed between clients and servers.

Scalability: Easier to scale by adding more servers to handle increased load.

Decentralization: No single point of control; clients and servers collaborate in a distributed manner.

Advantages:

Scalability: Can easily handle increased workloads by distributing tasks among multiple servers.

Fault Tolerance: Reduced risk of a single point of failure; system components can continue functioning if one server encounters issues.

Client Independence: Clients can be on different devices and platforms.

Disadvantages:

Complexity: More complex to set up and manage compared to a centralized model.

Network Overhead: Increased communication between clients and servers may result in higher network traffic.

Comparison between centralized and client/server architecture:

·         Control: Centralized architecture has a single point of control; in client/server architecture, clients and servers share control.

·         Scalability: Centralized architecture has limited scalability; client/server is more scalable due to the distribution of tasks.

·         Fault tolerance: Centralized architecture is vulnerable to a single point of failure; client/server offers improved fault tolerance with distributed components.

·         Complexity: Centralized architecture is simple to implement and use; client/server is more complex due to distributed components.
