
Q3. Compare how data redundancy is controlled in a DBMS with that in the traditional file system.
Ans. Duplication or repetition of data is known as data redundancy. Before the introduction of the DBMS, data was stored in a traditional file system, in which data was recorded and stored in multiple separate files.

Let's take an example to understand the problems you may face while maintaining data in the traditional file system. Suppose a school uses the traditional file system to store data. In this school, data related to the students is stored in two files, Student_Details and Hostel_Student_Details. The Student_Details file contains data about all the students of the school, while the Hostel_Student_Details file contains data about the students who live in the hostel. This implies that the Student_Details file contains data about all the students, that is, those who only study in the school as well as those who also live in the hostel. In such a case, there is duplication of data (data redundancy), because the records of the hostel students are maintained in both files.

These shortcomings were overcome with the introduction of the Database Management System (DBMS). In short, a DBMS is a program that controls the creation, maintenance, and use of a database. In a DBMS, data is stored centrally, which allows users to easily access and share the data as a common resource. If you want to access or modify data, you do so at a central location. Any changes made to the data are reflected automatically and made available to all users. Since changes are made at only one location, the chances of data redundancy or data inconsistency are greatly reduced.
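The idea above can be sketched in code. The following is a minimal illustration, not part of the original answer: it uses Python's built-in sqlite3 module, and the table and column names (Student_Details, Hostel_Student_Details, roll_no, and so on) are hypothetical, chosen to mirror the school example. Instead of copying each hostel student's details into a second file, the hostel table stores only a reference to the central student record, so an update made in one place is visible everywhere.

```python
import sqlite3

# In-memory database standing in for the school's central data store.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Central store: each student's details are recorded exactly once.
cur.execute("""CREATE TABLE Student_Details (
    roll_no INTEGER PRIMARY KEY,
    name    TEXT,
    phone   TEXT)""")

# The hostel table keeps only a reference (roll_no),
# not a duplicate copy of the student's details.
cur.execute("""CREATE TABLE Hostel_Student_Details (
    roll_no INTEGER REFERENCES Student_Details(roll_no),
    room_no TEXT)""")

cur.execute("INSERT INTO Student_Details VALUES (1, 'Asha', '555-0101')")
cur.execute("INSERT INTO Hostel_Student_Details VALUES (1, 'H-12')")

# A change is made at only one central location...
cur.execute("UPDATE Student_Details SET phone = '555-0202' WHERE roll_no = 1")

# ...and is automatically visible wherever the student is referenced.
row = cur.execute("""SELECT s.name, s.phone, h.room_no
                     FROM Hostel_Student_Details AS h
                     JOIN Student_Details AS s ON s.roll_no = h.roll_no""").fetchone()
print(row)
```

In a two-file system, the same update would have to be repeated in both files, and forgetting one copy would leave the data inconsistent; here the join always reflects the single, current record.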
