Basic Concepts of OOP.

Q3. Explain the basic concepts of OOP with examples.

Ans. Object oriented programming was developed with a view to overcoming the drawbacks of traditional programming approaches. The OOP approach is based on certain concepts that help it attain the goal of overcoming these drawbacks or shortcomings. These general concepts of OOP are given below:

  1. Data Abstraction
  2. Data Encapsulation
  3. Modularity
  4. Inheritance
  5. Polymorphism

 

  1. Data Abstraction:

Abstraction is the concept of simplifying a real-world entity into its essential elements.

Abstraction refers to the act of representing essential features without including the background details or explanations.

Let’s take the example of a switchboard. You only press certain switches according to your requirement; you need not know what is happening inside or how it is happening. This is abstraction: you know only the essentials needed to operate the switchboard, without knowing its background details.

  2. Data Encapsulation:

Encapsulation is the most fundamental concept of OOP. It is the way of combining both data and the functions that operate on that data under a single unit.
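The idea can be sketched in a few lines of Python (the class and its member names here are hypothetical, chosen only for illustration): the balance data and the functions that operate on it live together in one unit, the class.

```python
# Encapsulation: the data (_balance) and the functions that operate on it
# (deposit, withdraw) are combined under a single unit -- the class.
class Account:
    def __init__(self, balance=0):
        self._balance = balance      # data kept inside the object

    def deposit(self, amount):
        self._balance += amount      # function operating on that data

    def withdraw(self, amount):
        if amount <= self._balance:
            self._balance -= amount
        return self._balance

acct = Account(100)
acct.deposit(50)
print(acct.withdraw(30))   # -> 120
```

Code outside the class never touches `_balance` directly; it goes through the methods, which is exactly the combination of data and functions under a single unit that encapsulation describes.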


Difference Between Internal and External Memory

Q16. Distinguish between internal and external memory.

Ans. The term “memory” has two different meanings in the context of computer components. One meaning, which most commonly involves RAM (random access memory), is a computer component used to temporarily hold data for processing by the computer. The other meaning is as a form of rewritable permanent storage, specifically one using a system called “flash memory.” In a computer, internal memory normally means RAM, while external memory means flash memory storage devices such as a USB stick. The biggest difference is that RAM is cleared whenever the computer is shut down, while data on flash memory storage remains until you delete or replace it.

The Generations Of Modern Computer

Q5. Write a note on all five generations of computers discussing their characteristics. Also give examples of computers belonging to each generation.

Ans. First Generation Computers (1949-55)

The first generation computers used thermionic valves (vacuum tubes), and machine language was used for giving instructions. The first generation computers used the concept of the ‘stored program’. The computers of this generation were very large in size and their programming was a difficult task. Some computers of this generation are given below:

  1. ENIAC

This was the first electronic computer, developed in 1946 by a team led by Prof. Eckert and Mauchly at the University of Pennsylvania in the U.S.A. This computer, called the Electronic Numerical Integrator And Calculator, used high speed vacuum tube switching devices. It had a very small memory and was used for calculating the trajectories of missiles. It took 200 microseconds for addition and about 2800 microseconds for multiplication. This giant machine was 30 × 50 feet in size, weighed 30 tons, contained 18,000 vacuum tubes, 70,000 resistors, 10,000 capacitors and 6,000 switches, used 150,000 watts of electricity, and cost $400,000. When ENIAC was built, it was 5000 times faster than its closest competitor, the Harvard MARK-I.

  2. EDVAC

Binary arithmetic was used in the construction of a computer called the Electronic Discrete Variable Automatic Computer (EDVAC), completed in 1950. The von Neumann concept of the stored program (in which the machine instructions are stored internally along with the data) was also applied in EDVAC. With this, operation became faster, since the computer could rapidly access both the program and the data.

  3. EDSAC

The EDSAC, short for Electronic Delay Storage Automatic Computer, was built by Prof. M.V. Wilkes at Cambridge University in 1949 and used mercury delay lines for storage. It also used the von Neumann ‘stored program’ concept. This allowed easy implementation of program loops.

  4. UNIVAC-I

Commercial production of stored program electronic computers began in the early 1950s. One such computer was UNIVAC-I, built by the Univac division of Remington Rand and delivered in 1951. This computer also used vacuum tubes.

 

Initial applications of computers those days were in science and engineering but with the advent of UNIVAC I, the prospects of commercial application were perceived.

Though the first generation computers were welcomed by governments and universities, as they greatly helped them in their tasks, these computers suffered from some ‘big’ limitations: slow operating speed, restricted computing capacity, high power consumption, short mean time between failures, very large space requirements and limited programming capabilities. Further research in this line aimed at the removal of these limitations.

 

Second Generation Computers (1956-65)

A big revolution in electronics took place with the invention of the transistor by Bardeen, Brattain and Shockley in 1947. Transistors were highly reliable compared to tubes. They occupied less space and required only 1/10 of the power required by tubes. They also needed only 1/10 of the switching time (from 0 to 1 or 1 to 0) needed by tubes, and computers using them were ten times cheaper than those using tubes.

Another major event during this period was the invention of the magnetic core and the development of the magnetic disk for storage. Magnetic cores are tiny ferrite rings (0.02 inch in diameter) that can be magnetized in either the clockwise or the anti-clockwise direction, the two directions representing 0 and 1. Magnetic cores were used to construct large random access memories.

The second generation computers began with the advent of transistorized circuitry, invention of magnetic core and development of magnetic disk storage devices. These new developments made these computers much more reliable.

The increased reliability and availability of large memories paved the way for the development of high level languages (HLL) such as FORTRAN, COBOL, Algol and Snobol etc. With speedy CPUs and the advent of magnetic tape and disk storage, operating systems came into being. Batch operating systems ruled the second generation computers.

Commercial applications rapidly developed during this period and more than 80% of these computers were used in business and industries in the applications like payroll, inventory control, marketing, production planning etc.

 

Third Generation Computers (1966-75)

The third generation computers replaced transistors with ‘Integrated Circuits’, popularly known as chips. The ‘Integrated Circuit’ or I.C. was invented by Jack Kilby at Texas Instruments in 1958.

An I.C. is a wafer-thin slice of extremely purified silicon crystal. A single I.C. has many transistors, resistors and capacitors along with the associated circuitry, encapsulated in a small package with many leads.

From small scale integrated (SSI) circuits, which had about 10 transistors per chip, technology developed to medium scale integrated (MSI) circuits with about 100 transistors per chip. The size of main memories reached about 4 megabytes. Magnetic disk technology also improved, and it became feasible to have drives with capacities up to 100 MB. CPUs became much more powerful, capable of carrying out 1 million instructions per second (MIPS).

The third generation computers using integrated circuits proved to be highly reliable, relatively inexpensive and faster. Less human labor was required at the assembly stage. Examples of some mainframe computers developed during this generation are the IBM-360 series, ICL-1900 series, IBM-370/168, ICL-2900, Honeywell Model 316 and Honeywell-6000 series. Some minicomputers developed during this phase are the ICL-2903, manufactured by International Computers Limited, the CDC-1700, manufactured by Control Data Corporation, and the PDP-11/45 (Programmed Data Processor – 11/45).

Computers by this time had found a place in other areas like education, surveys, small business, estimation and analysis, along with their previous usage areas, that is, science and engineering.

 

Fourth Generation Computers (1976-present)

The advent of the microprocessor chip marked the beginning of the fourth generation computers. Medium scale integrated (MSI) circuits yielded to Large Scale (LSI) and Very Large Scale Integrated (VLSI) circuits packing about 50,000 transistors in a chip. Semiconductor memories replaced magnetic core memories. The emergence of the microprocessor (a CPU on a single chip) led to the emergence of extremely powerful personal computers. Computer costs came down so rapidly that computers found places in most offices and homes. The faster accessing and processing speeds and increased memory capacity helped in the development of much more powerful operating systems.

The second decade (1986-present) of the fourth generation saw a great increase in the speed of microprocessors and the size of main memory. The speed of microprocessors and the size of main memory and hard disk went up by a factor of 4 every 3 years. Many of the mainframe CPU features became part of the microprocessor architecture in the 90s. In 1995 the most popular CPUs were the Pentium, PowerPC etc. RISC (Reduced Instruction Set Computer) microprocessors are also preferred in powerful servers for numeric computing and file services.

Hard disks are available in sizes up to 80 GB. For larger storage, RAID technology (Redundant Array of Inexpensive Disks) gives storage up to hundreds of GB. CD-ROMs (Compact Disc-Read Only Memory) are also becoming popular day by day. The CD-ROMs of today can store up to 650 MB of information.

Computer networks came of age and are one of the most popular ways of interacting with computers, linking chains of millions of users. Computers are being applied in various areas like simulation, visualization, parallel computing, virtual reality, multimedia etc.

 

Fifth Generation Computers (Coming Generation)

Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and supercomputers is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.

The most noticeable characteristics of these computers will be the ability to apply previously gained knowledge, draw conclusions and then execute a task. The computer will, in short, simulate the human ability to reason.

Computers will have to be able to classify information, search large databases rapidly, plan, apply the rules which humans regard as clear thinking, make decisions and learn from their mistakes. Input devices for future computers could also include speech and visual recognition.

 

Applications for Fifth-Generation Computers

Intelligent robots that could ‘see’ their environment (visual input, for example from a video camera) and could be programmed to carry out certain tasks without step-by-step instructions. Such a robot should be able to decide for itself how the task should be accomplished, based on the observations it makes of its environment.

 

Applications examples of 5th generation computers are:

Intelligent systems that could control the route of a missile, and defense systems that could fend off attacks; word processors that could be controlled by means of speech recognition; programs that could translate documents from one language to another.

 

Evolution of Computers

Q4. Write a note on evolution of computers.

Ans. The concept of a computer did not materialize overnight. Just as the growth and development of mature biological species normally took place in fits and starts over the ages, the computer also took thousands of years to mature.

Ancient people used stones for counting, made scratches on a wall or tied knots in a rope to record information. But all these were manual computing techniques. Attempts had long been going on to develop faster computing devices, and the first achievement was the abacus, the pioneer computing device used by man. Let us take a look at the development of the computer through various stages.

 

  1. Abacus

Around 3000 years before the birth of Jesus Christ, the Mesopotamians quite unknowingly laid the foundation of the computer era. They devised the earliest form of a bead-and-wire counting machine, which subsequently came to be known as the abacus. The Chinese improved upon the abacus so that they could count and calculate faster.

An abacus consists of beads divided into two parts, movable on the rods of those parts. Addition, multiplication etc. of numbers is done by using the place value of the digits of the numbers and the position of the beads in the abacus.

The abacus
Figure: The ABACUS
  2. Napier’s ‘Logs’ and ‘Bones’

John Napier (1550-1617) developed the idea of the logarithm. He used ‘logs’ to transform multiplication problems into addition problems. Napier’s logs later became the basis for a well known invention – the computing machine known as the ‘slide rule’ (invented in 1661). Napier also devised a set of numbering rods known as Napier’s Bones. He could perform both multiplication and division with these ‘Bones’.
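The principle behind Napier’s logs can be checked in a few lines of Python (a modern illustration of the mathematics, not of Napier’s actual device or tables):

```python
import math

# Logarithms turn multiplication into addition: log(a*b) = log(a) + log(b).
a, b = 37.0, 59.0
sum_of_logs = math.log(a) + math.log(b)   # the easier "addition problem"
product = math.exp(sum_of_logs)           # undoing the log recovers a * b
print(round(product, 6))                  # -> 2183.0
```

This is exactly how log tables were used: look up the two logs, add them, then look up the antilog of the sum.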

The Napier's Bones
Figure: The Napier’s Bones

 

 

The idea of the logarithm, published in 1614, notably reduced the tedium of repetitive calculations.

 

  3. Pascal’s Adding Machine

Blaise Pascal, a French mathematician, invented a machine in 1642, made up of gears, which was used for adding numbers quickly. This machine was named the Adding Machine (also known as the Pascaline) and was capable of addition and subtraction. It worked on the clockwork mechanism principle. The Adding Machine consisted of numbered toothed wheels having unique position values. The rotation of the wheels controlled the addition and subtraction operations. The machine was capable of automatic carry-transfer.

Pascal's Adding Machine
Figure: Pascal’s Adding Machine

 

  4. Leibnitz’s Calculator

Gottfried Leibnitz, a German mathematician, improved upon the adding machine and constructed a new machine in 1671 that was able to perform multiplication and division as well. This machine performed multiplication through repeated addition of numbers. Leibnitz’s machine used stepped cylinders, each with nine teeth of varying lengths, instead of the wheels used by Pascal.

Leibnitz's Calculator
Figure: Leibnitz’s Calculator

 

  5. Jacquard’s Loom

Joseph Jacquard manufactured punched cards at the end of the American Revolution and used them to control looms in 1801. Thus the entire weaving process was automatic, the whole operation being under a program’s control. With the historic invention of punched cards, the era of storing and retrieving information began, which greatly influenced later inventions and advancements.

Jacquard's Loom
Figure: Jacquard’s Loom
  6. Babbage’s Difference Engine

Charles Babbage, a professor of mathematics, developed a machine called the Difference Engine in the year 1822. This machine was expected to calculate logarithmic tables to a high degree of precision. The Difference Engine was made to calculate various mathematical functions. The machine was capable of polynomial evaluation by finite differences, and its operation was an automatic multistep operation.

 Babbage's Difference Engine

Figure: Babbage’s Difference Engine

 

  7. Babbage’s Analytical Engine

In 1833, Charles Babbage started designing an Analytical Engine which was to become a real ancestor of the modern day computer. With the methodical design of his Analytical Engine, Babbage meticulously established the basic principles on which today’s computers work. The Analytical Engine was capable of performing all four arithmetic operations as well as comparison. It had a number of features startlingly similar to those in today’s electronic computers. He included the concepts of a central processor, storage area, memory and input-output devices in his design. The two revolutionary innovations incorporated in the Analytical Engine were comparison and modification of stored information. The first innovation enabled the machine to compare quantities and then decide which of the instruction sequences to follow. The second permitted the results of a calculation to change numbers and instructions already stored in the machine. Owing to the lack of technology at the time, the Analytical Engine was never built; its design remained conceptual.

 Babbage's Analytical Engine

Figure: Babbage’s Analytical Engine

 

His great inventions of difference engine and analytical engine earned Charles Babbage the title ‘Father of Modern Computers’ – a fitting tribute to him.

 

  8. Hollerith’s Machine

In 1887, an American named Herman Hollerith (1860-1929) fabricated what Charles Babbage had dreamt of: the first electromechanical punched-card tabulator, which used punched cards for input, output and instructions. This machine was used by the American Department of the Census to compile their 1890 census data and was able to complete the compilation in 3 years, a task which earlier used to take around 10 years.

 Hollerith's Machine

Figure: Hollerith’s Machine

 

  9. Mark-I

Prof. Howard Aiken (1900-1973) of the U.S.A. constructed in 1943 an electromechanical computer named Mark-I, which could multiply two 10-digit numbers in 5 seconds – a record at that time. Mark-I was the first machine which could work according to pre-programmed instructions automatically, without any manual intervention. This was the first operational general purpose computer.

 Mark-I Computer

Figure: Mark-I Computer

Contribution of Computers towards our Society and its Future Trends

Q3. What is the contribution of computers towards our society? What are the advantages and disadvantages of computer data processing over manual data processing? Can you anticipate future trends in computer data processing? What are they?

Ans. Computers have made great inroads into our everyday life and thinking. They are put to use for all sorts of applications, ranging from complex calculations in the field of frontline research and engineering simulations down to teaching, printing books and recreational games. The ease with which computers can process data, and store and retrieve it painlessly, has made them inevitable in office and business environments. The areas of application of computers are confined only by the limits of human creativity and imagination. In fact, any task that can be carried out systematically can be performed by a computer. Therefore, it is essential for every educated person today to know about a computer, its strengths, its weaknesses and its internal structure.

A computer is an electronic device that can perform a variety of operations in accordance with a set of instructions called program.


 

Following are the advantages of computer data processing over manual data processing:

  1. Speed

Computers are much faster than human beings. A computer can perform in minutes a task that might take days if performed manually. A modern computer can execute millions of instructions in one second.

  2. High Storage Capacity

Computers can store a large amount of information in a very small space. A CD-ROM of 4.7-inch diameter can store all 33 volumes of the Encyclopedia Britannica and still have room for more information. Bubble memories can store 6,250,000 bits per square centimeter.

  3. Accuracy

Computers can perform all the calculations and comparisons accurately provided the hardware does not malfunction.

  4. Reliability

Computers are immune to tiredness, boredom and fatigue. Thus they are more reliable than human beings.

  5. Versatility

Computers can perform repetitive jobs efficiently. They can solve labor problems or do hazardous jobs in hostile environments. They can even work in areas where the human brain may err, for instance observing the motion of very fast moving objects. They can also work with different types of data and information like graphics, audio, video, characters etc.

 

Following are the disadvantages of computer data processing over manual data processing:

  1. Lack of Decision Making Power

Computers cannot decide on their own. They do not possess this power which is a great asset of human beings.

  2. IQ Zero

Computers are dumb machines with zero IQ. They need to be told each and every step, however minute it may be.

These limitations of computers are precisely the strong points of human beings. Thus, computers and human beings work in collaboration to make a perfect pair.

 

Future Trends in Computer Data Processing

Computing devices based on artificial intelligence are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and supercomputers is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in the years to come. The goal of future computers is to develop devices that respond to natural language input and are capable of learning and self-organization.

The most noticeable characteristics of these computers will be the ability to apply previously gained knowledge, draw conclusions and then execute a task. The computer will, in short, simulate the human ability to reason.

Computers will have to be able to classify information, search large databases rapidly, plan, apply the rules which humans regard as clear thinking, make decisions and learn from their mistakes. Input devices for future computers could also include speech and visual recognition.

Hardware and Software

Q2. What do you understand by the terms ‘hardware’ and ‘software’? What is their significance? Classify software into its component softwares and give examples of each subtype of software.

Ans. Hardware

Hardware represents the physical and tangible (touchable) components of the computer that is, the components that can be seen and touched. Or we can say that collectively, the electronic, electrical and mechanical equipment that makes up a computer is called Hardware. Input devices, output devices, CPU, floppy disk, hard disk etc. are examples of computer hardware.

Here you should also know another term (which is, of course, hardware) – peripherals. Peripherals are the devices that surround the system unit; for example, the keyboard, mouse, speakers, printers and monitors are peripherals.

A computer consists of five primary hardware components:

  1. Input Devices
  2. CPU (Central Processing Unit)
  3. Memory
  4. Output Devices
  5. Storage Devices

These components work together with software to perform calculations, organize data, and communicate with other computers.

Software

Software represents the set of programs that govern the operation of a computer system and make the hardware run.

Software can be broadly classified into two categories:

(a) System Software

(b) Application Software

System software is the set of programs that govern the operation of a computer system and make the hardware run.

System software can be broadly classified into two categories:

(i)        Operating System

(ii)       Language Processors

 

Operating System

Hardware is nothing but finely designed machinery, and a machine is ultimately only a machine: it always has to be made to work. In the case of computers, either we do that ourselves, or ‘something else’ does it for us. This ‘something else’ is nothing but our very own ‘Operating System’.

An operating system is a program which acts as an interface between a user and the hardware (that is, all computer resources).

An Operating System is just like a secretary. A boss gives orders to his secretary, and the secretary does all the work for him, deciding on his own how, what and when to do it. In the same way, we pass our orders/requests to the operating system, and the operating system carries them out for us, itself deciding how, what and when to do. The primary goal of an operating system is thus to make the computer system convenient to use; the secondary goal is to use the computer hardware in an efficient manner.

An operating system is an important component of a computer system which controls all other components of the computer system.

Major components of a computer system are:

  1. The Hardware
  2. The Operating System
  3. The Application program routines (Compiler, linkers, database management systems, utility programs)
  4. The Humanware (users)

Where the hardware provides the basic computing resources, the application program routines define the ways in which these resources are used to solve the computing problems of the users, and the operating system controls and coordinates the use of the hardware among the various application programs for the various users.

By telling the computer how to perform common functions, the operating system frees the application software to concentrate on producing information. Following figure illustrates how the operating system insulates the user and application software from computer hardware.

 Role of OS

Figure: Role of OS

 

The operating system performs the following functions:

(i) provides the instructions to prepare the user interface, that is, the way of interacting with the user, whether through typed commands or through graphical symbols.

(ii) loads the necessary programs (into the computer memory) which are required for proper computer functioning.

(iii) coordinates how programs work with the CPU, keyboard, mouse, printer and other hardware, as well as with other software.

(iv) manages the way information is stored on and retrieved from disks.

 

There are various types of OSs – single user OS, multiuser OS, batch processing OS, multiprocessing OS etc.

As the names suggest, a single user OS supports a single user whereas a multiuser OS can support multiple users. A batch processing OS processes batches (groups) of jobs (processes given to it), and a multiprocessing OS is capable of handling multiple CPUs at the same time.

 

Language Processors

Programmers prefer to write their programs in one of the High Level Languages (HLLs) because it is much easier to code in such languages. However, the computer does not understand any language other than its own machine language (binary language); therefore, it becomes necessary to process an HLL program so as to make it understandable to the computer. The system programs which perform this job are called language processors. The language processors are given below:

(i) Assembler

This language processor converts the program written in assembly language into machine language.

(ii) Interpreter

This language processor converts an HLL program into machine language by converting and executing it line by line. If there is an error in any line, it reports it at once, and program execution cannot resume until the error is rectified. The interpreter must be present in memory every time the program is executed, as each time the program is run it is first interpreted and then executed. For error debugging an interpreter is very useful, as it reports errors immediately. But once the errors are removed, memory is occupied unnecessarily, as the interpreter has to remain present in memory.

(iii) Compiler

It also converts an HLL program into machine language, but in a different manner. It converts the entire HLL program in one go and reports all the errors of the program along with the line numbers. After all the errors are removed, the program is recompiled, and after that the compiler is no longer needed in memory, as the object program is available.

Therefore, combining an interpreter and a compiler gives the best arrangement for translating an HLL program into object code. The interpreter can be used for error removal, and after all the errors are removed the program can be compiled, enabling the removal of the language translator from memory.
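The difference between the two translation styles can be mimicked in Python itself (a rough sketch only; real language processors translate to machine code, whereas `exec`/`compile` here work on Python bytecode):

```python
# "Interpreter" style: translate and execute one line at a time.
source_lines = ["x = 2", "y = x * 10"]

env = {}
for line in source_lines:
    exec(line, env)        # each line is translated afresh on every run

# "Compiler" style: translate the whole program once into a code object,
# then run the translated result as many times as needed.
code_object = compile("\n".join(source_lines), "<demo>", "exec")
env2 = {}
exec(code_object, env2)

print(env["y"], env2["y"])   # -> 20 20
```

Both styles produce the same result; they differ in when translation happens and whether the translator must stay around afterwards.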

 

Application Software

This type of software pertains to one specific application. For instance, software that can perform railway reservation functions cannot prepare result for a school.

Application software is the set of programs necessary to carry out operations for a specified application.

These are the programs written by programmers to enable computer to perform a specific task such as inventory control, medical accounting, financial accounting, result preparation, railway reservation, billing etc. Application software can further be subdivided into two categories:

  1. Customized Application Software

This type of software is tailor-made according to a user’s requirements. It is developed to meet all the requirements specified by that user. However, it cannot be directly installed at another user’s workplace, as the second user’s requirements may differ from the first one’s and the software may not fit the new user’s needs.

  2. General Application Software

This type of software is developed keeping in mind the general requirements for carrying out a specific task. Many users can use it simultaneously as it fulfills the general requirements.

Basic Architecture of Computer along with the Functioning of each of its Subunits

Q1. What is a computer? Explain its basic architecture along with the functioning of each of its subunits.

Ans. A computer is an electronic device that can perform a variety of operations in accordance with a set of instructions called program.

Basic Architecture of Computer
Basic Structure of a Computer

 

Input Unit

The input unit is formed by the input devices attached to the computer.

Examples of input devices and media are: keyboard, mouse, magnetic ink character reader (MICR), optical mark reader (OMR), optical character reader (OCR), joystick etc.

The input unit is responsible for taking input and converting it into a computer-understandable form (the binary code). Since a computer operates on electricity, it can understand only the language of electricity, that is, ON or OFF, high voltage or low voltage. That means a computer can understand two states, ON/OFF or High/Low voltage, which the binary language represents using just two symbols: 1 for ON and 0 for OFF.

An input unit takes the input and converts it into binary form so that it can be understood by the computer.
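As a small illustration in Python (using the standard character codes, independent of any particular input device), a typed character is reduced to a pattern of 1s and 0s:

```python
# A character typed at the keyboard reaches the computer as a binary code.
ch = "A"
code = ord(ch)               # numeric code of the character (65 for 'A')
bits = format(code, "08b")   # the same value written as eight binary digits
print(code, bits)            # -> 65 01000001
```

Every kind of input (characters, numbers, images, sound) ultimately goes through a conversion of this sort before the computer can process it.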

 

Central Processing Unit (CPU)

The CPU is the control centre for a computer. It guides, directs and governs its performance. It is the brain of the computer. The CPU has two components which are responsible for different functions. These two components are its Control Unit (CU) and Arithmetic Logic Unit (ALU).

 

Arithmetic Logic Unit (ALU)

The ALU performs all four arithmetic operations (+, -, *, /) and some logical operations (<, >, =, <=, >=, <>). When two numbers are to be added, they are sent from memory to the ALU, where the addition takes place, and the result is put back into memory. Other arithmetic operations are performed in the same way.

For logical operations also, the numbers to be compared are sent from memory to ALU where the comparison takes place and the result is returned to the memory. The result of a logical operation is either TRUE or FALSE. These operations provide the capability of decision-making to the computer.

 

Control Unit (CU)

The Control Unit (CU) controls and guides the interpretation, flow and manipulation of all data and information. The CU sends control signals until the required operations are done properly by the ALU and memory. Another important function of the CU is program execution, that is, carrying out all the instructions stored in the program. The CU gets program instructions from memory and executes them one after the other. After an instruction reaches the CU from memory, it is decoded and interpreted, that is, it is determined which operation is to be performed. The requested operation is then carried out. After the work of this instruction is completed, the control unit signals memory to send the next instruction in sequence to the CU.

The control unit even controls the flow of data from input devices to memory and from memory to output devices.
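The fetch-decode-execute cycle described above can be sketched as a small Python loop. The instruction names (`LOAD`, `ADD`, `STORE`, `HALT`) and the program are made up for illustration; a real CU works in hardware, but the sequence of steps is the same.

```python
# A toy "program" held in memory: each entry is (opcode, operand).
memory = [("LOAD", 5), ("ADD", 3), ("STORE", None), ("HALT", None)]

def run(program):
    """Simulate the CU's fetch-decode-execute cycle."""
    accumulator = 0
    stored = None
    pc = 0  # program counter: which instruction to fetch next
    while True:
        opcode, operand = program[pc]  # fetch the instruction
        pc += 1                        # point to the next one in sequence
        if opcode == "LOAD":           # decode and execute
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand
        elif opcode == "STORE":
            stored = accumulator       # result goes back to "memory"
        elif opcode == "HALT":
            return stored

print(run(memory))  # 8
```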

 

Output Unit

The output unit is formed by the output devices attached to the computer. The output coming from the CPU is in the form of electronic binary signals, which need conversion into a form easily understood by human beings, that is, characters, graphics or audio-visuals. This conversion is performed by the output units. Some popular output devices are the VDU (Visual Display Unit), printer, plotter, speech synthesizer and coder.

 

The Memory

The memory of a computer is like a predefined working place, where it temporarily keeps information and data to facilitate its performance. When a task is completed, the memory is cleared and the space becomes available for the next task. When the power is switched off, everything stored in the memory is erased and cannot be recalled.

The memory of computer is often called main memory or primary memory.

The memory cell may be defined as a device which can store a symbol selected from a set of symbols.

Each of these cells is further broken down into smaller parts known as bits. A bit is a binary digit, that is, either 0 or 1. A number of bits taken together, in combination, are used to store data and instructions.

 

Memory Cells

 

 A bit is an elementary unit of the memory.

A group of 8 bits is called a byte and a group of 4 bits is called a nibble.

One byte is the smallest unit which can represent a data item or a character. Other units of memory are KB, MB, GB, and TB.

One KB (Kilobyte) means 2¹⁰ bytes, that is, 1024 bytes. One MB (Megabyte) means 2¹⁰ KB, that is, 1024 × 1024 bytes. One GB (Gigabyte) means 2¹⁰ MB, that is, 1024 × 1024 × 1024 bytes. One TB (Terabyte) means 2¹⁰ GB, that is, 1024 GB.
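The unit conversions above can be checked directly, since each step is a factor of 2¹⁰ = 1024:

```python
# Each memory unit is 2**10 (i.e. 1024) times the previous one.
KB = 2 ** 10       # 1,024 bytes
MB = 2 ** 10 * KB  # 1,024 x 1,024 bytes
GB = 2 ** 10 * MB  # 1,024 x 1,024 x 1,024 bytes
TB = 2 ** 10 * GB  # 1,024 GB

print(KB, MB)           # 1024 1048576
print(TB == 1024 * GB)  # True
```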

 

Since the computer’s main memory (primary memory) is temporary, secondary memory is needed to store data and information permanently for later use. Some of the most common secondary storage media are the floppy diskette, the hard disk and CD-RWs. Secondary memory devices are also known as storage devices.

Digital, Analog and Hybrid Computers

Q35. How are digital, analog and hybrid computers different from one another?

Ans. Digital Computer

Digital computers work upon discrete (discontinuous) data. They convert the data into digits (binary digits 0 and 1), and all operations are carried out on these digits at extremely fast rates. A digital computer basically knows how to count digits and add digits. Digital computers are much faster than analog computers and far more accurate. Computers used for business and scientific applications are digital computers.

 

Analog Computer

In analog computers, continuous quantities are used. Computations are carried out with physical quantities such as voltage, length, current and temperature. The devices that measure such quantities are analog devices, for example the voltmeter and ammeter. Analog computers operate by measuring rather than counting. Their main advantage is that all calculations take place in parallel, which makes them faster; their accuracy, however, is poor compared to their digital counterparts. Analog computers are mostly used in engineering and scientific applications. An electronic weighing scale is an example of an analog computer.

 

Hybrid Computer

Hybrid computers combine the best qualities of both digital and analog computers. In these computers some calculations take place in an analog manner and the rest in a digital manner. Hybrid computers are best suited for hospitals, where the analog part measures a patient’s heartbeat, blood pressure, temperature and other vital signs, and the digital part then monitors those vital signs. Hybrid computers are also used in weather forecasting.

 

Super Computers and Name of a Supercomputer installed in India

Q34. What do you understand by the term ‘Super Computers’? Give the name of a supercomputer installed in India.

Ans. Supercomputers are the most powerful of the digital computers. They consist of several processors running together, making them immensely fast and powerful. These computers are capable of handling huge amounts of calculation that are beyond human capabilities. Supercomputers can perform billions of instructions per second; some of today’s supercomputers have a computing capability equal to that of 40,000 microcomputers. A Japanese supercomputer has calculated the value of Pi (π) to 16 million decimal places. Supercomputers are mainly used in applications such as weather forecasting, nuclear science research, aerodynamic modeling, seismology and meteorology.

 

Following are a few names of supercomputers installed in India:

  1. CRAY XMP-14, brought from the US.
  2. Flosolver Mk3, in use at the Centre for Atmospheric Sciences of the Indian Institute of Science (IISc), Bangalore.
  3. PACE (Processor for Aerodynamic Computation and Evaluation), developed by the Hyderabad-based Advanced Numerical Research and Analysis Group (ANURAG).
  4. PARAM, developed by the Pune-based Centre for Development of Advanced Computing (C-DAC).

Three Advantages of Computer Data Processing over Manual Methods

Q33. List at least three advantages of computer data processing over manual methods.

Ans. Following are the three advantages of computer data processing over manual methods:

  1. Speed

Computers are much faster than human beings. A computer can perform in minutes a task that might take days if done manually. A modern computer can execute millions of instructions in one second.

  2. High Storage Capacity

Computers can store a large amount of information in a very small space. A CD-ROM of 4.7 inch diameter can store all 33 volumes of the Encyclopaedia Britannica and still have room for more information. Bubble memories can store 6,250,000 bits per square centimetre.

  3. Accuracy

Computers can perform all the calculations and comparisons accurately provided the hardware does not malfunction.