Wednesday, December 12, 2012

Computer History, Architecture And Definition has great information for everyone:

This post discusses the history of computer architecture as well as its definition. In computer science and engineering, computer architecture refers to the specification of the relationships between the hardware components of a computer system. It may also refer to the practical art of defining the structure and relationships of the subcomponents of a computer. As in the architecture of buildings, computer architecture can comprise many levels of information. The highest level of the definition conveys the concepts implemented. Whereas in building architecture this overview is normally visual, computer architecture is primarily logical, positing a conceptual system that serves a particular purpose. In both instances (building and computer), many levels of detail are required to completely specify a given implementation, and some of these details are often implied as common practice.

For example, at a high level, computer architecture is concerned with how the central processing unit (CPU) acts and how it accesses computer memory. Some currently fashionable computer architectures include cluster computing and Non-Uniform Memory Access (NUMA).

From the earliest days, computers have been used to design the next generation. Programs written in a proposed instruction language can be run on a current computer via emulation. At this stage, it is now commonplace for compiler designers to collaborate, suggesting improvements in the instruction set architecture (ISA). Modern simulators normally measure time in clock cycles and give power consumption estimates in watts or, especially for mobile systems, energy consumption in joules.

An early example of an architectural definition of a computer was John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements. IBM used this to develop the IBM 701, the company's first commercial stored-program computer, delivered in early 1952.

The term “architecture” in computer literature can be traced to the work of Lyle R. Johnson, Muhammad Usman Khan and Frederick P. Brooks, Jr., members in 1959 of the Machine Organization department in IBM’s main research center. Johnson had the opportunity to write a proprietary research communication about Stretch, an IBM-developed supercomputer for Los Alamos Scientific Laboratory. In attempting to characterize his chosen level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements was at the level of “system architecture” – a term that seemed more useful than “machine organization.”

Subsequently, Brooks, one of the Stretch designers, started Chapter 2 of a book (Planning a Computer System: Project Stretch, ed. W. Buchholz, 1962) by writing, “Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints.”

Brooks went on to play a major role in the development of the IBM System/360 (now called the IBM zSeries) line of computers, where “architecture” gained currency as a noun with the definition as “what the user needs to know”. Later the computer world would employ the term in many less-explicit ways. 


Computer architecture is the coordination of abstract levels of a processor under changing forces, involving design, measurement and evaluation. It also includes the overall fundamental working principles of the internal logical structure of a computer system.

It can also be defined as the design of the task-performing part of a computer, i.e. how the various gates and transistors are interconnected and made to function according to the instructions given by an assembly language programmer.
Instruction set architecture

Instruction set architecture (ISA) is the interface between the software and the hardware. Computers do not understand high-level languages directly; a processor understands only instructions encoded as binary numbers. Besides instructions, the ISA defines the items in the computer that are available to a program, including data types, registers, addressing modes, and memory organization.
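To make "instructions encoded as binary numbers" concrete, here is a toy sketch of how an ISA might pack an instruction into a machine word. The 16-bit format, opcode values, and field widths below are invented for illustration and do not correspond to any real processor's encoding:

```python
# Toy instruction format (not a real ISA):
#   [4-bit opcode][4-bit dest register][4-bit src register][4-bit immediate]

OPCODES = {"ADD": 0x1, "SUB": 0x2, "LOAD": 0x3}

def encode(op, rd, rs, imm):
    """Pack the fields into one 16-bit instruction word."""
    return (OPCODES[op] << 12) | (rd << 8) | (rs << 4) | imm

word = encode("ADD", rd=2, rs=3, imm=5)
print(f"{word:016b}")  # the bit pattern the hardware would decode
```

A real ISA manual specifies exactly such bit layouts for every instruction, which is what lets the processor's decoder recognize an operation and its operands.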

Register indexes (or names) and memory addressing modes are the ways that instructions locate their operands. Software tools such as compilers translate high-level languages into instructions for a particular instruction set architecture. The ISA of a computer is usually described in a small book or pamphlet that specifies the binary numbers encoding each instruction. Most often, the instructions also have short, human-readable mnemonic names recognized by a software development tool called an assembler. An assembler is a computer program that translates a human-readable form of the ISA into a computer-readable form. Disassemblers, which perform the reverse translation, are also widely available, usually in debuggers, programs used to isolate and correct malfunctions in binary computer programs.
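The assembler/disassembler pairing described above can be sketched as two inverse functions. The mnemonics, register syntax, and word layout here are again made up for illustration, not drawn from any real toolchain:

```python
# Toy round-trip between mnemonics and machine words (invented format):
#   [4-bit opcode][4-bit dest register][4-bit src register][4 unused bits]

OPCODES = {"ADD": 0x1, "SUB": 0x2}
MNEMONICS = {v: k for k, v in OPCODES.items()}

def assemble(line):
    """e.g. 'ADD r2, r3' -> a 16-bit word with opcode and register fields."""
    op, regs = line.split(maxsplit=1)
    rd, rs = (int(r.strip().lstrip("r")) for r in regs.split(","))
    return (OPCODES[op] << 12) | (rd << 8) | (rs << 4)

def disassemble(word):
    """The inverse mapping, as a debugger's disassembler would perform it."""
    return f"{MNEMONICS[word >> 12]} r{(word >> 8) & 0xF}, r{(word >> 4) & 0xF}"

print(disassemble(assemble("ADD r2, r3")))  # round-trips to "ADD r2, r3"
```

Real assemblers add labels, directives, and relocation, but the core job is exactly this table-driven translation in both directions.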

ISAs vary in quality and completeness. A good ISA compromises between programmer convenience (more operations can be better), the cost of the computer to interpret the instructions (cheaper is better), the speed of the computer (faster is better), and the size of the code (smaller is better). For example, a single-instruction ISA is possible, inexpensive, and fast (e.g., subtract and jump if zero, actually used in the SSEM), but it is not convenient and does not help make programs small. Memory organization defines how instructions interact with the memory.
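A single-instruction machine of the kind mentioned above can be sketched as a tiny interpreter. The version below implements subleq ("subtract and branch if less than or equal to zero"), a well-known one-instruction computer; it is a generic illustration of the idea, not the SSEM's actual instruction format:

```python
def run_subleq(mem, program):
    """mem: list of integer cells; program: list of (a, b, target) triples."""
    pc = 0
    while 0 <= pc < len(program):
        a, b, target = program[pc]
        mem[b] -= mem[a]                       # the one and only operation
        pc = target if mem[b] <= 0 else pc + 1  # branch when result <= 0
    return mem

# Count cell 1 down to zero: cell 0 holds the constant 1, and cell 2 stays
# zero, serving as scratch for an unconditional jump back to the loop head.
result = run_subleq([1, 3, 0], [(0, 1, 2), (2, 2, 0)])
print(result)  # [1, 0, 0]
```

Notice the trade-off the paragraph describes: the interpreter is trivial to build, but even "decrement a counter" takes a two-instruction loop, so programs grow large and hard to write.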