If the performance of computers increases by a factor of 10 every 5 years, how long does it take for the performance to double?
To compare the doubling time with the stated number of years for a tenfold increase, the instantaneous performance growth rate is calculated first, using the exponential model

Y_t = Y_0 e^(gt).

A tenfold increase over 5 years means e^(5g) = 10, so g = ln(10)/5 ≈ 0.4605, an instantaneous rate of about 46% per year. The doubling time of computer performance is therefore

Doubling time = ln(2)/g = 5 ln(2)/ln(10) ≈ 1.5 years (Fountain, 2006).
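As a quick check of the arithmetic, a minimal Python sketch (the variable names are illustrative, not from the source):

```python
import math

factor = 10    # performance multiplier over the period
period = 5.0   # length of the period in years

g = math.log(factor) / period    # instantaneous growth rate, ~0.4605 per year
doubling_time = math.log(2) / g  # ln(2)/g, ~1.505 years

print(f"g = {g:.4f} per year")
print(f"doubling time = {doubling_time:.3f} years")
```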
Given a task that can be divided into m subtasks, each one requiring one unit of time, how much time is needed for an m-stage pipeline to process n tasks?
For instance, take m = 3 stages and n = 10 tasks.
Each stage performs 1/m of the work a task would require outside the pipeline.
Time step one takes 1 time unit and produces nothing (one task in the pipe).
Time step two takes 1 time unit and produces nothing (two tasks in the pipe).
Time step three takes 1 time unit and produces one completed task (three tasks in the pipe).
Time step four takes 1 time unit and produces one completed task (still three in the pipe).
Time step five likewise takes 1 time unit and produces one completed task (and so on).
Every time step after the first m - 1 produces one completed task, so an m-stage pipeline needs m - 1 + n time units for n tasks; here, 3 - 1 + 10 = 12. Without the pipeline, producing the 10 results would take n*m = 30 time units (Kirk & Hwu, 2010).
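A minimal Python sketch of this count, under the stated assumption that every stage takes exactly one time unit (the function names are illustrative):

```python
def pipeline_time(m: int, n: int) -> int:
    # The first result appears after m time units; each later
    # time unit completes one more task: m - 1 + n in total.
    return m - 1 + n

def sequential_time(m: int, n: int) -> int:
    # Without a pipeline, each task needs all m units in turn.
    return m * n

m, n = 3, 10
print(pipeline_time(m, n))    # 12 time units
print(sequential_time(m, n))  # 30 time units
```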
What are the 4 ways to program a parallel computer?
The four ways of programming a parallel computer are bit-level, instruction-level, data-level, and task-level parallelism (Fountain, 2006).
Bit-level parallelism focuses on increasing a computer’s word size. A larger word size reduces the number of instructions needed to operate on data values that are wider than the word size (Fountain, 2006).
Instruction-level parallelism rests on the observation that certain instructions in a programme can execute independently, in parallel. If a programme performs operations on distinct data items, those operations can take place at the same time (Fountain, 2006).
Data-level parallelism is based on the idea that a single set of instructions can operate on different sets of data at the same moment. A control unit directs multiple ALUs to carry out the same operation on different data, an arrangement known as Single Instruction Multiple Data (SIMD); a minimal sketch of this style follows below (Fountain, 2006).
Task-level parallelism rests on the idea that different processors can carry out unrelated tasks on different or identical sets of data. Different processors operating on the same set of data are comparable to pipelining in a von Neumann machine (Fountain, 2006).
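To illustrate the data-level (SIMD-style) idea, a minimal Python sketch using NumPy, whose vectorised operations apply one operation across a whole array, mirroring a control unit driving many ALUs; the array values are invented for the example:

```python
import numpy as np

# One operation, many data elements: the single expression below is
# applied to every element of the arrays, SIMD style.
a = np.arange(8, dtype=np.float64)   # [0, 1, ..., 7]
b = np.full(8, 10.0)                 # [10, 10, ..., 10]

c = a + b                            # one elementwise add over all elements
print(c)                             # [10. 11. 12. 13. 14. 15. 16. 17.]
```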
What bounds the number of processors in centralized multiprocessors?
The bandwidth of the shared memory bus bounds the number of processors: in a centralised multiprocessor, every processor accesses a single global memory over a common bus, and contention for that bus limits how many processors can be added before performance stops scaling (Kirk & Hwu, 2010).
Do some research and name at least one commercial computer system for each of the categories in Flynn’s taxonomy. This can be either a currently available computer or one that was available in the past.
• Single Instruction Single Data (SISD): traditional uniprocessor machines, for example the original IBM PC, built around a single Intel 8088 processor.
• Single Instruction Multiple Data (SIMD): an array processor such as the Thinking Machines Connection Machine CM-2.
• Multiple Instruction Single Data (MISD): rarely realised commercially; the Space Shuttle flight control computer is the standard example.
• Multiple Instruction Multiple Data (MIMD): multiprocessor and distributed systems, for example the Intel Paragon.
References
Fountain, T. J. (2006). Parallel computing: Principles and practice. Cambridge University Press.
Kirk, D., & Hwu, W. (2010). Programming massively parallel processors: A hands-on approach. Amsterdam: Morgan Kaufmann.