Although the origins of parallel computing go back to the last century, it was only in the 1970s that parallel and vector computers became available to the scientific community. The first of these machines (the 64-processor Illiac IV and the vector computers built by Texas Instruments, Control Data Corporation, and then Cray Research) had a somewhat limited impact. They were few in number and available mostly to workers in a few government laboratories. By now, however, the trickle has become a flood. There are over 200 large-scale vector computers now installed, not only in government laboratories but also in universities and in an increasing diversity of industries. Moreover, the National Science Foundation's Supercomputing Centers have made large vector computers widely available to the academic community. In addition, smaller, very cost-effective vector computers are being manufactured by a number of companies. Parallelism in computers has also progressed rapidly. The largest supercomputers now consist of several vector processors working in parallel. Although the number of processors in such machines is still relatively small (up to 8), it is expected that an increasing number of processors will be added in the near future (to a total of 16 or 32). Moreover, there are myriad research projects to build machines with hundreds, thousands, or even more processors. Indeed, several companies are now selling parallel machines, some with hundreds or even tens of thousands of processors.
This book introduces the basic concepts of parallel and vector computing in the context of an introduction to numerical methods.
In this text, students of applied mathematics, science and engineering are introduced to fundamental ways of thinking about the broad context of parallelism.
This book presents a unified treatment of recently developed techniques and current understanding of solving systems of linear equations and large-scale eigenvalue problems on high-performance computers.
The recent availability of advanced-architecture computers has had a very significant impact on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra.
This book presents the most important parallel algorithms for the solution of linear systems.
This new edition includes thoroughly revised chapters on matrix multiplication problems and parallel matrix computations, expanded treatment of CS decomposition, an updated overview of floating point arithmetic, a more accurate rendition of ...
For more general Bi-CGSTAB(ℓ) schemes see [174, 177]. x_0 is an initial guess, r_0 = b − A x_0. Choose r̃_0, for example r̃_0 = r_0; set ρ_0 = 1, u = 0, α = 0, ω_2 = 1. For i = 0, 2, 4, 6, ...: ρ_0 = −ω_2 ρ_0; even Bi-CG step: ρ_1 = (r̃_0, ...
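The excerpt above shows only the setup of a Bi-CGSTAB(2) sweep. For orientation, here is a minimal sketch of the simpler, standard Bi-CGSTAB iteration in Python with NumPy; the function name bicgstab, the tolerance, the iteration cap, and the small test system are illustrative assumptions rather than code from the excerpted source, and breakdown cases (zero inner products) are not handled.

import numpy as np

# Minimal sketch of the standard Bi-CGSTAB iteration (no preconditioning,
# no breakdown checks); an illustration under the assumptions stated above,
# not the Bi-CGSTAB(2) scheme of the excerpt.
def bicgstab(A, b, x0=None, tol=1e-10, max_iter=500):
    x = np.zeros_like(b) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x                  # r_0 = b - A x_0
    r_hat = r.copy()               # shadow residual, e.g. r~_0 = r_0
    rho_old, alpha, omega = 1.0, 1.0, 1.0
    v = np.zeros_like(b)
    p = np.zeros_like(b)
    for _ in range(max_iter):
        rho = r_hat @ r
        beta = (rho / rho_old) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        rho_old = rho
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break                  # converged to the requested tolerance
    return x

# Usage on a small nonsymmetric system (illustrative data)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 2.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(A @ bicgstab(A, b), b))   # expect: True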