
*DECK SLPDOC
      SUBROUTINE SLPDOC
C***BEGIN PROLOGUE  SLPDOC
C***PURPOSE  Sparse Linear Algebra Package Version 2.0.2 Documentation.
C            Routines to solve large sparse symmetric and nonsymmetric
C            positive definite linear systems, Ax = b, using
C            preconditioned iterative methods.
C***LIBRARY   SLATEC (SLAP)
C***CATEGORY  D2A4, D2B4, Z
C***TYPE      SINGLE PRECISION (SLPDOC-S, DLPDOC-D)
C***KEYWORDS  BICONJUGATE GRADIENT SQUARED, DOCUMENTATION,
C             GENERALIZED MINIMUM RESIDUAL, ITERATIVE IMPROVEMENT,
C             NORMAL EQUATIONS, ORTHOMIN,
C             PRECONDITIONED CONJUGATE GRADIENT, SLAP,
C             SPARSE ITERATIVE METHODS
C***AUTHOR  Seager, Mark K., (LLNL)
C             User Systems Division
C             Lawrence Livermore National Laboratory
C             PO BOX 808, L-60
C             Livermore, CA 94550
C             (FTS) 543-3141, (510) 423-3141
C             seager@llnl.gov
C***DESCRIPTION
C                                The
C                   Sparse Linear Algebra Package
C
C       @@@@@@@ @ @@@ @@@@@@@@
C       @ @ @ @ @ @ @
C       @ @ @ @ @ @
C       @@@@@@@ @ @ @ @@@@@@@@
C       @ @ @@@@@@@@@ @
C       @ @ @ @ @ @
C       @@@@@@@ @@@@@@@@@ @ @ @
C
C       @ @ @@@@@@@ @@@@@
C       @ @ @ @ @ @@
C       @ @ @@@@@@@ @ @@ @ @ @ @
C       @ @ @ @ @@ @ @@@@@@ @ @ @
C       @ @ @@@@@@@@@ @ @ @ @ @
C       @ @ @ @ @ @@@ @@ @
C       @@@ @@@@@@@ @ @@@@@@@@@ @@@ @@@@@
C
C
C =================================================================
C ========================== Introduction =========================
C =================================================================
C This package was originally derived from a set of iterative
C routines written by Anne Greenbaum, as announced in "Routines
C for Solving Large Sparse Linear Systems", Tentacle, Lawrence
C Livermore National Laboratory, Livermore Computing Center
C (January 1986), pp. 15-21.
C
C This document contains the specifications for the SLAP Version
C 2.0 package, a Fortran 77 package for the solution of large
C sparse linear systems, Ax = b, via preconditioned iterative
C methods. Included in this package are "core" routines to do
C Iterative Refinement (Jacobi's method), Conjugate Gradient,
C Conjugate Gradient on the normal equations, AA'y = b, (where x =
C A'y and A' denotes the transpose of A), BiConjugate Gradient,
C BiConjugate Gradient Squared, Orthomin and Generalized Minimum
C Residual Iteration. These "core" routines do not require a
C "fixed" data structure for storing the matrix A and the
C preconditioning matrix M. The user is free to choose any
C structure that facilitates efficient solution of the problem at
C hand. The drawback to this approach is that the user must also
C supply at least two routines (MATVEC and MSOLVE, say). MATVEC
C must calculate y = Ax, given x and the user's data structure for
C A. MSOLVE must solve the system Mz = r for z (*NOT* r), given r
C and the user's data structure for M (or its inverse). The user
C should choose M so that inv(M)*A is approximately the identity
C and the preconditioning step Mz = r is "easy" to solve. For some
C of the "core" routines (Orthomin, BiConjugate Gradient and
C Conjugate Gradient on the normal equations) the user must also
C supply a matrix transpose times vector routine (MTTVEC, say) and
C (possibly, depending on the "core" method) a routine that solves
C the transpose of the preconditioning step (MTSOLV, say).
C Specifically, MTTVEC is a routine which calculates y = A'x, given
C x and the user's data structure for A (A' is the transpose of A).
C MTSOLV is a routine which solves the system M'z = r for z, given
C r and the user's data structure for M.
C
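As an illustration of this division of labor, here is a minimal sketch in Python (not part of SLAP itself; the names matvec and msolve_jacobi are hypothetical) of a user-supplied y = Ax product and an MSOLVE that solves Mz = r with the simplest choice M = diag(A):

```python
# Hypothetical sketch of the MATVEC/MSOLVE contract, with the matrix
# stored as a dense list of rows and M chosen as diag(A) (Jacobi).

def matvec(a, x):
    """Return y = A*x; the iterative method never sees A directly."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in a]

def msolve_jacobi(a, r):
    """Solve M*z = r for z, where M = diag(A) -- an 'easy' system."""
    return [r_i / a[i][i] for i, r_i in enumerate(r)]

a = [[4.0, 1.0],
     [1.0, 3.0]]
x = [1.0, 2.0]
print(matvec(a, x))                   # y = A*x -> [6.0, 7.0]
print(msolve_jacobi(a, [8.0, 9.0]))   # z_i = r_i / a_ii -> [2.0, 3.0]
```

Any storage scheme works, as long as these two operations are supplied; the "core" routines only ever call them, never look inside the user's data structure.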
C This process of writing the matrix vector operations can be time
C consuming and error prone. To alleviate these problems we have
C written drivers for the "core" methods that assume the user
C supplies one of two specific data structures (SLAP Triad and SLAP
C Column format), see below. Utilizing these data structures we
C have augmented each "core" method with two preconditioners:
C Diagonal Scaling and Incomplete Factorization. Diagonal scaling
C is easy to implement, vectorizes very well and for problems that
C are not too ill-conditioned reduces the number of iterations
C enough to warrant its use. On the other hand, an Incomplete
C factorization (Incomplete Cholesky for symmetric systems and
C Incomplete LU for nonsymmetric systems) may take much longer to
C calculate, but it reduces the iteration count (for most problems)
C significantly. Our implementations of IC and ILU vectorize for
C machines with hardware gather/scatter, but the vector lengths can
C be quite short if the number of non-zeros in a column is not
C large.
C
C =================================================================
C ==================== Supplied Data Structures ===================
C =================================================================
C The following describes the data structures supplied with the
C package: SLAP Triad and Column formats.
C
C ====================== S L A P Triad format =====================
C
C In the SLAP Triad format only the non-zeros are stored. They may
C appear in *ANY* order. The user supplies three arrays of length
C NELT, where NELT is the number of non-zeros in the matrix:
C (IA(NELT), JA(NELT), A(NELT)). If the matrix is symmetric then
C one need only store the lower triangle (including the diagonal)
C and NELT would be the corresponding number of non-zeros stored.
C For each non-zero the user puts the row and column index of that
C matrix element in the IA and JA arrays. The value of the
C non-zero matrix element is placed in the corresponding location
C of the A array. This is an extremely easy data structure to
C generate. On the other hand, it is not very efficient on vector
C computers for the iterative solution of linear systems. Hence,
C SLAP changes this input data structure to the SLAP Column format
C for the iteration (but does not change it back).
C
C Here is an example of the SLAP Triad storage format for a
C nonsymmetric 5x5 Matrix. NELT=11. Recall that the entries may
C appear in any order.
C
C     5x5 Matrix       SLAP Triad format for 5x5 matrix on left.
C                            1  2  3  4  5  6  7  8  9 10 11
C     |11 12  0  0 15|   A: 51 12 11 33 15 53 55 22 35 44 21
C     |21 22  0  0  0|  IA:  5  1  1  3  1  5  5  2  3  4  2
C     | 0  0 33  0 35|  JA:  1  2  1  3  5  3  5  2  5  4  1
C     | 0  0  0 44  0|
C     |51  0 53  0 55|
C
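To make the storage scheme concrete, here is a small sketch (plain Python, hypothetical helper name triad_matvec, not part of SLAP) that takes the Triad arrays for the 5x5 example above and forms y = Ax directly from the (row, column, value) triples:

```python
# SLAP Triad arrays for the 5x5 example (1-based indices, as in SLAP).
# Each (ia[k], ja[k], a[k]) triple is one non-zero; order is arbitrary.
ia = [5, 1, 1, 3, 1, 5, 5, 2, 3, 4, 2]                       # row indices
ja = [1, 2, 1, 3, 5, 3, 5, 2, 5, 4, 1]                       # column indices
a  = [51.0, 12.0, 11.0, 33.0, 15.0, 53.0, 55.0, 22.0, 35.0, 44.0, 21.0]

def triad_matvec(n, ia, ja, a, x):
    """y = A*x for a matrix stored in SLAP Triad format."""
    y = [0.0] * n
    for i, j, v in zip(ia, ja, a):
        y[i - 1] += v * x[j - 1]   # scatter each non-zero into y
    return y

x = [1.0, 1.0, 1.0, 1.0, 1.0]
print(triad_matvec(5, ia, ja, a, x))   # row sums -> [38.0, 43.0, 68.0, 44.0, 159.0]
```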
C ====================== S L A P Column format ====================
C
C In the SLAP Column format the non-zeros are stored counting down
C columns (except for the diagonal entry, which must appear first
C in each "column") and are stored in the real array A. In other
C words, for each column in the matrix first put the diagonal entry
C in A. Then put in the other non-zero elements going down the
C column (except the diagonal) in order. The IA array holds the
C row index for each non-zero. The JA array holds the offsets into
C the IA, A arrays for the beginning of each column. That is,
C IA(JA(ICOL)), A(JA(ICOL)) are the first elements of the ICOL-th
C column in IA and A. IA(JA(ICOL+1)-1), A(JA(ICOL+1)-1) are the
C last elements of the ICOL-th column. Note that we always have
C JA(N+1) = NELT+1, where N is the number of columns in the matrix
C and NELT is the number of non-zeros in the matrix. If the matrix
C is symmetric one need only store the lower triangle (including
C the diagonal) and NELT would be the corresponding number of
C non-zeros stored.
C
C Here is an example of the SLAP Column storage format for a
C nonsymmetric 5x5 Matrix (in the A and IA arrays '|' denotes the
C end of a column):
C
C     5x5 Matrix       SLAP Column format for 5x5 matrix on left.
C                            1  2  3    4  5    6  7    8    9 10 11
C     |11 12  0  0 15|   A: 11 21 51 | 22 12 | 33 53 | 44 | 55 15 35
C     |21 22  0  0  0|  IA:  1  2  5 |  2  1 |  3  5 |  4 |  5  1  3
C     | 0  0 33  0 35|  JA:  1  4  6    8  9   12
C     | 0  0  0 44  0|
C     |51  0 53  0 55|
C
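The same 5x5 example can be traversed column by column using the JA offsets. The sketch below (plain Python, hypothetical name column_matvec; within SLAP this product is provided by SSMV) shows how JA(ICOL) and JA(ICOL+1)-1 delimit each column:

```python
# SLAP Column arrays for the 5x5 example (1-based, diagonal first in
# each column; ja holds NELT+1 = 12 in its last slot).
ia = [1, 2, 5, 2, 1, 3, 5, 4, 5, 1, 3]
ja = [1, 4, 6, 8, 9, 12]
a  = [11.0, 21.0, 51.0, 22.0, 12.0, 33.0, 53.0, 44.0, 55.0, 15.0, 35.0]

def column_matvec(n, ia, ja, a, x):
    """y = A*x for a matrix stored in SLAP Column format."""
    y = [0.0] * n
    for icol in range(1, n + 1):
        # k runs over the non-zeros of column icol: JA(ICOL)..JA(ICOL+1)-1
        for k in range(ja[icol - 1], ja[icol]):
            y[ia[k - 1] - 1] += a[k - 1] * x[icol - 1]
    return y

x = [1.0, 1.0, 1.0, 1.0, 1.0]
print(column_matvec(5, ia, ja, a, x))   # same row sums as the Triad example
```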
C =================================================================
C ====================== Which Method To Use ======================
C =================================================================
C
C BACKGROUND
C In solving a large sparse linear system Ax = b using an iterative
C method, it is not necessary to actually store the matrix A.
C Rather, what is needed is a procedure for multiplying the matrix
C A times a given vector y to obtain the matrix-vector product, Ay.
C SLAP has been written to take advantage of this fact. The higher
C level routines in the package require storage only of the
C non-zero elements of A (and their positions), and even this can
C be avoided, if the user writes his own subroutine for multiplying
C the matrix times a vector and calls the lower-level iterative
C routines in the package.
C
C If the matrix A is ill-conditioned, then most iterative methods
C will be slow to converge (if they converge at all!). To improve
C the convergence rate, one may use a "matrix splitting," or,
C "preconditioning matrix," say, M. It is then necessary to solve,
C at each iteration, a linear system with coefficient matrix M. A
C good preconditioner M should have two properties: (1) M should
C "approximate" A, in the sense that the matrix inv(M)*A (or some
C variant thereof) is better conditioned than the original matrix
C A; and (2) linear systems with coefficient matrix M should be
C much easier to solve than the original system with coefficient
C matrix A. Preconditioning routines in the SLAP package are
C separate from the iterative routines, so that any of the
C preconditioners provided in the package, or one that the user
C codes himself, can be used with any of the iterative routines.
C
C CHOICE OF PRECONDITIONER
C If you are willing to live with either the SLAP Triad or Column
C matrix data structure you can then choose one of two types of
C preconditioners to use: diagonal scaling or incomplete
C factorization. To choose between these two methods requires
C knowing something about the computer you're going to run these
C codes on and how well incomplete factorization approximates the
C inverse of your matrix.
C
C Let us suppose you have a scalar machine. Then, unless the
C incomplete factorization is very, very poor, this is *GENERALLY*
C the method to choose. It will reduce the number of iterations
C significantly and is not all that expensive to compute. So if
C you have just one linear system to solve and "just want to get
C the job done" then try incomplete factorization first. If you
C are thinking of integrating some SLAP iterative method into your
C favorite "production code" then try incomplete factorization
C first, but also check to see that diagonal scaling is indeed
C slower for a large sample of test problems.
C
C Let us now suppose you have a vector computer with hardware
C gather/scatter support (Cray X-MP, Y-MP, SCS-40 or Cyber 205, ETA
C 10, ETA Piper, Convex C-1, etc.). Then it is much harder to
C choose between the two methods. The versions of incomplete
C factorization in SLAP do in fact vectorize, but have short vector
C lengths and the factorization step is relatively more expensive.
C Hence, for most problems (i.e., unless your problem is ill
C conditioned, sic!) diagonal scaling is faster, with its very
C fast set up time and vectorized (with long vectors)
C preconditioning step (even though it may take more iterations).
C If you have several systems (or right hand sides) to solve that
C can utilize the same preconditioner then the cost of the
C incomplete factorization can be amortized over these several
C solutions. This situation gives more advantage to the incomplete
C factorization methods. If you have a vector machine without
C hardware gather/scatter (Cray 1, Cray 2 & Cray 3) then the
C advantages for incomplete factorization are even less.
C
C If you're trying to shoehorn SLAP into your favorite "production
C code" and can not easily generate either the SLAP Triad or Column
C format then you are left to your own devices in terms of
C preconditioning. Also, you may find that the preconditioners
C supplied with SLAP are not sufficient for your problem. In this
C situation we would recommend that you talk with a numerical
C analyst versed in iterative methods about writing other
C preconditioning subroutines (e.g., polynomial preconditioning,
C shifted incomplete factorization, SOR or SSOR iteration). You
C can always "roll your own" by using the "core" iterative methods
C and supplying your own MSOLVE and MATVEC (and possibly MTSOLV and
C MTTVEC) routines.
C
C SYMMETRIC SYSTEMS
C If your matrix is symmetric then you would want to use one of the
C symmetric system solvers. If your system is also positive
C definite, (Ax,x) (Ax dot product with x) is positive for all
C non-zero vectors x, then use Conjugate Gradient (SCG, SSDCG,
C SSICCG). If you're not sure it's SPD (Symmetric and Positive
C Definite) then try SCG anyway and if it works, fine. If you're
C sure your matrix is not positive definite then you may want to
C try the iterative refinement methods (SIR) or the GMRES code
C (SGMRES) if SIR converges too slowly.
C
C NONSYMMETRIC SYSTEMS
C This is currently an area of active research in numerical
C analysis and there are new strategies being developed.
C Consequently take the following advice with a grain of salt. If
C your matrix is positive definite, (Ax,x) (Ax dot product with x)
C is positive for all non-zero vectors x, then you can use any of
C the methods for nonsymmetric systems (Orthomin, GMRES,
C BiConjugate Gradient, BiConjugate Gradient Squared and Conjugate
C Gradient applied to the normal equations). If your system is not
C too ill conditioned then try BiConjugate Gradient Squared (SCGS)
C or GMRES (SGMRES). Both of these methods converge very quickly
C and do not require A' or M' (' denotes transpose) information.
C SGMRES does require some additional storage, though. If the
C system is very ill conditioned or nearly positive indefinite
C ((Ax,x) is positive, but may be very small), then GMRES should
C be the first choice, but try the other methods if you have to
C fine tune the solution process for a "production code". If you
C have a great preconditioner for the normal equations (i.e., M is
C an approximation to the inverse of AA' rather than just A) then
C this is not a bad route to travel. Old wisdom would say that the
C normal equations are a disaster (since it squares the condition
C number of the system and SCG convergence is linked to this number
C of infamy), but some preconditioners (like incomplete
C factorization) can reduce the condition number back below that of
C the original system.
C
C =================================================================
C ======================= Naming Conventions ======================
C =================================================================
C SLAP iterative methods, matrix vector and preconditioner
C calculation routines follow a naming convention which, when
C understood, allows one to determine the iterative method and data
C structure(s) used. The subroutine naming convention takes the
C following form:
C                          P[S][M]DESC
C where
C     P stands for the precision (or data type) of the routine and
C       is required in all names,
C     S denotes whether or not the routine requires the SLAP Triad
C       or Column format (it does if the second letter of the name
C       is S and does not otherwise),
C     M stands for the type of preconditioner used (only appears
C       in drivers for "core" routines), and
C  DESC is some number of letters describing the method or purpose
C       of the routine. The following is a list of the "DESC"
C       fields for iterative methods and their meaning:
C           BCG,BC:        BiConjugate Gradient
C           CG:            Conjugate Gradient
C           CGN,CN:        Conjugate Gradient on the Normal equations
C           CGS,CS:        BiConjugate Gradient Squared
C           GMRES,GMR,GM:  Generalized Minimum RESidual
C           IR,R:          Iterative Refinement
C           JAC:           JACobi's method
C           GS:            Gauss-Seidel
C           OMN,OM:        OrthoMiN
C
C In the single precision version of SLAP, all routine names start
C with an S. The brackets around the S and M designate that these
C fields are optional.
C
C Here are some examples of the routines:
C 1) SBCG: Single precision BiConjugate Gradient "core" routine.
C    One can deduce that this is a "core" routine, because the S and
C    M fields are missing and BiConjugate Gradient is an iterative
C    method.
C 2) SSDBCG: Single precision, SLAP data structure BCG with Diagonal
C    scaling.
C 3) SSLUBC: Single precision, SLAP data structure BCG with
C    incomplete LU factorization as the preconditioning.
C 4) SCG: Single precision Conjugate Gradient "core" routine.
C 5) SSDCG: Single precision, SLAP data structure Conjugate Gradient
C    with Diagonal scaling.
C 6) SSICCG: Single precision, SLAP data structure Conjugate
C    Gradient with Incomplete Cholesky factorization preconditioning.
C
C
C =================================================================
C ===================== USER CALLABLE ROUTINES ====================
C =================================================================
C The following is a list of the "user callable" SLAP routines and
C their one line descriptions. The headers denote the file names
C where the routines can be found, as distributed for UNIX systems.
C
C Note: Each core routine, SXXX, has a corresponding stop routine,
C       ISSXXX. If the stop routine does not have the specific stop
C       test the user requires (e.g., weighted infinity norm), then
C       the user should modify the source for ISSXXX accordingly.
C
C ============================= sir.f =============================
C SIR: Preconditioned Iterative Refinement Sparse Ax = b Solver.
C SSJAC: Jacobi's Method Iterative Sparse Ax = b Solver.
C SSGS: Gauss-Seidel Method Iterative Sparse Ax = b Solver.
C SSILUR: Incomplete LU Iterative Refinement Sparse Ax = b Solver.
C
C ============================= scg.f =============================
C SCG: Preconditioned Conjugate Gradient Sparse Ax=b Solver.
C SSDCG: Diagonally Scaled Conjugate Gradient Sparse Ax=b Solver.
C SSICCG: Incomplete Cholesky Conjugate Gradient Sparse Ax=b Solver.
C
C ============================= scgn.f ============================
C SCGN: Preconditioned CG Sparse Ax=b Solver for Normal Equations.
C SSDCGN: Diagonally Scaled CG Sparse Ax=b Solver for Normal Eqn's.
C SSLUCN: Incomplete LU CG Sparse Ax=b Solver for Normal Equations.
C
C ============================= sbcg.f ============================
C SBCG: Preconditioned BiConjugate Gradient Sparse Ax = b Solver.
C SSDBCG: Diagonally Scaled BiConjugate Gradient Sparse Ax=b Solver.
C SSLUBC: Incomplete LU BiConjugate Gradient Sparse Ax=b Solver.
C
C ============================= scgs.f ============================
C SCGS: Preconditioned BiConjugate Gradient Squared Ax=b Solver.
C SSDCGS: Diagonally Scaled CGS Sparse Ax=b Solver.
C SSLUCS: Incomplete LU BiConjugate Gradient Squared Ax=b Solver.
C
C ============================= somn.f ============================
C SOMN: Preconditioned Orthomin Sparse Iterative Ax=b Solver.
C SSDOMN: Diagonally Scaled Orthomin Sparse Iterative Ax=b Solver.
C SSLUOM: Incomplete LU Orthomin Sparse Iterative Ax=b Solver.
C
C ============================ sgmres.f ===========================
C SGMRES: Preconditioned GMRES Iterative Sparse Ax=b Solver.
C SSDGMR: Diagonally Scaled GMRES Iterative Sparse Ax=b Solver.
C SSLUGM: Incomplete LU GMRES Iterative Sparse Ax=b Solver.
C
C ============================ smset.f ============================
C The following routines are used to set up preconditioners.
C
C SSDS: Diagonal Scaling Preconditioner SLAP Set Up.
C SSDSCL: Diagonally Scales/Unscales a SLAP Column Matrix.
C SSD2S: Diagonal Scaling Preconditioner SLAP Normal Eqns Set Up.
C SS2LT: Lower Triangle Preconditioner SLAP Set Up.
C SSICS: Incomplete Cholesky Decomp. Preconditioner SLAP Set Up.
C SSILUS: Incomplete LU Decomposition Preconditioner SLAP Set Up.
C
C ============================ smvops.f ===========================
C Most of the incomplete factorization (LL' and LDU) solvers
C in this file require an intermediate routine to translate
C from the SLAP MSOLVE(N, R, Z, NELT, IA, JA, A, ISYM, RWORK,
C IWORK) calling convention to the calling sequence required
C by the solve routine. This generally is accomplished by
C fishing out pointers to the preconditioner (stored in RWORK)
C from the IWORK array and then making a call to the routine
C that actually does the backsolve.
C
C SSMV: SLAP Column Format Sparse Matrix Vector Product.
C SSMTV: SLAP Column Format Sparse Matrix (transpose) Vector Prod.
C SSDI: Diagonal Matrix Vector Multiply.
C SSLI: SLAP MSOLVE for Lower Triangle Matrix (set up for SSLI2).
C SSLI2: Lower Triangle Matrix Backsolve.
C SSLLTI: SLAP MSOLVE for LDL' (IC) Fact. (set up for SLLTI2).
C SLLTI2: Backsolve routine for LDL' Factorization.
C SSLUI: SLAP MSOLVE for LDU Factorization (set up for SSLUI2).
C SSLUI2: SLAP Backsolve for LDU Factorization.
C SSLUTI: SLAP MTSOLV for LDU Factorization (set up for SSLUI4).
C SSLUI4: SLAP Backsolve for LDU Factorization.
C SSMMTI: SLAP MSOLVE for LDU Fact of Normal Eq (set up for SSMMI2).
C SSMMI2: SLAP Backsolve for LDU Factorization of Normal Equations.
C
C =========================== slaputil.f ==========================
C The following utility routines are useful additions to SLAP.
C
C SBHIN: Read Sparse Linear System in the Boeing/Harwell Format.
C SCHKW: SLAP WORK/IWORK Array Bounds Checker.
C SCPPLT: Printer Plot of SLAP Column Format Matrix.
C SS2Y: SLAP Triad to SLAP Column Format Converter.
C QS2I1R: Quick Sort Integer array, moving integer and real arrays.
C         (Used by SS2Y.)
C STIN: Read in SLAP Triad Format Linear System.
C STOUT: Write out SLAP Triad Format Linear System.
C
C
C***REFERENCES  1. Mark K. Seager, A SLAP for the Masses, in
C                  G. F. Carey, Ed., Parallel Supercomputing: Methods,
C                  Algorithms and Applications, Wiley, 1989,
C                  pp. 135-155.
C***ROUTINES CALLED  (NONE)
C***REVISION HISTORY  (YYMMDD)
C   880715  DATE WRITTEN
C   890404  Previous REVISION DATE
C   890915  Made changes requested at July 1989 CML Meeting.  (MKS)
C   890921  Removed TeX from comments.  (FNF)
C   890922  Numerous changes to prologue to make closer to SLATEC
C           standard.  (FNF)
C   890929  Numerous changes to reduce SP/DP differences.  (FNF)
C           -----( This produced Version 2.0.1. )-----
C   891003  Rearranged list of user callable routines to agree with
C           order in source deck.  (FNF)
C   891004  Updated reference.
C   910411  Prologue converted to Version 4.0 format.  (BAB)
C           -----( This produced Version 2.0.2. )-----
C   910506  Minor improvements to prologue.  (FNF)
C   920511  Added complete declaration section.  (WRB)
C   920929  Corrected format of reference.  (FNF)
C   921019  Improved one-line descriptions, reordering some.  (FNF)
C***END PROLOGUE  SLPDOC
C***FIRST EXECUTABLE STATEMENT  SLPDOC
C
C     This is a *DUMMY* subroutine and should never be called.
C
      RETURN
C------------- LAST LINE OF SLPDOC FOLLOWS -----------------------------
      END