
Journal of Comprehensive Pure and Applied Mathematics Volume 3 (2025), Article ID: JCPAM-118

https://doi.org/10.33790/cpam1100118

Research Article

A Method for Singular Symmetric Generalized Eigenvalue Problem

Kuiyuan Li

Department of Mathematics and Statistics, University of West Florida, Pensacola, FL 32514, USA

Corresponding Author Details: Kuiyuan Li, Department of Mathematics and Statistics, University of West Florida, Pensacola, FL 32514, USA.

Received date: 24th March, 2025

Accepted date: 28th May, 2025

Published date: 30th May, 2025

Citation: Li, K. (2025). A Method for Singular Symmetric Generalized Eigenvalue Problem. J Comp Pure Appl Math, 3(1):1-07.

Copyright: © 2025. This is an open-access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

In this paper, a method for finding some or all finite eigenvalues of a singular real symmetric matrix pencil \((A, B)\) is presented, where \(A\) is a symmetric tridiagonal matrix and \(B\) is a diagonal matrix with \(b_1 > 0\) and \(b_i \geq 0\), \(i = 2, 3, ..., n\). The approach is to first reduce the pencil \((A, B)\) to a real symmetric positive definite pencil \((\tilde{A}, \tilde{B})\) and then compute some or all eigenvalues of pencil \((A, B)\) via the eigenvalues of pencil \((\tilde{A}, \tilde{B})\). This method is more efficient when the dimension of pencil \((A, B)\) is higher than that of pencil \((\tilde{A}, \tilde{B})\).

Key words: Matrix pencil, Eigenvalues, Eigenvalue curves, Multiprocessors, Homotopy method.

Introduction

Consider the generalized real symmetric eigenvalue problem: \[Ax=\lambda Bx \qquad\qquad (1)\] where \(A\), \(B\) are real symmetric and \(B\) is positive semidefinite. By MDR reduction [1], \(A\) can be reduced to a real symmetric tridiagonal matrix and \(B\) to a positive semidefinite diagonal matrix simultaneously, with either \(B=\mbox{ diag}(B_1, O_1, B_2, O_2, ..., B_r, O_r)\), a block diagonal matrix, or \(B=\mbox{ diag}(B_1, O_1, B_2, O_2, ..., B_r, O_r, B_{r+1})\), where the \(B_i\)'s are positive definite diagonal matrices and the \(O_i\)'s are zero matrices. In practice, this is what one does first, and the reduced eigenvalue problem is then solved. Hence, we will assume \(A\) is real symmetric tridiagonal and \(B\) diagonal. With the zero matrices \(O_i\) present, (1) is called a singular symmetric generalized eigenvalue problem.
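
For concreteness, a small reduced pencil of this form (hypothetical data, built with NumPy) is shown below; \(A\) is unreduced symmetric tridiagonal and \(B\) is diagonal with one positive definite block \(B_1\) and one zero block \(O_1\). This is only an illustration of the structure, not part of the method itself.

```python
import numpy as np

# A minimal sketch with hypothetical data: the reduced form produced by the
# MDR reduction, with A unreduced symmetric tridiagonal and B = diag(B_1, O_1).
alpha = np.array([4.0, 3.0, 5.0, 2.0, 6.0, 1.0])   # diagonal of A
beta = np.array([1.0, 0.5, 2.0, 1.5, 0.7])          # off-diagonals, all nonzero

A = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
B = np.diag([2.0, 1.0, 3.0, 0.5, 0.0, 0.0])         # B_1 = diag(2, 1, 3, 0.5), O_1 = 0 (2 x 2)
```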

There are a few efficient algorithms for solving (1), such as Fix-Heiberger [3], Bunse-Gerstner [1], and the QZ method [4]. However, these are not parallel methods and cannot take advantage of multiprocessor machines. Another disadvantage of these methods is that they cannot be used to compute only some of the eigenpairs.

In [6], a homotopy method is presented to solve the problem. The approach is to first reduce pencil \((A,B)\) to a real symmetric positive definite pencil \((\tilde{A}, \tilde{B})\), that is, \(\tilde{A}\) is real symmetric tridiagonal and \(\tilde{B}\) positive definite diagonal, and then use the homotopy method to compute all or some eigenpairs of pencil \((\tilde{A}, \tilde{B})\). The method is fully parallel and very efficient for computing all or some eigenpairs of \((A,B)\), but it is somewhat expensive if only some or all eigenvalues are needed. The reason is that, in reducing pencil \((A,B)\) to a real symmetric positive definite pencil \((\tilde{A}, \tilde{B})\), the eigenvectors of both pencils \((A,B)\) and \((\tilde{A}, \tilde{B})\) must be computed.

In [7], a homotopy method is presented to find some or all finite eigenvalues of pencil \((A,B)\). The method is more efficient than [6] since no eigenvectors are computed. However, the method does not reduce the pencil to a lower dimensional pencil \((\tilde{A}, \tilde{B})\). When \(n\), the dimension of pencil \((A,B)\), is much larger than \(m\), the number of finite eigenvalues, the method may not be a good choice.

In this paper, we present a method for finding some or all finite eigenvalues of (1) by first eliminating the zero block matrices on the diagonal of \(B\). The new approach is more efficient since the dimension of the reduced pencil can be much smaller than that of the original one. It is also a fully parallel method and enjoys the flexibility of finding some or all eigenvalues of the pencil.

In Section 2, we show how to reduce \((A,B)\) to a lower dimensional pencil \((\tilde{A}, \tilde{B})\) and how the eigenvalues of the two pencils are related. In Section 3, we outline the method for computing some or all eigenvalues of pencil \((A,B)\) from pencil \((\tilde{A}, \tilde{B})\).

Analysis

After MDR reduction, \(B\) must have either the form \(B=\mbox{ diag}(B_1, O_1, B_2, O_2, ..., B_r, O_r)\) or \(B=\mbox{ diag}(B_1, O_1, B_2, O_2, ..., B_r, O_r, B_{r+1})\), where the \(B_i\)'s are positive definite diagonal matrices and the \(O_i\)'s are zero matrices. The reason the \(B_i\)'s are not further reduced to identity matrices is to preserve accuracy, since the \(B_i\)'s may be ill-conditioned.

Assume in (1) \[A = \left( \begin{array}{ccccc} \alpha_{1} & \beta_{2} &&& \\ \beta_{2} & \alpha_{2} & \beta_{3}&& \\ & \ddots & \ddots & \ddots & \\ & & \beta_{n-1} & \alpha_{n-1} & \beta_{n} \\ & & & \beta_{n} &\alpha_{n} \end{array} \right) \qquad\qquad (2)\]

If \(\beta_{i} =0\) for some \(i\), \(2 \leq i \leq n\), then \({\bf R^{n} }\) can clearly be decomposed into two complementary subspaces invariant under \(A\). Thus the generalized eigenvalue problem \(Ax=\lambda Bx\) is decomposed in an obvious way into two smaller subproblems. Hence, we will assume that each \(\beta_{i} \not =0\). That is, \(A\) is unreduced.
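
This preprocessing step is mechanical; a minimal sketch (hypothetical helper, NumPy arrays assumed) that splits the pencil at zero off-diagonal entries of \(A\) might look like this:

```python
import numpy as np

def split_at_zero_offdiagonals(alpha, beta, b, tol=0.0):
    """Split (A, B) into independent subproblems at zero off-diagonals of A.

    alpha : diagonal of the symmetric tridiagonal A (length n)
    beta  : off-diagonal of A (length n-1); beta[i] couples rows i and i+1
    b     : diagonal of B (length n)
    Returns a list of (alpha_block, beta_block, b_block) subproblems.
    """
    blocks, start = [], 0
    for i, coupling in enumerate(beta):
        if abs(coupling) <= tol:                  # A decouples between rows i and i+1
            blocks.append((alpha[start:i + 1], beta[start:i], b[start:i + 1]))
            start = i + 1
    blocks.append((alpha[start:], beta[start:], b[start:]))
    return blocks
```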

In this paper, we concentrate on two special cases: one is \(B=\mbox{ diag }(B_1, O_1)\) and the other is \(B=\mbox{ diag }(B_1, O_1, B_2)\). The general cases can be handled with the same ideas.
If \(B=\mbox{ diag}(B_1, O_1)\), we partition \(A\) as follows: \[A = \left( \begin{array}{cc} A_{1} & * \\ * & A_{1}^0 \end{array} \right) \qquad\qquad (3)\]
If \(B=\mbox{ diag}(B_1, O_1, B_2)\), we partition \(A\) as follows: \[A = \left( \begin{array}{ccc} A_{1} & * & \\ *& A_{1}^0 & * \\ & * & A_{2} \end{array} \right), \qquad\qquad (4)\] where \(dim(O_1)=dim(A_1^0)\) and \(dim(B_i)=dim(A_i)\), \(i=1, 2\).

Since \(B\) is singular, pencil (\(A\),\(B\)) has fewer than \(n\) finite eigenvalues.

Let deg(\(p(\lambda)\)) denote the degree of the polynomial \(p(\lambda)\), \((M)_1\) be the matrix obtained from \(M\) by deleting its last row and last column, and \((M)^1\) be the matrix obtained from \(M\) by deleting its first row and first column.
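
In code, these two operations are simple index slices; the following trivial helpers (hypothetical names, NumPy arrays assumed) make the notation concrete:

```python
def trim_last(M):
    """(M)_1 : M with its last row and last column deleted."""
    return M[:-1, :-1]

def trim_first(M):
    """(M)^1 : M with its first row and first column deleted."""
    return M[1:, 1:]
```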

Theorem 2.1. Let \(n(A,B)\) denote the number of finite eigenvalues of pencil \((A,B)\). If \(detA_1^0 \ne 0\), then \(n(A,B)=rank(B_1)\) for the case in (3) and \(n(A,B)=rank(B_1)+rank(B_2)\) for the case in (4).

Proof:

Case 1. Let \(B=\mbox{diag }(B_1,O_1)\) and \(rank(B_1)=m\). By simple computation, we see that \[det(A-\lambda B)=det(A_1-\lambda B_1)det(A_1^0)-\beta_{m+1}^2det(A_1-\lambda B_1)_1 det(A_1^0)^1.\] Since \(A_1\) is unreduced and \(B_1\) positive definite, \(det(A_1-\lambda B_1)=0\) has \(m\) distinct roots. So \(deg(det(A_1-\lambda B_1))=m\).

\[\begin{array}{lll} deg(det(A- \lambda B)) & = deg(det(A_1-\lambda B_1)det(A_1^0)) & \mbox{ since } deg(det(A_1-\lambda B_1)_1)<m,\\ & =deg(det(A_1-\lambda B_1)) & \mbox{ since } det(A_1^0)\ne 0,\\ & = rank(B_1). & \end{array}\]
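
The determinant splitting used above can be spot-checked numerically; a tiny sketch (hypothetical random data, NumPy) compares both sides of the expansion at an arbitrary value of \(\lambda\):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3                                          # dim(A_1) = rank(B_1)
alpha = rng.standard_normal(5)
beta = rng.uniform(0.5, 1.5, 4)                # nonzero, so A is unreduced
A = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
b1 = rng.uniform(0.5, 2.0, m)
B = np.diag(np.r_[b1, np.zeros(2)])            # B = diag(B_1, O_1)

lam = 0.37                                     # arbitrary test value of lambda
A1, A10, B1 = A[:m, :m], A[m:, m:], np.diag(b1)
lhs = np.linalg.det(A - lam * B)
rhs = (np.linalg.det(A1 - lam * B1) * np.linalg.det(A10)
       - beta[m - 1] ** 2 * np.linalg.det((A1 - lam * B1)[:-1, :-1])
         * np.linalg.det(A10[1:, 1:]))
print(np.isclose(lhs, rhs))                    # True
```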

Case 2. Let \(B=\mbox{ diag }(B_1, O_1, B_2)\) and \(T=\mbox{ diag }(B_1, O_1)\); then \(B=\mbox{ diag }(T,B_{2}).\)

Assume \(dim(T)=k\). Let \(M\) denote the \(k\times k\) leading principal submatrix of \(A\); then \[det(A-\lambda B)=det(M -\lambda T)det(A_{2} - \lambda B_{2}) -\beta_{k+1}^2det(M -\lambda T)_1 det(A_{2} - \lambda B_{2})^1.\] Since \(A_2\) is unreduced and \(B_2\) positive definite, \(det(A_2-\lambda B_2)=0\) has \(rank(B_2)\) distinct roots. Hence, \(deg(det(A_{2} - \lambda B_{2}))=rank(B_2).\) \[\begin{array}{ll} n(A,B)& =deg(det(M -\lambda T))+deg(det(A_{2} - \lambda B_{2}))\\ & =rank(T) + rank(B_{2}), \mbox{ by Case 1,}\\ & =rank(B_1)+ rank(B_2), \mbox{ since } rank(T)=rank(B_1). \end{array}\]
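
Theorem 2.1 can be checked numerically on a small example. The sketch below uses hypothetical data and SciPy's QZ-based solver purely as a reference: the number of finite generalized eigenvalues it returns should equal \(rank(B_1)+rank(B_2)\) when \(det A_1^0 \ne 0\).

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical example for Case 2: B = diag(B_1, O_1, B_2), with det(A_1^0) != 0.
alpha = np.array([4.0, 3.0, 5.0, 2.0, 6.0, 1.0, 2.5])
beta = np.array([1.0, 0.5, 2.0, 1.5, 0.7, 0.9])
A = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

B = np.diag([2.0, 1.0,        # B_1 (rank 2)
             0.0, 0.0,        # O_1 (the matching 2 x 2 block A_1^0 of A is nonsingular)
             3.0, 0.5, 1.0])  # B_2 (rank 3)

# Homogeneous eigenvalues (alpha_i, beta_i): an eigenvalue is infinite when beta_i = 0.
alpha_w, beta_w = eig(A, B, right=False, homogeneous_eigvals=True)
n_finite = np.count_nonzero(np.abs(beta_w) > 1e-8)   # small tolerance for floating point
print(n_finite)                                      # expected: rank(B_1) + rank(B_2) = 5
```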



Let \(\beta _{k+1}\) be an off-diagonal element of \(A_{1}\) or \(A_2\), or the off-diagonal element of \(A\) which joins block matrices \(A_1\) and \(A^{0}_{1}\) or \(A^{0}_{1}\) and \(A_{2}\). Then let \[D= \left( \begin{array}{cc} D_{1} & 0 \\ 0 & D_{2} \end{array} \right), \qquad\qquad (5)\] where \[D_{1} =\left( \begin{array}{ccccc} \alpha_{1} & \beta_{2} \\ \beta_{2} & \alpha_{2} & \beta_{3} \\ & \ddots & \ddots & \ddots & \\ & & \beta_{k-1} & \alpha_{k-1} & \beta_{k} \\ & & & \beta_{k} &\alpha_{k} \end{array} \right), \quad D_{2} =\left( \begin{array}{ccccc} \alpha_{k+1} & \beta_{k+2} \\ \beta_{k+2} & \alpha_{k+2} & \beta_{k+3} \\ & \ddots & \ddots & \ddots & \\ & & \beta_{n-1} & \alpha_{n-1} & \beta_{n} \\ & & & \beta_{n} &\alpha_{n} \end{array} \right),\] and let \[B= \left( \begin{array}{cc} B_{1} & 0 \\ 0 & B_{2} \end{array} \right)\] with \(dim(B_{i})=dim(D_{i}), i=1, 2.\)

Let \(A(t)=(1-t)D+tA\), \(t \in [0,1]\), and \[p(t)=\det (A(t)-\lambda (t)B). \qquad\qquad (6)\]
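
A minimal sketch of this construction (NumPy, hypothetical helper name; the index \(k\) of the coupling entry \(\beta_{k+1}\) is assumed known) is:

```python
import numpy as np

def initial_pencil(A, k):
    """Form D = diag(D_1, D_2) by zeroing the coupling entry beta_{k+1} of A,
    and return the homotopy A(t) = (1 - t) D + t A.

    k is the order of D_1, so beta_{k+1} (1-based) sits at positions
    (k-1, k) and (k, k-1) in 0-based indexing.
    """
    D = A.copy()
    D[k - 1, k] = 0.0
    D[k, k - 1] = 0.0

    def A_of_t(t):
        # A(t) differs from A only in the coupling entries, which become t * beta_{k+1}.
        return (1.0 - t) * D + t * A

    return D, A_of_t
```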

Theorem 2.2. Let \(D\) and \(p(t)\) be given in (5) and (6), and let \(\lambda(t)\) be an eigenvalue curve of \(p(t)=0\). Then

i. \(\lambda(t)\) is simple for any \(t \in (0,1]\).

ii. \(\lambda(t)\) is either constant or strictly monotone.

Proof:

See [7]

Let \(\xi _1, \xi _2, ..., \xi _m\) be the eigenvalues of pencil \((D,B)\) and \(\lambda_1(t), \lambda_2(t), ..., \lambda_m(t)\) be the eigenvalues of pencil \((A(t),B)\). From Theorem 2.2, clearly \(\lambda_i(0)=\xi _i\), and \(\lambda _1(t)< \lambda_2(t)< ... < \lambda_m(t)\) for \(t \in (0,1].\) With the results from Theorem 2.2, one can obtain the following result.

Theorem 2.3. If the \(\xi _i\)'s are distinct and \(\lambda_i(t)\) is not constant, then either \(\xi _{i-1}<\lambda_i(t) <\xi _i\) or \(\xi _{i}<\lambda _i(t) <\xi _{i+1}\) for \(t\in (0,1]\).

Proof:

See [7].

Theorem 2.4. Let \(B=diag(B_1,O_1)\) and \(\lambda(A,B)\) denote all eigenvalues of pencil \((A,B)\).
(i). If \(detA_1^0 =0\), then \(\lambda(A,B) = \lambda((A_1)_1,(B_1)_1)\).
(ii). If \(det(A_1^0)^1= 0\), then \(\lambda(A,B)= \lambda(A_1,B_1)\).
(iii). If \(detA_1^0 \neq 0\) and \(det(A_1^0)^1 \neq 0\), then \(\lambda_i \in (\xi_{i-1},\xi_i)\) or \(\lambda_i \in (\xi_{i},\xi_{i+1})\), \(\forall \; i\), where \(\lambda_1 < \lambda_2 < ... < \lambda_m\) are the eigenvalues of pencil \((A, B)\) and \(\xi_1 < \xi_2 < ... < \xi_m\) the eigenvalues of pencil \((A_1, B_1)\).

Proof:

i. If \(det(A_1^0)=0\), then \(det(A_1^0)^1 \ne 0\) by the Cauchy interlacing theorem. Therefore,

\(det(A-\lambda B)=det(A_1-\lambda B_1)det(A_1^0)-\beta_{m+1}^2det(A_1-\lambda B_1)_1 det(A_1^0)^1\) implies \[\begin{array}{lll} det(A-\lambda B)=0 & \iff det(A_1-\lambda B_1)_1det(A_1^0)^1=0 & \mbox{ since } det(A_1^0)=0 \\ & \iff det(A_1-\lambda B_1)_1=0 & \mbox{ since } det(A_1^0)^1 \ne 0 \\ &\iff det((A_1)_1-\lambda(B_1)_1)=0. & \end{array}\]

Therefore, \(\lambda(A,B) = \lambda((A_1)_1,(B_1)_1).\)
ii. If \(det(A_1^0)^1= 0\), then \(det(A_1^0)\ne 0\) by the Cauchy interlacing theorem.

\(det(A-\lambda B)=det(A_1-\lambda B_1)det(A_1^0)-\beta_{m+1}^2det(A_1-\lambda B_1)_1 det(A_1^0)^1\) implies \[\begin{array}{lll} det(A-\lambda B)=0 & \iff det(A_1-\lambda B_1)det(A_1^0) =0& \mbox{ since } det(A_1^0)^1= 0\\ & \iff det(A_1-\lambda B_1)=0 & \mbox{ since } det(A_1^0)\ne 0 \end{array}\] Therefore, \(\lambda(A,B)= \lambda(A_1,B_1).\)
iii. By Theorem 2.1, \(n(A,B)=n(D,B)=n(A_1, B_1)=rank(B_1).\) Since \(A_1\) is unreduced symmetric tridiagonal and \(B_1\) is positive definite, pencil \((A_1, B_1)\) has exactly \(m=rank(B_1)\) distinct eigenvalues. Since \(det(D-\lambda B)=det(A_1-\lambda B_1)det (A_1^0)\) and \(det(A_1^0)\ne 0\), \(\lambda(D, B)= \lambda(A_1, B_1)= \{\xi _1, \xi _2, ... , \xi _m\}\).

By Theorem 2.3, \(\lambda_i=\lambda_i(1)\in (\xi _{i-1}, \xi _i)\) or \(\lambda_i=\lambda_i(1)\in (\xi _{i}, \xi _{i+1})\), \(\forall \; i.\)


Theorem 2.4 shows the relationships between pencil \((A, B)\) and pencil \((A_1, B_1)\), and what one needs to do to compute some or all eigenvalues of pencil \((A, B)\) via pencil \((A_1, B_1)\).

Algorithm

Based on Theorem 2.4, some or all eigenvalues of pencil \((A,B)\) can be computed with the following steps.

Case I. \(B=\mbox{diag }(B_1,O_1)\).

If \(detA_1^0 =0\), then \(\lambda(A,B) = \lambda((A_1)_1,(B_1)_1)\) according to Theorem 2.4. Many fully parallel methods, such as Bisection [8], Divide-and-Conquer [2], and Homotopy [5], can be applied to \(((A_1)_1,(B_1)_1)\) to find all eigenvalues. Clearly, the dimension of \(((A_1)_1,(B_1)_1)\) is smaller than that of \((A,B)\), so it takes less time to find some or all eigenvalues.

If \(det(A_1^0)^1= 0\), then \(\lambda(A,B)= \lambda(A_1,B_1)\) according to Theorem 2.4. Again, many fully parallel methods, such as Bisection, Divide-and-Conquer, and Homotopy, can be applied to \((A_1,B_1)\) to find some or all eigenvalues. Clearly, the dimension of \((A_1,B_1)\) is smaller than that of \((A,B)\), so it also takes less time to find some or all eigenvalues.

If \(detA_1^0 \neq 0\) and \(det(A_1^0)^1 \neq 0\), then \(\lambda_i \in (\xi_{i-1},\xi_i)\) or \(\lambda_i \in (\xi_{i},\xi_{i+1})\), \(\forall \; i,\) where \(\lambda_1 < \lambda_2 < ... < \lambda_m\) are the eigenvalues of pencil \((A, B)\) and \(\xi_1 < \xi_2 < ... < \xi_m\) the eigenvalues of pencil \((A_1, B_1)\), according to Theorem 2.4. First, we apply a fully parallel method, such as Homotopy [5], to find all eigenvalues of \((A_1,B_1)\). Since the eigenvalues of \((A, B)\) are separated by the eigenvalues of \((A_1,B_1)\), we may apply an iterative method, such as Newton's or Laguerre's method [4], to compute the eigenvalues of \((A, B)\) on each interval \((\xi_{i-1},\xi_i)\) or \((\xi_{i},\xi_{i+1})\). Since each \(\lambda_i\) can be computed independently on its interval, the method is fully parallel.
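
The following sequential sketch (hypothetical helper; NumPy/SciPy routines stand in for the fully parallel Bisection, Divide-and-Conquer, or Homotopy solvers, and Brent's method stands in for the Newton or Laguerre iteration) illustrates all three sub-cases of Case I, assuming the pencil is already in the form (3):

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import brentq

def case_one_eigenvalues(A, b1, tol=1e-12, refine=8):
    """Sketch of Case I: B = diag(B_1, O_1), with A partitioned as in (3).

    A  : unreduced symmetric tridiagonal matrix (n x n) as a NumPy array
    b1 : diagonal of the positive definite block B_1 (length m)
    Returns (approximations of) the finite eigenvalues of pencil (A, B).
    """
    n, m = A.shape[0], len(b1)
    A1, A10 = A[:m, :m], A[m:, m:]
    B1 = np.diag(b1)

    if abs(np.linalg.det(A10)) < tol:                         # Theorem 2.4 (i)
        return eigh(A1[:-1, :-1], B1[:-1, :-1], eigvals_only=True)
    if m < n - 1 and abs(np.linalg.det(A10[1:, 1:])) < tol:   # Theorem 2.4 (ii)
        return eigh(A1, B1, eigvals_only=True)

    # Theorem 2.4 (iii): the lambda_i are separated by the xi_i of (A_1, B_1).
    xi = eigh(A1, B1, eigvals_only=True)
    B = np.diag(np.r_[b1, np.zeros(n - m)])
    f = lambda lam: np.linalg.det(A - lam * B)                # degree-m polynomial in lambda

    # Crude bracketing of the m simple real roots of f on a refinement of the xi grid;
    # a real implementation would instead run Newton/Laguerre on each interval in parallel.
    pad = 1.0 + np.abs(xi).max()
    grid = np.linspace(xi[0] - pad, xi[-1] + pad, refine * (m + 1) + 1)
    roots = [brentq(f, a, b) for a, b in zip(grid[:-1], grid[1:]) if f(a) * f(b) < 0]
    return np.array(roots)
```

In an actual parallel implementation, the root-finding over the separate intervals is what would be distributed across processors.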

Case II. \(B=\mbox{diag }(B_1,O_1, B_2)\).

First, we form an initial matrix pencil. Assume \(\beta_{k+1}\) is the off-diagonal element of \(A\) that joins \(A_1^0\) and \(A_2\). Let \[D = \left( \begin{array}{cc} D_{1} & 0 \\ 0 & A_{2} \end{array} \right) \mbox{ where } D_1 \mbox{ is the } k \times k \; \mbox{ leading principal submatrix of } A,\] then \[A(t)=(1-t)D+tA=\left( \begin{array}{cc} D_{1} & t\beta_{k+1}\\ t\beta_{k+1} & A_{2} \end{array} \right).\]

Let \(p(t)\) be given in (6) and \(\lambda(t)\) be an eigenvalue curve of \(p(t)=0\); then the eigenvalues of \((A(t),B)\), for all \(t\), are separated by the eigenvalues of \((D,B)\) by Theorem 2.3. Since \(\lambda(A, B)=\lambda(A(1), B)\), the eigenvalues of \((A, B)\) are separated by the eigenvalues of \((D,B)\). If we find all eigenvalues of \((D,B)\), then we can use an iterative method to find all eigenvalues of \((A,B)\). The eigenvalues of \((D,B)\) are the eigenvalues of \((A_2,B_2)\) together with the eigenvalues of \((D_1,\hat{B})\), where \(\hat{B}=diag(B_1,O_1)\). Fully parallel methods, such as Bisection, Divide-and-Conquer, and Homotopy, can be used to solve \((A_2,B_2)\), and \((D_1,\hat{B})\) is exactly Case I above.
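
A matching sketch for Case II (hypothetical helper; it reuses the case_one_eigenvalues sketch above, with SciPy's eigh again standing in for a parallel solver) computes the separating eigenvalues of \((D,B)\); the eigenvalues of \((A,B)\) are then obtained by the same interval-wise iteration as in Case I:

```python
import numpy as np
from scipy.linalg import eigh

def case_two_separators(A, b1, dim_O1, b2):
    """Sketch of Case II: B = diag(B_1, O_1, B_2).

    Returns the eigenvalues of the initial pencil (D, B), which by Theorem 2.3
    separate the eigenvalues of (A, B).
    """
    k = len(b1) + dim_O1                   # dim(D_1) = dim(B_1) + dim(O_1)
    D1, A2 = A[:k, :k], A[k:, k:]          # D_1 is the k x k leading principal submatrix

    # Eigenvalues of (D, B): those of (D_1, B_hat) with B_hat = diag(B_1, O_1),
    # which is exactly Case I above, together with those of (A_2, B_2).
    xi_1 = case_one_eigenvalues(D1, b1)                 # Case I sketch from above
    xi_2 = eigh(A2, np.diag(b2), eigvals_only=True)
    return np.sort(np.r_[xi_1, xi_2])
```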

Conclusion

When eigenvalues and the corresponding eigenvectors of pencil \((A, B)\) are needed, [6] is a good method to use. When only eigenvalues of \((A, B)\) are needed and the dimensions of the \(O_i\) are not too big, the method in [7] is a good choice. When the dimension of pencil \((A, B)\) is much higher than the number of finite eigenvalues and the dimensions of some \(O_i\) are large, the method presented here is a better choice. It is shown how \((A, B)\) can be reduced to a positive definite pencil of much smaller dimension with very little computation. The eigenvalue relationships between these two pencils are established. Since the eigenvalues of pencil \((A, B)\) are either the same as those of \((\tilde{A}, \tilde{B})\) or strictly separated by the eigenvalues of \((\tilde{A}, \tilde{B})\), all or some eigenvalues of \((A, B)\) can be computed on each interval, so the method is fully parallel in nature. Since the dimension of pencil \((A, B)\) is given and the number of finite eigenvalues can be computed easily using Theorem 2.1, it is easy to decide whether the method in [7] or this method should be used.

Conflict of Interest

The author declares that there are no conflicts of interest.

References

  1. Bunse-Gerstner, A. (1984). An algorithm for the symmetric generalized eigenvalue problem. Linear Alg. Appl., 58, 43-68.

  2. Dongarra, J. J. & Sorensen, D. C. (1987). A fully parallel algorithm for the symmetric eigenvalue problem. SIAM J. Sci. Stat. Comput., 8, 139-154.

  3. Fix, G. & Heiberger, R. (1972). An algorithm for the ill-conditioned generalized eigenvalue problem. SIAM J. Numer. Anal., 9, 78-88.

  4. Golub, G. H. & Van Loan, C. F. (1996). Matrix Computations, 3rd ed. The Johns Hopkins University Press, Baltimore, MD.

  5. Li, K. & Li, T. Y. (1993). An algorithm for symmetric tridiagonal eigenproblems: divide and conquer with homotopy continuation. SIAM J. Sci. Stat. Comput., 14, 735-751.

  6. Li, K. & Li, T. Y. (1993). A homotopy algorithm for a symmetric generalized eigenproblem. Numerical Algorithms, 4, 167-195.

  7. Li, K., Uvah, J., & Zhao, S. (2005). A fully parallel method for the singular eigenvalue problem. Computers & Mathematics with Applications, 49, 1279-1284.

  8. Parlett, B. N. (1998). The Symmetric Eigenvalue Problem. SIAM, Philadelphia, PA.
