{"id":2604,"date":"2023-06-28T17:47:06","date_gmt":"2023-06-28T17:47:06","guid":{"rendered":"https:\/\/nag.com\/?page_id=2604"},"modified":"2023-06-28T17:47:07","modified_gmt":"2023-06-28T17:47:07","slug":"nearest-correlation-matrix","status":"publish","type":"page","link":"https:\/\/nag.com\/nearest-correlation-matrix\/","title":{"rendered":"Nearest Correlation Matrix"},"content":{"rendered":"\n<div class=\"gbc-title-banner ta ta-lg ta-xl\" style='background-color: #082d48ff; color: #ffffffff; border-radius: 0px; '>\n    <div class=\"container\" style='border-radius: 0px; '>\n        <div class=\"row justify-content--center\" style='color: #ffffffff;'>\n            <div class=\"col-12\"  >\n                <div class=\"wrap pv-4 \" style=\"0px\">\n                                <div class=\"col-12 col-md-12 col-lg-10 col-xl-8  banner-content\"  >\n    \n                                             <h1>Nearest Correlation Matrix<\/h1>\n                    \n                    <div class=\"mt-1 mb-1 content\"><p><span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG Library mini-article<\/p>\n<\/div>\n\n                    \n                                    <\/div>\n                <\/div>\n            <\/div>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n<!-- Spacer -->\n<div class=\"pt-4 pt-lg-4 pt-xl-4\" ><\/div>\n\n<div class=\"container content-area-default \">\n    <div class=\"row justify-content--center\">\n        <div class=\"col-12 col-md-12 col-lg-10 col-xl-8\">\n            <p>The <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG Library has a range of functionality related to computing the nearest correlation matrix. 
In this article we take a look at nearest correlation matrix problems, giving some background and introducing the routines that solve them.<\/p>\n<h3>Introduction<\/h3>\n<p>A correlation matrix is characterized as being a real, square matrix that<\/p>\n<ul>\n<li>is symmetric;<\/li>\n<li>has ones on the diagonal;<\/li>\n<li>has non-negative eigenvalues.<\/li>\n<\/ul>\n<p>A matrix with non-negative eigenvalues is called positive semidefinite. If a matrix \\(C\\) is a correlation matrix then its elements \\(c_{ij}\\) represent the pair-wise correlation of entity \\(i\\) with entity \\(j\\), that is, the strength and direction of a linear relationship between the two.<\/p>\n<div class=\"paragraph--color--transparent paragraph--alignment--left paragraph paragraph--type--text paragraph--view-mode--default\">\n<div class=\"field field--name-field-paragraph-text field--type-text-long field--label-hidden field--item\">\n<div class=\"tex2jax_process\">\n<p>In the literature\u00a0there are numerous examples illustrating the use of correlation matrices, but the one we have encountered most often arises in finance, where the correlations between various stocks are used to construct sensible portfolios. Unfortunately, for a variety of reasons, an input matrix which is supposed to be a correlation matrix may fail to be positive semidefinite. For example, the correlations may be between stocks measured over a period of time and some data may be missing. If each pairwise correlation is computed using only the observations the two variables have in common, and this set of observations varies from pair to pair, the resulting matrix can be indefinite. Still drawing from finance, a practitioner may wish to explore the effect on a portfolio of assigning correlations between certain assets that differ from those computed from historical values. 
This can also result in negative eigenvalues in the computed matrix.<\/p>\n<p>In such situations, the result is a matrix which is an\u00a0<em>approximate correlation matrix<\/em>\u00a0and this must be fixed for subsequent analysis that relies upon having a\u00a0<em>true correlation matrix<\/em>\u00a0to be valid. Ideally, we wish to find the \u2018nearest\u2019 true correlation matrix to our approximate one for some sensible definition of \u2018near\u2019. This is our basic\u00a0<em>nearest correlation matrix problem<\/em>.<\/p>\n<p>\u00a0<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"paragraph--color--transparent paragraph--alignment--left paragraph paragraph--type--text paragraph--view-mode--default\">\n<div class=\"field field--name-field-paragraph-text field--type-text-long field--label-hidden field--item\">\n<div class=\"tex2jax_process\">\n<h3>The Basic Nearest Correlation Matrix Problem<\/h3>\n<p>The <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG Library routine\u00a0<a href=\"https:\/\/www.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02aaf.html\">nagf_correg_corrmat_nearest<\/a>\u00a0(<a href=\"https:\/\/www.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02aaf.html\">g02aa<\/a>) implements a Newton algorithm to solve the\u00a0<i>basic problem<\/i> we outlined in the introduction. It finds a true correlation matrix \\(X\\) that is closest to the approximate input matrix, \\(G\\), in the Frobenius norm. 
That is, we find the minimum of<\/p>\n<\/div>\n<\/div>\n<\/div>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n<math xmlns=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" display=\"block\">\n  <mo fence=\"false\" stretchy=\"false\">&#x2016;<!-- \u2016 --><\/mo>\n  <mi>G<\/mi>\n  <mo>&#x2212;<!-- \u2212 --><\/mo>\n  <mi>X<\/mi>\n  <msub>\n    <mo fence=\"false\" stretchy=\"false\">&#x2016;<!-- \u2016 --><\/mo>\n    <mi>F<\/mi>\n  <\/msub>\n  <mo>.<\/mo>\n<\/math>\n\n\n<div class=\"container content-area-default \">\n    <div class=\"row justify-content--center\">\n        <div class=\"col-12 col-md-12 col-lg-10 col-xl-8\">\n            <p>\u00a0<\/p>\n<p>The algorithm, described in a paper by Qi and Sun [8], has superior convergence properties over previously suggested approaches. Borsdorf and Higham [2], at the University of Manchester, looked at this in greater detail and offered further improvements. These include a different iterative solver (MINRES was preferred to Conjugate Gradient) and a means of preconditioning the linear equations. It is this enhanced algorithm that has been incorporated into our Library.<\/p>\n<h3>Weighted Problems and Forcing a Positive Definite Correlation Matrix<\/h3>\n<p>In <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG Library routine <a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02abf.html\" target=\"_blank\" rel=\"noopener\">nagf_correg_corrmat_nearest_bounded<\/a>\u00a0(<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02abf.html\" target=\"_blank\" rel=\"noopener\">g02ab<\/a>) we extend the functionality provided by\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02aaf.html\" target=\"_blank\" rel=\"noopener\">g02aa<\/a>. If we have an approximate correlation matrix, it is reasonable to suppose that not all of the matrix is actually approximate, perhaps only part of it is. 
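To make the basic problem concrete, the classic alternating-projections iteration of Higham can be sketched in a few lines of NumPy. This is an illustration only: the helper name `nearest_corr` is our own, and g02aa itself uses the faster Newton method rather than this iteration. The sketch alternately projects onto the positive semidefinite cone and the set of unit-diagonal matrices, with Dykstra's correction so that the iteration converges to the nearest correlation matrix rather than just some correlation matrix.

```python
import numpy as np

def nearest_corr(G, tol=1e-8, max_iter=500):
    """Nearest correlation matrix in the Frobenius norm by alternating
    projections with Dykstra's correction (illustrative sketch only)."""
    Y = G.copy()
    dS = np.zeros_like(G)
    for _ in range(max_iter):
        R = Y - dS                            # apply Dykstra's correction
        w, V = np.linalg.eigh(R)              # project onto the PSD cone
        X = (V * np.maximum(w, 0.0)) @ V.T
        dS = X - R
        Y_prev, Y = Y, X.copy()
        np.fill_diagonal(Y, 1.0)              # project onto unit diagonal
        if np.linalg.norm(Y - Y_prev, 'fro') <= tol * np.linalg.norm(Y, 'fro'):
            break
    return Y

# An approximate correlation matrix with a negative eigenvalue
G = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
X = nearest_corr(G)
```

For realistic problem sizes the Newton-based library routine is the appropriate tool; the sketch simply makes the two constraint sets of the basic problem explicit.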
Similarly, we may trust some correlations more than others and wish for these to stay closer to their input value in the final matrix.<\/p>\n<p>In this algorithm we apply the original work of Qi and Sun, now using a weighted norm. Thus, we find the minimum of<\/p>\n<p>\u00a0<\/p>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n<math xmlns=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" display=\"block\">\n  <mo fence=\"false\" stretchy=\"false\">&#x2016;<!-- \u2016 --><\/mo>\n  <msup>\n    <mi>W<\/mi>\n    <mrow class=\"MJX-TeXAtom-ORD\">\n      <mn>1<\/mn>\n      <mrow class=\"MJX-TeXAtom-ORD\">\n        <mo>\/<\/mo>\n      <\/mrow>\n      <mn>2<\/mn>\n    <\/mrow>\n  <\/msup>\n  <mo stretchy=\"false\">(<\/mo>\n  <mi>G<\/mi>\n  <mo>&#x2212;<!-- \u2212 --><\/mo>\n  <mi>X<\/mi>\n  <mo stretchy=\"false\">)<\/mo>\n  <msup>\n    <mi>W<\/mi>\n    <mrow class=\"MJX-TeXAtom-ORD\">\n      <mn>1<\/mn>\n      <mrow class=\"MJX-TeXAtom-ORD\">\n        <mo>\/<\/mo>\n      <\/mrow>\n      <mn>2<\/mn>\n    <\/mrow>\n  <\/msup>\n  <msub>\n    <mo fence=\"false\" stretchy=\"false\">&#x2016;<!-- \u2016 --><\/mo>\n    <mi>F<\/mi>\n  <\/msub>\n  <mo>.<\/mo>\n<\/math>\n\n\n<div class=\"container content-area-default \">\n    <div class=\"row justify-content--center\">\n        <div class=\"col-12 col-md-12 col-lg-10 col-xl-8\">\n            <p>\u00a0<\/p>\n<p>Here \\(W\\) is a diagonal matrix of weights. This means that we are seeking to minimize the elements \\(\\sqrt{w_{ii}}(g_{ij} - x_{ij})\\sqrt{w_{jj}}\\). 
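The equivalence between the weighted norm and this element-wise scaling is easy to check numerically; the following is an illustrative NumPy check, not a NAG routine:

```python
import numpy as np

# Check that ||W^{1/2} (G - X) W^{1/2}||_F scales element (i,j)
# by sqrt(w_ii * w_jj), for an arbitrary symmetric G and X.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
G = (A + A.T) / 2                     # arbitrary symmetric "input"
B = rng.standard_normal((4, 4))
X = (B + B.T) / 2                     # arbitrary symmetric "output"
w = np.array([4.0, 1.0, 1.0, 0.25])   # diagonal of the weight matrix W
W_half = np.diag(np.sqrt(w))
lhs = np.linalg.norm(W_half @ (G - X) @ W_half, 'fro')
rhs = np.linalg.norm(np.sqrt(np.outer(w, w)) * (G - X), 'fro')
assert np.isclose(lhs, rhs)
```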
Thus, by choosing elements in \\(W\\) appropriately, we can favour some elements in \\(G\\), forcing the corresponding elements in \\(X\\) to be closer to them.<\/p>\n<p>\u00a0<\/p>\n<p>This method means that whole rows and columns of \\(G\\)\u00a0are weighted. 
However, <a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02ajf.html\" target=\"_blank\" rel=\"noopener\">nagf_correg_corrmat_h_weight<\/a>\u00a0(<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02ajf.html\" target=\"_blank\" rel=\"noopener\">g02aj<\/a>) allows element-wise weighting, and in this routine we find the minimum of<\/p>\n<p>\u00a0<\/p>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n<math xmlns=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" display=\"block\">\n  <mo fence=\"false\" stretchy=\"false\">&#x2016;<!-- \u2016 --><\/mo>\n  <mi>H<\/mi>\n  <mo>&#x2218;<!-- \u2218 --><\/mo>\n  <mo stretchy=\"false\">(<\/mo>\n  <mi>G<\/mi>\n  <mo>&#x2212;<!-- \u2212 --><\/mo>\n  <mi>X<\/mi>\n  <mo stretchy=\"false\">)<\/mo>\n  <msub>\n    <mo fence=\"false\" stretchy=\"false\">&#x2016;<!-- \u2016 --><\/mo>\n    <mi>F<\/mi>\n  <\/msub>\n  <mo>,<\/mo>\n<\/math>\n\n\n<div class=\"container content-area-default \">\n    <div class=\"row justify-content--center\">\n        <div class=\"col-12 col-md-12 col-lg-10 col-xl-8\">\n            <p>\u00a0<\/p>\n<p>where \\(C = A \u2218 B\\) denotes the matrix \\(C\\) with elements \\(c_{ij} = a_{ij} b_{ij}\\). Thus, by choosing appropriate values in \\(H\\), we can emphasize individual elements in \\(G\\) and leave the others unweighted. The algorithm employed here is by Jiang, Sun and Toh [7], and has the Newton algorithm at its core.<\/p>\n<p>Both g02ab and g02aj allow us to specify that the computed correlation matrix is positive definite, that is, its eigenvalues are greater than zero. 
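The effect of the Hadamard weighting on the objective can be seen by evaluating ||H ∘ (G − X)||F directly. The snippet below is illustrative NumPy only: it evaluates the objective for two choices of H, it does not run the g02aj solver.

```python
import numpy as np

G = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.9],
              [0.2, 0.9, 1.0]])
X = np.array([[1.0, 0.4, 0.2],
              [0.4, 1.0, 0.8],
              [0.2, 0.8, 1.0]])

H = np.ones_like(G)                        # all-ones H: plain Frobenius distance
base = np.linalg.norm(H * (G - X), 'fro')

H[0, 1] = H[1, 0] = 100.0                  # emphasize the (1,2) element pair
weighted = np.linalg.norm(H * (G - X), 'fro')
# the mismatch in the emphasized pair now dominates the objective
```

A solver minimizing the weighted objective would therefore keep the emphasized element of X much closer to its value in G than the unweighted elements.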
A strictly positive definite result is required in some applications to improve the conditioning of the matrix and to increase numerical stability.<\/p>\n<p>\u00a0<\/p>\n<h3>Constraining the Rank of the Correlation Matrix<\/h3>\n<p>If a low-rank correlation matrix is required, for example, to constrain the number of independent random variables,\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02akf.html\" target=\"_blank\" rel=\"noopener\">nagf_correg_corrmat_nearest_rank<\/a>\u00a0(<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02akf.html\" target=\"_blank\" rel=\"noopener\">g02ak<\/a>) can be used. It finds the nearest correlation matrix, in the Frobenius norm, of maximum prescribed rank. The routine is based on the Majorized Penalty Approach proposed by Gao and Sun [4].<\/p>\n<p>\u00a0<\/p>\n<h3>Fixing Correlations with Shrinking and Alternating Projections<\/h3>\n<p>We now turn our attention to fixing some of the elements that are known to be true correlations. Instead of using a Newton method like the previous four algorithms, here we use a shrinking method.<\/p>\n<p>One common example where this is needed is where the correlations between a subset of our variables are trusted and on their own would form a valid correlation matrix. We could thus arrange these into the leading block of our input matrix and seek to fix them while we correct the remainder. We call this the fixed block problem. The routine\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02anf.html\" target=\"_blank\" rel=\"noopener\">nagf_correg_corrmat_shrinking<\/a>\u00a0(<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02anf.html\" target=\"_blank\" rel=\"noopener\">g02an<\/a>) preserves such a leading block of correlations in our approximate matrix. 
Using the shrinking method of Higham, Strabi\u0107 and \u0160ego [6], the routine finds a true correlation matrix of the following form<\/p>\n<p>\u00a0<\/p>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n<math xmlns=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" display=\"block\">\n  <mi>&#x03B1;<!-- \u03b1 --><\/mi>\n  <mrow>\n    <mo>(<\/mo>\n    <mtable rowspacing=\"4pt\" columnspacing=\"1em\">\n      <mtr>\n        <mtd>\n          <mi>A<\/mi>\n        <\/mtd>\n        <mtd>\n          <mn>0<\/mn>\n        <\/mtd>\n      <\/mtr>\n      <mtr>\n        <mtd>\n          <mn>0<\/mn>\n        <\/mtd>\n        <mtd>\n          <mi>I<\/mi>\n        <\/mtd>\n      <\/mtr>\n    <\/mtable>\n    <mo>)<\/mo>\n  <\/mrow>\n  <mo>+<\/mo>\n  <mo stretchy=\"false\">(<\/mo>\n  <mn>1<\/mn>\n  <mo>&#x2212;<!-- \u2212 --><\/mo>\n  <mi>&#x03B1;<!-- \u03b1 --><\/mi>\n  <mo stretchy=\"false\">)<\/mo>\n  <mi>G<\/mi>\n  <mo>.<\/mo>\n<\/math>\n\n\n<div class=\"container content-area-default \">\n    <div class=\"row justify-content--center\">\n        <div class=\"col-12 col-md-12 col-lg-10 col-xl-8\">\n            <p>\u00a0<\/p>\n<p>\\(G\\) is again our input matrix and we find the smallest \\(\\alpha\\) in the interval \\([0,1]\\) that gives a positive semidefinite result. The smaller \\(\\alpha\\) is, the closer we stay to our original matrix, and any \\(\\alpha\\) preserves the leading submatrix \\(A\\), which needs to be positive definite. The algorithm uses a bisection method, which converges in a finite number of steps.<\/p>\n<p>\u00a0<\/p>\n<p>The routine\u00a0<a href=\"https:\/\/www.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02apf.html\">nagf_correg_corrmat_target<\/a>\u00a0(<a href=\"https:\/\/www.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02apf.html\">g02ap<\/a>) generalizes the shrinking idea and allows us to supply our own target matrix. The target matrix, \\(T\\), is defined by specifying a matrix of weights, \\(H\\), with \\(T = H \u2218 G\\). 
We then find a solution of the form<\/p>\n<div class=\"MathJax_Display\">\u00a0<\/div>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n<math xmlns=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" display=\"block\">\n  <mi>&#x03B1;<!-- \u03b1 --><\/mi>\n  <mi>T<\/mi>\n  <mo>+<\/mo>\n  <mo stretchy=\"false\">(<\/mo>\n  <mn>1<\/mn>\n  <mo>&#x2212;<!-- \u2212 --><\/mo>\n  <mi>&#x03B1;<!-- \u03b1 --><\/mi>\n  <mo stretchy=\"false\">)<\/mo>\n  <mi>G<\/mi>\n  <mo>,<\/mo>\n<\/math>\n\n\n<div class=\"container content-area-default \">\n    <div class=\"row justify-content--center\">\n        <div class=\"col-12 col-md-12 col-lg-10 col-xl-8\">\n            <p>\u00a0<\/p>\n<p>We compute \\(\\alpha\\) as before. A bound on the smallest eigenvalue can also be specified. Specifying a value of 1 in \\(H\\) essentially fixes an element in \\(G\\) so that it is unchanged in \\(X\\).<\/p>\n<p>For example, it is sometimes required to fix two diagonal blocks, so we could choose \\(H\\) to be<\/p>\n<p>\u00a0<\/p>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n<math xmlns=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" display=\"block\">\n  <mi>H<\/mi>\n  <mo>=<\/mo>\n  <mrow>\n    <mo>(<\/mo>\n    <mtable rowspacing=\"4pt\" columnspacing=\"1em\">\n      <mtr>\n        <mtd>\n          <mrow>\n            <mo>[<\/mo>\n            <mtable rowspacing=\"4pt\" columnspacing=\"1em\">\n              <mtr>\n                <mtd>\n                  <mn>1<\/mn>\n                <\/mtd>\n                <mtd \/>\n                <mtd>\n                  <mo>&#x2026;<!-- \u2026 --><\/mo>\n                <\/mtd>\n                <mtd \/>\n                <mtd>\n                  <mn>1<\/mn>\n                <\/mtd>\n              <\/mtr>\n              <mtr>\n                <mtd>\n                  <mo>&#x22EE;<!-- \u22ee --><\/mo>\n                <\/mtd>\n                <mtd \/>\n                <mtd>\n                  <mo>&#x22F1;<!-- \u22f1 --><\/mo>\n                <\/mtd>\n                <mtd \/>\n                
<mtd>\n                  <mo>&#x22EE;<!-- \u22ee --><\/mo>\n                <\/mtd>\n              <\/mtr>\n              <mtr>\n                <mtd>\n                  <mn>1<\/mn>\n                <\/mtd>\n                <mtd \/>\n                <mtd>\n                  <mo>&#x2026;<!-- \u2026 --><\/mo>\n                <\/mtd>\n                <mtd \/>\n                <mtd>\n                  <mn>1<\/mn>\n                <\/mtd>\n              <\/mtr>\n            <\/mtable>\n            <mo>]<\/mo>\n          <\/mrow>\n        <\/mtd>\n        <mtd>\n          <mn>0<\/mn>\n        <\/mtd>\n      <\/mtr>\n      <mtr>\n        <mtd>\n          <mn>0<\/mn>\n        <\/mtd>\n        <mtd>\n          <mrow>\n            <mo>[<\/mo>\n            <mtable rowspacing=\"4pt\" columnspacing=\"1em\">\n              <mtr>\n                <mtd>\n                  <mn>1<\/mn>\n                <\/mtd>\n                <mtd \/>\n                <mtd>\n                  <mo>&#x2026;<!-- \u2026 --><\/mo>\n                <\/mtd>\n                <mtd \/>\n                <mtd>\n                  <mn>1<\/mn>\n                <\/mtd>\n              <\/mtr>\n              <mtr>\n                <mtd>\n                  <mo>&#x22EE;<!-- \u22ee --><\/mo>\n                <\/mtd>\n                <mtd \/>\n                <mtd>\n                  <mo>&#x22F1;<!-- \u22f1 --><\/mo>\n                <\/mtd>\n                <mtd \/>\n                <mtd>\n                  <mo>&#x22EE;<!-- \u22ee --><\/mo>\n                <\/mtd>\n              <\/mtr>\n              <mtr>\n                <mtd>\n                  <mn>1<\/mn>\n                <\/mtd>\n                <mtd \/>\n                <mtd>\n                  <mo>&#x2026;<!-- \u2026 --><\/mo>\n                <\/mtd>\n                <mtd \/>\n                <mtd>\n                  <mn>1<\/mn>\n                <\/mtd>\n              <\/mtr>\n            <\/mtable>\n            <mo>]<\/mo>\n          <\/mrow>\n        
<\/mtd>\n      <\/mtr>\n    <\/mtable>\n    <mo>)<\/mo>\n  <\/mrow>\n  <mo>.<\/mo>\n<\/math>\n\n\n<div class=\"container content-area-default \">\n    <div class=\"row justify-content--center\">\n        <div class=\"col-12 col-md-12 col-lg-10 col-xl-8\">\n            <p>\u00a0<\/p>\n<p>The algorithm then finds the smallest \\(\\alpha\\) that gives a positive semidefinite matrix of the following form.<\/p>\n<p>\u00a0<\/p>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n<math xmlns=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" display=\"block\">\n  <mi>&#x03B1;<!-- \u03b1 --><\/mi>\n  <mrow>\n    <mo>(<\/mo>\n    <mtable rowspacing=\"4pt\" columnspacing=\"1em\">\n      <mtr>\n        <mtd>\n          <msub>\n            <mi>G<\/mi>\n            <mrow class=\"MJX-TeXAtom-ORD\">\n              <mn>11<\/mn>\n            <\/mrow>\n          <\/msub>\n        <\/mtd>\n        <mtd>\n          <mn>0<\/mn>\n        <\/mtd>\n      <\/mtr>\n      <mtr>\n        <mtd>\n          <mn>0<\/mn>\n        <\/mtd>\n        <mtd>\n          <msub>\n            <mi>G<\/mi>\n            <mrow class=\"MJX-TeXAtom-ORD\">\n              <mn>22<\/mn>\n            <\/mrow>\n          <\/msub>\n        <\/mtd>\n      <\/mtr>\n    <\/mtable>\n    <mo>)<\/mo>\n  <\/mrow>\n  <mo>+<\/mo>\n  <mo stretchy=\"false\">(<\/mo>\n  <mn>1<\/mn>\n  <mo>&#x2212;<!-- \u2212 --><\/mo>\n  <mi>&#x03B1;<!-- \u03b1 --><\/mi>\n  <mo stretchy=\"false\">)<\/mo>\n  <mi>G<\/mi>\n  <mo>.<\/mo>\n<\/math>\n\n\n<div class=\"container content-area-default \">\n    <div class=\"row justify-content--center\">\n        <div class=\"col-12 col-md-12 col-lg-10 col-xl-8\">\n            <div class=\"paragraph--color--transparent paragraph--alignment--left paragraph paragraph--type--text paragraph--view-mode--default\">\n<div class=\"field field--name-field-paragraph-text field--type-text-long field--label-hidden field--item\">\n<div class=\"tex2jax_process\">\n<p>\u00a0<\/p>\n<p>The shrinking algorithms are characterized by their speed and 
the potentially large distance between the input and the output. Alternating projections with Anderson acceleration is another algorithm we employ to compute fixed block problems, and its speed and nearness characteristics are the reverse of those of shrinking.<\/p>\n<p>The input matrix is repeatedly, and alternately, projected onto the nearest matrix in the sets of positive semidefinite matrices and matrices with entries we wish to preserve, including the unit diagonal. Whilst there is no theoretical guarantee of convergence, in practice the algorithm will find the nearest correlation matrix in the intersection of these two sets. In the routine\u00a0<a href=\"https:\/\/www.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02asf.html\">nagf_correg_corrmat_fixed<\/a>\u00a0(<a href=\"https:\/\/www.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02asf.html\">g02as<\/a>) we employ the method of Higham and Strabi\u0107 [<a href=\"https:\/\/www.nag.com\/content\/nearest-correlation-matrix-0#HighamStrabic\">5<\/a>], which computes the nearest correlation matrix in the Frobenius norm while fixing arbitrary elements, and optionally setting a minimum eigenvalue.<\/p>\n<p>\u00a0<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"paragraph--color--transparent paragraph--alignment--left paragraph paragraph--type--text paragraph--view-mode--default\">\n<div class=\"field field--name-field-paragraph-text field--type-text-long field--label-hidden field--item\">\n<div class=\"tex2jax_process\">\n<h2>Choosing a Nearest Correlation Matrix Routine<\/h2>\n<p>When choosing a routine, the trade-off is between computation time and the distance of the solution from the original matrix. 
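Before comparing the routines, the shrinking idea used by g02an and g02ap can be sketched as a simple bisection on α. This is an illustrative NumPy sketch with our own helper name, `shrink_alpha`; the library routines implement a more careful version with general targets and eigenvalue bounds.

```python
import numpy as np

def shrink_alpha(G, T, tol=1e-10):
    """Smallest alpha in [0,1] such that alpha*T + (1-alpha)*G is
    positive semidefinite, found by bisection. T is assumed to be
    positive semidefinite, so alpha = 1 is always feasible."""
    def is_psd(a):
        return np.linalg.eigvalsh(a * T + (1.0 - a) * G)[0] >= -1e-12
    lo, hi = 0.0, 1.0
    if is_psd(lo):
        return 0.0                     # G is already a correlation matrix
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if is_psd(mid) else (mid, hi)
    return hi

# Indefinite approximate correlation matrix, identity target
G = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
alpha = shrink_alpha(G, np.eye(3))
X = alpha * np.eye(3) + (1 - alpha) * G
```

With the identity target the result automatically keeps the unit diagonal, since both G and T have unit diagonals; the single parameter α is all that the bisection has to determine.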
The Newton algorithms (<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02aaf.html\" target=\"_blank\" rel=\"noopener\">g02aa<\/a>,\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02abf.html\" target=\"_blank\" rel=\"noopener\">g02ab<\/a>,\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02ajf.html\" target=\"_blank\" rel=\"noopener\">g02aj<\/a>\u00a0and\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02akf.html\" target=\"_blank\" rel=\"noopener\">g02ak<\/a>) and the alternating projection algorithm (<a href=\"https:\/\/www.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02asf.html\">g02as<\/a>) will always find the nearest solution to the problem they are solving, recalling that weighting only influences, rather than fixes, elements of the input. The shrinking routines (<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02anf.html\" target=\"_blank\" rel=\"noopener\">g02an<\/a>\u00a0and\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02apf.html\" target=\"_blank\" rel=\"noopener\">g02ap<\/a>) will find a result further away but will be much quicker.<\/p>\n<p>For the basic problem,\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02aaf.html\" target=\"_blank\" rel=\"noopener\">g02aa<\/a>\u00a0will always find the nearest matrix. 
Using\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02apf.html\" target=\"_blank\" rel=\"noopener\">g02ap<\/a>, with an identity matrix as the target, will produce a matrix further away than this, which is understandable given the form of the solution, but with a shorter computation time.<\/p>\n<p>If you wish to solve the fixed block problem, the specialist routine\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02anf.html\" target=\"_blank\" rel=\"noopener\">g02an<\/a>\u00a0will be the fastest. Of the Newton algorithms,\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02abf.html\" target=\"_blank\" rel=\"noopener\">g02ab<\/a>\u00a0will find a solution in a reasonable time but, as we weight whole rows and columns, some elements will be overemphasized outside the correct block. A more accurate weighting can be achieved with\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02ajf.html\" target=\"_blank\" rel=\"noopener\">g02aj<\/a>, and a close solution will be found. However, the routine will take considerably more time. The closest solution will be found with\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02asf.html\" target=\"_blank\" rel=\"noopener\">g02as<\/a>. 
Whilst generally slower than the Newton algorithms, it can still outperform\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02ajf.html\" target=\"_blank\" rel=\"noopener\">g02aj<\/a>\u00a0for speed.<\/p>\n<p>For fixing two diagonal blocks, or for arbitrary fixing and weighting, the choice is between\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02ajf.html\" target=\"_blank\" rel=\"noopener\">g02aj<\/a>,\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02apf.html\" target=\"_blank\" rel=\"noopener\">g02ap<\/a>\u00a0and\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02asf.html\" target=\"_blank\" rel=\"noopener\">g02as<\/a>\u00a0with the same speed and nearness trade-off. The alternating projections of\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02asf.html\" target=\"_blank\" rel=\"noopener\">g02as<\/a>\u00a0will fix elements and find the nearest solution. Although the shrinking algorithm fixes elements and is quick, the target matrix is required to be positive definite and to form part of a valid correlation matrix, which can be a limitation. Since\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02ajf.html\" target=\"_blank\" rel=\"noopener\">g02aj<\/a>\u00a0only weights elements in the input, it may offer some flexibility here if the blocks you wish to preserve are close to, but fail to be, positive semidefinite.<\/p>\n<p>If we seek to fix the minimum eigenvalue, and no weighting is required,\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02abf.html\" target=\"_blank\" rel=\"noopener\">g02ab<\/a>\u00a0or\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02apf.html\" target=\"_blank\" rel=\"noopener\">g02ap<\/a>\u00a0can be used, the latter with an identity target, as for the basic problem. 
If weighting or fixing is also required, then similar trade-offs apply as for the problems described above. However, in combination with weighting,\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02apf.html\" target=\"_blank\" rel=\"noopener\">g02ap<\/a>\u00a0can return a large value of\u00a0\\(\\alpha\\). This means that much of the input matrix has been lost and a result far from it is returned.<\/p>\n<p>To constrain the rank of the output correlation matrix use\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02akf.html\">g02ak<\/a>.<\/p>\n<p>The tolerance used in all of the algorithms, which defines convergence, can affect the number of iterations undertaken and thus the speed and nearness. 
We recommend some experimentation using data that represents your typical problem. The routine\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02ajf.html\" target=\"_blank\" rel=\"noopener\">g02aj<\/a>\u00a0can be sensitive to the weights used, so different values should be tried to tune both the nearness and the computation time.<\/p>\n<h3>The Nearest Correlation Matrix with Factor Structure<\/h3>\n<p>A correlation matrix with factor structure is one where the off-diagonal elements agree with some matrix of rank \\(k\\). That is, the correlation matrix \\(C\\) can be written as<\/p>\n<p>\u00a0<\/p>\n<\/div>\n<\/div>\n<\/div>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n<math xmlns=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" display=\"block\">\n  <mi>C<\/mi>\n  <mo>=<\/mo>\n  <mstyle displaystyle=\"false\" scriptlevel=\"0\">\n    <mtext>diag<\/mtext>\n  <\/mstyle>\n  <mo stretchy=\"false\">(<\/mo>\n  <mi>I<\/mi>\n  <mo>&#x2212;<!-- \u2212 --><\/mo>\n  <mi>X<\/mi>\n  <msup>\n    <mi>X<\/mi>\n    <mi>T<\/mi>\n  <\/msup>\n  <mo stretchy=\"false\">)<\/mo>\n  <mo>+<\/mo>\n  <mi>X<\/mi>\n  <msup>\n    <mi>X<\/mi>\n    <mi>T<\/mi>\n  <\/msup>\n  <mo>,<\/mo>\n<\/math>\n\n\n<div class=\"container content-area-default \">\n    <div class=\"row justify-content--center\">\n        <div class=\"col-12 col-md-12 col-lg-10 col-xl-8\">\n            <p>\u00a0<\/p>\n<p>where \\(X\\) here is an \\(n\\) \u00d7 \\(k\\) matrix, often referred to as the factor loading matrix, and \\(k\\) is generally much smaller than \\(n\\).<\/p>\n<p>These correlation matrices arise in factor models of asset returns, collateralized debt obligations and multivariate time series.<\/p>\n<p>The routine\u00a0<a href=\"http:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02aef.html\" target=\"_blank\" rel=\"noopener\">g02ae<\/a> computes the factor loading matrix \\(X\\) that gives the nearest correlation matrix to an approximate one, \\(G\\), by 
finding the minimum of<\/p>\n<p>\u00a0<\/p>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n<math xmlns=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" display=\"block\">\n  <mo fence=\"false\" stretchy=\"false\">&#x2016;<!-- \u2016 --><\/mo>\n  <mi>G<\/mi>\n  <mo>&#x2212;<!-- \u2212 --><\/mo>\n  <mi>X<\/mi>\n  <msup>\n    <mi>X<\/mi>\n    <mi>T<\/mi>\n  <\/msup>\n  <mo>+<\/mo>\n  <mstyle displaystyle=\"false\" scriptlevel=\"0\">\n    <mtext>diag<\/mtext>\n  <\/mstyle>\n  <mo stretchy=\"false\">(<\/mo>\n  <mi>X<\/mi>\n  <msup>\n    <mi>X<\/mi>\n    <mi>T<\/mi>\n  <\/msup>\n  <mo>&#x2212;<!-- \u2212 --><\/mo>\n  <mi>I<\/mi>\n  <mo stretchy=\"false\">)<\/mo>\n  <msub>\n    <mo fence=\"false\" stretchy=\"false\">&#x2016;<!-- \u2016 --><\/mo>\n    <mi>F<\/mi>\n  <\/msub>\n  <mo>.<\/mo>\n<\/math>\n\n\n<div class=\"container content-area-default \">\n    <div class=\"row justify-content--center\">\n        <div class=\"col-12 col-md-12 col-lg-10 col-xl-8\">\n            <p>\u00a0<\/p>\n<p>We have implemented the spectral projected gradient method of Birgin, Martinez and Raydan [1] as suggested by Borsdorf, Higham and Raydan [3].<\/p>\n<h3>Table of Functionality<\/h3>\n<p>This table lists all our nearest correlation matrix routines and indicates the measure of nearness and what weighting and fixing can be used in each.<\/p>\n<table style=\"height: 339px; width: 99.4576%; border-collapse: collapse; margin-left: auto; margin-right: auto;\" border=\"1\">\n<tbody>\n<tr style=\"height: 123px;\">\n<td style=\"width: 11.1111%; height: 123px;\">Routine<\/td>\n<td style=\"width: 11.1111%; height: 123px;\">Nearness measured in the Frobenius Norm<\/td>\n<td style=\"width: 11.1111%; height: 123px;\">Shrinking Algorithm<\/td>\n<td style=\"width: 11.1111%; height: 123px;\">Nearest Matrix with Factor Structure<\/td>\n<td style=\"width: 11.1111%; height: 123px;\">Elements can be weighted<\/td>\n<td style=\"width: 11.1111%; height: 123px;\">Elements can be fixed\u00a0<\/td>\n<td style=\"width: 
11.1111%; height: 123px;\">Minimum eigenvalue can be requested<\/td>\n<td style=\"width: 11.1111%; height: 123px;\">Maximum rank can be requested<\/td>\n<\/tr>\n<tr style=\"height: 27px;\">\n<td style=\"width: 11.1111%; height: 27px;\">g02aa<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<\/tr>\n<tr style=\"height: 27px;\">\n<td style=\"width: 11.1111%; height: 27px;\">g02ab<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<\/tr>\n<tr style=\"height: 27px;\">\n<td style=\"width: 11.1111%; height: 27px;\">g02ae<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<\/tr>\n<tr style=\"height: 27px;\">\n<td style=\"width: 11.1111%; height: 27px;\">g02aj<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 
11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<\/tr>\n<tr style=\"height: 27px;\">\n<td style=\"width: 11.1111%; height: 27px;\">g02ak<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<\/tr>\n<tr style=\"height: 27px;\">\n<td style=\"width: 11.1111%; height: 27px;\">g02an<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<\/tr>\n<tr style=\"height: 27px;\">\n<td style=\"width: 11.1111%; height: 27px;\">g02ap<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<\/tr>\n<tr style=\"height: 27px;\">\n<td style=\"width: 11.1111%; height: 27px;\">g02as<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<td style=\"width: 11.1111%; 
height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">X<\/td>\n<td style=\"width: 11.1111%; height: 27px;\">\u00a0<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>\u00a0<\/p>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n<div class=\"gbc-title-banner tac tac-lg tac-xl\" style='border-radius: 0px; '>\n    <div class=\"container\" style='border-radius: 0px; '>\n        <div class=\"row justify-content--center\" >\n            <div class=\"col-12\"  >\n                <div class=\"wrap pv-4 \" style=\"0pxbackground-color: \">\n                                <div class=\"col-12 col-md-10 col-lg-8 col-xl-6  banner-content\"  >\n    \n                                             <h1>Try The <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG Library Now<\/h1>\n                    \n                    <div class=\"mt-1 mb-1 content\"><\/div>\n\n                    \n                    <a href='https:\/\/support.nag.com\/content\/getting-started-nag-library' style='background-color: #ff7d21ff; color: #ffffffff; border-radius: 30px; font-weight: 600; ' class='btn mr-1  ' >Trial Now <i class='fas fa-angle-right'><\/i><\/a>                <\/div>\n                <\/div>\n            <\/div>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n\n<div class=\"gbc-title-banner ta ta-lg ta-xl\" style='background-color: #082d48ff; color: #ffffffff; border-radius: 0px; '>\n    <div class=\"container\" style='border-radius: 0px; '>\n        <div class=\"row justify-content--center\" style='color: #ffffffff;'>\n            <div class=\"col-12\"  >\n                <div class=\"wrap pv-4 \" style=\"0px\">\n                                <div class=\"col-12 col-md-12 col-lg-10 col-xl-8  banner-content\"  >\n    \n                                             <h3 class=\"field field--name-field-paragraph-title field--type-string field--label-hidden field--item\">References<\/h3>\n<div class=\"field field--name-field-paragraph-text 
field--type-text-long field--label-hidden field--item\">\n<div class=\"tex2jax_process\">\n<p>\u00a0<\/p>\n<p>[1]\u00a0\u00a0\u00a0<a id=\"BirginMartinezRaydan\"><\/a>Birgin E G, Mart\u00ednez J M and Raydan M (2001) Algorithm 813: SPG\u2013software for convex-constrained optimization\u00a0<em>ACM Trans. Math. Software<\/em>\u00a027 340\u2013349<\/p>\n<p>[2]\u00a0\u00a0\u00a0<a id=\"BorsdorfHigham\"><\/a>Borsdorf R and Higham N J (2010) A preconditioned Newton algorithm for the nearest correlation matrix\u00a0<em>IMA Journal of Numerical Analysis<\/em>\u00a030(1) 94\u2013107<\/p>\n<p>[3]\u00a0\u00a0\u00a0<a id=\"BorsdorfHighamRaydan\"><\/a>Borsdorf R, Higham N J and Raydan M (2010) Computing a nearest correlation matrix with factor structure\u00a0<em>SIAM J. Matrix Anal. Appl.<\/em>\u00a031(5) 2603\u20132622<\/p>\n<p>[4]\u00a0\u00a0\u00a0<a id=\"GaoSun\"><\/a>Gao Y and Sun D (2010) A majorized penalty approach for calibrating rank constrained correlation matrix problems\u00a0<em>Technical report<\/em>\u00a0Department of Mathematics, National University of Singapore<\/p>\n<p>[5]\u00a0\u00a0\u00a0<a id=\"HighamStrabic\"><\/a>Higham N J and Strabi\u0107 N (2016) Anderson acceleration of the alternating projections method for computing the nearest correlation matrix\u00a0<em>Numer. Algor.<\/em>\u00a072 1021\u20131042<\/p>\n<p>[6]\u00a0\u00a0\u00a0<a id=\"HighamStrabicSego\"><\/a>Higham N J, Strabi\u0107 N and \u0160ego V (2014) Restoring definiteness via shrinking, with an application to correlation matrices with a fixed block\u00a0<em>MIMS EPrint 2014.54\u00a0<\/em>Manchester Institute for Mathematical Sciences, The University of Manchester, UK<\/p>\n<p>[7]\u00a0\u00a0\u00a0<a id=\"JiangSunToh\"><\/a>Jiang K, Sun D and Toh K-C (2012) An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP\u00a0<em>SIAM J. 
Optim.<\/em>\u00a0<strong>22(3)<\/strong>\u00a01042\u20131064<\/p>\n<p>[8]\u00a0\u00a0\u00a0<a id=\"QiSun\"><\/a>Qi H and Sun D (2006) A quadratically convergent Newton method for computing the nearest correlation matrix\u00a0<em>SIAM J. Matrix Anal. Appl.<\/em>\u00a0<strong>29(2)<\/strong>\u00a0360\u2013385<\/p>\n<\/div>\n<\/div>\n                    \n                    <div class=\"mt-1 mb-1 content\"><\/div>\n\n                    \n                                    <\/div>\n                <\/div>\n            <\/div>\n        <\/div>\n    <\/div>\n<\/div>\n","protected":false},"author":3,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"content-type":"","footnotes":""},"class_list":["post-2604","page","type-page","status-publish","hentry"]}
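
Editor's appendix: the Frobenius-norm nearest correlation matrix problem that routines such as g02aa solve can be illustrated outside the Library with the alternating projections method that underlies [5] (here without Anderson acceleration). This NumPy sketch is illustrative only; the function name and tolerances are our own, and it is not the NAG implementation:

```python
import numpy as np

def nearest_correlation(G, tol=1e-8, max_iter=500):
    """Nearest correlation matrix to a symmetric matrix G in the
    Frobenius norm, via alternating projections between the positive
    semidefinite cone and the unit-diagonal set, with Dykstra's
    correction applied to the semidefinite projection."""
    Y = np.asarray(G, dtype=float).copy()
    dS = np.zeros_like(Y)                   # Dykstra's correction term
    for _ in range(max_iter):
        R = Y - dS                          # apply the correction
        w, V = np.linalg.eigh(R)            # project onto the PSD cone
        X = (V * np.maximum(w, 0.0)) @ V.T  # zero the negative eigenvalues
        dS = X - R                          # update the correction
        Y_prev, Y = Y, X.copy()
        np.fill_diagonal(Y, 1.0)            # project onto unit diagonal
        if np.linalg.norm(Y - Y_prev, "fro") < tol:
            break
    return Y

# An indefinite approximate correlation matrix (one eigenvalue is
# 1 - sqrt(2) < 0), as used in the g02aa documentation's example.
G = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
C = nearest_correlation(G)
```

The result `C` is symmetric with unit diagonal and no negative eigenvalues; for production use the Library routines are preferable, since they use the faster Newton and accelerated methods of [2] and [5] rather than plain alternating projections.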