{"id":1431,"date":"2021-09-11T10:40:00","date_gmt":"2021-09-11T10:40:00","guid":{"rendered":"https:\/\/nag.com\/?post_type=insights&#038;p=1018"},"modified":"2024-03-07T16:42:54","modified_gmt":"2024-03-07T16:42:54","slug":"achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library","status":"publish","type":"insights","link":"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/","title":{"rendered":"Achieve a 70x Speedup for the Adjoint of the Nearest Correlation Matrix With the <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG\u00a0Library"},"content":{"rendered":"<div class=\"container content-area-default \">\n    <div class=\"row justify-content--center\">\n        <div class=\"col-12 col-md-10 col-lg-8 col-xl-6\">\n            <p>On the 6<sup>th<\/sup>\u00a0of October, <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG collaborator\u00a0<a href=\"https:\/\/nhigham.com\/\" target=\"_blank\" rel=\"noopener\">Professor Nick Higham<\/a>\u00a0of the University of Manchester and I presented a webinar entitled\u00a0<i>What&#8217;s New With the Nearest Correlation Matrix?<\/i>\u00a0You can watch a recording of the webinar\u00a0<a href=\"https:\/\/support.nag.com\/form\/watch-whats-next-nearest-correlation-matrix-webinar\">here<\/a>. In this blog post, I will talk a little more about some of the content we covered, with a focus on the new\u00a0<a href=\"https:\/\/nag.com\/nag-library\/\"><span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG Library<\/a>\u00a0symbolic adjoint for the nearest correlation matrix (NCM), introduced at Mark 27.3.<\/p>\n<p>First, we need to talk a little about correlations. 
Given two sequences of data, \(X_1\) and \(X_2\) (or\u00a0<i>random variables<\/i>\u00a0in statistical language), the correlation between them, \(\text{corr}(X_1,X_2)\), is a measure of the degree to which they are linearly related. Given \(n\) such random variables, a\u00a0<i>correlation matrix<\/i>\u00a0is the \(n\times n\) matrix with \( (i,j) \) entry given by \(\text{corr}(X_i,X_j)\). Anywhere that multivariate data is being collected, correlation matrices are likely to be encountered.<\/p>\n<p id=\"fig1\" align=\"center\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-1023\" src=\"https:\/\/nag.com\/wp-content\/uploads\/2021\/09\/corrmat.png\" alt=\"A correlation matrix with entries coloured according to their value\" width=\"729\" height=\"568\" \/><\/p>\n<p class=\"para-sm\">Figure\u00a01: A correlation matrix with entries coloured according to their value. Note the 1s on the diagonal.<\/p>\n<p>Correlation matrices have a number of useful mathematical properties, perhaps the most important of which is that they have non-negative eigenvalues (they are positive semi-definite). However, when data is collected from real-world measurements, it is possible to encounter matrices that are not positive semi-definite. For example, this can happen if data is collected asynchronously or inaccurately, or if there are missing observations.<\/p>\n<p>Given an invalid correlation matrix, \(X\), in order to proceed with subsequent analysis, it may be necessary to\u00a0<i>fix<\/i>\u00a0the matrix; otherwise, we may encounter numerical issues. For example, in financial mathematics, using indefinite matrices in place of true correlation matrices can yield negative variances. We can fix our invalid correlation matrix by finding the\u00a0<i>nearest correlation matrix<\/i>. 
Using the Frobenius norm, this is the matrix \(G\) that minimizes \[\|X-G\|_F.\]<\/p>\n<p>The\u00a0<a href=\"https:\/\/nag.com\/nag-library\"><span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG Library<\/a>\u00a0contains a number of state-of-the-art routines for solving various flavours of the nearest correlation matrix problem, developed in collaboration with Nick Higham and his research group. See Chapter\u00a0<a href=\"https:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/flhtml\/g02\/g02conts.html\">G02<\/a>\u00a0for further details.<\/p>\n<p>One of the most striking things to come out of the webinar was the number of applications of the NCM problem. What started out as an important problem in finance, involving correlations between assets in portfolio management, has now, due to the rise of data science, found additional applications in insurance, seismology, meteorology, measuring sea levels and even tourism.<\/p>\n<p>Irrespective of the application area, NCM problems are typically embedded within a longer workflow. Often, as part of such workflows,\u00a0<i>sensitivities<\/i>\u00a0are required; that is, we would like to know how a small (or rather infinitesimal) perturbation to a given input can affect the outputs of the workflow. How, then, can we compute sensitivities for NCM problems?<\/p>\n<p>Sensitivities are obtained by computing derivatives, and we can do this by using the\u00a0<a href=\"https:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/adhtml\/frontmatter\/manconts.html\"><span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG AD Library<\/a>, where AD stands for\u00a0<i>automatic differentiation<\/i>. The idea is to use a tool to differentiate the code line by line. 
One such tool is\u00a0<a href=\"https:\/\/nag.com\/automatic-differentiation\/\">dco\/c++<\/a>, which works seamlessly with the <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG AD Library.<\/p>\n<p>In the language of AD, differentiating our code produces the so-called\u00a0<i>algorithmic adjoint<\/i>, but if you are unfamiliar with AD, think of it simply as the derivative. Since Mark 27, we have had algorithmic adjoints available for\u00a0<a href=\"https:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/adhtml\/g02\/g02aa_ad_f.html\"><tt>g02aa<\/tt><\/a>\u00a0and\u00a0<a href=\"https:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/adhtml\/g02\/g02ab_ad_f.html\"><tt>g02ab<\/tt><\/a>, which compute nearest correlation matrices by solving an associated optimization problem using a Newton method. <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG&#8217;s AD capabilities have come about as a result of a long-standing collaboration with\u00a0Professor Uwe Naumann\u00a0and his team at RWTH Aachen University.<\/p>\n<p>It&#8217;s worth making a short digression at this point because there is an important question we haven&#8217;t considered: are NCM problems even differentiable? Technically, the answer is no! So, where does that leave us?<\/p>\n<p>When Qi and Sun<sup><a id=\"footnote-1-ref\" href=\"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#footnote-1\">[1]<\/a><\/sup>\u00a0developed the original version of the Newton method for the NCM (since enhanced by Borsdorf and Higham<sup><a id=\"footnote-2-ref\" href=\"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#footnote-2\">[2]<\/a><\/sup>), they considered this problem in great depth. 
The issue is that hidden away in any NCM algorithm is the function \(f(x) = \text{max}(x,0)\), which is used to shift eigenvalues away from the negative real line, but which is nondifferentiable at the origin, as shown in Figure\u00a0<a href=\"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#fig2\">2<\/a>.<\/p>\n<p id=\"fig2\" align=\"center\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1026 size-full\" src=\"https:\/\/nag.com\/wp-content\/uploads\/2021\/09\/graph5.png\" alt=\"Graph of the function f(x) = max(x,0) with dashed subderivative lines through the origin\" width=\"840\" height=\"390\" \/><\/p>\n<p class=\"para-sm\">Figure\u00a02: The function \(f(x) = \text{max}(x,0)\)<\/p>\n<p>Also shown in Figure\u00a02\u00a0is a series of red dashed lines touching \(f(x)\) at the origin. These are known as\u00a0<i>subderivatives<\/i>, and the complete set of all such lines forms what is known as the\u00a0<i>subdifferential<\/i>\u00a0of \(f\) at 0. In our case, the subdifferential is all those lines through the origin with gradient in \([0,1]\). There is a lot of mathematical theory behind subdifferentials, generalizing the idea of derivatives for nondifferentiable functions. Qi and Sun were able to use this theory to show that, provided we are consistent in our choice of which subderivative we use at the origin, the nondifferentiability in the NCM can be handled. Indeed, numerical experiments show that algorithmic adjoints of the <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG NCM solvers give the same results as other techniques, such as finite-difference approximations (also known as\u00a0<i>bumping<\/i>\u00a0or\u00a0<i>bump and reval<\/i>).<\/p>\n<p>The issue of nondifferentiability is an important one in AD. 
In particular, even mathematically smooth functions could potentially be implemented in a manner that results in nondifferentiable code (for example, by branching to different types of approximation depending on the input values). There&#8217;s lots more we could discuss here, but that&#8217;s for another post, as now I&#8217;d like to talk about how AD performs.<\/p>\n<p>To compute the algorithmic adjoint, the AD tool needs to perform a data flow reversal on the computer code, which requires both a forward and a reverse pass through the code. During the forward pass, a lot of data must be stored, which is then used during the reverse pass. For example, for a \(250\times 250\) NCM problem, the algorithmic adjoint of\u00a0<tt>g02aa<\/tt>\u00a0requires about 20GB of memory (whereas the NCM itself requires less than 3GB of memory).<\/p>\n<p id=\"fig3\" align=\"center\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1080 size-full\" src=\"https:\/\/nag.com\/wp-content\/uploads\/2021\/09\/graph1.png\" alt=\"Memory footprint of symbolic and algorithmic adjoints\" width=\"786\" height=\"546\" \/><\/p>\n<p class=\"para-sm\">Figure\u00a03: Memory footprint of adjoint modes for\u00a0<tt>g02aa<\/tt><\/p>\n<p>As problem sizes increase, the memory requirements of algorithmic adjoints can cause issues. However, for certain algorithms, another approach is available: the\u00a0<i>symbolic adjoint<\/i>. 
To compute the symbolic adjoint, we go back to the mathematics behind the algorithm and differentiate that, before reimplementing the differentiated algorithm.<\/p>\n<p id=\"fig4\" align=\"center\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-1082 size-full\" src=\"https:\/\/nag.com\/wp-content\/uploads\/2021\/09\/compute_time_graph.png\" alt=\"Compute time of symbolic and algorithmic adjoints\" width=\"836\" height=\"541\" \/><\/p>\n<p class=\"para-sm\">Figure\u00a04: Runtime of adjoint modes for\u00a0<tt>g02aa<\/tt><\/p>\n<p>In the <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG AD Library, a number of routines now have symbolic adjoints available, and for Mark 27.3, we have developed a symbolic adjoint of\u00a0<tt>g02aa<\/tt>. Figures\u00a0<a href=\"\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#fig3\">3<\/a>\u00a0and\u00a0<a href=\"\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#fig4\">4<\/a>\u00a0show how it performs compared with the algorithmic adjoint. 
For a\u00a0<span id=\"MathJax-Element-17-Frame\" class=\"MathJax\" style=\"box-sizing: border-box; display: inline; font-style: normal; font-weight: 400; line-height: normal; font-size: 16px; text-indent: 0px; text-align: left; text-transform: none; letter-spacing: normal; word-spacing: 0px; overflow-wrap: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0px; min-height: 0px; border: 0px; padding: 0px; margin: 0px; color: #232331; font-family: 'Nunito Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif; font-variant-ligatures: normal; font-variant-caps: normal; orphans: 2; widows: 2; -webkit-text-stroke-width: 0px; background-color: #ffffff; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial; position: relative;\" tabindex=\"0\" role=\"presentation\" data-mathml=\"&lt;math xmlns=&quot;http:\/\/www.w3.org\/1998\/Math\/MathML&quot;&gt;&lt;mn&gt;250&lt;\/mn&gt;&lt;mo&gt;&amp;#x00D7;&lt;\/mo&gt;&lt;mn&gt;250&lt;\/mn&gt;&lt;\/math&gt;\"><span id=\"MathJax-Span-113\" class=\"math\"><span id=\"MathJax-Span-114\" class=\"mrow\"><span id=\"MathJax-Span-115\" class=\"mn\">250<\/span><span id=\"MathJax-Span-116\" class=\"mo\">\u00d7<\/span><span id=\"MathJax-Span-117\" class=\"mn\">250<\/span><\/span><\/span><span class=\"MJX_Assistive_MathML\" role=\"presentation\"><math xmlns=\"http:\/\/www.w3.org\/1998\/Math\/MathML\"><mn>\u00a0<\/mn><mn><br \/><\/mn><\/math><\/span><\/span>NCM problem it is 70x faster and uses 2500x less memory. Such a dramatic performance improvement is not unusual for symbolic vs. algorithmic adjoints. Symbolic adjoints are not always available, but when they are they invariably significantly outperform their algorithmic counterparts. 
They do, however, require an exact solution of the problem to be valid (for example, full convergence of iterative algorithms), which is not always a given; in these cases, the algorithmic adjoint gives the more robust derivative.<\/p>\n<p>If you have existing code calling the algorithmic adjoint of\u00a0<tt>g02aa<\/tt>, then it is very easy to switch to the symbolic adjoint: you simply set the adjoint strategy in the\u00a0<tt>ad_handle<\/tt>\u00a0to symbolic:<br \/>\u00a0<tt>ad_handle.set_strategy(<span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>ag::ad::symbolic);<\/tt><\/p>\n<div class=\"field field--name-field-paragraph-text field--type-text-long field--label-hidden field--item\">\n<div class=\"tex2jax_process\">\n<p>Of course, if your NCM problem is embedded in a longer workflow, then you may have more work to do in order to compute sensitivities, but this is where the power and versatility of AD really become apparent. Through a combination of applying AD to your own code (using\u00a0<a href=\"https:\/\/nag.com\/automatic-differentiation\/\">dco\/c++<\/a>), calling routines from the <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG AD Library, and using symbolic adjoints when they are available, sensitivities can readily be traced through the entirety of your computation. See\u00a0<a href=\"https:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/adhtml\/genint\/adintro.html\">here<\/a>\u00a0for more details on using dco\/c++ with the <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG AD Library.<\/p>\n<p>The new symbolic adjoint for\u00a0<tt>g02aa<\/tt>\u00a0is now available; see\u00a0<a href=\"https:\/\/support.nag.com\/numeric\/nl\/nagdoc_latest\/adhtml\/g02\/g02aa_ad_f.html\">here<\/a>\u00a0for the documentation. 
Full product trials are available for the\u00a0<a href=\"\/nag-library\/\"><span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG Library<\/a> and dco\/c++.<\/p>\n<\/div>\n<\/div>\n        <\/div>\n    <\/div>\n<\/div>\n\n<div class=\"container content-area-default \">\n    <div class=\"row justify-content--center\">\n        <div class=\"col-12 col-md-10 col-lg-8 col-xl-6\">\n            <h2>References<\/h2>\n<p id=\"footnote-1\">[1] H. Qi and D. Sun (2006). A quadratically convergent Newton method for computing the nearest correlation matrix, <i>SIAM J. Matrix Anal. Appl.<\/i>, 28(2), pp. 360-385. <a href=\"#footnote-1-ref\">\u21a9<\/a><\/p>\n<p id=\"footnote-2\">[2] R. Borsdorf and N. J. Higham (2010). 
A preconditioned Newton algorithm for the nearest correlation matrix, <i>IMA J. Numer. Anal.<\/i>, 30(1), pp. 94-107. <a href=\"#footnote-2-ref\">\u21a9<\/a><\/p>\n        <\/div>\n    <\/div>\n<\/div>\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>On the 6th of October, <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG collaborator Professor Nick Higham of the University of Manchester and I presented a webinar entitled What&#8217;s New With the Nearest Correlation Matrix? You can watch a recording of the webinar here. In this blog post, I will talk a little more about some of the content we covered, with a focus on the new <span class=\"nag-n-override\" style=\"margin-left: 0 !important;\"><i>n<\/i><\/span>AG\u00ae Library symbolic adjoint for the nearest correlation matrix (NCM), introduced at Mark 27.3.<\/p>\n","protected":false},"author":4,"featured_media":1020,"parent":0,"menu_order":0,"template":"","meta":{"content-type":"","footnotes":""},"post-tag":[27,18,21],"class_list":["post-1431","insights","type-insights","status-publish","has-post-thumbnail","hentry"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.8 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Achieve a 70x Speedup for the Adjoint of the Nearest Correlation Matrix With the NAG\u00a0Library - nAG<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Achieve a 70x Speedup for the Adjoint of the Nearest Correlation Matrix With the NAG\u00a0Library - nAG\" \/>\n<meta property=\"og:description\" content=\"On the 6th of October, NAG collaborator 
Professor Nick Higham of the University of Manchester and I presented a webinar entitled What&#039;s New With the Nearest Correlation Matrix? You can watch a recording of the webinar here. In this blog post, I will talk a little more about some of the content we covered, with a focus on the new NAG\u00ae Library symbolic adjoint for the nearest correlation matrix (NCM), introduced at Mark 27.3.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/\" \/>\n<meta property=\"og:site_name\" content=\"nAG\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-07T16:42:54+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/nag.com\/wp-content\/uploads\/2023\/05\/algorithms-2.png\" \/>\n\t<meta property=\"og:image:width\" content=\"2000\" \/>\n\t<meta property=\"og:image:height\" content=\"1000\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@NAGTalk\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/\",\"url\":\"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/\",\"name\":\"Achieve a 70x Speedup for the Adjoint of the Nearest Correlation Matrix With the NAG\u00a0Library - nAG\",\"isPartOf\":{\"@id\":\"https:\/\/nag.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/nag.com\/wp-content\/uploads\/2023\/05\/algorithms-2.png\",\"datePublished\":\"2021-09-11T10:40:00+00:00\",\"dateModified\":\"2024-03-07T16:42:54+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#primaryimage\",\"url\":\"https:\/\/nag.com\/wp-content\/uploads\/2023\/05\/algorithms-2.png\",\"contentUrl\":\"https:\/\/nag.com\/wp-content\/uploads\/2023\/05\/algorithms-2.png\",\"width\":2000,\"height\":1000},{\"@type\":\"BreadcrumbList\",\"@id\":\"h
ttps:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/nag.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Insights\",\"item\":\"https:\/\/nag.com\/insights\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Achieve a 70x Speedup for the Adjoint of the Nearest Correlation Matrix With the NAG\u00a0Library\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/nag.com\/#website\",\"url\":\"https:\/\/nag.com\/\",\"name\":\"NAG\",\"description\":\"Robust, trusted numerical software and computational expertise.\",\"publisher\":{\"@id\":\"https:\/\/nag.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/nag.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/nag.com\/#organization\",\"name\":\"Numerical Algorithms Group\",\"alternateName\":\"NAG\",\"url\":\"https:\/\/nag.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/nag.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/nag.com\/wp-content\/uploads\/2023\/11\/NAG-Logo.png\",\"contentUrl\":\"https:\/\/nag.com\/wp-content\/uploads\/2023\/11\/NAG-Logo.png\",\"width\":1244,\"height\":397,\"caption\":\"Numerical Algorithms Group\"},\"image\":{\"@id\":\"https:\/\/nag.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/NAGTalk\",\"https:\/\/www.linkedin.com\/company\/nag\/\",\"https:\/\/www.youtube.com\/user\/NumericalAlgorithms\",\"https:\/\/github.com\/numericalalgorithmsgroup\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Achieve a 70x Speedup for the Adjoint of the Nearest Correlation Matrix With the NAG\u00a0Library - nAG","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/","og_locale":"en_US","og_type":"article","og_title":"Achieve a 70x Speedup for the Adjoint of the Nearest Correlation Matrix With the NAG\u00a0Library - nAG","og_description":"On the 6th of October, NAG collaborator Professor Nick Higham of the University of Manchester and I presented a webinar entitled What's New With the Nearest Correlation Matrix? You can watch a recording of the webinar here. In this blog post, I will talk a little more about some of the content we covered, with a focus on the new NAG\u00ae Library symbolic adjoint for the nearest correlation matrix (NCM), introduced at Mark 27.3.","og_url":"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/","og_site_name":"nAG","article_modified_time":"2024-03-07T16:42:54+00:00","og_image":[{"width":2000,"height":1000,"url":"https:\/\/nag.com\/wp-content\/uploads\/2023\/05\/algorithms-2.png","type":"image\/png"}],"twitter_card":"summary_large_image","twitter_site":"@NAGTalk","twitter_misc":{"Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/","url":"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/","name":"Achieve a 70x Speedup for the Adjoint of the Nearest Correlation Matrix With the NAG\u00a0Library - nAG","isPartOf":{"@id":"https:\/\/nag.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#primaryimage"},"image":{"@id":"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#primaryimage"},"thumbnailUrl":"https:\/\/nag.com\/wp-content\/uploads\/2023\/05\/algorithms-2.png","datePublished":"2021-09-11T10:40:00+00:00","dateModified":"2024-03-07T16:42:54+00:00","breadcrumb":{"@id":"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#primaryimage","url":"https:\/\/nag.com\/wp-content\/uploads\/2023\/05\/algorithms-2.png","contentUrl":"https:\/\/nag.com\/wp-content\/uploads\/2023\/05\/algorithms-2.png","width":2000,"height":1000},{"@type":"BreadcrumbList","@id":"https:\/\/nag.com\/insights\/achieve-a-70x-speedup-for-the-adjoint-of-the-nearest-correlation-matrix-with-the-nag-library\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"htt
ps:\/\/nag.com\/"},{"@type":"ListItem","position":2,"name":"Insights","item":"https:\/\/nag.com\/insights\/"},{"@type":"ListItem","position":3,"name":"Achieve a 70x Speedup for the Adjoint of the Nearest Correlation Matrix With the NAG\u00a0Library"}]},{"@type":"WebSite","@id":"https:\/\/nag.com\/#website","url":"https:\/\/nag.com\/","name":"NAG","description":"Robust, trusted numerical software and computational expertise.","publisher":{"@id":"https:\/\/nag.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/nag.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/nag.com\/#organization","name":"Numerical Algorithms Group","alternateName":"NAG","url":"https:\/\/nag.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/nag.com\/#\/schema\/logo\/image\/","url":"https:\/\/nag.com\/wp-content\/uploads\/2023\/11\/NAG-Logo.png","contentUrl":"https:\/\/nag.com\/wp-content\/uploads\/2023\/11\/NAG-Logo.png","width":1244,"height":397,"caption":"Numerical Algorithms 
Group"},"image":{"@id":"https:\/\/nag.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/NAGTalk","https:\/\/www.linkedin.com\/company\/nag\/","https:\/\/www.youtube.com\/user\/NumericalAlgorithms","https:\/\/github.com\/numericalalgorithmsgroup"]}]}},"_links":{"self":[{"href":"https:\/\/nag.com\/wp-json\/wp\/v2\/insights\/1431","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nag.com\/wp-json\/wp\/v2\/insights"}],"about":[{"href":"https:\/\/nag.com\/wp-json\/wp\/v2\/types\/insights"}],"author":[{"embeddable":true,"href":"https:\/\/nag.com\/wp-json\/wp\/v2\/users\/4"}],"version-history":[{"count":19,"href":"https:\/\/nag.com\/wp-json\/wp\/v2\/insights\/1431\/revisions"}],"predecessor-version":[{"id":5879,"href":"https:\/\/nag.com\/wp-json\/wp\/v2\/insights\/1431\/revisions\/5879"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/nag.com\/wp-json\/wp\/v2\/media\/1020"}],"wp:attachment":[{"href":"https:\/\/nag.com\/wp-json\/wp\/v2\/media?parent=1431"}],"wp:term":[{"taxonomy":"post-tag","embeddable":true,"href":"https:\/\/nag.com\/wp-json\/wp\/v2\/post-tag?post=1431"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}