With a tensor language prototype, “speed and correctness do not have to compete … they can go together, hand-in-hand.”
High-performance computing is needed for an ever-growing number of tasks — such as image processing or various deep learning applications on neural nets — where one must plow through immense piles of data reasonably quickly; otherwise, the job could take an impractically long time. It’s widely believed that, in carrying out operations of this sort, there are unavoidable trade-offs between speed and reliability. If speed is the top priority, according to this view, then reliability will likely suffer, and vice versa.
However, a team of researchers, based mainly at MIT, is calling that notion into question, claiming that one can, in fact, have it all. With the new programming language, which they’ve written specifically for high-performance computing, says Amanda Liu, a second-year PhD student at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), “speed and correctness do not have to compete. Instead, they can go together, hand-in-hand, in the programs we write.”
Liu — along with University of California at Berkeley postdoc Gilbert Louis Bernstein, MIT Associate Professor Adam Chlipala, and MIT Assistant Professor Jonathan Ragan-Kelley — described the potential of their recently developed creation, “A Tensor Language” (ATL), last month at the Principles of Programming Languages conference in Philadelphia.
“Everything in our language,” Liu says, “is aimed at producing either a single number or a tensor.” Tensors, in turn, are generalizations of vectors and matrices. Whereas vectors are one-dimensional objects (often represented by individual arrows) and matrices are familiar two-dimensional arrays of numbers, tensors are n-dimensional arrays, which could take the form of a 3x3x3 array, for instance, or something of even higher (or lower) dimensions.
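The dimensional hierarchy Liu describes — vectors, matrices, and higher-dimensional tensors — can be sketched with plain nested Python lists. This is only an illustration of the concept, not ATL code; the `shape` helper is a hypothetical name introduced here for the example.

```python
# Vectors, matrices, and tensors as nested lists (illustrative sketch only).
vector = [1.0, 2.0, 3.0]                                    # 1-D: a vector
matrix = [[1, 2], [3, 4]]                                   # 2-D: a matrix
tensor3 = [[[0] * 3 for _ in range(3)] for _ in range(3)]   # 3-D: a 3x3x3 tensor

def shape(t):
    """Return the dimensions of a regularly nested list."""
    dims = []
    while isinstance(t, list):
        dims.append(len(t))
        t = t[0]
    return tuple(dims)

print(shape(vector))   # (3,)
print(shape(matrix))   # (2, 2)
print(shape(tensor3))  # (3, 3, 3)
```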
The whole point of a computer algorithm or program is to carry out a particular computation. But there can be many different ways of writing that program — “a bewildering variety of different code realizations,” as Liu and her coauthors wrote in their soon-to-be published conference paper — some considerably speedier than others. The primary rationale behind ATL is this, she explains: “Given that high-performance computing is so resource-intensive, you want to be able to modify, or rewrite, programs into an optimal form in order to speed things up. One often starts with a program that is easiest to write, but that may not be the fastest way to run it, so that further adjustments are still needed.”
As an example, suppose an image is represented by a 100×100 array of numbers, each corresponding to a pixel, and you want to get an average value for these numbers. That could be done in a two-stage computation by first determining the average of each row and then averaging the resulting column of row averages. ATL has an associated toolkit — what computer scientists call a “framework” — that might show how this two-step process could be converted into a faster one-step process.
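The equivalence behind that averaging example can be shown in a few lines of ordinary Python. This is a sketch of the mathematical idea only, not ATL or its rewriting framework; a small 10×10 grid stands in for the article’s 100×100 image, and the variable names are invented for the illustration.

```python
# A small grid of "pixel" values standing in for an image.
grid = [[(r * 10 + c) / 99.0 for c in range(10)] for r in range(10)]

def mean(xs):
    return sum(xs) / len(xs)

# Two-stage computation: average each row, then average those row averages.
two_stage = mean([mean(row) for row in grid])

# One-stage computation: a single pass over every pixel.
one_stage = mean([x for row in grid for x in row])

# Because every row has the same length, the two results agree.
print(abs(two_stage - one_stage) < 1e-12)
```

This equality holds whenever the rows all have equal length — which is exactly the kind of fact a rewrite framework must establish before it may replace the two-step program with the faster one-step version.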
“We can guarantee that this optimization is correct by using something called a proof assistant,” Liu says. Toward this end, the team’s new language builds upon an existing language, Coq, which contains a proof assistant. The proof assistant, in turn, has the inherent capacity to prove its assertions in a mathematically rigorous fashion.
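To give a flavor of what a proof assistant checks — that two ways of computing a value agree for every possible input, not just the inputs one happens to test — here is a tiny example. It is shown in Lean for brevity and is not ATL or Coq code; Coq proofs have a similar shape, with a theorem statement followed by a machine-checked proof.

```lean
-- A machine-checked claim: for every natural number n,
-- computing n + n and computing 2 * n always give the same result.
theorem double_eq (n : Nat) : n + n = 2 * n := by
  omega
```

An optimization verified this way is guaranteed correct on all inputs, which is what lets ATL rewrite a program without risking a change in its meaning.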
Coq has another intrinsic feature that made it attractive to the MIT-based group: programs written in it, or adaptations of it, always terminate and cannot run forever on endless loops (as can happen with programs written in Java, for example). “We run a program to get a single answer — a number or a tensor,” Liu maintains. “A program that never terminates would be useless to us, but termination is something we get for free by making use of Coq.”
The ATL project combines two of the main research interests of Ragan-Kelley and Chlipala. Ragan-Kelley has long been concerned with the optimization of algorithms in the context of high-performance computing. Chlipala, meanwhile, has focused more on the formal (as in mathematically-based) verification of algorithmic optimizations. This represents their first collaboration. Bernstein and Liu were brought into the enterprise last year, and ATL is the result.
It now stands as the first, and so far the only, tensor language with formally verified optimizations. Liu cautions, however, that ATL is still just a prototype — albeit a promising one — that’s been tested on a number of small programs. “One of our main goals, looking ahead, is to improve the scalability of ATL, so that it can be used for the larger programs we see in the real world,” she says.
In the past, optimizations of these programs have typically been done by hand, on a much more ad hoc basis, which often involves trial and error, and sometimes a good deal of error. With ATL, Liu adds, “people will be able to follow a much more principled approach to rewriting these programs — and do so with greater ease and greater assurance of correctness.”
Reference: “Verified Tensor-Program Optimization Via High-Level Scheduling Rewrites” by Amanda Liu, Gilbert Louis Bernstein, Adam Chlipala and Jonathan Ragan-Kelley, 12 January 2022, Proceedings of the ACM on Programming Languages.
DOI: 10.1145/3498717