
How Quickly Do Algorithms Improve?

MIT scientists provide the first systematic and quantitative evidence that algorithms are one of the most important sources of improvement in computing.

MIT scientists show how fast algorithms are improving across a broad range of examples, demonstrating the critical importance of algorithms to advances in computing.

Algorithms are sort of like a parent to a computer. They tell the computer how to make sense of information so that it can, in turn, make something useful out of it.

The more efficient the algorithm, the less work the computer has to do. For all the technological progress in computing hardware, and the much-debated lifespan of Moore's Law, computer performance is only one side of the picture.

Behind the scenes, a second trend is happening: algorithms are being improved, so less computing power is needed. Algorithmic efficiency may get less of the spotlight, but you would definitely notice if your trusty search engine suddenly became one-tenth as fast, or if moving through big datasets felt like wading through sludge.

This led scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) to ask: how quickly do algorithms improve?

Existing data on this question were largely anecdotal, consisting of case studies of particular algorithms that were assumed to be representative of the broader scope. Faced with this dearth of evidence, the team set out to crunch data from 57 textbooks and more than 1,110 research papers to trace the history of when algorithms got better. Some of the papers directly reported how good new algorithms were; others had to be reconstructed by the authors using "pseudocode," shorthand descriptions of an algorithm that capture its basic details.

In total, the team examined 113 "algorithm families": sets of algorithms solving the same problem that computer science textbooks had highlighted as most important. For each of the 113, the team reconstructed its history, tracking each time a new algorithm was proposed for the problem and making special note of those that were more efficient. Spanning from the 1940s to the present, the team found an average of eight algorithms per family, of which a couple improved the family's efficiency. To share this assembled database of knowledge, the team also created Algorithm-Wiki.org.

The scientists charted how quickly these families improved, focusing on the most-analyzed feature of an algorithm: how fast it can be guaranteed to solve the problem (in computer speak, its "worst-case time complexity"). What emerged was not only enormous variability, but also important insights into how transformative algorithmic improvement has been for computer science.
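
To make that measure concrete, here is a minimal sketch (ours, not the paper's) using a classic textbook pairing: insertion sort's worst case grows as n squared, while merge sort's grows as n log n, so the guaranteed step counts diverge sharply as inputs grow.

```python
import math

# Minimal illustration of worst-case time complexity: the guaranteed
# upper bound on steps, measured on an algorithm's least favorable input.
# Insertion sort is O(n^2) in the worst case; merge sort is O(n log n).
for n in (1_000, 1_000_000, 1_000_000_000):
    quadratic = n ** 2               # worst-case comparisons, insertion sort
    linearithmic = n * math.log2(n)  # worst-case comparisons, merge sort
    print(f"n={n:>13,}: O(n^2) ~ {quadratic:.1e}, O(n log n) ~ {linearithmic:.1e}")
```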

For large computing problems, 43 percent of algorithm families saw year-on-year improvements that were equal to or larger than the much-touted gains from Moore's Law. In 14 percent of problems, the performance improvements from algorithms vastly outpaced those that came from improved hardware. The gains from algorithm improvement were particularly large for big-data problems, so the importance of those advances has grown in recent decades.
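
For context, here is some back-of-the-envelope arithmetic (ours, not a figure from the paper): Moore's Law's doubling of transistor counts roughly every two years works out to a compound hardware gain of about 41 percent per year, which is the kind of benchmark those year-on-year algorithmic improvements are measured against.

```python
# Back-of-the-envelope benchmark: Moore's Law doubling every two years
# implies a compound yearly gain of 2**(1/2) - 1, about 41 percent.
annual_gain = 2 ** (1 / 2) - 1
print(f"Implied yearly hardware gain: {annual_gain:.0%}")    # -> 41%
print(f"Implied gain over a decade:  {2 ** (10 / 2):.0f}x")  # -> 32x
```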

The single biggest improvement the authors observed came when an algorithm family transitioned from exponential to polynomial complexity. The amount of effort it takes to solve an exponential problem is like a person trying to guess a combination on a lock. If you have only a single 10-digit dial, the task is easy. With four dials, like a bicycle lock, it is hard enough that no one steals your bike, but still conceivable that you could try every combination. With 50 dials, it is almost impossible: it would take too many steps. Problems with exponential complexity are like that for computers: as they get bigger, they quickly outpace the computer's ability to handle them. Finding a polynomial algorithm often solves that, making it possible to tackle problems in a way that no amount of hardware improvement can, as the short sketch below illustrates.
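
A minimal sketch of the lock analogy (ours, with an illustrative n-squared cost standing in for "a polynomial algorithm"): brute-forcing n ten-position dials takes 10 to the power n tries, so the exponential count explodes while the polynomial count stays manageable.

```python
# Lock analogy: n dials with 10 positions each means 10**n combinations
# to brute-force (exponential). A polynomial algorithm, here n**2 as an
# illustrative stand-in, stays tractable as n grows.
for n in (1, 4, 50):
    exponential_steps = 10 ** n  # try every combination
    polynomial_steps = n ** 2    # hypothetical polynomial-time cost
    print(f"n={n:>2}: exponential {exponential_steps:.1e} steps, polynomial {polynomial_steps} steps")
```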

As rumbles about the end of Moore's Law grow increasingly loud in global conversations, the researchers say computing users will increasingly need to turn to areas like algorithms for performance improvements. The team says the findings confirm that, historically, the gains from algorithms have been enormous, so the potential is there. But if gains come from algorithms instead of hardware, they will look different: hardware improvement from Moore's Law happens smoothly over time, while algorithmic gains come in steps that are usually large but infrequent.

“This is the first paper to show how fast algorithms are improving across a broad range of examples,” says Neil Thompson, an MIT research scientist at CSAIL and the Sloan School of Management and senior author of the new paper. “Through our analysis, we were able to say how many more tasks could be done using the same amount of computing power after an algorithm improved. As problems increase to billions or trillions of data points, algorithmic improvement becomes substantially more important than hardware improvement. In an era where the environmental footprint of computing is increasingly worrisome, this is a way to improve businesses and other organizations without the downside.”

See also: “How Fast Do Algorithms Improve?” by Yash Sherry and Neil C. Thompson, 20 September 2021, Proceedings of the IEEE.
DOI: 10.1109/JPROC.2021.3107219

Thompson wrote the paper with Yash Sherry, a visiting student at MIT. The paper appears in the Proceedings of the IEEE. The work was funded by the Tides Foundation and the MIT Initiative on the Digital Economy.



