
10 most impressive Research Papers around Artificial Intelligence


Progress in AI research is propelling the technology ahead

Artificial Intelligence research advances are transforming technology as we know it. The AI research community is solving some of the most complex technology problems related to software and hardware infrastructure, theory and algorithms. Interestingly, the field of AI research has drawn acolytes from non-tech fields as well. Case in point: prolific Hollywood actor Kristen Stewart’s highly publicized paper on artificial intelligence, originally published at Cornell University library’s open access site, arXiv. Stewart co-authored the paper, titled “Bringing Impressionism to Life with Neural Style Transfer in Come Swim”, with David Shapiro and Adobe research engineer Bhautik Joshi.

Essentially, the paper discusses the style transfer techniques used in her short film Come Swim. Stewart’s detractors, however, dismissed it as another “high-level case study.”

Meanwhile, the community is awash with ground-breaking research papers around AI. Analytics India Magazine lists the most cited scientific papers around AI, machine intelligence, and computer vision, to give a perspective on the technology and its applications.

Most of these papers were chosen on the basis of citation count. Some also take into account a Highly Influential Citation count (HIC) and Citation Velocity (CV), where Citation Velocity is the weighted average number of citations per year over the last three years.


A Computational Approach to Edge Detection: Originally published in 1986 and authored by John Canny, this paper on the computational approach to edge detection has approximately 9,724 citations. The success of the approach rests on a comprehensive set of goals for the computation of edge points, goals precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution.

The paper also presents a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales, establishing that edge detector performance improves considerably as the operator point spread function is extended along the edge.
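Canny’s detector is easy to exercise through OpenCV’s implementation. Below is a minimal sketch on a synthetic image; the low/high hysteresis thresholds (100, 200) and the toy image are illustrative choices, not values from the paper.

```python
import numpy as np
import cv2  # pip install opencv-python

# Synthetic test image: a bright rectangle on a dark background.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (50, 50), (150, 150), 255, -1)

# Canny edge detection: Gaussian smoothing, gradient computation,
# non-maximum suppression, and hysteresis thresholding.
# The (100, 200) thresholds are illustrative, not from the paper.
edges = cv2.Canny(img, 100, 200)

print("edge pixels found:", int(np.count_nonzero(edges)))
```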

A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence: This research paper was co-written by John McCarthy, Marvin L. Minsky, Nathaniel Rochester, Claude E. Shannon, and published in the year 1955. This summer research proposal defined the field, and has another first to its name — it is the first paper to use the term Artificial Intelligence. The proposal invited researchers to the Dartmouth conference, which is widely considered the “birth of AI”.

A Threshold Selection Method from Gray-Level Histograms: The paper was authored by Nobuyuki Otsu and published in 1979. It has received 7,849 citations so far. In the paper, Otsu discusses a nonparametric and unsupervised method of automatic threshold selection for picture segmentation.

The paper shows how an optimal threshold is selected by the discriminant criterion to maximize the separability of the resultant classes in gray levels. The procedure utilizes only the zeroth- and first-order cumulative moments of the gray-level histogram, and extends easily to multi-threshold problems. The paper validates the method with several experimental results.
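The criterion is simple enough to sketch directly. The NumPy snippet below computes the Otsu threshold from the histogram’s zeroth- and first-order cumulative moments; the bimodal test data is an illustrative fabrication, not from the paper.

```python
import numpy as np

def otsu_threshold(image):
    """Return the gray level maximizing between-class variance.

    Only the zeroth-order (class probability) and first-order
    (class mean) cumulative moments of the histogram are needed.
    """
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                # normalized histogram
    omega = np.cumsum(p)                 # zeroth-order cumulative moment
    mu = np.cumsum(p * np.arange(256))   # first-order cumulative moment
    mu_total = mu[-1]

    # Between-class variance for every candidate threshold k:
    # sigma_b^2(k) = (mu_T*omega(k) - mu(k))^2 / (omega(k)*(1 - omega(k)))
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)   # ignore degenerate splits
    return int(np.argmax(sigma_b2))

# Bimodal test image: two noisy gray-level populations.
rng = np.random.default_rng(0)
image = np.concatenate([
    rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)
]).clip(0, 255).astype(np.uint8)
print("Otsu threshold:", otsu_threshold(image))
```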

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift: This 2015 article was co-written by Sergey Ioffe and Christian Szegedy. The paper has received 946 citations and reflects a HIC score of 56.


The paper discusses how training deep neural networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. The phenomenon is termed internal covariate shift, and the paper addresses it by normalizing layer inputs.

Applied to a state-of-the-art image classification model, batch normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
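A minimal NumPy sketch of the training-time transform is shown below; the learned scale and shift (gamma, beta) follow the paper, while the running statistics used at inference time are omitted for brevity.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization over a mini-batch.

    x: (batch, features) layer inputs. gamma/beta are the learned
    scale and shift. Inference-time running statistics are omitted.
    """
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize each feature
    return gamma * x_hat + beta            # restore representational power

rng = np.random.default_rng(0)
x = rng.normal(5.0, 3.0, size=(32, 4))     # shifted, scaled inputs
y = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))  # ~0 mean, ~1 std
```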

Deep Residual Learning for Image Recognition: The 2016 paper was co-authored by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. The paper has been cited 1,436 times, reflecting a HIC value of 137 and a CV of 582. The authors present a residual learning framework to ease the training of neural networks that are substantially deeper than those used previously.

The paper explicitly reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions, and provides comprehensive empirical evidence that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.
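The core idea fits in a few lines. The sketch below is a toy fully connected residual block, y = F(x) + x; real ResNet blocks use convolutions and batch normalization, so this is only an illustration of the shortcut formulation, not the paper’s architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """y = F(x, {W_i}) + x: the block learns a residual function
    relative to its input instead of an unreferenced mapping."""
    f = relu(x @ w1) @ w2    # residual branch F(x)
    return relu(f + x)       # identity shortcut, then nonlinearity

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
# Near-zero weights: the block then approximates the identity,
# which is what makes very deep stacks easy to optimize.
w1 = rng.normal(scale=0.01, size=(16, 16))
w2 = rng.normal(scale=0.01, size=(16, 16))
y = residual_block(x, w1, w2)
print(np.abs(y - relu(x)).max())  # small: output stays close to input
```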

Distinctive Image Features from Scale-Invariant Keypoints: This article was authored by David G. Lowe in 2004. The paper has received 21,528 citations and explores a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination.

The paper also describes an approach that leverages these features for image recognition, identifying objects among clutter and occlusion while achieving near real-time performance.
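OpenCV ships an implementation of SIFT, which makes the matching pipeline easy to demonstrate. The sketch below detects keypoints in a synthetic image and a rotated copy, then filters matches with Lowe’s ratio test; the 0.75 ratio and the toy image are illustrative choices.

```python
import numpy as np
import cv2  # pip install opencv-python (SIFT is in the main package since 4.4)

# Synthetic scene and a rotated view of it.
img = np.zeros((240, 240), dtype=np.uint8)
cv2.rectangle(img, (60, 60), (180, 140), 255, -1)
cv2.circle(img, (120, 180), 25, 128, -1)
M = cv2.getRotationMatrix2D((120, 120), 30, 1.0)   # 30-degree rotation
rotated = cv2.warpAffine(img, M, (240, 240))

# Detect scale-invariant keypoints and compute descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img, None)
kp2, des2 = sift.detectAndCompute(rotated, None)
if des1 is None or des2 is None:
    raise SystemExit("no keypoints found on this toy image")

# Match descriptors with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher()
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
print(f"{len(kp1)} and {len(kp2)} keypoints, {len(good)} reliable matches")
```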

Dropout: a simple way to prevent neural networks from overfitting: The 2014 paper was co-authored by Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. The paper has been cited around 2,084 times, with HIC and CV values of 142 and 536 respectively. Deep neural nets with a large number of parameters are very powerful machine learning systems, but overfitting is a serious problem in such networks.

The central premise of the paper is to randomly drop units (along with their connections) from the neural network during training, preventing units from co-adapting too much. This significantly reduces overfitting and furnishes major improvements over other regularization methods.
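A minimal sketch of the idea follows, using the common “inverted dropout” variant, which folds the paper’s test-time rescaling into training; the drop probability of 0.5 matches the paper’s typical choice for hidden units.

```python
import numpy as np

def dropout_forward(x, p_drop=0.5, training=True, rng=None):
    """Inverted dropout: zero units at random during training and
    rescale the survivors, so no change is needed at test time.
    (The paper instead scales weights at test time; the two are
    equivalent in expectation.)"""
    if not training:
        return x                              # test time: identity
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(x.shape) >= p_drop      # keep each unit w.p. 1 - p
    return x * mask / (1.0 - p_drop)          # rescale to keep expectation

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))                   # hidden-layer activations
print(dropout_forward(h, 0.5, training=True, rng=rng).round(2))
```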

Induction of decision trees: Authored by J. R. Quinlan, this scientific paper was originally published in 1986 and summarizes an approach to synthesizing decision trees that has been used in a variety of systems. The paper describes one such system, ID3, in detail. Additionally, the paper discusses a reported shortcoming of the basic algorithm and compares two methods of overcoming it. The author concludes with illustrations of current research directions.
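ID3 grows the tree by choosing, at each node, the attribute whose test yields the largest information gain. A minimal sketch of that criterion is below; the toy weather-style data is an illustrative fabrication, not an example from the paper.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label sequence, in bits."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(values, labels):
    """Entropy reduction from splitting on one attribute: the
    criterion ID3 uses to pick the attribute tested at each node."""
    total, n, remainder = entropy(labels), len(labels), 0.0
    for v in set(values):
        subset = [lab for x, lab in zip(values, labels) if x == v]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

# Toy data: an 'outlook' attribute vs. a play/no-play decision.
outlook = ["sunny", "sunny", "overcast", "rain", "rain", "overcast"]
play    = ["no",    "no",    "yes",      "yes",  "no",   "yes"]
print("gain(outlook) =", round(information_gain(outlook, play), 3))
```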


Large-Scale Video Classification with Convolutional Neural Networks: This 2014 paper was co-written by six authors: Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. The paper has been cited over 865 times, and reflects a HIC score of 24 and a CV of 239.

Convolutional Neural Networks (CNNs) have proven to be a powerful class of models for image recognition problems. These results encouraged the authors to provide an extensive empirical evaluation of CNNs on large-scale video classification, using a new dataset of 1 million YouTube videos belonging to 487 classes.

Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference: The paper, authored by Judea Pearl, was published in 1988. It presents a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty.

Pearl provides a coherent explication of probability as a language for reasoning with partial belief, and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic.
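To make the flavor of such reasoning concrete, here is a minimal sketch of inference by enumeration in a toy three-node Bayesian network; the structure and all probabilities are illustrative assumptions, not examples from the book.

```python
from itertools import product

# Toy network: Rain -> Sprinkler, and both -> WetGrass.
# All numbers below are illustrative, not from Pearl's book.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},    # P(S | R)
               False: {True: 0.40, False: 0.60}}
P_wet = {(True, True): 0.99, (True, False): 0.80,  # P(W=T | S, R)
         (False, True): 0.90, (False, False): 0.0}

def joint(r, s, w):
    """Chain-rule factorization the network encodes:
    P(R, S, W) = P(R) * P(S | R) * P(W | S, R)."""
    pw = P_wet[(s, r)]
    return P_rain[r] * P_sprinkler[r][s] * (pw if w else 1.0 - pw)

# Posterior by enumeration: P(Rain | WetGrass = True).
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print("P(Rain | WetGrass) =", round(num / den, 3))
```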

Amit Paul Chowdhury

With a background in engineering, Amit has assumed the mantle of content analyst at Analytics India Magazine. An audiophile most of the time, with a soul consumed by wanderlust, he strives ahead in the disruptive technology space. In another life, he would invest his time in comics, football, and movies.