Deep Learning to Classify Big Data

 

The human brain's deep architecture has been an inspiration for deep nets.

The human brain’s deep architecture, as an example of a successful agent in classification tasks, has inspired the development of deep nets: artificial neural networks with more than one hidden layer between the input and output layers. This work has shown amazing results recently. In deep neural networks, each layer consists of several nodes, called neurons. Edges connect nodes in consecutive layers; each edge has a weight and each node has a bias value. These neural networks “learn” by applying training inputs to the input layer of the network and comparing the resulting output to the desired (and known) output. The weights and biases are then adjusted until the network produces the desired outcome.
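
As a rough illustration of this training loop, here is a minimal NumPy sketch of a network with a single hidden layer. The XOR toy data, layer sizes, learning rate, and sigmoid activation are illustrative assumptions, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs and desired (known) outputs: the XOR problem (an assumption for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each layer has a weight matrix (one weight per edge between consecutive
# layers) and one bias value per node.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output

lr = 0.5
for _ in range(10000):
    # Forward pass: apply the training inputs to the input layer.
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    out = sigmoid(h @ W2 + b2)        # network output

    # Compare the output to the desired output and back-propagate the error.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust weights and biases to reduce the error.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # outputs should move toward [0, 1, 1, 0] as training converges
```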

Neural networks have been around since the 1970s, but back then they were successful only on simple pattern problems. Tackling more complicated patterns meant adding layers to the system and training networks with a large number of layers. This is what is now called a deep net, and it was not possible until recently, when computer scientists found better ways to train deep nets, such as applying new types of activation functions to the nodes. Deep learning has shown amazing results, sometimes better than those of the human brain, in areas such as image processing, speech processing, social media analysis and biology. Since training a deep net is not possible with limited amounts of data, having enough data is a prerequisite for using deep learning techniques. Social media, which creates massive amounts of data every day, is a good source of big data and can therefore help in further developing deep learning and neural networks.
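
To give a concrete picture of what “adding layers” and newer activation functions look like in practice, below is a minimal PyTorch sketch of a network with several hidden layers and ReLU activations. The layer sizes, optimizer, and random placeholder data are assumptions made for illustration, not part of the article.

```python
import torch
import torch.nn as nn

# A "deep" net in the sense described above: several hidden layers between
# the input and output layers, using the ReLU activation function.
deep_net = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # hidden layer 1
    nn.Linear(256, 128), nn.ReLU(),   # hidden layer 2
    nn.Linear(128, 64),  nn.ReLU(),   # hidden layer 3
    nn.Linear(64, 10),                # output layer (e.g., 10 classes)
)

optimizer = torch.optim.Adam(deep_net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on placeholder data (random tensors stand in for a real dataset).
x = torch.randn(32, 784)              # a batch of 32 example inputs
labels = torch.randint(0, 10, (32,))  # placeholder class labels

logits = deep_net(x)                  # forward pass
loss = loss_fn(logits, labels)        # compare to the desired labels
optimizer.zero_grad()
loss.backward()                       # back-propagate the error
optimizer.step()                      # adjust the weights and biases
```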

During the 2017 International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction and Behavior Representation in Modeling and Simulation (SBP-BRiMS 2017), I observed that several research groups are using this emerging method in the social computing field. These groups are applying deep learning techniques to different problems, including predicting links in social media, text analysis, predicting people’s feelings from pictures they share on social media, terrorism source prediction, news propagation modeling, and predicting people’s locations based on the information they share. The conference made it clear that deep learning is a new and promising method in social computing research, but we must remember that working with big data and training a deep net require proper hardware, such as powerful GPUs. These factors still restrict the use of deep learning, but the future is promising for this growing field.


Sahar Tavakoli is a PhD student in computer science at the University of Central Florida. She attended the 2017 International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction and Behavior Representation in Modeling and Simulation (SBP-BRiMS) with support from the South Big Data Hub. The conference was held July 5 – 8 at George Washington University in Washington, DC.
