Tuesday, 25 October 2016

Introduction To Artificial Neural Networks

Introduction

In recent years the scientific community has put continuous effort into building systems that can mimic human behavior. The effort to make computers process data the way the human brain does began in 1943, when McCulloch and Pitts designed the first artificial neural model, which came to be termed the Artificial Neural Network (ANN). An ANN is a computational system inspired by the structure, processing method and learning ability of a biological brain. In this blogpost I will be discussing the paper "Introduction to the Artificial Neural Networks" by Krenker et al. (2011). This paper is best suited for someone who is new to the world of ANNs. The authors discuss the structure of the artificial neuron, similarities between biological and artificial neurons, details of ANNs, types of ANNs, learning methodologies for ANNs, and the uses of ANNs.

The human brain is distinct because of its capacity for logical thought. The human body contains billions of neurons, which play an important role in carrying signals from external stimuli to the brain. These signals correspond to particular actions to be performed by the body; thus the brain and the central nervous system play a central role in human physiology. The idea and inspiration behind the ANN is this biological neural network. Biological neurons are the building blocks of the brain, and Image 1 shows the structure of one. A biological neuron receives impulses through its dendrites, and the soma processes the impulse signals; when a threshold is reached, the impulse charge is sent out via the axon and across a synapse. The neurons are interconnected in a complex way, forming the structure called the nervous system. The body runs various biological pathways, and neurons connect these pathways to the brain; for example, some neurons are connected to cells in the sensory organs of smell, hearing and vision, while others conduct signals to the motor systems and other organs of the body, such as those controlling body movement.

Image 1: Biological Neuron (Image derived from one © John Wiley and Sons Inc. 2000)

Biological neurons operate on a millisecond timescale, roughly six orders of magnitude slower than computers, which operate in nanoseconds. So there would be a huge advantage if we could make computers mimic biological neurons, and thereby the human brain.

The Artificial Neuron and its Function:

An artificial neuron is built on the same logic as the biological neuron. Information arrives through inputs that are weighted with specific weights (this step behaves like the biological dendrites); the artificial neuron then sums the weighted inputs and a bias and applies a transfer function to the result (behaving like the soma of the biological neuron). Finally, the artificial neuron passes the processed information out through its output (behaving like the axon of the biological neuron). Below is a schematic representation of an artificial neuron.

Image 2: Artificial Neuron (Krenker et al., 2011).

One should choose a transfer function based on the type of problem to be solved. To be precise, the transfer function is a mathematical function which defines the properties of the artificial neuron. Common transfer functions include the step function, the linear function and the non-linear (sigmoid) function.

Step function: This is a binary function with only two outputs, zero and one. If the input value meets a threshold, the neuron produces one output; otherwise it produces the other. This is also how the biological neuron's threshold works: a trigger from the outer environment induces an action potential. An artificial neuron that uses this type of binary step function is termed a perceptron. Perceptrons are usually used in the last layer of an ANN.

Linear function: With this transfer function the neuron performs a simple linear function over the sum of the weighted inputs and bias. It is usually deployed in the input layer of an ANN.

Non-linear or sigmoid function: This is a commonly used function; because it is smooth and easy to differentiate, it is helpful when calculating the weight updates in an ANN.
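To make these concrete, here is a minimal numpy sketch of the three transfer functions applied to a neuron's weighted sum. The input values, weights, bias and threshold below are arbitrary illustrations, not values from the paper:

import numpy as np

def step(x, threshold=0.0):
    # Binary output: fire 1 once the weighted sum reaches the threshold.
    return 1.0 if x >= threshold else 0.0

def linear(x, slope=1.0):
    # Output proportional to the weighted sum.
    return slope * x

def sigmoid(x):
    # Smooth squashing into (0, 1); easy to differentiate for weight updates.
    return 1.0 / (1.0 + np.exp(-x))

inputs = np.array([0.5, -0.2, 0.1])
weights = np.array([0.4, 0.3, 0.9])
bias = 0.5
net = np.dot(inputs, weights) + bias   # the weighted sum from Image 2
print(step(net), linear(net), sigmoid(net))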

Artificial Neural Networks

A systematic, result-oriented interconnection of two or more artificial neurons forms an artificial neural network. A typical ANN has three layers of interconnected neurons (each layer can have several neurons):

1.   Input Layer: In this layer neurons receive the inputs or signals.
2.   Hidden Layer: In this layer neurons perform mathematical operations such as summation and multiplication.
3.   Output Layer: In this layer neurons deliver the outputs or results.

A simple schematic ANN is shown below.


Image 3: A Simple Artificial Neural Network (Krenker et al., 2011).

To achieve desired results from an ANN, we need to connect the neurons in a systematic manner; random interconnections will not yield useful results. The way in which individual neurons are interconnected is called the "topology". There are many pre-defined topologies which can help us solve problems more easily, faster and more efficiently. After determining the type of problem, we decide on the topology of the ANN we are going to use and then fine-tune it by adjusting the weights.

Although we can make numerous interconnections and build many topologies, all of them fall into two basic classes:
1.   Feed-Forward Topology
2.   Recurrent Topology

1. Feed-Forward Topology (Feed-Forward Neural Network): In this topology, input information/signals travel in only one direction: from the input layer to the hidden layer and then to the output layer. This topology places no restriction on the number of layers, the type of transfer function used in an individual artificial neuron, or the number of connections between individual artificial neurons. Image 4 below shows a simple feed-forward topology.
Image 4: Feed-forward (FNN) topology of an artificial neural network (Krenker et al., 2011).
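As an illustration, below is a minimal sketch of a single forward pass through such a network. The layer sizes (3 inputs, 4 hidden neurons, 1 output) and random weights are arbitrary assumptions for the example:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.random.rand(3)        # input layer: 3 incoming signals
W1 = np.random.rand(4, 3)    # weights from the input layer to 4 hidden neurons
b1 = np.random.rand(4)
W2 = np.random.rand(1, 4)    # weights from the hidden layer to 1 output neuron
b2 = np.random.rand(1)

h = sigmoid(np.dot(W1, x) + b1)   # hidden layer: weighted sums + transfer function
y = sigmoid(np.dot(W2, h) + b2)   # output layer; signals never flow backwards
print(y)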

2. Recurrent Topology (Recurrent Neural Network): In this topology the flow of information is not restricted to one direction: information can flow between the input, hidden and output layers in any direction. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Recurrent artificial neural networks can use their internal memory to process any sequence of inputs. Image 5 shows a simple recurrent topology.


Image 5: Recurrent (RNN) topology of an artificial neural network (Krenker et al., 2011).

There are some special types of recurrent artificial neural networks, such as Hopfield, Elman, Jordan, and bi-directional artificial neural networks.

(a) Hopfield Artificial Neural Networks

This is a recurrent neural network consisting of one or more neurons whose stable state vectors serve as memory centers. When we train the model on specific examples, these vectors store the patterns; when test data is introduced, the memory units interpret it in terms of binary units. Each binary unit takes one of two states, determined by whether its input exceeds the threshold, and the two values can be either 1 and -1, or 1 and 0. An important property of this network is that the connections must be symmetric; otherwise it will exhibit chaotic behavior.

Image 6: Hopfield Artificial Neural Network (Krenker et al., 2011).
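Below is a minimal sketch of the idea, assuming states of 1 and -1 and a single stored pattern whose symmetric weights come from a Hebbian outer product. The pattern and the noisy test vector are arbitrary:

import numpy as np

pattern = np.array([1, -1, 1, -1])           # a memorized state vector
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)                       # symmetric weights, no self-connections

state = np.array([1, 1, 1, -1])              # a noisy test input
for _ in range(5):
    state = np.where(np.dot(W, state) >= 0, 1, -1)  # threshold update
print(state)                                 # settles back to the stored pattern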

(b) Elman and Jordan Artificial Neural Networks

The Elman neural network consists of three layers: input, hidden and output. It has a recurrent loop from the hidden layer back to the input layer through a unit called the context unit. This type of ANN is usually designed to learn sequential or time-varying patterns of data. The Elman network uses sigmoid artificial neurons in the hidden layer and linear artificial neurons in the output layer, a combination which increases the accuracy of the model. The Jordan artificial neural network is similar to the Elman network but has a loop from the output layer to the input layer through a context unit.

Image 7: Elman and Jordan Artificial Neural Networks (Krenker et al., 2011).
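Below is a minimal sketch of one Elman-style step under these assumptions: the context unit carries the previous hidden state back in alongside the new input; the sizes, weights and input sequence are arbitrary:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hidden = 2, 3
Wx = np.random.rand(n_hidden, n_in)       # input -> hidden weights
Wc = np.random.rand(n_hidden, n_hidden)   # context -> hidden weights
Wo = np.random.rand(1, n_hidden)          # hidden -> output weights

context = np.zeros(n_hidden)              # the context unit's memory
for x in [np.array([0.1, 0.9]), np.array([0.8, 0.2])]:
    hidden = sigmoid(np.dot(Wx, x) + np.dot(Wc, context))  # sigmoid hidden layer
    output = np.dot(Wo, hidden)                            # linear output layer
    context = hidden                      # copy the hidden state for the next step
print(output)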

(c) Long Short Term Memory (LSTM)

This is one of the most widely used ANNs because of its long-term memory feature. An LSTM can learn from experience to process, classify and predict time series with very long time lags of unknown size between important events. The LSTM has three gates: a "Write Gate", a "Keep Gate" and a "Read Gate". When the Write Gate is on, information gets into the system; the information stays in the system as long as the Keep Gate is on; and the information can be read or retrieved when the Read Gate is on. The working principle of the LSTM is shown in Image 8. As per the image, the input layer consists of four neurons. The top neuron in the input layer receives the input signal and passes it on to the subsequent neuron, where the weights are computed. The third neuron in the input layer decides how long to hold the values in memory, and the fourth decides when to release the values to the output layer. Neurons in the first hidden layer do a simple multiplication of their input values, and the second hidden layer computes a simple linear function over its inputs. The output of the second hidden layer is fed back to the input and first hidden layers, which helps in making decisions. The output layer performs a simple multiplication of its input values.

Image 8: Long Short Term Memory (Krenker et al., 2011).
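As a highly simplified sketch of the write/keep/read idea: in a real LSTM the gate values are learned, but here they are fixed by hand purely for illustration:

# Gate values near 1 mean "open", near 0 mean "closed".
memory = 0.0
for signal, write, keep, read in [(0.7, 1.0, 1.0, 0.0),   # write 0.7 into memory
                                  (0.2, 0.0, 1.0, 0.0),   # keep it, ignore new input
                                  (0.0, 0.0, 1.0, 1.0)]:  # read the stored value out
    memory = keep * memory + write * signal   # Keep Gate holds, Write Gate admits
    output = read * memory                    # Read Gate releases the value
    print(memory, output)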

(d) Bi-directional Artificial Neural Networks (Bi-ANN)

Bi-directional artificial neural networks are capable of predicting both future and past values, which makes them unique among all other available ANNs. The schematic representation of a Bi-ANN is shown in Image 9. The model consists of two individual artificial neural networks interconnected through two dynamic artificial neurons which are capable of remembering their internal states. The two interconnected networks perform direct and inverse transformations; this interconnection between future and past values increases the Bi-ANN's prediction capabilities. The model has a two-phase learning methodology: in the first phase it is taught future values, and in the second phase past values.
Image 9: Bi-directional Artificial Neural Network (Bi-ANN) (Krenker et al., 2011).

(e) Self-Organizing Map (SOM)

The Self-Organizing Map (SOM) is a type of FNN; however, the SOM differs in its arrangement from other ANNs, as its neurons are usually arranged in a hexagonal grid. The topological properties of this ANN are determined by a neighborhood function. This type of ANN produces low-dimensional views of high-dimensional data. Such networks can learn regularities and correlations in their input signals and adapt their future responses accordingly. The model uses an unsupervised learning technique: starting from an initialization, it is trained by adjusting the weights. After the learning phase the model runs a process called mapping, in which the single neuron whose weight vector lies closest to the input vector is chosen; this neuron is termed the winning neuron.

Image 10: Self-Organizing Map (Krenker et al., 2011).
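Below is a minimal sketch of one SOM training step, using a one-dimensional row of neurons for brevity instead of the hexagonal grid described above: find the winning neuron, then pull it, and to a lesser degree its neighbors, toward the input. The sizes, learning rate and neighborhood width are arbitrary:

import numpy as np

n_neurons, dim = 10, 3
weights = np.random.rand(n_neurons, dim)   # one weight vector per map neuron
x = np.random.rand(dim)                    # one input sample

winner = np.argmin(np.linalg.norm(weights - x, axis=1))  # closest weight vector
for j in range(n_neurons):
    # Gaussian neighborhood: neurons near the winner on the map move more.
    influence = np.exp(-((j - winner) ** 2) / 2.0)
    weights[j] += 0.1 * influence * (x - weights[j])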

(f) Stochastic Artificial Neural Network (Boltzmann machine)

Stochastic artificial neural networks are built by giving the network's neurons either random transfer functions or random weights. Because of their random fluctuations, these ANNs are useful for solving optimization problems.

(g) Physical Artificial Neural Network

Physical neural networks are a slowly growing field; the first physical artificial neural networks were created using memory transistors called memistors. The technology did not last long because it could not be commercialized. In recent years, however, much research has focused on similar approaches using nanotechnology or phase-change materials.

Learning Methodologies for ANN

Fine-tuning a topology is just a precondition for using an ANN. Before we can use an ANN we have to teach it to solve the given type of problem; this is accomplished by a learning process. Just as human behavior comes from continuous learning and social interaction, we can make an ANN learn and behave as we require.

ANN learning can be classified into three types: supervised learning, unsupervised learning and reinforcement learning. Each methodology is chosen for the specific type of problem the ANN has to solve.

1. Supervised learning: This is a type of machine learning technique where we know both the input values (X) and the desired results (Y). We train the ANN by adjusting its weights until it approximates the mapping f that produces the desired results:
Y = f(X)
The purpose is to approximate the mapping function so well that, given new input data (X'), we can predict the output (Y') for that data. In this type of learning, the data is divided into two parts: training data and test data. The training data consists of pairs of input and desired output values, represented as data vectors. The test data set consists of data that has not been shown to the ANN during learning. When supervised learning achieves an acceptable level of performance on the test data, the trained ANN can be deployed.
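As a toy illustration of adjusting a weight to approximate Y = f(X), here is a sketch that learns a single weight by gradient descent on synthetic data and then checks it on held-out test data; the data and the true factor of 3 are invented for the example:

import numpy as np

X = np.random.rand(100)          # input values we know
Y = 3.0 * X                      # desired results (the "true" f multiplies by 3)
X_train, Y_train = X[:80], Y[:80]
X_test, Y_test = X[80:], Y[80:]  # held back, never seen during learning

w = 0.0                          # the weight the network adjusts
for _ in range(200):
    error = w * X_train - Y_train
    w -= 0.1 * np.mean(error * X_train)    # nudge w to reduce the error

print(w)                                   # ends up close to 3.0
print(np.mean((w * X_test - Y_test) ** 2)) # small error on unseen data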

2. Unsupervised Learning: In this type of learning we know only the input values, which are fed into the ANN. The model has to work out the learning process itself and uncover the underlying structure of the data in order to produce a suitable output. The ANN is given only unlabeled examples; one common form of unsupervised learning is clustering, where we try to categorize data into different clusters by similarity.
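As a sketch of clustering by similarity, here is a tiny k-means-style loop on synthetic 2-D points; the number of clusters and the data are arbitrary:

import numpy as np

points = np.random.rand(100, 2)   # unlabeled 2-D examples
centers = points[np.random.choice(100, 3, replace=False)]  # 3 starting clusters

for _ in range(10):
    # Assign every point to its nearest cluster center.
    labels = np.argmin(
        np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2), axis=1)
    # Move each center to the mean of the points assigned to it.
    for k in range(3):
        if np.any(labels == k):
            centers[k] = points[labels == k].mean(axis=0)

print(centers)   # three cluster centers found without any labels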

3. Reinforcement learning: In this type of learning, data is not given to the ANN but is generated by interactions with the environment. In reinforcement learning the ANN automatically determines the ideal behavior within a specific context in order to maximize its performance. Reinforcement learning is widely used in robot control, telecommunications, and games such as chess and other sequential decision-making tasks.

Applications of Artificial Neural Networks

Artificial Neural Networks have a wide variety of applications across industries. The most interesting applications are:

Handwriting recognition: The U.S. Postal Service has deployed handwriting recognition algorithms to sort its mail. Neural networks can learn to interpret handwritten data and are well suited for this type of activity. The image below shows how the algorithms interpret handwritten data correctly.


Image 11: Handwriting Recognition (Source: Wikipedia)

Information and Communication Technologies (ICT) fraud detection: The bi-directional ANN can be used for ICT fraud detection. Telecommunication technologies have benefits, but they also carry threats: criminals misuse the technology to capture data such as bank details and personal information, and for money laundering and terrorist activities. This can be countered by deploying a neural network system which monitors user behavior and compares it with pre-defined data. In case of suspicion it triggers an alarm, so that ICT companies can handle the situation before things go out of hand.

Retina Scan, Fingerprint and Facial Recognition: In the current world, retina scans, fingerprints and facial recognition are major security measures, and neural networks can be trained to learn their specific patterns and output the details when required.

Gaming Technology and Robotics: ANNs are widely applied in the fields of gaming and robotics. The ability of the ANN to learn, reproduce, and predict future and past values has made it well suited for gaming and robotic technology.

Financial Risk Management: In the financial field, ANNs are adopted for credit scoring, market risk estimation and prediction. An ANN is successfully deployed for credit scoring and rating: it learns the input features from the data it is given and outputs a prediction or score. ANNs are also useful in other areas of financial risk management, such as market risk management and operational risk management.

Medical Imaging: In recent years a great deal of research has trained ANNs on the patterns of medical images, such as cardiovascular imaging, and used them to predict disease.

Natural Language Processing (NLP): ANNs are widely used in NLP; they are made to learn language patterns and tuned to give the desired outputs.

Voice and Image Recognition: ANNs are used to learn and recognize voice input and interpret it, producing the desired results after interpretation. This is similar to the way Apple structured the iPhone's voice recognition technology, Siri.

ANNs are also deployed to recognize photos: the network is trained on a set of images and then made to recognize new ones. This is how Facebook photo tagging works.

In this way, ANNs have a wide variety of significant applications across most industries.


References:

1.  Andrej Krenker, Janez Bešter and Andrej Kos (2011). Introduction to the Artificial Neural Networks, Artificial Neural Networks - Methodological Advances and Biomedical Applications, Prof. Kenji Suzuki (Ed.), ISBN: 978-953-307-243-2, InTech, Available from: http://www.intechopen.com/books/artificial-neural-networksmethodological-advances-and-biomedical-applications/introduction-to-the-artificial-neural-networks

2. Simon Haykin. Neural Networks – A Comprehensive Foundation. Prentice Hall, New Jersey, 2nd edition, 1999.

3. Alan Dorin. An Introduction to Artificial Neural Networks. AI, A-Life and Virtual Environments, Monash University.

4. reinforcementlearning.ai-depot.com/

5. Carlos Gershenson. Artificial Neural Networks for Beginners.

6. machinelearningmastery.com

7. Wikipedia


Thursday, 8 September 2016

Conway’s Game of Life: Beyond a Game

We have all played many mathematical games for fun; however, one game called "Life", designed by John Horton Conway, stands out from them all. The interesting thing about this game, as Conway explains, is that it is a no-player game! It is built on simple rules which produce unpredictable patterns. The reason it is beyond a game is that it explains, or at least makes us think about, the evolution of life and space, and it also has practical applications in today's industries.

Background: 
John von Neumann, a great mathematician of the 20th century, made many contributions to various fields of science, such as astrophysics, game theory and economic behavior, shock waves, hydrodynamics, weather control, atomic energy, computer technology and the theory of automata. Conway studied the theory of automata, in which von Neumann discusses colonizing the red planet Mars. Von Neumann states that we could send machines to Mars whose job would be smelting iron oxide to separate iron and oxygen; this way we could eventually colonize the planet. The machines sent to Mars would have to be capable of creating copies of themselves using the available iron and other metals (an idea similar to DNA replication, transcription and translation in humans!). Von Neumann treated Mars as a plane and imagined a machine of 29 squares, each square performing different functions in different states. Conway simplified this idea and refined the rules laid out by von Neumann, which resulted in the wonderful mathematical game called "Life", popularly known as the "Game of Life". Conway considered only two states in his game, alive and dead, unlike von Neumann's 29-state machine. To be precise, von Neumann's machine was very elaborately designed, but Conway's wasn't!

Game of Life
Conway's Game of Life was published in Scientific American (October 1970) and was one of the most popular reads of its time. The game resembles the rise, fall and alternations of a society of living organisms, which is why it is categorized as a simulation game. It was initially played using small checkers or poker chips on a go board. Later it was programmed on 1970s computers known as PDPs (Programmed Data Processors). Today the game is so advanced that we have many different patterns.

The initial assumption of the game is that it is played on an infinite plane. Each cell on the board has eight neighboring cells: four adjacent orthogonally and four adjacent diagonally. Below are Conway's rules, which are delightfully simple.

1. Survival: Every cell with two or three living neighbors survives to the next generation.
2. Death: Each cell with four or more neighbors dies of overpopulation; each cell with one or zero neighbors dies of isolation.
3. Birth: Each empty cell adjacent to exactly three living cells gives birth to a new cell.

It is important to note that all births and deaths occur simultaneously; because of this, the population constantly undergoes unusual but beautiful changes, creating different patterns.
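Before looking at particular patterns, here is a minimal numpy sketch of the rules above. It uses a wrap-around board as a stand-in for the infinite plane, and the new generation is computed entirely from the old one, so all births and deaths happen simultaneously:

import numpy as np

def step(grid):
    # Count the eight neighbors of every cell by summing shifted copies
    # of the grid (the board wraps around at the edges).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Survival: a live cell with two or three neighbors stays alive.
    # Birth: an empty cell with exactly three neighbors comes alive.
    # Everything else dies of isolation or overpopulation.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A glider on a 10x10 board: after four steps the shape reappears,
# shifted one cell diagonally.
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = step(grid)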

Let us look at some simple triplet patterns that are generated initially.

1. Image showing triplet patterns ("The fantastic combinations of John Conway's new solitaire game 'life'", Martin Gardner, Scientific American, 1970)

In the above figure, patterns "A" and "B" die by the third move. Pattern "C", a single diagonal chain of counters, loses its end counters on each move until the chain finally disappears (Conway calls this decay the "speed of light"). Pattern "D" becomes a stable block, and "E" becomes a blinking oscillator thanks to its flip-flop property: in each step two cells die, the middle cell stays alive, and two new cells are born, giving the orientation shown above. The length of this repeating cycle is known as the pattern's "period".

The above picture illustrates triplets; let us now consider tetrominoes (four cells) and see what patterns are produced in the figure below.

2. Image showing tetromino patterns ("The fantastic combinations of John Conway's new solitaire game 'life'", Martin Gardner, Scientific American, 1970)

The above figure illustrates five tetrominoes. Pattern "A" is a still-life figure; patterns "B" and "C" become a stable figure called the "Beehive", and "D" becomes a beehive on the third move. Interestingly, pattern "E" becomes isolated blinkers; in fact, after nine moves it becomes four isolated blinkers, a pattern called the "Traffic Light".

Let us go one step further and consider a five-cell initial population. At each step two cells die and two new ones are born. After four steps the original population reappears, but moved diagonally down and across the plane. The pattern continues to move in the same direction forever; eventually it disappears from our view but continues to exist on the infinite plane. This pattern is called the "Glider".

3. Image showing Glider pattern (“Game of Life”, Cleve Moler, 2011)

The game becomes more interesting when we think beyond these static patterns. Computer programs make the game dynamic, and we can watch the evolution of larger populations. One pattern which creates particular curiosity among readers is the "Glider Gun", developed by Bill Gosper in 1970.
In the Glider Gun, a portion of the cell population between two static blocks oscillates back and forth, and every 30 steps a glider emerges. This leads to a huge number of gliders that fly out of view but continue to exist on the infinite plane.

4. Image showing Glider Gun (“Game of Life”, Cleve Moler, 2011)

Over the years the game has advanced, and it now has numerous patterns. Below are some animations of these patterns.

5 (a) Blinker (Source: Wikipedia)

5 (b) Glider (Source: Wikipedia)

5 (c) Light Weight Space Ship (Source: Wikipedia)

Image 6. Glider Gun (Source: Wikipedia)

Image 7: Evolution of an MSM breeder – a puffer that produces Gosper guns, which in turn emit gliders. (Source: Wikipedia)

Image 8: Puffer (Source: mathworld.wolfram.com)

The game can be implemented in many languages, such as Python, R, C and Java, and hundreds of programs exist that generate the various patterns of the Game of Life.

The idea of the Game of Life is applied in various fields, such as music, where it is used to create sound patterns via MIDI (Musical Instrument Digital Interface). The Game of Life has also been used as a basis to explain astronomical events, and it can illustrate the evolution and survival of cells, species and organisms. Because of its wide range of applications, the Game of Life stands beyond a simple mathematical game.

Reference:
1. "The fantastic combinations of John Conway's new solitaire game 'life'", Martin Gardner, Scientific American, 1970.
2. “Game of Life”, Cleve Moler, 2011
3. Wikipedia
4. mathworld.wolfram.com

Thursday, 4 August 2016

Paper Review: Knowledge Representation In Sanskrit And Artificial Intelligence - Author: Rick Briggs


This is an interesting paper which discusses how Sanskrit could be an ideal natural language for computer processing/Artificial Intelligence (AI). For decades the scientific community has been trying to identify and design systems which can represent and process natural language. English is a widely spoken language, and we want machines to learn English and process data in it. However, one cannot program systems using natural English; we have to reframe or rephrase it in a systematic way so that the system can understand it. In this paper the author explains how Sanskrit is significantly more advanced than English and can be made use of in the field of artificial intelligence. The paper has three parts: first, a knowledge representation scheme is discussed using semantic nets; in the second part, the author outlines methods used by ancient Indian grammarians to analyze sentences unambiguously; finally, an equivalence is established between Sanskrit language analysis and techniques used in AI applications.

When attempts at machine translation failed to teach a computer to understand natural language, AI turned to knowledge representation. When we try to teach a machine any natural language, it cannot always be a word-to-word mapping. One has to overcome the ambiguity of words in natural language and the interference of syntax. To overcome the ambiguity of words, there should be a representation of meaning that is independent of the words used. The author takes three sentences as examples to demonstrate a prototypical semantic net system.

1.       “John gave the book to Mary”
The grammatical information can be transformed into arcs and nodes, and the above sentence can be stored as triples:

give, agent, John
give, object, book
give, recipient, Mary
give, time, past
This can be schematically represented as below:

Figure 1: Schematic Representation of sentence “John gave the book to Mary” (Rick Briggs, 1985).
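As a sketch of how such a net might be held in code, the triples can be stored as plain tuples and queried by relation and role; the layout here is an illustrative assumption, not the paper's notation:

# A toy semantic-net store: each fact is a (relation, role, value) triple.
triples = [
    ("give", "agent", "John"),
    ("give", "object", "book"),
    ("give", "recipient", "Mary"),
    ("give", "time", "past"),
]

def query(relation, role):
    # Return every value attached to a relation under a given role.
    return [v for r, a, v in triples if r == relation and a == role]

print(query("give", "recipient"))  # ['Mary']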

2.       “John told Mary that the train moved out of the station at 3 o’ clock.”
As the figure below shows, there is a change of state in which the train moved from the station to an unspecified location; it arrived at the latter at 3:00 and left the former at 3:00. We can now convert this to triples as in the previous example. Here the verb is given significance and is considered the focus and distinguishing aspect of the sentence.


Figure 2: Schematic Representation of sentence “John told Mary that the train moved out of the station at 3 o’ clock.” (Rick Briggs, 1985).

There are other sentences which, when drawn as nets like those above, represent only a state of a thing or an event.

3.       “John, a programmer living at Maple St., gives a book to Mary, who is a lawyer.”
The above statement, if read as a semantic net, gives an awkward and cumbersome representation. The degree to which a semantic net is cumbersome and odd-sounding in a natural language is the degree to which that language is “natural” and deviates from the precise or “artificial.” Refer to the image below, which illustrates this.


Figure 3: Schematic Representation of sentence “John, a programmer living at Maple St., gives a book to Mary, who is a lawyer.” (Rick Briggs, 1985).

The author gives a brief history of the Sanskrit grammarians. Panini, who lived during the 4th century BCE, gave a strong foundation to Sanskrit grammar. Panini's successors, such as Bhartrhari, gave an algebraic formulation of the grammar and tried to improve upon it. During the 16th century, Kaundabhatta and Bhattoji Dikshita gave a new touch to the existing grammar with the publication of the Vaiyakarana-bhusanasara. Similarly, during the 17th century, Nagesha contributed to the language with his major work, the Vaiyakaranasiddhantamanjusa, or Treasury of Definitive Statements of Grammarians. The author cites these grammarians and makes a strong point that Sanskrit is not merely a spoken language but has a scientific and mathematical backbone.



Part 2: Sanskrit Language Analysis and Its Equivalence with Techniques Used In Applications of AI


Sanskrit is unique and advanced because, unlike other linguistic theories, it does not work on a noun-phrase model. In the Indian analysis, a sentence expresses an action that is conveyed by a verb and a set of auxiliaries. The verbal action is represented by the root of the verbal form, and the auxiliary activities by nominals (nouns, adjectives, etc.) and their case endings.

The meaning of a verb in Sanskrit is Vyapara (action) + Phala (result).

In general a verb is defined as "to do". However, the Sanskrit language is architected in such a way that a sentence provides not only the action but other details as well, such as the tense, the number of the agent (singular, dual, plural) and the person of the agent (first, second, third).
Ex: Gramam Gacchati Chaitra (Chaitra is going to the village) – "an act of going taking place in the present, of which the agent is no one other than Chaitra, qualified by singularity, and the object something not different from the village."

"John gave the ball to Mary" – this sentence has the verbal meaning "to give" but also many auxiliary activities, such as John holding the ball, an act of movement starting from John, an act of giving, an act of receiving, and so on. It is important to know where to stop the subdivision: while defining the verb, Sanskrit clarifies that the name 'action' cannot be applied to the solitary point reached by extreme subdivision. In sentences of this type, the auxiliary activities become subordinated to the main sentence meaning, and they are represented by case endings in Sanskrit. There are seven types of case endings in Sanskrit, of which six are definable representations of auxiliary activities (Agent, Object, Instrument, Recipient, Point of Departure and Locality); the seventh is the genitive, which is not represented by the other six.

The case endings are explained by taking the sentence below as an example:
“Out of friendship, Maitra cooks rice for Devadatta in a pot, over a fire.”

Here the total process of cooking is rendered by the verb form “cooks” as well as a number of auxiliary actions:
1. An Agent represented by the person Maitra
2. An Object by the “rice”
3. An Instrument by the “fire”
4. A Recipient by the person Devadatta
5. A Point of Departure (which includes the causal relationship) by the “friendship” (which is between Maitra and Devadatta)
6. The Locality by the “pot”

This explanation shows how advanced Sanskrit is and how it stands out from other languages.
The author gives another example to show how detailed Sanskrit sentence formation is compared to English. Consider the sentence below in accordance with Sanskrit:
"Because of the wind, a leaf falls from a tree to the ground." – Here the wind is the instrument bringing the leaf down, the tree is the point of departure, the ground is the locality, and the leaf is the agent.
When we consider the same sentence in accordance with English, it can be written as "The wind blows a leaf from the tree," where the wind becomes the agent and the leaf is considered the object. This sentence is transitive, whereas the earlier one was intransitive.

In the final section the author tries to establish an equivalence between Sanskrit language analysis and the techniques used in AI (semantic nets). Both systems rest on an extensive degree of specification, which is crucial for understanding the real meaning of a sentence, to the extent that it allows inferences to be made about facts not explicitly stated in the sentence.

"Out of friendship, Maitra cooks rice for Devadatta in a pot over a fire" – when this sentence is represented as a semantic net, it has the triples below:

cause, event, friendship
friendship, object1, Devadatta
friendship, object2, Maitra
cause, result, cook
cook, agent, Maitra
cook, recipient, Devadatta
cook, instrument, fire
cook, object, rice
cook, on-loc, pot.

The same sentence in Sanskrit can be rendered as

cook, agent, Maitra
cook, object, rice
cook, instrument, fire
cook, recipient, Devadatta
cook, because-of, friendship
friendship, Maitra, Devadatta
cook, locality, pot.

The author makes the point that, to improve AI, one should adopt the Phala/Vyapara distinction found in Sanskrit. This helps in elaborating the sentence; in the above case we can include the process of "heating" and the process of "making palatable". These comparisons reveal that Sanskrit is the natural language closest to what such systems can represent. A simple semantic net for the above sentence is also shown below.

Figure 4: Schematic Representation of sentence “Out of friendship, Maitra cooks rice for Devadatta in a pot over a fire.” (Rick Briggs, 1985).

My Views On This Paper: This is a quite old but very interesting paper in which the author tries to establish an equivalence between AI techniques and Sanskrit grammar. Industry and the scientific community would benefit hugely if we were able to represent a natural language for machine processing. To enjoy the content of the paper one should have some idea of the Sanskrit language (that said, I studied Sanskrit in school and college). The idea of using Sanskrit as a natural language for machines is laid out very nicely. The author shows how cumbersome semantic nets are to represent and how they can be made much simpler using Sanskrit. According to the paper, Sanskrit as a language is evidently very descriptive and beats English in the AI race. However, very little research has been done in this area. Another hurdle is how many of us would be willing to adopt Sanskrit, since English is so widely spoken. A suggestion would be a two-layered system into which we could input any natural language and which would process it internally in terms of Sanskrit; this sounds crazy, but it could be wonderful if we succeeded. The author states that Sanskrit has an affinity with mathematics, which is true: Sanskrit has a way of analyzing words called "Sandhi", with which we can technically split any word and group the parts under pre-defined categories, and there is also a scoring system for each letter in a sentence. I can see this kind of approach being useful in the AI area. The concept of Sanskrit as a natural language for systems/AI requires a huge amount of research, but it would be worth it, as it could show us an easier way to design system representations. Overall, the author makes us think in a different direction with his research and views.

References:

1. Rick Briggs (1985). Knowledge Representation in Sanskrit and Artificial Intelligence.
2. Bhatta, Nagesha (1963) Vaiyakarana-Siddhanta-Laghu-Manjusa, Benares (Chowkhamba Sanskrit Series Office).
3. Nilsson, Nils J. Principles of Artificial Intelligence. Palo Alto: Tioga Publishing Co
4. Bhatta, Nagesha (1974) Parama-Laghu-Manjusa.

Tuesday, 28 June 2016

Plotting Heat Map Using Python

For machine learning, practitioners commonly use Python and R because they are open-source languages. I learned Python using "PyCharm", which I would strongly recommend to any Python beginner.

I was given a challenge to create 
(i) 2-dimensional array of size 100x100 and populate it with random floating points between 0 and 1, inclusive (i.e., [0,1]); 
(ii) plot the 2d array using any python library, to create a visual “heat map” representation of the data; 
(iii) write a loop that refreshes the numbers in the array and replots the heatmap each time the array is repopulated.  

Stretch assignment: Create a movie of the changing heat maps by playing each heat map frame by frame in a sequence. 

I was able to generate a heat map as shown in the picture with the following code:

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

def animate(data, im):
    # Push the freshly generated array into the existing image; this
    # redraws the heat map in place instead of creating a new plot.
    im.set_data(data)

def step():
    # Generator that endlessly repopulates the 100x100 array with
    # uniform random floats (np.random.rand samples from [0, 1)).
    while True:
        yield np.random.rand(100, 100)

fig, ax = plt.subplots()
# Initial frame: imshow maps each value to a color, giving the heat map.
im = ax.imshow(np.random.rand(100, 100), interpolation='nearest')
# Call animate with the next array from step() every 100 ms.
ani = animation.FuncAnimation(
    fig, animate, step, interval=100, repeat=True, fargs=(im,))
plt.show()
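For the stretch assignment, the same animation can be written out as a movie file. Below is a minimal sketch appended to the script above; it assumes the ffmpeg binary is installed and on the PATH, and the filename is arbitrary:

# Render a fixed number of frames and save them as a movie file.
frames = [np.random.rand(100, 100) for _ in range(50)]
movie = animation.FuncAnimation(fig, animate, frames, interval=100, fargs=(im,))
movie.save('heatmaps.mp4', writer='ffmpeg', fps=10)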