made by
https://cneuralnets.netlify.app/
Let’s say you are thinking of something.
I want to have some ice cream as it is very hot.
Notice how you thought of ice cream when you noticed that it is very hot outside. We, as humans, form thoughts based on previously processed thoughts. You never start with a blank mind and think things up from literally nothing. In simple words, your thoughts have some persistence.
Traditional Artificial Neural Networks (ANNs) fail to capture this beautiful property. They cannot use their reasoning about previous events to inform later ones, so the context, the sequential information, is lost. Additionally, feeding sequences to ANNs requires lots of unnecessary padding, which costs extra compute.
This is where Recurrent Neural Networks(RNN) come in.
Unlike traditional neural networks, where each input is independent, RNNs can access and process information from previous inputs. This allows them to handle sequential data.
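As a minimal sketch of that idea (in NumPy, with made-up dimensions and random weights, not a trained model), a vanilla RNN cell keeps a hidden state that carries information from earlier inputs into later steps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes for illustration: 4-dim inputs, 3-dim hidden state.
input_size, hidden_size = 4, 3
W_xh = rng.normal(0, 0.1, (hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden -> hidden (the recurrence)
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    # The new hidden state depends on BOTH the current input x and the
    # previous hidden state h_prev -- this is the "persistence".
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Process a toy sequence of 5 inputs, one step at a time.
h = np.zeros(hidden_size)
sequence = rng.normal(size=(5, input_size))
for x in sequence:
    h = rnn_step(x, h)

print(h.shape)  # (3,)
```

The same three weight matrices are reused at every step; only the hidden state `h` changes, which is what lets the network carry context along the sequence.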
Sequential data is information in which order matters: each data point is connected to the ones before it. Without this connection there is no context, and the data becomes meaningless.
I to have ice as is cream very hot it some want.
Makes no sense, right? The order of the words gives context to the sentence. This kind of sequential data is called Natural Language Text. If we take this sentence as speech and speak it out with correct syllables and pronunciation, it becomes a Speech Signal.
Any kind of signal that varies as $f(t)$ can be treated as a type of sequential data, popularly called a Time Series.
A one-to-many architecture represents a scenario where the network receives a single input but generates a sequence of outputs.
The RNN takes in a single piece of information as input, maybe something like an image. Then it processes that input and generates a sequence of outputs over time!
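A hedged sketch of the one-to-many pattern (hypothetical dimensions, untrained random weights): the single input is seen only once, to set the initial hidden state, and then the network unrolls for a fixed number of steps, emitting one output per step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: one 8-dim input, 4-dim hidden state, 2-dim outputs, 6 steps.
input_size, hidden_size, output_size, steps = 8, 4, 2, 6

W_ih = rng.normal(0, 0.1, (hidden_size, input_size))   # single input -> initial hidden
W_hh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # the recurrence
W_ho = rng.normal(0, 0.1, (output_size, hidden_size))  # hidden -> output at each step

x = rng.normal(size=input_size)   # ONE input (e.g. an image embedding)
h = np.tanh(W_ih @ x)             # the input is consumed once, at the start

outputs = []
for _ in range(steps):            # ...but outputs are produced over many steps
    h = np.tanh(W_hh @ h)
    outputs.append(W_ho @ h)

outputs = np.stack(outputs)
print(outputs.shape)  # (6, 2)
```

This is the shape of tasks like image captioning: one image in, a sequence of word predictions out.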
Applications