The simple Q-learning algorithm maintains a table of size m×n, where m is the total number of states and n is the total number of possible actions. This makes it impractical for large state and action spaces. An alternative is to replace the table with a neural network that acts as a function approximator, estimating the Q-function for each possible action. In this case, the network's weights store the information the Q-table held: they map a given state to each action's Q-value. When the neural network used to approximate the Q-function is a deep neural network, we call it a Deep Q-Network (DQN).
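For contrast, here is a minimal sketch of the tabular approach the paragraph starts from; the state/action counts and hyperparameters are illustrative assumptions, not values from the text:

```python
import numpy as np

# Illustrative sizes: m states, n actions (both assumed for this example)
m_states, n_actions = 500, 4
Q = np.zeros((m_states, n_actions))   # the m×n Q-table

alpha, gamma = 0.1, 0.99              # learning rate and discount factor (assumed)

def q_update(s, a, r, s_next):
    """Standard tabular Q-learning update for one transition (s, a, r, s')."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
```

The table grows linearly in the number of states, so any environment with a very large (or continuous) state space makes this array infeasible to store and fill, which is exactly what motivates the function-approximation alternative.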
The neural network takes the state as its input and outputs the Q-values of all possible actions in a single forward pass.
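As a concrete illustration, here is a minimal sketch of such a network in PyTorch; the hidden-layer sizes and the `state_dim`/`n_actions` values are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DQN(nn.Module):
    """Maps a state vector to one Q-value per possible action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128),   # hidden sizes are illustrative
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),   # one output per possible action
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Usage: one forward pass yields the Q-values of every action,
# so the greedy action is an argmax over the output.
q_net = DQN(state_dim=4, n_actions=2)   # dimensions assumed for the example
state = torch.rand(1, 4)                # dummy state vector
q_values = q_net(state)                 # shape: (1, n_actions)
greedy_action = q_values.argmax(dim=1)
```

Outputting all action values at once is what makes action selection cheap: picking the greedy action requires a single forward pass rather than one network evaluation per state-action pair.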