In order to train a neural network, like any other machine learning model, we need to define a task for it to perform – that is, the goal of the predictor; for example, classifying samples based on some labels or translating text. In an embedding scenario, we are not really interested in the model's output, but we still need to define one so that the neural network can be trained. That is why we refer to a fake task in this context.
In the skip-gram model, the fake task is the following: given a word, look at the words nearby and return one of them at random. This task was chosen because we can assume that similar words appear in similar contexts – cooking and kitchen are more likely to be part of the same sentence than kitchen and parachute.
In practice, nearby is determined by a number N: all words within a window of size N around the target word are considered to be in the target word's neighborhood.
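To make the window mechanism concrete, the following is a minimal Python sketch of how the (target, context) training pairs for the fake task can be generated from a tokenized sentence; the function name and the window size of 2 are illustrative assumptions, not part of any particular library:

```python
def skip_gram_pairs(tokens, window_size=2):
    """Yield (target, context) pairs for every word in the sentence."""
    for i, target in enumerate(tokens):
        # The context is every word within window_size positions of the
        # target, clipped at the sentence boundaries.
        start = max(0, i - window_size)
        end = min(len(tokens), i + window_size + 1)
        for j in range(start, end):
            if j != i:  # Skip the target word itself
                yield target, tokens[j]

# Hypothetical example sentence; each pair is one training sample
# for the fake task.
sentence = "the chef went into the kitchen to start cooking".split()
for target, context in skip_gram_pairs(sentence, window_size=2):
    print(target, "->", context)
```

Note that every word in the window contributes a pair, so frequent co-occurrences such as kitchen and cooking show up as training samples far more often than unrelated pairs, which is exactly what lets the network learn similar representations for words with similar contexts.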