The Word2Vec model trained by Google on the Google News dataset has a feature dimension of 300. The number of features is a hyperparameter that you can, and perhaps should, experiment with in your own applications to see which setting yields the best results.
In this pretrained model, some stop words such as a, and, and of are excluded, but others such as the, also, and should are included. Some misspelled words are also included, for example, both mispelled and misspelled, the latter being the correct spelling.
You can find open-source tools, such as https://github.com/chrisjmccormick/inspect_word2vec, for inspecting the word embeddings in the pretrained model.
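To illustrate what such an inspection looks like, here is a minimal sketch using a toy stand-in for the pretrained vocabulary. The words listed and the random 300-dimensional vectors are assumptions made purely for illustration; the real Google News model maps roughly three million words and phrases to learned vectors, which you would load with a library such as gensim rather than build by hand.

```python
import random

# Toy stand-in for a pretrained Word2Vec vocabulary: each word maps to a
# 300-dimensional vector, matching the Google News model's feature size.
# The word list and vector values are illustrative, not from the real model.
DIM = 300
random.seed(0)
vocab = {
    word: [random.gauss(0.0, 1.0) for _ in range(DIM)]
    for word in ["the", "also", "should", "mispelled", "misspelled"]
}

def in_vocab(word):
    """Check whether a word has an embedding in this toy vocabulary."""
    return word in vocab

# Mirroring the behavior described above: the stop word "a" is absent,
# "the" is present, and both spellings of "misspelled" have vectors.
print(in_vocab("a"))            # False
print(in_vocab("the"))          # True
print(len(vocab["misspelled"]))  # 300
```

Checking vocabulary membership and vector dimensionality like this is a quick sanity test before relying on a pretrained model, since out-of-vocabulary words silently get no embedding.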