Allows the NLP models to be moved into a separate model server, which can then be hosted on a GPU instance if desired.
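As a minimal sketch of this split, the main application can call the model server over HTTP instead of loading the models in-process. Everything below is hypothetical (the endpoint path, the request/response shape, and the stub "model" that just counts tokens are illustrative assumptions, not the actual API):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Hypothetical model server: in production this process would load the
# real NLP models (and could run on a GPU instance). Here a stub
# "model" that counts whitespace tokens keeps the example self-contained.
class ModelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        result = {"tokens": len(payload["text"].split())}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

# Bind to an ephemeral port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), ModelHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the application sends text to the remote model server
# rather than running inference locally.
url = f"http://127.0.0.1:{server.server_port}/predict"
req = Request(
    url,
    data=json.dumps({"text": "hello model server"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    tokens = json.load(resp)["tokens"]
print(tokens)  # → 3

server.shutdown()
```

Because the model server is just an HTTP endpoint, the same client code works whether the server runs on the local machine or on a dedicated GPU instance; only the URL changes.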