Creating a scraping microservice
Now let's take our scraper and turn it into a Nameko microservice. This scraper microservice can be run independently of the API, which allows the scraper to be operated, maintained, and scaled without affecting the API's implementation.
How to do it
We proceed with the recipe as follows:
- The code for the microservice is straightforward. It is in `10/02/call_scraper_microservice.py` and is shown here:
```python
from nameko.rpc import rpc
import sojobs.scraping


class ScrapeStackOverflowJobListingsMicroService:
    name = "stack_overflow_job_listings_scraping_microservice"

    @rpc
    def get_job_listing_info(self, job_listing_id):
        listing = sojobs.scraping.get_job_listing_info(job_listing_id)
        print(listing)
        return listing


if __name__ == "__main__":
    # Quick local check: call the method directly on an instance.
    # Under Nameko, the service is normally started with `nameko run`.
    service = ScrapeStackOverflowJobListingsMicroService()
    print(service.get_job_listing_info("122517"))
```
- We have created a class to implement the microservice and given it a single method, `get_job_listing_info`...
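Because `@rpc` only marks a method as remotely callable, the service's logic can be exercised without a running RabbitMQ broker. The sketch below is a hypothetical unit-test approach, not part of the recipe's code: it stubs the `sojobs.scraping` dependency with a fake module (the fake return value `{"ID": ...}` is an assumption, not the real listing format) and calls the method directly.

```python
import sys
import types

# Assumption: the real sojobs.scraping module exposes
# get_job_listing_info(job_listing_id). We install a fake module in
# sys.modules so no scraping backend or message broker is needed.
fake_scraping = types.ModuleType("sojobs.scraping")
fake_scraping.get_job_listing_info = lambda job_id: {"ID": job_id}
fake_sojobs = types.ModuleType("sojobs")
fake_sojobs.scraping = fake_scraping
sys.modules["sojobs"] = fake_sojobs
sys.modules["sojobs.scraping"] = fake_scraping

import sojobs.scraping


class ScrapeStackOverflowJobListingsMicroService:
    name = "stack_overflow_job_listings_scraping_microservice"

    # The @rpc decorator is omitted here; under Nameko it only marks
    # the method as remotely callable and does not change how the
    # method behaves when invoked directly.
    def get_job_listing_info(self, job_listing_id):
        return sojobs.scraping.get_job_listing_info(job_listing_id)


service = ScrapeStackOverflowJobListingsMicroService()
listing = service.get_job_listing_info("122517")
print(listing)  # → {'ID': '122517'}
```

This keeps the scraping logic testable in isolation, which is one of the main payoffs of splitting the scraper out into its own service.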