Scrapinghub is a fast-growing, diverse technology business that turns web content into useful data with a cloud-based web crawling platform, off-the-shelf datasets, and turnkey web scraping services.
We’re a globally distributed team of more than 140 SHubbers, working from over 30 countries, who are passionate about scraping, web crawling, and data science.
As a new SHubber, you will:
- Become part of a self-motivated, progressive, multi-cultural team.
- Have the opportunity to work remotely.
- Have the opportunity to go to conferences and meet the team from across the globe.
- Get the chance to work with cutting-edge open source technologies and tools.
About the Job:
Scrapinghub is looking for software engineers to join our Professional Services team to work on web crawler development with Scrapy, our flagship open source project.
Are you interested in building web crawlers harnessing the Scrapinghub platform, which powers crawls of over 3 billion pages a month?
Do you like working in a company with a strong open source foundation?
Scrapinghub helps companies ranging from Fortune 500 enterprises to up-and-coming early-stage startups turn web content into useful data.
- Design, develop, and maintain Scrapy web crawlers.
- Leverage the Scrapinghub platform and our open source projects to perform distributed information extraction, retrieval, and data processing.
- Identify and resolve performance and scalability issues in distributed crawling at scale.
- Help identify, debug, and fix problems in open source projects, including Scrapy.
Scrapinghub’s platform and Professional Services offerings have grown tremendously over the past couple of years, and there are many big projects waiting in the pipeline; in this role you would be a key part of delivering them. Here’s what we’re looking for:
- 2+ years of software development experience.
- Solid Python knowledge.
- Good communication skills in written and spoken English.
- Availability to work full time.
Bonus points for:
- Scrapy experience.
- Familiarity with techniques and tools for crawling, extracting, and processing data (e.g. Scrapy, NLTK, pandas, scikit-learn, MapReduce, NoSQL).