MLabs

Research Crawling Engineer

80,000 - 175,000 USD / month
Senior · Full-time
#339125 · Added today
Source: MLabs

Tech Stack / Keywords

Networks, JavaScript, Machine Learning, Go, Rust, Python, Java, C++

Company and role

We are hiring on behalf of our client, a technical infrastructure firm that delivers massive-scale web data to organizations developing advanced artificial intelligence models. The client operates high-capacity bandwidth-sharing networks and a distributed crawler capable of accessing high-quality public web data at global scale. The team has also built sophisticated pipelines for ingesting, segmenting, and annotating billions of multimedia files, enabling dataset creation for frontier research labs.


Requirements

  • Extensive programming experience in one or more of the following: Go, Rust, Python, Java, or C++.
  • Proven experience in building web crawlers or large-scale data pipelines.
  • Solid understanding of HTTP, networking protocols, and browser behavior.
  • Familiarity with distributed systems and parallel processing techniques.
  • Experience handling large datasets, ideally at the terabyte to petabyte scale.
  • Demonstrated ability to debug and maintain systems within unstable or adversarial environments.

Preferred Qualifications:

  • Experience with NLP pipelines or dataset curation for machine learning.
  • Familiarity with LLM pre-training data or retrieval systems.
  • Practical experience with headless browsers (e.g., Playwright, Puppeteer, or Chrome DevTools Protocol).
  • Knowledge of proxy systems, IP rotation, and large-scale request orchestration.
  • Background in data quality evaluation or benchmarking.
  • Experience running workloads on cloud or bare-metal infrastructure.
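For illustration, the IP rotation and request orchestration mentioned above often starts with something as simple as round-robin selection over a proxy pool. A minimal sketch (the proxy hostnames are placeholders, not the client's actual infrastructure; production systems would add health checks and per-host throttling):

```python
import itertools

# Hypothetical proxy pool; a real deployment would fetch these from a
# rotation service and continuously health-check them.
PROXIES = [
    "http://proxy-a.example:8080",
    "http://proxy-b.example:8080",
    "http://proxy-c.example:8080",
]

def make_proxy_picker(proxies):
    """Return a callable that yields proxies round-robin, wrapping around."""
    pool = itertools.cycle(proxies)
    return lambda: next(pool)

pick = make_proxy_picker(PROXIES)
picks = [pick() for _ in range(4)]  # cycles a, b, c, then back to a
```

Each outgoing request would then be issued through `pick()`, spreading load evenly across exit IPs.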

Responsibilities

  • Construct and maintain large-scale web crawlers across diverse domains.
  • Design high-throughput, fault-tolerant systems for data collection, managing volumes ranging from millions to billions of URLs per day.
  • Navigate anti-bot systems, rate limits, and dynamic, JavaScript-heavy websites.
  • Develop robust pipelines for data cleaning, deduplication, filtering, and normalization.
  • Build and maintain datasets specifically structured for research and machine learning model training.
  • Monitor and optimize crawl performance, coverage, and data quality through rapid iteration.
  • Collaborate with research teams to ensure data collection efforts align with modeling requirements.
  • Optimize infrastructure to ensure cost-efficiency, low latency, and reliability.
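As a minimal illustration of the cleaning and deduplication work described above, a crawler typically canonicalizes URLs and fingerprints page bodies so exact duplicates are skipped. A sketch using only the standard library (the normalization rules and hash choice are assumptions, not the client's actual pipeline):

```python
import hashlib
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url: str) -> str:
    """Canonicalize a URL: lowercase scheme/host, drop fragment and default ports."""
    parts = urlsplit(url)
    host = (parts.hostname or "").lower()
    if parts.port and parts.port not in (80, 443):
        host = f"{host}:{parts.port}"
    return urlunsplit((parts.scheme, host, parts.path or "/", parts.query, ""))

def content_fingerprint(body: bytes) -> str:
    """Hash the raw page bytes so byte-identical fetches can be skipped."""
    return hashlib.sha256(body).hexdigest()

seen = set()

def is_new(url: str, body: bytes) -> bool:
    """True only the first time this (canonical URL, content) pair appears."""
    key = (normalize_url(url), content_fingerprint(body))
    if key in seen:
        return False
    seen.add(key)
    return True
```

At billions of URLs per day the `seen` set would live in a sharded store (or a Bloom filter traded against false positives), but the canonicalize-then-fingerprint shape stays the same.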

Offer

  • Impactful Opportunity: Contribute to the development of a web-scale crawler and knowledge graph at the forefront of AI data accessibility.
  • High-Performance Culture: Join a lean, low-ego team that prioritizes high output and professional growth.
  • Remote Work: This position is part of a fully remote team, offering flexibility and autonomy.
  • Competitive Compensation: A package including a competitive salary, comprehensive benefits, and equity, commensurate with experience and the ability to operate at scale.

Other information

Location: Remote - must have a 6-hour overlap with EST

Commitment to Equality and Accessibility: The employer is committed to equal opportunities, non-discrimination, accessible job adverts, and providing information in accessible formats. Reasonable adjustments during the hiring process are available upon request.

Data Privacy: MLabs Ltd collects and processes personal information for recruitment purposes only, managed securely and in compliance with data protection laws. Data may be shared with clients and trusted partners as necessary. Candidates may request data deletion or withdraw consent at any time.
