Sourcing Annotators
micro1 handles bi-weekly payroll for all annotators on your projects with one simple monthly invoice
The most important input to your AI models is the data you feed them, and great data requires great annotators. micro1 solves this by interviewing thousands of applicants for each opening and selecting only the very best candidates. For our annotators, we conduct a technical verbal interview, assess soft skills, and follow up with an annotation assessment in the candidate’s specialty. We can do this 24/7 in any language using our in-house AI Interviewer, which interviews thousands of people every day, at whatever time and in whatever time zone suits them.
micro1 immediately matches open job requirements with the 50,000 experts in our talent pool and posts openings on platforms like LinkedIn, X jobs and their international equivalents when needed. Applicants are screened with our AI Interviewer and a report is generated for our recruiters to review, all on autopilot.
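As a rough illustration of what this matching step can look like (the profile fields, requirement fields, and filtering logic below are hypothetical assumptions for the sketch, not our production system), a new requirement can be checked against talent-pool profiles before a recruiter ever gets involved:

```python
from dataclasses import dataclass

@dataclass
class ExpertProfile:
    name: str
    skills: set[str]          # e.g. {"rlhf", "python", "physics"}
    country: str
    hourly_rate: float        # expert's asking rate in USD

@dataclass
class JobRequirement:
    required_skills: set[str]
    allowed_countries: set[str]
    max_hourly_rate: float

def match_experts(req: JobRequirement, pool: list[ExpertProfile]) -> list[ExpertProfile]:
    """Return pool members who satisfy every hard requirement of the job."""
    return [
        e for e in pool
        if req.required_skills <= e.skills          # has all required skills
        and e.country in req.allowed_countries
        and e.hourly_rate <= req.max_hourly_rate
    ]
```

Only when a filter like this turns up too few candidates does an opening need to be posted publicly.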
You can’t have RLHF without Humans, so let’s dive into our sourcing and vetting process. When we receive a new job requirement, whether from our Talent as a Service customers or from our Human Data clients, we create a job on our platform and have our technical recruiters search our talent pool of 50,000 experts. We can usually fill new requirements immediately; when we can’t, we post the openings on platforms like LinkedIn, X jobs and their international equivalents. Job seekers apply on our job board, answer qualifying questions such as country of residence, and enter their hourly rate expectations.

Applicants then receive an invite to an AI Interview, which they can complete at whatever time suits them. The interview records the candidate’s webcam, and we require candidates to share their screen. We use these feeds to run in-house proctoring algorithms while preserving candidate privacy by running them on-device only. The resulting metrics are combined into a simple proctoring score, although our recruiters never make a decision based solely on that score. If a candidate who performed well receives a low proctoring score, our recruiters review the recording and apply their state-of-the-art human judgment.
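To make the scoring step concrete, here is a minimal sketch of how several on-device signals might be combined into one number. The signal names, weights, and threshold behavior are illustrative assumptions, not our actual proctoring algorithm:

```python
# Illustrative only: signal names and weights are assumptions.
# Each signal is normalized to [0, 1], where 1.0 means "no anomaly detected".
PROCTORING_WEIGHTS = {
    "face_present": 0.35,      # candidate visible on webcam
    "gaze_on_screen": 0.25,    # attention stays on the shared screen
    "single_speaker": 0.20,    # no second voice detected
    "no_tab_switching": 0.20,  # shared screen stays on the interview
}

def proctoring_score(signals: dict[str, float]) -> float:
    """Weighted average of on-device signals, returned as a 0-100 score."""
    total = sum(
        PROCTORING_WEIGHTS[name] * signals.get(name, 0.0)
        for name in PROCTORING_WEIGHTS
    )
    return round(100 * total, 1)

# A low score only flags the recording for human review; it never rejects anyone on its own.
print(proctoring_score({
    "face_present": 0.98, "gaze_on_screen": 0.90,
    "single_speaker": 1.00, "no_tab_switching": 0.75,
}))  # -> 91.8
```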
AI Interviews allow candidates to prove their skills regardless of their background, resulting in a fairer and more meritocratic process. Our interviews always start with a theoretical skills portion whose questions are uniquely generated for each candidate. The conversational format lets candidates ask for clarification and lets the interview skip over questions the candidate is unable to answer. For annotator roles, we embed our data platform into the proctored environment so the applicant can read the instructions for a task representative of RLHF work, complete it, and have it auto-scored. For software engineer roles, we add a data structures & algorithms coding test with questions generated on the fly by AI. Candidates who pass our interview process are then invited to complete a background check before working with any of our partners.
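As a sketch of the auto-scoring idea, assuming the assessment is graded against reference labels (the function, threshold, and data below are hypothetical; our actual rubric is more involved), scoring a candidate's work could look like:

```python
def score_assessment(candidate_labels: dict[str, str],
                     gold_labels: dict[str, str],
                     pass_threshold: float = 0.8) -> tuple[float, bool]:
    """Fraction of items labeled the same as the gold reference, plus pass/fail."""
    correct = sum(
        candidate_labels.get(item_id) == gold
        for item_id, gold in gold_labels.items()
    )
    accuracy = correct / len(gold_labels)
    return accuracy, accuracy >= pass_threshold

# Example: 4 of 5 reference items answered correctly -> (0.8, True)
gold = {"q1": "A", "q2": "B", "q3": "B", "q4": "A", "q5": "C"}
answers = {"q1": "A", "q2": "B", "q3": "A", "q4": "A", "q5": "C"}
print(score_assessment(answers, gold))
```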
The entire interview is proctored and we never repeat questions, which allows us to safely and ethically source and deeply vet the humans who will contribute to building AGI & ASI.