The most important input to your AI models is the data you feed them, which requires great annotators. micro1 solves this by interviewing thousands of applicants for each opening and selecting only the very best candidates. Each annotator goes through a technical verbal interview that also assesses soft skills, followed by an annotation assessment in the candidate’s specialty. Our in-house AI Recruiter lets us do this 24/7 in any language, interviewing thousands of people every day at the time most convenient for them.

The first step is to receive a new job requirement from one of our Human Data clients. We then create a job on our platform and have our technical recruiters search our talent pool of 25,000+ experts. We can usually close new requirements immediately; if not, we post the role on major job sites. Job seekers apply and answer qualifying questions such as country of residence and compensation expectations. Qualified applicants then receive an invite to an AI Interview, which they can complete at a time that suits them.
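The sourcing flow above can be sketched as a small matching routine. Everything here is an illustrative assumption rather than our production schema: the qualifying questions are reduced to country of residence and expected rate, and `post_job` stands in for posting the role on job sites. The sketch only captures the pool-first, post-as-fallback logic:

```python
from dataclasses import dataclass, field

@dataclass
class Applicant:
    name: str
    country: str
    expected_rate: float  # compensation expectation, e.g. hourly USD
    skills: list[str] = field(default_factory=list)

@dataclass
class JobRequirement:
    required_skills: list[str]
    allowed_countries: set[str]
    max_rate: float

def qualifies(applicant: Applicant, job: JobRequirement) -> bool:
    """Qualifying questions: country of residence and compensation expectations."""
    return (applicant.country in job.allowed_countries
            and applicant.expected_rate <= job.max_rate)

def source_candidates(job, talent_pool, post_job):
    """Try to close the requirement from the existing talent pool first;
    otherwise post the role and screen the incoming applicants."""
    matches = [a for a in talent_pool
               if qualifies(a, job)
               and set(job.required_skills) <= set(a.skills)]
    if matches:
        return matches
    # Requirement not closed from the pool: post on major job sites
    # and apply the same qualifying questions to new applicants.
    return [a for a in post_job(job) if qualifies(a, job)]
```

Candidates returned by `source_candidates` would then receive the AI Interview invite.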

Candidates share their webcam and screen as they begin the interview. This data feeds an in-house proctoring algorithm that runs on-device to preserve candidate privacy. The candidate converses with the AI Recruiter across 3-5 skills, spending 5-7 minutes on each. Engineering candidates then continue to a coding exercise, and everyone finishes with a skill-specific RLHF exercise. All of this data is used to generate an AI Interview report for our human recruiters to review.
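The interview flow can be outlined as a simple plan builder. The stage names are illustrative assumptions, not our production schema; only the structure comes from the description above (3-5 skill conversations, an extra coding exercise for engineers, a skill-specific RLHF exercise at the end):

```python
def build_interview_plan(skills: list[str], is_engineer: bool) -> list[tuple[str, str]]:
    """Return an ordered list of (stage, topic) pairs for one AI Interview."""
    if not 3 <= len(skills) <= 5:
        raise ValueError("the verbal portion covers 3-5 skills")
    # One 5-7 minute conversation per skill.
    plan = [("conversation", skill) for skill in skills]
    if is_engineer:
        # Engineers get an additional coding exercise.
        plan.append(("coding_exercise", "data structures and algorithms"))
    # Everyone finishes with a skill-specific RLHF exercise;
    # picking the first skill here is an arbitrary placeholder.
    plan.append(("rlhf_exercise", skills[0]))
    return plan
```

The completed plan, together with the proctoring signals, is what feeds the generated AI Interview report.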

micro1 immediately matches open job requirements against the 25,000+ experts in our talent pool and, when needed, posts openings on platforms like LinkedIn, X jobs and their international equivalents. Applicants are screened by our AI Recruiter and a report is generated for our recruiters to review, all on autopilot.

AI Interviews let candidates showcase their skills regardless of what’s written on their resumes, resulting in a fairer, more meritocratic process. Every interview starts with a theoretical and situational portion, with questions generated uniquely for each candidate. The conversational format allows candidates to ask for clarification, and the AI Recruiter skips questions and topics a candidate is unable to answer.

For annotator roles, our data platform is integrated into the proctored environment, where applicants can review task instructions representative of RLHF work and complete an auto-scored assessment. For software engineer roles, we include an additional coding test focused on data structures and algorithms, featuring dynamically generated problems created by AI.
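A minimal sketch of what "auto-scored" could mean for the annotation assessment. The exact-match scoring scheme below is an assumption for illustration; a real rubric for RLHF-style tasks would be richer than a single reference answer per task:

```python
def auto_score(responses: list[str], answer_key: list[str]) -> float:
    """Score an assessment as the fraction of tasks where the candidate's
    choice matches the reference answer (scoring scheme is an assumption)."""
    if not responses or len(responses) != len(answer_key):
        raise ValueError("one response per task is required")
    correct = sum(r == k for r, k in zip(responses, answer_key))
    return correct / len(answer_key)
```

For example, a candidate who picks the preferred completion on two of three ranking tasks would score about 0.67.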

An example of an AI Interview report. From here, our recruiters can open the RLHF exercise and review the candidate’s answers in depth.

We have 3 courses on Udemy: the first two teach AI tooling that improves the efficiency of the developers in our talent pool, and our most recent course explains RLHF and the important role annotators fulfill. Candidates who pass our interview process then take the relevant courses.

Finally, candidates are invited to complete a background check before working with any of our partners. Because the entire interview is proctored and we rarely repeat questions, we can safely and ethically source and deeply vet the humans who will contribute to building AGI.