Team pod structure
All projects should begin with a preparation phase, which may also serve as a pilot phase in collaboration with the data vendor. During this stage, the data vendor is responsible for drafting clear and detailed annotation guidelines. A small team of experienced experts should then use these guidelines to complete an initial set of tasks. These tasks are reviewed by the project manager and the research team to evaluate guideline clarity and annotation quality.
Experts are first introduced to example tasks that reflect the expected output. They are also provided with instructional videos that walk them through the annotation process and explain the project requirements. Before submitting any official tasks, experts complete several rounds of practice tasks. Each round includes targeted feedback to help them align their work with project standards. This is an iterative process of refinement intended to ensure quality and consistency from the outset.
To support ongoing learning, we maintain a shared repository of common errors that both new and existing experts can consult. This helps reinforce best practices and prevent repeated mistakes. The iterative training process also familiarizes experts with the quality control and review mechanisms applied to their work, ensuring they understand how evaluations are conducted.
We clearly communicate the purpose and use of the data to all experts, reinforcing the value of their contributions. Additionally, we maintain a curated list of exemplary tasks, which serve as reference points for formatting, length, and detail. These examples provide a clear benchmark and help set expectations for high-quality submissions.
Use this first batch of tasks to iterate on the instructions. These tasks then become the first set of benchmark tasks, which you can use to test future onboarded experts. You would give new experts these tasks and expect their answers to be nearly the same as those produced during the preparation phase; this way, you can immediately score an expert's performance on the benchmark before allowing them to work on real tasks. Additionally, you should periodically give benchmark tasks to experts to check for continued adherence to the instructions.
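Concretely, benchmark scoring can be as simple as comparing a new expert's submissions against the answers agreed on during the pilot. Below is a minimal sketch of that gating step; the `BenchmarkTask`, `score_expert`, and `ready_for_real_tasks` names, the exact-match comparison, and the 0.9 threshold are illustrative assumptions rather than any particular vendor's tooling.

```python
# Minimal sketch of benchmark-based gating for newly onboarded experts.
# Assumes each benchmark task has a single reference answer and that
# agreement can be checked by direct comparison; real projects often
# use rubric scores or reviewer grading instead.

from dataclasses import dataclass


@dataclass
class BenchmarkTask:
    task_id: str
    reference_answer: str  # answer agreed on during the preparation/pilot phase


def score_expert(submissions: dict[str, str], benchmark: list[BenchmarkTask]) -> float:
    """Return the fraction of benchmark tasks the expert answered as expected."""
    if not benchmark:
        return 0.0
    matches = sum(
        1
        for task in benchmark
        if submissions.get(task.task_id, "").strip() == task.reference_answer.strip()
    )
    return matches / len(benchmark)


def ready_for_real_tasks(
    submissions: dict[str, str],
    benchmark: list[BenchmarkTask],
    threshold: float = 0.9,  # placeholder cutoff; set per project
) -> bool:
    """Gate: only experts who meet the threshold move on to real tasks."""
    return score_expert(submissions, benchmark) >= threshold
```

In practice the comparison step is usually softer than exact string equality, but the gating logic stays the same: score new experts against the pilot-phase answers, then admit, retrain, or offboard based on the result.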
Some data vendors operate mostly flat hierarchies for their projects. At micro1, we’ve learned that placing experts and their reviewers into a single team or “pod” has several benefits:
- People know who their manager is.
- Pod Leads feel responsible for upskilling their experts.
- Experts feel more comfortable asking for clarification within their pods.
- Since everyone in the pod has the same specialization, pods can stay together between projects.
- Experts can take feedback from their pod’s reviewers more constructively.
At micro1, pods are typically composed of 10 experts, 5 reviewers, and a pod lead. The pod lead may be either a senior reviewer or a member of micro1’s Client Success team. Pod sizes can vary, but a good rule of thumb is that the pod lead should be able to recall the name of every expert in their pod. If the pod is too large, the pod lead may not be able to efficiently manage and track the progress of their experts.
While research teams don’t need to be familiar with their data vendor’s internal team structures, they should be aware that different structures can lead to different incentives and project outcomes, and they should feel comfortable asking about this when selecting a data vendor.