Description: |
JD for Coding Project.

General Overview:
We are looking for skilled professionals to participate in the Coding Project, where you will evaluate an AI chatbot's responses to coding-related prompts. Your role is to assess the quality, accuracy, and functionality of code-based responses, ensuring they meet high standards. This critical position requires attention to detail and strong coding-evaluation skills.

Key Responsibilities:
- Analyze and evaluate two AI-generated responses to the same coding-related prompt.
- Assess how well each response follows the prompt's instructions and assign a rating based on compliance.
- Verify the factual accuracy of each response, ensuring all text and code provided is correct.
- Test and validate all code snippets by creating and running the necessary test cases.
- Assign ratings for instruction adherence and factual accuracy (No Issue / Minor Issue / Major Issue / N/A).
- Provide a concise written justification for each rating, explaining the reasoning behind your evaluation.
- Compare the two responses and rank them by overall quality, or mark them as equal if they are of similar standard.
- Document your ranking decision with a clear explanation.

Qualifications & Requirements:
- At least 7 years of relevant experience.
- Strong knowledge of programming languages and experience in code review and debugging.
- Attention to detail and the ability to identify even minor issues in AI-generated responses.
- Experience in software development, code testing, or technical writing is a plus.
- Ability to communicate findings effectively in written form.

Given Languages:
R, TypeScript, Scala, Excel, C#, Kotlin, Algorithm, NLP, Salesforce, UI/UX