
AI QA Research Internship

Wellfit Technologies
Irving, TX

AI QA Research Intern

Location: Hybrid (On-site preferred; Dallas/Irving area)

Schedule: June 1 – July 30 | Monday – Thursday | 8:30 AM – 4:30 PM (1-hour lunch)

Team: Quality Assurance / Engineering

Reports To: QA Leadership



About Wellfit


Wellfit is a healthcare fintech platform modernizing payments and financing through AI-accelerated development practices. As we expand our AI-driven engineering methodology (BMAD), we are redefining how Quality Assurance operates to ensure independence, speed, and rigor in validation.


This internship sits at the intersection of AI systems, software testing, and performance engineering.



Position Overview


We are seeking a technically grounded and intellectually curious AI QA Research Intern to evaluate, validate, and compare AI-powered testing tools within our engineering organization.


This role focuses on researching AI compatibility in QA workflows, analyzing the BMAD QA Agent, and comparing it against other AI-driven testing platforms. The intern will design proof-of-concepts, assess strengths and limitations, and deliver structured recommendations to leadership.


If core AI research objectives are completed ahead of schedule, the intern will also contribute to performance testing initiatives using K6 (Grafana) to support load, stress, and scalability validation.


This role blends:

• AI systems evaluation

• Software testing fundamentals

• Analytical research

• Performance engineering exposure


It is ideal for someone excited about the future of AI in software engineering and eager for hands-on experimentation within real evaluation frameworks.



Key Responsibilities


1. BMAD QA Agent Evaluation

• Use the BMAD QA Agent in a controlled environment to test product components

• Assess compatibility between BMAD-generated code and BMAD QA validation

• Identify edge cases, inconsistencies, and limitations

• Develop a structured, repeatable evaluation framework

• Document findings clearly and methodically



2. Comparative AI Analysis


Conduct structured comparisons across:

• BMAD QA Agent

• Kane AI (the AI QA platform currently in use)

• Additional AI tools identified by QA leadership


Responsibilities include:

• Building proof-of-concepts in each platform

• Evaluating:

  • Test coverage depth

  • Accuracy

  • Speed

  • Independence of validation

  • Maintainability

  • Integration feasibility

• Identifying where hybrid QA models may provide stronger validation



3. Performance Testing Expansion (K6 – Grafana)


If AI research milestones are met, the intern may:

• Assist in building and executing performance tests using K6

• Develop load and stress test scripts for key workflows

• Analyze system behavior under simulated traffic

• Identify bottlenecks and scalability constraints

• Document findings and recommend optimization areas


This phase provides hands-on exposure to performance engineering alongside AI validation strategy.
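For context, K6 tests are written as JavaScript scripts. Below is a minimal sketch of the kind of load test described above, using a placeholder endpoint and illustrative thresholds; the actual workflows, traffic profiles, and pass/fail criteria would be defined with QA leadership:

```javascript
// Minimal K6 load-test sketch (placeholder URL and illustrative numbers).
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '30s', target: 20 }, // ramp up to 20 virtual users
    { duration: '1m', target: 20 },  // hold steady load
    { duration: '30s', target: 0 },  // ramp back down
  ],
  thresholds: {
    // Fail the run if 95th-percentile request latency exceeds 500 ms
    http_req_duration: ['p(95)<500'],
  },
};

export default function () {
  const res = http.get('https://example.com/api/health'); // placeholder endpoint
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // pacing between iterations per virtual user
}
```

Run with `k6 run script.js`; the `stages` array shapes the virtual-user ramp for load and stress scenarios, and `thresholds` turn latency goals into automatic pass/fail checks.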



4. Research & Reporting

• Produce a structured summary analysis outlining:

  • Strengths and weaknesses of each AI platform

  • Ideal use cases

  • Risks and limitations

  • Long-term strategic recommendations

• Present findings to QA and Engineering leadership



5. Cross-Functional Collaboration

• Engage with developers, architects, and product stakeholders

• Ask thoughtful questions about workflows and AI integration

• Contribute ideas around hybrid QA models combining automation, AI, and performance testing



What We’re Looking For


Education


Currently pursuing or recently completed a degree in:

• Computer Science

• Software Engineering

• Data Science

• Artificial Intelligence

• Related technical field



Technical Foundation

• Working knowledge of at least one programming language (Python preferred but not required)

• Basic understanding of software testing concepts

• Ability to read and interpret generated code

• Comfort learning and experimenting with AI tools independently


Nice to Have:

• Familiarity with LLM platforms (OpenAI, Gemini, etc.)

• Exposure to performance testing concepts or tools (K6, JMeter, etc.)



Ideal Candidate Traits

• Strong intellectual curiosity about AI and emerging technologies

• Proactive and self-directed

• Analytical mindset with attention to detail

• Strong written and verbal communication skills

• Comfortable in experimentation-driven environments

• Interested in both AI systems and performance engineering fundamentals



What Success Looks Like


By the end of the internship, the successful intern will have:

• Built a documented comparison framework for AI-based QA tools

• Delivered clear recommendations for AI adoption strategy in QA

• Produced validated proof-of-concepts demonstrating real-world testing scenarios

• Developed initial performance test scripts and findings using K6 (if phase two is reached)

• Contributed insight into how AI-generated code can be independently validated and stress-tested at scale



Growth Opportunity


High-performing interns may be considered for full-time opportunities as AI-driven QA and performance validation become core functions within the organization. Strong performance in AI validation strategy and performance testing can accelerate long-term growth within Engineering or QA.



Why This Role Matters


As development accelerates through AI-assisted methodologies, independent validation and performance resilience become critical. This role directly contributes to building the safeguards required for sustainable AI adoption at scale.

$18 - $20 an hour