Posted 6 days ago

Automation Engineer

Puretech Digital - A Genesis Company

Kolkata · Full-time · Mid Level · On-site

Job Description

About the Role

We are looking for a builder: someone who writes code first and happens to be obsessed with quality. This is not a manual testing role, and not a traditional QA role. You will architect and build an AI-assisted testing capability from the ground up: generating synthetic test data from scenario narratives using LLMs, automating test-execution pipelines in Python, and building smart comparison frameworks that validate outputs against expected results.

If you have called yourself an SDET, a Python automation engineer, a test infrastructure engineer, or a backend developer who has owned testing frameworks, this role is for you. You will have end-to-end ownership of the validation capability and directly shape the path to a reliable product beta.

What You Will Build

1. Scenario-to-Data Pipeline (LLM-powered)

Translate verbal test-scenario narratives into structured, machine-readable test specs (entities, timelines, events, edge conditions). Use ChatGPT or other LLMs with carefully designed prompts to generate realistic synthetic input datasets at scale. Build versioned, reproducible workflows so data generation is deterministic and auditable.
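One way a pipeline stage like this could enforce structure and auditability on LLM output is sketched below, using only the standard library. The spec fields and the `validate_spec` / `spec_fingerprint` helpers are illustrative assumptions, not part of this role's actual stack:

```python
import hashlib
import json

# Hypothetical required schema for a machine-readable test spec.
REQUIRED_KEYS = {"scenario_id", "entities", "timeline", "edge_conditions"}

def validate_spec(raw: str) -> dict:
    """Parse an LLM response and reject specs that miss required fields."""
    spec = json.loads(raw)
    missing = REQUIRED_KEYS - spec.keys()
    if missing:
        raise ValueError(f"spec missing keys: {sorted(missing)}")
    return spec

def spec_fingerprint(spec: dict) -> str:
    """Stable content hash for versioning: identical specs get identical ids."""
    canonical = json.dumps(spec, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# A response as it might come back from the model (hypothetical shape):
raw = ('{"scenario_id": "S-001", "entities": ["user"], '
       '"timeline": [], "edge_conditions": ["empty_cart"]}')
spec = validate_spec(raw)
print(spec_fingerprint(spec))  # stable 12-character id for audit trails
```

Canonical JSON serialization before hashing is what makes the fingerprint deterministic across runs, which is the property that lets generated data be audited and reproduced.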

2. Synthetic Data Generation & Test Automation

Write Python scripts to generate input datasets with parameter sweeps, boundary conditions, error injection, and missing-field variants. Maintain a curated, tagged regression library of scenario packs for functional, boundary, and compatibility testing. Ensure all datasets are schema-valid, well-formed, and carry traceability metadata.
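The parameter sweeps and missing-field variants described here can be sketched in a few lines of Python; the field names and helper functions are hypothetical examples, not the team's actual tooling:

```python
import itertools

def parameter_sweep(base: dict, sweeps: dict) -> list:
    """Cartesian product of swept parameter values applied over a base record."""
    keys, values = zip(*sweeps.items())
    return [{**base, **dict(zip(keys, combo))} for combo in itertools.product(*values)]

def missing_field_variants(record: dict) -> list:
    """One variant per field, with that field dropped (missing-data testing)."""
    return [{k: v for k, v in record.items() if k != drop} for drop in record]

# Hypothetical base record with boundary values swept in:
base = {"amount": 10, "currency": "INR", "status": "ok"}
cases = parameter_sweep(base, {"amount": [0, -1, 10**9], "status": ["ok", "failed"]})
print(len(cases))                        # 3 * 2 = 6 combinations
print(len(missing_field_variants(base))) # 3 variants, one per dropped field
```

Generating variants combinatorially like this keeps boundary and missing-field coverage systematic rather than hand-curated, which is what makes a tagged regression library maintainable.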

3. Execution & Cross-Baseline Validation

Execute datasets against the system under test via APIs and CLIs; capture outputs (logs, reports, metrics) in a structured way. Integrate test runs into CI/CD pipelines, producing consistent, repeatable reports with pass/fail/delta summaries.

4. AI-Assisted Comparison & Result Analysis

Build a comparison framework (rule-based, optionally LLM-assisted) to diff outputs against golden expectations.
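A minimal, stdlib-only sketch of such a rule-based comparison against golden expectations; the field names and the float tolerance are illustrative assumptions:

```python
def diff_against_golden(observed: dict, golden: dict, float_tol: float = 1e-6) -> list:
    """Return developer-readable deltas between observed and golden outputs."""
    deltas = []
    for key in sorted(set(observed) | set(golden)):
        if key not in observed:
            deltas.append(f"{key}: missing in observed (golden={golden[key]!r})")
        elif key not in golden:
            deltas.append(f"{key}: unexpected field (observed={observed[key]!r})")
        else:
            obs, exp = observed[key], golden[key]
            if isinstance(obs, float) and isinstance(exp, float):
                if abs(obs - exp) > float_tol:  # tolerate float noise, flag real drift
                    deltas.append(f"{key}: observed {obs} vs expected {exp}")
            elif obs != exp:
                deltas.append(f"{key}: observed {obs!r} vs expected {exp!r}")
    return deltas

# Hypothetical system-under-test output vs its golden expectation:
golden = {"total": 100.0, "status": "settled"}
observed = {"total": 100.0000001, "status": "pending"}
for d in diff_against_golden(observed, golden):
    print(d)  # only the status delta survives the float tolerance
```

Rule-based diffs like this give deterministic pass/fail signals; an LLM-assisted layer would sit on top, summarizing or clustering the deltas rather than deciding them.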

Detect anomalies, regressions, and inconsistencies, and explain deltas in a developer-friendly way. Produce high-quality defect reports: scenario ID, input hash, observed-vs-expected delta, suspected module.

Skills & Experience

Must Have

Python (strong) - Build automation scripts, parsers, validators, and CLI tools, not just glue code.

You own the framework.
LLM / Prompt Engineering - Practical experience using ChatGPT or similar to generate structured outputs; you know how to constrain, validate, and iterate on prompts.
Test Automation Pipeline - Built end-to-end: data setup -> execute -> collect outputs -> compare -> report.

You own the whole flow, not just one step.
API Testing (REST) - Validating backend computations via API responses, logs, and derived metrics.
Data Quality Thinking - Understands determinism, reproducibility, schema validation, versioning, and traceability of test data.

CI/CD Integration - pytest, GitHub Actions, Jenkins, or similar; plugging test runs into pipelines with automated reporting.

Good to Have

Experience with event-log, time-series, or transactional datasets and validating rule-based outcomes.
Exposure to golden dataset suites and managing expected-result evolution as the system changes.

LLM quality controls: schema validation, self-check prompts, adversarial prompting, automated sanity checks on generated data.
Frameworks such as Great Expectations, Hypothesis, Pydantic, or similar data-validation libraries.
Experience building lightweight internal tools (CLI or simple web UI) for scenario management and execution.

Bonus (Not Required)

Property-based testing, fuzzing, or mutation testing for data-driven systems.
Experience in finance, logistics, compliance, or other domains where data correctness is non-negotiable.
Embeddings or similarity approaches to cluster diffs and summarize large result sets.

Who Should Apply

Apply if you are:
An SDET or automation engineer who writes production-quality Python and is comfortable owning an entire test infrastructure, not just writing test cases.
A Python backend or data engineer who has worked on pipelines, internal tooling, or platform engineering and wants to focus on quality assurance.
A developer who has used LLMs in a real engineering workflow: designed prompts with constraints, handled hallucinations, and integrated outputs into a pipeline.

Someone who finds edge cases exciting, not annoying.

This is not the right role if:
Your automation experience is primarily Selenium/Appium UI testing with no backend scripting or framework ownership.
You have not written end-to-end automation code (framework + execution + reporting) as the primary author.

Your background is predominantly manual or exploratory testing.

Experience & Qualifications

5-9 years of experience with clear ownership of test automation, validation frameworks, or related engineering.
Strong Python scripting is mandatory; you will be assessed on actual code during the interview process.

B.E. / B.Tech / M.Tech in Computer Science, IT, or equivalent practical experience.
Based in Kolkata or willing to relocate; this role is on-site.

