Take-home reviews, prepared by AI

AI prepares the review. You still decide who to hire.

Send any take-home: writing, strategy, design, code. Claude reads the submission against your grading guide, scores each criterion, and drafts talking points for the follow-up interview. You walk into the debrief prepared. The human still makes the call.

No credit card. Five AI reviews a month on the free plan — enough to try it on your next few submissions.

See how it works

How Graden runs your take-home assessments

No more hand-rolled spreadsheets, scattered inboxes, or inconsistent reviewer notes.

1. Send the brief

Pick from the library or upload your own with a grading guide. The candidate gets a tokenised link, a clear deadline, and no account to create.

2. Track and collect

See who's opened, who's working, who's submitted. Whatever the format (inline writing, files, links, a GitHub repo), the work is locked at submission so later edits don't move the goalposts.

3. Review, prepared

Claude reads the submission against your grading guide and drafts scores, flags and talking points. You read the work yourself, keep what lands, overrule what doesn't.

Powered by Anthropic's Claude.

Most take-home reviews happen ten minutes before the call. Ours don't.

The brief is the easy part. The review is where quality usually dies. We fix that.

One pipeline, any format

Writing samples, campaign decks, strategy briefs, design exercises, GitHub repos. Same workflow, same grading guide structure, same calibrated review.

No more separate tools: a coding-test platform for your engineer, a shared doc for your marketer, a Figma link for your designer. Graden runs all of them through the same prepared review.

Human in charge, always

Your grading guide, your criteria, your weights. Claude drafts the scores and talking points, the reviewer reads the work, agrees or overrules, and makes the call. Scores are inputs to judgment, not verdicts.

The reviewer lands prepared

Every review arrives with a summary, flags tied to each grading criterion, specific things to discuss, and integrity checks on the candidate's explanation of their own work. No more opening the submission ten minutes before the call.

The questions you'd ask in the demo call

Short, direct answers. If something's not here, ask us.

Does the AI decide who to hire?

No. Graden drafts scores, flags and talking points against your grading guide. The reviewer reads the work, agrees or overrules, adds their own notes, and makes the call. We never surface a "verdict" to the reviewer or the candidate.

What kinds of take-homes does this work for?

Anything you can score against a grading guide: writing samples, content briefs, strategy decks, marketing campaigns, design exercises, analytics tasks, code submissions. The pipeline is the same regardless of format.

Is our candidates' work safe?

Yes. Submissions are captured the moment they're sent and stored in encrypted storage, scoped to your organisation. For code, the candidate installs the Graden GitHub App on the single repo they're submitting (read-only, per-repo, revocable). Nothing is shared outside your workspace.

What if the AI gets something wrong?

It will, sometimes. That's why every talking point links back to the grading criterion it's scoring and, where relevant, the specific file, page or passage. You mark a point as agreed, push back on it, or add your own. The review you save is yours, not the model's.

How do you handle late submissions?

Flagged, not blocked. The reviewer sees that a submission was late and by how long, and decides what to do. We're not in the business of disqualifying candidates on your behalf.

Free to start. Paid plans from £49/month. Priced per review, not per seat. See pricing →

Your next take-home deserves a prepared review.

Send the brief, track the submission, and walk into the debrief ready to talk about the work.