FAQ

“Can we switch methodology in the middle of an active cycle?”

Technically yes, but you lose comparability and depth. Choose the methodology intentionally before launch.

Recommendation: complete the current cycle first, then switch methodology in the next one. If respondents show survey fatigue, address critical areas in a workshop instead of changing method mid-cycle.

“Why use Hybrid if we can just add our own custom questions?”

Hybrid is a strategic choice: it changes the measurement structure and how contextual analytics and AI interpret signals.

Customization is tactical: it adjusts what you ask, not the underlying model behavior.

Recommendation: use both approaches together, but do not treat one as a substitute for the other.

“What if we need Classic for one team, Challenger for another, and Hybrid for a third?”

That is exactly the intended operating model. Create separate projects and pick the right context for each.

One project -> one methodology -> stronger analytics and clean comparability.

“Why can’t I upload our competency model directly?”

Most organizations have already validated their competency models through parallel HR processes. Duplicating that stack here typically adds cost without adding signal.

“Aren’t competencies just behavior? Why do your indicators matter?”

In this product, competency is treated as the broader construct; what we measure directly are observable behavioral indicators, scored consistently across rater groups.

“Why these specific questions?”

They come from field practice: pilot groups, consulting projects, and assessment work that surfaced concrete behavioral evidence.

Methodologies are reviewed at regular intervals as the system collects more evidence.

“I don’t understand your pricing”

AI in this product exists to simplify the process and reduce cost. Strong tools do not have to be expensive; accessibility is a product principle.