GARP RAI Exam 2026 Readiness Quiz: Are You Ready to Sit the Exam?
- Mar 5

This “GARP RAI Exam 2026 Readiness Quiz” is designed to quickly test whether you can apply the core concepts that repeatedly show up in the syllabus: explainability/interpretability and opaqueness, privacy & cybersecurity principles, supervised learning estimation and overfitting controls, NLP foundations, and reinforcement learning frameworks.
Answer the 10 questions without looking at the solutions first. If you score 8/10 or higher and your explanations match the logic in the answer key, you’re in strong shape for sitting the exam.
1) Explainability vs interpretability
Which statement is most accurate?
A. Explainability means you can predict a model’s behavior before it runs; interpretability is only possible after deployment.
B. Explainability is about giving understandable reasons for outputs (often after the fact); interpretability is about understanding the model’s behavior more directly.
C. Explainability and interpretability are identical terms.
D. Interpretability applies only to neural networks; explainability applies only to linear models.
2) The “black box” problem
In the context of AI risk, the “black box problem” is primarily about:
A. The model having too few parameters.
B. Difficulty understanding how certain algorithms reach decisions, creating accountability and trust issues in high-impact settings.
C. The dataset being too small to train the model.
D. The model being open-source.
3) Opaqueness
A customer is not aware they are being profiled by an algorithm at all. This is an example of:
A. Confidentiality opaqueness
B. Secrecy opaqueness
C. Expertise barrier only
D. Data minimization
4) Privacy & cybersecurity
Which practice is described as the most effective way to protect privacy?
A. Collecting as much data as possible, then encrypting it
B. Data minimization (collect only what’s necessary for a specific purpose)
C. Selling data only to “trusted partners”
D. Keeping data forever for future model improvements
5) Reinforcement learning
Reinforcement learning is best described as:
A. A method that always outputs a classification label.
B. A trial-and-error learning loop to learn a policy that maximizes long-term (cumulative) reward; output is typically a recommended action.
C. A clustering method for unlabeled data.
D. A method that cannot be used in finance.
6) Multi-Armed Bandit (MAB)
Which statement is true in the standard MAB setup described in the materials?
A. The agent’s current action changes the environment’s future state distribution.
B. There are many states that evolve over time like an MDP.
C. Each round has one action and one reward; the “state” is effectively constant and does not change.
D. The goal is to minimize immediate reward to explore more.
7) ε-greedy with decay
You use ε = β^(t−1) with β = 0.85. What is ε at trial t = 3?
A. 0.85
B. 0.7225
C. 0.15
D. 0.2775
8) Case study — NLP + negation risk
You’re building sentiment detection for short customer messages. A message says:
“The service is not good.”
If your model uses a basic Bag of Words (BoW) and ignores punctuation/structure, what’s the key reason it can misclassify this?
A. BoW cannot be used on short texts.
B. Ignoring negation words can flip meaning; techniques like n-grams help capture “not good” as a unit.
C. Lowercasing always makes sentiment analysis impossible.
D. Stop-word removal prevents any model from working.
9) Case study — Naïve Bayes “zero probability”
You train a Naïve Bayes text classifier. In your labeled training set, a particular word never appears in class “Good,” so P(word | Good) becomes 0, and the entire class probability collapses to 0 for documents containing that word.
What’s the standard fix?
A. Increase the learning rate
B. Drop the word from the vocabulary in all cases
C. Apply Laplace smoothing (e.g., add 1 to counts) so probabilities are non-zero
D. Switch to k-means clustering
10) Case study — Overfitting control & regularization choice
You have many correlated features, and your model overfits. You want a method that can shrink coefficients and can also set some coefficients exactly to zero (feature selection).
Which choice best fits?
A. Ridge regression (L2) only
B. LASSO (L1)
C. Ordinary Least Squares (OLS)
D. Using a higher-degree polynomial
Answer Key + Explanations
1) B — Explainability is about explaining decisions in understandable terms (often ex-post), while interpretability is about how directly a human can understand and anticipate the model’s behavior.
2) B — The black box problem is the difficulty of understanding decision logic in opaque models (notably complex ones), which creates accountability gaps and erodes trust in high-impact domains.
3) B — Not being aware you’re subject to algorithmic decision-making is “secrecy” as a form of opaqueness.
4) B — Data minimization is described as the most effective privacy protection: collect only what’s necessary for a specific purpose, because collection itself imposes risk.
5) B — Reinforcement learning uses trial-and-error feedback to learn a policy that maximizes cumulative reward, and it typically outputs a recommended action given circumstances.
6) C — In the standard MAB framing, the action affects current reward but does not change the (single) state/environment going forward; each round is essentially one action → one reward.
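The stateless one-action-one-reward loop can be sketched in a few lines of Python. This is an illustrative ε-greedy bandit, not taken from the GARP materials; the function name `run_bandit`, the Gaussian reward model, and the parameter values are all my own assumptions.

```python
import random

def run_bandit(true_means, n_rounds=5000, eps=0.1, seed=0):
    # Stateless MAB loop: each round is one action and one reward.
    # The environment never changes state, unlike an MDP.
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k
    values = [0.0] * k  # running average reward per arm
    for _ in range(n_rounds):
        if rng.random() < eps:
            a = rng.randrange(k)  # explore a random arm
        else:
            a = max(range(k), key=lambda i: values[i])  # exploit best so far
        # Reward depends only on the chosen action, not on any evolving state
        r = true_means[a] + rng.gauss(0, 1)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean update
    return values, counts
```

Run with arms of mean reward 0.1, 0.9, and 0.5, and the agent ends up pulling the best arm (index 1) far more often than the others.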
7) B — ε = 0.85^(3−1) = 0.85^2 = 0.7225. This reflects higher exploration early, decaying as trials increase.
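As a quick check of the arithmetic, the decay schedule from question 7 can be written as a one-line function (the name `epsilon` is my own):

```python
def epsilon(t, beta=0.85):
    # Decayed exploration rate: eps = beta^(t - 1).
    # High exploration at t = 1 (eps = 1), shrinking toward 0 as t grows.
    return beta ** (t - 1)
```

At t = 1 this gives ε = 1, at t = 3 it gives 0.85² = 0.7225, matching answer B.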
8) B — BoW treats words independently and can miss meaning shifts from negations; n-grams help capture phrases like “not good” as a combined feature.
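A minimal sketch of why n-grams help with negation, using a hand-rolled `ngrams` helper (my own illustrative function, not a specific library API):

```python
def ngrams(tokens, n):
    # Slide a window of length n over the token list and join each window
    # into a single feature string.
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the service is not good".split()
unigrams = ngrams(tokens, 1)  # 'not' and 'good' are separate features
bigrams = ngrams(tokens, 2)   # 'not good' survives as one combined feature
```

With unigrams alone, a BoW model sees the positive word "good" and the generic word "not" independently; the bigram "not good" is what lets the model learn the flipped sentiment.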
9) C — Laplace smoothing (commonly +1) prevents conditional probabilities from being exactly zero, avoiding the “fatal zero” issue in Naïve Bayes.
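The standard add-one fix can be shown directly as a smoothed conditional probability. This is a generic sketch of Laplace (add-α) smoothing; the function name and arguments are my own:

```python
def smoothed_prob(word_count, class_total, vocab_size, alpha=1):
    # Laplace (add-alpha) smoothing for P(word | class):
    # add alpha to the word's count and alpha * |vocabulary| to the
    # denominator, so an unseen word gets a small non-zero probability
    # instead of collapsing the whole class score to 0.
    return (word_count + alpha) / (class_total + alpha * vocab_size)
```

For a word never seen in class "Good" (count 0), with 10 total word occurrences in that class and a 5-word vocabulary, the smoothed estimate is 1/15 rather than a fatal 0.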
10) B — Ridge (L2) shrinks coefficients toward zero but typically not to exactly zero; LASSO (L1) can drive some coefficients to exactly zero, acting as feature selection—useful with many features and overfitting risk.
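The mechanical reason LASSO can zero out coefficients while Ridge cannot is the soft-thresholding operator that appears in coordinate-descent solvers for the L1 penalty. A minimal sketch (my own illustration, not any specific library's internals):

```python
def soft_threshold(z, lam):
    # Proximal operator of the L1 penalty: shrink z toward 0 by lam,
    # and clamp anything inside [-lam, lam] to exactly 0.
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0
```

A small coefficient like 0.3 with penalty 0.5 is set to exactly 0 (the feature is dropped), while a large coefficient like 2.0 is merely shrunk to 1.5. Ridge's L2 penalty only rescales coefficients, so they approach but never hit zero.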



