This paper explores the use of designed experiments in an online environment. Motivated by real-world examples, we model a scenario in which the practitioner is given a finite set of units and must select a subset of them to expend on a one-shot, multi-factor designed experiment. Following this phase, the designer is left with the remaining, unused units with which to implement any learnings from the experiment. In this setting, we answer the key design question of how much to experiment, which translates to choosing the number of replicates for a given design. We construct a Bayesian framework that captures the expected cumulative gain across the entire set of units. We derive theoretical results for the optimal number of replicates for all two-level full and fractional factorial designs with seven factors or fewer. We conduct simulations that validate the theoretical results and allow us to explore scenarios and analysis techniques not captured in the theoretical studies. Our overall results indicate that the optimal allocation of units for experimentation varies from 1% to 20% of the total units available, governed mainly by the experimental environment and the total number of units. We conclude that experimenting with the optimal number of replicates recommended by our study can lead to a cumulative improvement 80–95% greater than the expected cumulative improvement gained when a practitioner chooses the number of replicates at random.