Considering the general trends across all three participants’ step counts, there is an overall downward trend in steps from the beginning of the intervention to the end. This may be the result of habituation and comfort by association. One of the leading limitations of an ABA study design is the inability to control for effects on the outcome behavior caused by maturation and practice over time.12 In the initial days of the intervention, the novelty of the intervention itself served as an additional motivating factor (beyond the points and incentives) encouraging the adolescents to walk more. The introduction of the intervention itself may be referred to as the motivating operation (MO), the initial stimulus or event catalyzing behavior change.13 As time passed, the participants became more accustomed to the intervention process and lost the “excitement factor” that was present initially, leading to less rigorous commitment to step goals. This can be seen in the distribution of points earned across the intervention: all three participants met their step goal on at least six of the first ten days of the intervention. When the intervention phase is viewed in three stages (beginning, middle, and end), each participant exhibits an upward trajectory in number of steps during the initial third. Toward the middle of the intervention period, there is a downward trend in steps accumulated, with participants earning fewer points and therefore fewer incentives. In the final third of the intervention period, participants began to increase their step counts again, which could be attributed to participants realizing they were nearing the end of the intervention and hoping to earn additional incentives in the remaining time.
The conclusions drawn from the results of this study must be considered through the lens of the following limitations: missing data, malfunctioning data, and serial dependency.
Missing Data: All three participants experienced some form of technical difficulty with the FitBit tracker at some point during the study period. Participant 701’s FitBit account was not set up correctly at the start, resulting in a reduced baseline period. Additionally, her device failed to sync during the final days of the withdrawal period. Participant 702 adjusted his FitBit device settings in such a way as to “bootleg” the account, or lock it so that the research team could no longer access the data. Unfortunately, this prevented any data collection for this participant during the withdrawal phase. Participant 703 misplaced her FitBit during the withdrawal phase; the device was destroyed after being run over by a car in her apartment parking structure. Due to these complications, the exact number of days each participant spent in baseline, intervention, and withdrawal varied slightly, limiting the ability to draw concrete conclusions.
Malfunctioning Data: Participant 703 experienced significant technical difficulties during the middle of her intervention period. The FitBit stopped tallying steps over the correct time periods, failing to “reset” overnight or resetting in the middle of the day. This caused the recorded step count to be extremely high on some days and essentially zero on others. Once the participant was given a new tracker, the problem was alleviated. All data points collected during this period were omitted. However, because the malfunction occurred in the middle of the intervention, it is difficult to assess its exact impact on the adaptive goal-setting methodology employed during the intervention.
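The kind of screening that omission implies can be sketched as a simple range check on daily counts. The cutoff values and step series below are illustrative assumptions for exposition, not thresholds or data from the study protocol:

```python
def screen_step_data(daily_steps, low=500, high=40000):
    """Split daily step counts into retained values and omitted outliers.

    The low/high cutoffs are hypothetical: near-zero days suggest the
    tracker failed to record, and implausibly high days suggest it failed
    to reset overnight, so both are flagged for omission.
    """
    kept, omitted = [], []
    for day, steps in enumerate(daily_steps, start=1):
        if low <= steps <= high:
            kept.append((day, steps))
        else:
            omitted.append((day, steps))
    return kept, omitted

# Hypothetical mid-intervention series showing the failed-reset pattern
series = [9200, 0, 51000, 8800, 120, 9600]
kept, omitted = screen_step_data(series)
```

Here days 2, 3, and 5 would be flagged for omission, while the remaining days are retained for analysis.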
Serial Dependency: In a single-subject study design, a participant’s data points are compared with data points from that same participant rather than with data from other study participants. This is beneficial in many regards, as it reduces bias by allowing observations of intervention impact to be made independently of differences between individual subjects. The downside of this type of design is serial dependency: because consecutive measures are not fully independent of one another (i.e., values on one day correlate with values from nearby days), error in one data point may compound error in subsequent data points. To apply statistical significance testing in this type of design, the serial dependency must first be assessed using autocorrelation calculations; whether traditional significance tests can proceed depends on how substantial the autocorrelation coefficient is.43 Due to the complexity of performing this calculation with the gaps in the data caused by the limitations described above, together with the small sample size, this corrective test was not performed. Consequently, no statistical significance can be attached to the conclusions drawn, and conclusions about effects cannot be weighted heavily, as significance (i.e., the probability that the observed effect size arose from random error) cannot be confirmed. Rather, emphasis should lie primarily on the similarities in trends for all three participants across each phase of the study.
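As an illustration of the assessment described above, the lag-1 autocorrelation coefficient (how strongly each day’s value correlates with the previous day’s) can be sketched as follows. The step series is hypothetical, not data from this study:

```python
def lag1_autocorrelation(series):
    """Lag-1 autocorrelation: correlation of each day's value with the prior day's."""
    n = len(series)
    mean = sum(series) / n
    # Covariance-like sum over consecutive pairs of observations
    num = sum((series[t] - mean) * (series[t + 1] - mean) for t in range(n - 1))
    # Total sum of squared deviations for the whole series
    den = sum((x - mean) ** 2 for x in series)
    return num / den

# Hypothetical daily step counts for one participant
steps = [8200, 8500, 7900, 7600, 7400, 7800, 8100, 7000, 6800, 7200]
r1 = lag1_autocorrelation(steps)
```

A coefficient near zero suggests the measures are approximately independent and conventional tests may be reasonable; a substantial coefficient indicates serial dependency that would need correction before significance testing, as the text notes.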