In 1996, Karen K. Shelton and colleagues at the University of Alabama launched a study assessing parenting practices in families of elementary school-age children, testing a newly developed assessment system designed to tap into the most important aspects of parenting practices related to disruptive behaviors in children. The following text is a summary of their findings and discussions about the utility of this new assessment system.
Understanding the dynamic between parent and child, and how this dynamic shapes child developmental outcomes, has always proved difficult in research. We know that parental discipline practices are somehow related to disruptive child behaviors, but the assessment methods for pinpointing exactly how the two are related have largely been inadequate. After all, the attempt to understand parent-child interactions is a relatively new endeavor that developed alongside the similarly young science of psychology (itself only a little over a century old). As long as we cannot pinpoint the exact reasons for childhood behavior problems, we cannot develop the most successful interventions for improving parenting practices.
Several parenting practices have been linked to disruptive child behaviors, and these include both positive and negative practices. The positive practices are parental monitoring and supervision, and parental involvement. The negative practices are inconsistent use of discipline, failure to use positive change strategies, and excessive use of corporal punishment. Note how vaguely defined all of these practices still are. For instance, what exactly constitutes inconsistent use of discipline? Questions like this pose real difficulty for researchers.
Most studies ask either just the child or just the parent one or two questions related to these parental constructs, which is certainly not enough data to draw any sound conclusions. Too many variables go unaccounted for, and we do not even know whether these questions truly measure the constructs they are meant to represent. Other studies measure parenting style instead of the specific parenting practices that may be related to a style. However, parenting style is itself a vague construct that fails to explain the specific processes underlying any child behavior. Darling and Steinberg (1993), for instance, describe parenting style as a constellation of specific parenting practices that results in certain child behaviors, but the exact set of practices that makes up a given style is still unknown.
Thus, the Child’s Report of Parental Behavior Inventory (CRPBI) was developed in 1965 to measure specific parenting practices. But the CRPBI lacked measures of certain important parenting practices and relied only on child self-report, which, as we will see with Shelton’s study, can undermine a report’s validity.
Observing parent-child interactions in the home or in the clinic is a popular method of assessing parenting behaviors in young school-aged children, but it poses problems that render its continued use ineffective. For one, as children grow older, they become more reactive to observation and act less naturally. Second, it is difficult to set up situations, in either the lab or the natural setting, that elicit the parenting behaviors that matter most with older children. Finally, setting up such observation systems is costly and thus prohibitive.
In light of these setbacks of observational methods, Patterson and colleagues at the Oregon Social Learning Center (OSLC) developed telephone interviews that assessed parenting practices by how frequently parents engaged in them over certain time periods. Ultimately, however, the OSLC interviews had a number of limitations related to inconsistencies in their data-gathering methods.
So in 1991 Frick developed the Alabama Parenting Questionnaire (APQ), a more sophisticated assessment system modeled after the OSLC interviews. The present study by Shelton and her colleagues implements this new system with a sample of elementary school-aged children and their primary caretakers.
Data were taken from 160 children aged 6 to 13 and their primary caretakers. The participants were drawn from two sources. The first group came from referrals to the Alabama School-Aged Service (ASAS), a university-based outpatient diagnostic and referral service for children with behavioral, emotional, or learning disorders. The second was a volunteer sample drawn from local schools, whose families generally had higher socioeconomic status (SES), as well as from specific schools serving low-SES and minority families. Most of the participants came from the first group.
Since one of the primary ways the validity of the APQ was assessed was by its ability to predict disruptive behavior disorders (DBDs) in children, the volunteer group was screened to exclude children with DBD symptoms. This was done so that the volunteer group could serve as a comparison to the clinic-referred group on the parental practices that may be correlated with disruptive child behavior.
The APQ was designed to measure the five aspects of parenting practices mentioned earlier: parental involvement, monitoring/supervision, use of positive parenting techniques, inconsistency in discipline, and harsh discipline. The items for each scale can be seen in the table below.
The APQ was implemented in four different assessment formats: parent and child global report forms, and parent and child telephone interviews. The difference between the global report forms and the interviews is important to remember. Items on the global report forms were rated on a 5-point frequency scale ranging from 1 (never) to 5 (always), representing the typical frequency of those behaviors at home. Items on the telephone interviews were answered with a best estimate of the specific number of times each behavior had occurred over the previous 3 days.
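The two formats therefore yield scores on different metrics: typical-frequency ratings versus raw counts. The study's exact scoring rules are not reproduced here, but as a minimal sketch, assuming (hypothetically) that a global-report scale is scored by summing its 1-5 ratings and an interview scale by averaging counts over the 3-day window:

```python
def global_report_score(ratings):
    """Sum a scale's 1-5 typical-frequency ratings (hypothetical scoring rule)."""
    assert all(1 <= r <= 5 for r in ratings), "global-report items use a 1-5 scale"
    return sum(ratings)

def interview_score(counts, days=3):
    """Average daily frequency of a scale's behaviors over the window (hypothetical)."""
    return sum(counts) / days

# A three-item scale rated 4, 5, 3 on the global report form...
print(global_report_score([4, 5, 3]))  # 12, on a possible range of 3-15
# ...and reported as 2, 0, and 1 occurrences over 3 days in the interview.
print(interview_score([2, 0, 1]))      # 1.0 occurrence per day
```

The function names and scoring choices above are illustrative assumptions, not the APQ's published procedure; they only show how the two formats produce numbers on incomparable scales.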
This multi-format design allowed a direct comparison of the reliability and validity of the various methods of assessing parenting practices in families of elementary school-aged children, so that future assessment methods could be improved and given stronger predictive power.
In the end, one primary finding Shelton and her colleagues came upon was that child reports of parenting practices were unreliable and inadequate, for a number of reasons. Recall that the children in the sample were 6 to 13 years old, and imagine asking them a series of relatively complex questions about their interactions with their parents. Indeed, a number of the younger children gave such deviant, or obviously unrealistic, responses that their data had to be eliminated from the data set (for instance, when interviewed, some reported that a behavior had occurred 100 times). Additionally, children tended to give consistent response sets on the frequency of parenting behaviors; that is, they would report that certain parental practices occurred with either a very high or a very low frequency across the board. Ultimately, these data failed to differentiate families of children with DBD diagnoses from the volunteer control families.
In contrast, parental reports from both the global reports and the interviews proved useful for distinguishing these families. This is a significant finding given that the APQ was designed specifically to study the association between parenting and DBDs in families of school-aged children. However, there were a few exceptions.
The items on the “Poor Monitoring/Supervision” scale, for instance, had low internal consistency. In research, internal consistency is a measure of how correlated the different items of one measure are with each other. Essentially, it tells us whether those items are in fact measuring the same construct, here the parental practice of poor monitoring and supervision. One reason for the low value might be that the behaviors on this scale occur at low frequencies to begin with, so the construct may not be well captured by the interview format, which asked only how often specific behaviors had occurred within a 3-day window.
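Internal consistency is typically estimated with Cronbach's alpha, which compares the summed variances of the individual items to the variance of the total scale score: when items move together, the total's variance dwarfs the items' variances and alpha approaches 1. A sketch on made-up ratings (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 ratings from five parents on a three-item scale.
consistent = [[5, 5, 4], [2, 2, 2], [4, 5, 4], [1, 2, 1], [3, 3, 4]]
inconsistent = [[5, 1, 3], [1, 5, 2], [4, 2, 5], [2, 4, 1], [3, 1, 4]]

print(cronbach_alpha(consistent))    # high: the items move together
print(cronbach_alpha(inconsistent))  # low (here negative): the items diverge
```

The data above are invented purely to contrast a coherent scale with an incoherent one; the study's actual alpha values are not shown here.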
The “Corporal Punishment” scale also had poor internal consistency in both the global report and interview formats, most likely because it contained only three items. Oddly enough, however, the scale still distinguished between families with DBD diagnoses and volunteer families, suggesting that its internal consistency estimates may have underestimated its reliability. It is possible, for instance, that parents tend to rely on only one method of corporal punishment, in which case the three items would not be expected to correlate highly even if each is a valid indicator.
To the researchers’ surprise, the two positive parenting scales, “Involvement” and “Positive Parenting,” did not contribute to differentiating these families. This may be because as children approach adolescence, they perceive parental involvement more negatively and distance themselves from their parents. The two scales were also highly intercorrelated, suggesting that they may actually measure a single dimension.
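An inter-correlation like that is typically quantified with a Pearson correlation between the two scale scores across families; a value near 1 means the scales carry largely redundant information. A sketch on hypothetical scale scores (not the study's data):

```python
import numpy as np

# Invented total scores for ten families on the two positive parenting scales.
involvement        = [38, 29, 41, 33, 45, 27, 36, 40, 31, 44]
positive_parenting = [24, 19, 26, 21, 28, 18, 23, 25, 20, 27]

# Pearson correlation between the two scales.
r = np.corrcoef(involvement, positive_parenting)[0, 1]
print(round(r, 2))  # near 1: the two scales track each other closely
```

When two scales correlate this strongly, combining them into a single positive-parenting dimension is a common measurement decision, which is what the high inter-correlation in the study suggests.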
It is important to note, lastly, that the sample for this study is limited and cannot be generalized to all parent-child interactions. Most of the families in the study, for example, consisted of Caucasian, clinic-referred boys from lower-middle-SES households, so we cannot generalize these findings across different ethnic groups.
Shelton and her colleagues set the stage for an assessment method for parenting practices more refined than any that came before. With these exceptions and setbacks in mind, later research will hopefully develop better alternatives for measuring the parenting practices linked to disruptive child behavior, so that more effective interventions for improving those behaviors can be constructed.