While policymakers often promote further education for displaced workers, U.S. evidence on its effectiveness comes primarily from evaluations of specific government-sponsored training programs, which represent only one narrow avenue for skill acquisition. This paper studies the returns to retraining among unemployed workers, where retraining is broadly defined as enrollment in community colleges, four-year institutions, and technical centers. We link high-quality administrative records from the state of Ohio and estimate the returns using a matching design in which we compare the labor market outcomes of similar workers who do and do not enroll. Our matching specification is informed by a separate validation exercise in the spirit of LaLonde (1986), which evaluates a wide array of estimators using a combination of experimental and non-experimental data in a setting similar to ours. We graphically present the average quarterly earnings trajectories of enrollees and matched non-enrollees over a nine-year period: earnings differ little pre-enrollment, are temporarily depressed among enrollees during the first two years after enrolling, and show sustained positive returns thereafter. We find that enrollees experience an average earnings gain of seven percent three to four years after enrolling, and that the returns are driven by those who switch industries, particularly into healthcare.
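The matching comparison described above can be illustrated with a minimal sketch. Everything here is hypothetical: synthetic earnings data, a one-nearest-neighbor rule on pre-period earnings, and an artificial built-in gain of 0.07; the paper's actual specification (covariates, matching algorithm) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 4 quarters of pre-enrollment log earnings (assumed covariates)
n_t, n_c = 200, 2000
pre_t = rng.normal(8.0, 1.0, size=(n_t, 4))   # enrollees
pre_c = rng.normal(8.2, 1.0, size=(n_c, 4))   # non-enrollees
post_t = pre_t.mean(axis=1) + 0.07 + rng.normal(0, 0.5, n_t)  # built-in 0.07 gain
post_c = pre_c.mean(axis=1) + rng.normal(0, 0.5, n_c)

def match_att(pre_t, post_t, pre_c, post_c):
    """1-nearest-neighbor matching on pre-period earnings (Euclidean distance)."""
    diffs = []
    for x, y in zip(pre_t, post_t):
        j = np.linalg.norm(pre_c - x, axis=1).argmin()  # closest non-enrollee
        diffs.append(y - post_c[j])
    return float(np.mean(diffs))

att = match_att(pre_t, post_t, pre_c, post_c)
print(f"Matched ATT estimate: {att:.3f}")  # near the built-in 0.07, up to noise
```

Because matched pairs share similar pre-period earnings, the post-period difference isolates the built-in treatment effect rather than pre-existing earnings gaps.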
We provide new evidence on the effect of the unemployment insurance (UI) weekly benefit amount on the duration of UI spells, based on administrative data from the state of Missouri covering 2003–2013. Identification comes from a regression kink design that exploits the quasi-experimental variation around the kink in the UI benefit schedule. We find that UI durations are more responsive to benefit levels during the recession and its aftermath, with an elasticity between 0.65 and 0.9, compared to about 0.35 pre-recession.
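To illustrate the regression kink idea, the following sketch estimates how an outcome responds to a benefit that is a kinked function of a running variable. The schedule, numbers, and bandwidth are invented for illustration (not Missouri's actual parameters): the estimand is the ratio of the change in the outcome's slope to the change in the benefit's slope at the kink.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical schedule: weekly benefit = 0.5 * base wage, capped at 320,
# so the schedule kinks at wage = 640.
n = 50_000
wage = rng.uniform(400, 900, n)
benefit = np.minimum(0.5 * wage, 320.0)
# UI duration rises with the benefit (true effect 0.02) plus a smooth wage effect
duration = 10 + 0.02 * benefit + 0.001 * wage + rng.normal(0, 1, n)

# Local linear fits on each side of the kink; the RKD estimate is the ratio
# of slope changes in the outcome and in the benefit.
kink, bw = 640.0, 100.0
x = wage - kink
left, right = (x < 0) & (x > -bw), (x >= 0) & (x < bw)

def slope(xs, ys):
    return np.polyfit(xs, ys, 1)[0]  # highest-degree coefficient first

num = slope(x[right], duration[right]) - slope(x[left], duration[left])
den = slope(x[right], benefit[right]) - slope(x[left], benefit[left])
est = num / den
print(f"RKD estimate of d(duration)/d(benefit): {est:.4f}")  # true value is 0.02
```

The smooth wage effect cancels out of the slope change, so the ratio recovers the marginal effect of the benefit even though benefits are not randomly assigned.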
Identification in a regression discontinuity (RD) design hinges on the discontinuity in the probability of treatment when a covariate (assignment variable) exceeds a known threshold. If the assignment variable is measured with error, however, the discontinuity in the first-stage relationship between the probability of treatment and the observed mismeasured assignment variable may disappear. The presence of measurement error in the assignment variable therefore poses a challenge to treatment effect identification. This paper provides sufficient conditions for identification when only the mismeasured assignment variable, the treatment status, and the outcome variable are observed. We prove identification separately for discrete and continuous assignment variables and study the properties of various estimation procedures. We illustrate the proposed methods in an empirical application, where we estimate Medicaid take-up and its crowd-out effect on private health insurance coverage.
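The identification problem can be seen in a small simulation: with classical measurement error in the assignment variable, the first-stage jump at the observed cutoff is strongly attenuated. All numbers here are illustrative, and this sketch only demonstrates the problem, not the paper's identification results.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sharp RD: treatment assigned when the TRUE assignment variable crosses 0
n = 100_000
z = rng.normal(0, 1, n)            # true assignment variable
treated = (z >= 0).astype(float)
z_obs = z + rng.normal(0, 0.5, n)  # observed with classical measurement error

def jump(run, y, bw=0.1):
    """Difference in mean y just above vs. just below the cutoff."""
    right = y[(run >= 0) & (run < bw)].mean()
    left = y[(run < 0) & (run > -bw)].mean()
    return right - left

j_true = jump(z, treated)      # exactly 1: treatment is deterministic in z
j_obs = jump(z_obs, treated)   # far smaller: the discontinuity is smoothed away
print(f"First-stage jump at true z:     {j_true:.2f}")
print(f"First-stage jump at observed z: {j_obs:.2f}")
```

Near the observed cutoff, units are a mix of true z above and below the threshold, so treatment probability varies smoothly in the observed variable and naive RD estimation fails.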
It has become standard practice to use local linear regressions in regression discontinuity designs.
This paper highlights that the same theoretical arguments used to justify local linear regression suggest
that alternative local polynomials could be preferred. We show in simulations that the local linear estimator
is often dominated by alternative polynomial specifications. Additionally, we provide guidance on the
selection of the polynomial order. The Monte Carlo evidence shows that the order-selection procedure
(which is also readily adapted to fuzzy regression discontinuity and regression kink designs) performs
well, particularly with the large sample sizes typically found in empirical applications.
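A toy Monte Carlo illustrates why an alternative polynomial order can dominate the local linear estimator when the conditional mean is curved. This is an invented simulation, not the paper's order-selection procedure: the data-generating process is piecewise quadratic with a true jump of 1.0, so the local quadratic fit is unbiased while the local linear fit is not.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(n=5_000):
    """Sharp RD with curvature on the right of the cutoff and a jump of 1.0."""
    x = rng.uniform(-1, 1, n)
    y = (x >= 0) * (1 + 2 * x**2) + rng.normal(0, 0.3, n)
    return x, y

def rd_estimate(x, y, order, bw=0.5):
    """Fit a polynomial of the given order on each side; jump = gap at x = 0."""
    fits = {}
    for side, mask in (("l", (x < 0) & (x > -bw)), ("r", (x >= 0) & (x < bw))):
        fits[side] = np.polyval(np.polyfit(x[mask], y[mask], order), 0.0)
    return fits["r"] - fits["l"]

reps = 200
lin = [rd_estimate(*simulate(), order=1) for _ in range(reps)]
quad = [rd_estimate(*simulate(), order=2) for _ in range(reps)]
mean_lin, mean_quad = np.mean(lin), np.mean(quad)
print(f"local linear    mean estimate: {mean_lin:.3f}")  # biased away from 1.0
print(f"local quadratic mean estimate: {mean_quad:.3f}")  # close to 1.0
```

With this bandwidth and curvature the local linear estimator carries a noticeable boundary bias, while the quadratic trades some variance for (here, exact) bias removal.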
Despite the widespread use of graphs in empirical research, little is known about readers’ ability to
process the statistical information they are meant to convey (“visual inference”). In this paper, we evaluate
several key aspects of visual inference in regression discontinuity (RD) designs by measuring how
well readers can identify discontinuities in graphs. First, we assess the effects of graphical representation
methods on visual inference, using randomized experiments that crowdsource discontinuity classifications
of graphs produced from data-generating processes calibrated on datasets from 11 published papers.
Second, we evaluate visual inference by both experts and non-experts and study experts’ ability to predict
our experimental results. We find that experts perform comparably to non-experts and partly anticipate
the effects of graphical methods. Third, we compare experts’ visual inference to commonly used econometric
procedures in RD designs and observe that it achieves similar or lower type I error rates. Fourth,
we conduct an eye-tracking study to further understand RD visual inference, but it does not reveal gaze
patterns that robustly predict successful inference. We also evaluate visual inference in the closely related regression kink design.
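For readers unfamiliar with the graphs studied here, the following sketch shows how one common graphical representation, a binned-means RD scatter, is constructed from raw data. The data-generating process, jump size, and bin choices are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic RD data with a jump of 0.3 at the cutoff (x = 0)
n = 10_000
x = rng.uniform(-1, 1, n)
y = 0.5 * x + 0.3 * (x >= 0) + rng.normal(0, 0.5, n)

# Evenly spaced bins on each side of the cutoff (no bin straddles it)
n_bins = 20
edges = np.concatenate([np.linspace(-1, 0, n_bins + 1),
                        np.linspace(0, 1, n_bins + 1)[1:]])
centers = 0.5 * (edges[:-1] + edges[1:])
means = np.array([y[(x >= lo) & (x < hi)].mean()
                  for lo, hi in zip(edges[:-1], edges[1:])])

# Plotting `centers` against `means` (e.g. with matplotlib) yields the familiar
# RD scatter; the visible gap between the two bins adjacent to 0 is what a
# reader performing visual inference must detect against the noise.
gap = means[n_bins] - means[n_bins - 1]
print(f"Gap between the bins adjacent to the cutoff: {gap:.2f}")
```

Binning averages away observation-level noise, which is precisely why representation choices such as the number of bins can shape whether readers perceive a discontinuity.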