UC Berkeley Electronic Theses and Dissertations

Understanding the Human Effects of Climate Change

Abstract

Climate change has already begun to profoundly alter the relationship between humans and their environment for the vast majority of the world’s population. However, history has demonstrated that humans are nothing if not responsive: as the climate changes, so too will economies, governments, and individuals. This dissertation examines impacts of and responses to climate change with an eye towards understanding how future societies might adapt to substantial climatic changes. The first chapter measures the welfare cost of changes in amenity values due to climate change by proxying for temperature preferences using contemporaneous changes in mood, as detected from posts on the social media platform Twitter. The second chapter examines the response of electricity demand to changes in temperature as a means to project patterns of future energy consumption and large-scale capital investments. The third chapter makes a methodological contribution, testing three quasi-experimental methods of estimating electricity savings in dynamic pricing programs against an empirical “gold standard”: the results from this chapter will aid policymakers in quantifying the effects of these programs on curbing future increases in electricity generation due to climate change.

The first chapter is motivated by a gap in the climate impacts literature: the change in amenity values resulting from temperature increases may be a substantial unaccounted-for cost of climate change. Without an explicit market for climate, prior work has relied on cross-sectional variation or survey data to identify this cost. This paper presents an alternative method of estimating preferences over nonmarket goods which accounts for unobserved cross-sectional and temporal variation and allows for precise estimates of nonlinear effects. Specifically, I create a rich panel dataset on hedonic state: a geographically and temporally dense collection of updates from the social media platform Twitter, scored using a set of both human- and machine-trained sentiment analysis algorithms. Using this dataset, I find strong evidence of sharp declines in hedonic state above and below 20°C (68°F). This finding is robust across all measures of hedonic state and to a variety of specifications.
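The estimation strategy described here can be summarized in a few lines. The sketch below is illustrative only, not the dissertation's code: the file name, the column names (sentiment, temp_c, county, date), and the bin edges are assumptions. It shows a binned-temperature panel regression with location and date fixed effects, one standard way to recover a nonlinear response such as the decline in hedonic state away from 20°C.

```python
# Illustrative sketch only: a binned-temperature panel regression with
# location and date fixed effects. All file and column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sentiment_panel.csv")          # hypothetical county-day panel
df = df.dropna(subset=["sentiment", "temp_c"])   # keep rows usable in the regression

# Bin temperature so the estimated response can be nonlinear around 20°C (68°F).
df["temp_bin"] = pd.cut(df["temp_c"], bins=list(range(-10, 45, 5))).astype(str)

# County and date fixed effects absorb time-invariant local differences and
# common daily shocks; temp_bin coefficients are relative to the omitted bin.
model = smf.ols("sentiment ~ C(temp_bin) + C(county) + C(date)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["county"]})
print(result.summary())
```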

The second chapter simulates the effect of climate change on future electricity demand in the United States. We combine fine-scaled hourly electricity load data with observations of weather to estimate the response of both average and peak electricity demand to changes in temperature. Applying these estimates to a set of locally downscaled climate projections, we project regional end-of-century changes in electricity load. The results document increases in average hourly load across the country, with more pronounced changes occurring in the southern United States. Importantly, we find changes in peak demand to be larger than changes in average demand, which has implications for public policy choices around future capital investment.

The third chapter compares quasi-experimental designs to experimental designs in the context of a dynamic pricing setting designed to encourage customers to save energy. Randomized controlled trials (RCTs) are widely viewed as the “gold standard” for evaluating the effectiveness of an intervention. However, because they are perceived to be prohibitively expensive and challenging to implement successfully, they are not broadly executed in policy settings. In particular, analysis of the effect of energy pricing has largely been conducted through two commonly used quasi-experimental methodologies: difference-in-differences and propensity score matching. Using a rare set of large-scale randomized field evaluations of electricity pricing, we compare the estimates obtained from these quasi-experimental designs and from a regression discontinuity design to the true estimates obtained through the experimental method. We demonstrate empirical evidence in favor of four stylized facts that highlight the importance of understanding selection bias and spillover effects in this context. First, difference-in-differences and propensity-score methods mis-estimate the true effect by up to 5% of mean peak-hour usage. Second, propensity score estimates resemble difference-in-differences findings, but standard errors tend to be larger and point estimates are more biased for opt-out models. Third, regression discontinuity methods can be heavily biased relative to the true average treatment effect. Finally, we find strong evidence that biases are more pronounced in opt-in vs. opt-out designs.
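A minimal sketch of the benchmarking exercise, under hypothetical column names (kwh, arm, post, household), is shown below. The randomized treatment-control contrast serves as the experimental benchmark, and a difference-in-differences estimate built from non-randomized comparison homes stands in for the quasi-experimental designs discussed above; the chapter's actual designs and data differ.

```python
# Illustrative sketch only: benchmark an RCT estimate of peak-hour savings
# against a difference-in-differences estimate. All column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("peak_hour_usage.csv")
# arm: "treated" or "control" (randomized), or "comparison" (not randomized)

# Experimental benchmark: randomized treatment vs. control in the pricing period.
rct_df = df[(df["post"] == 1) & (df["arm"].isin(["treated", "control"]))].copy()
rct_df["treated"] = (rct_df["arm"] == "treated").astype(int)
rct = smf.ols("kwh ~ treated", data=rct_df).fit(
    cov_type="cluster", cov_kwds={"groups": rct_df["household"]})

# Quasi-experimental analogue: difference-in-differences against non-randomized
# comparison homes, which must instead rely on a parallel-trends assumption.
did_df = df[df["arm"].isin(["treated", "comparison"])].copy()
did_df["treated"] = (did_df["arm"] == "treated").astype(int)
did = smf.ols("kwh ~ treated * post", data=did_df).fit(
    cov_type="cluster", cov_kwds={"groups": did_df["household"]})

print("RCT peak-hour effect:", rct.params["treated"])
print("DiD peak-hour effect:", did.params["treated:post"])
```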
