## Economics and Computation (TEAC)


## Accepted Papers

We study the revenue maximization problem of a seller with $n$ heterogeneous
items for sale to a single buyer whose valuation function for sets
of items is unknown and drawn from some distribution $D$. We show
that if $D$ is a distribution over subadditive valuations with independent
items, then the better of pricing each item separately or pricing
only the grand bundle achieves a constant-factor approximation to
the revenue of the optimal mechanism. This includes buyers who are
$k$-demand, additive up to a matroid constraint, or additive up to
constraints of any downwards-closed set system (and whose values for
the individual items are sampled independently), as well as buyers
who are fractionally subadditive with item multipliers drawn independently.
Our proof makes use of the core-tail decomposition framework developed
in prior work showing similar results for the significantly simpler class of additive buyers.

In the second part of the paper, we develop a connection between approximately
optimal simple mechanisms and approximate revenue monotonicity with
respect to buyers' valuations. Revenue non-monotonicity is the phenomenon
that sometimes strictly \emph{increasing} buyers' values for every
set can strictly \emph{decrease} the revenue of the optimal mechanism~\cite{HartR12}.
Using our main result, we derive a bound on how bad this degradation
can be (and dub such a bound a proof of \emph{approximate} revenue
monotonicity); we further show that better bounds on approximate monotonicity
imply a better analysis of our simple mechanisms.
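As a toy illustration of the benchmark the abstract refers to (not the paper's proof technique), the revenue of the better of item pricing and grand-bundle pricing can be estimated by Monte Carlo. Here the buyer is additive with i.i.d. uniform item values; the distribution, sample size, and price grid are all assumptions of this sketch:

```python
import random

random.seed(0)

N_ITEMS = 3
SAMPLES = 5000

def draw_values():
    # hypothetical: each item value uniform on [0, 1], independent
    return [random.random() for _ in range(N_ITEMS)]

buyers = [draw_values() for _ in range(SAMPLES)]

def srev(price):
    # revenue from posting the same price on each item separately
    return sum(price * sum(1 for v in b if v >= price) for b in buyers) / SAMPLES

def brev(price):
    # revenue from pricing only the grand bundle
    return sum(price for b in buyers if sum(b) >= price) / SAMPLES

grid = [i / 20 for i in range(1, 61)]  # coarse price grid on (0, 3]
best_srev = max(srev(p) for p in grid)
best_brev = max(brev(p) for p in grid)
print(max(best_srev, best_brev))
```

For this particular distribution the grand bundle concentrates around its mean, so bundle pricing tends to extract more revenue than separate item prices; the theorem's content is that one of the two is always within a constant factor of optimal for subadditive buyers with independent items.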

#### Introduction to the Special Issue on EC'15

Redesigning the Israeli Medical Internship Match

The final step in getting an Israeli M.D. is performing a year-long internship in one of the hospitals in Israel. Internships are decided upon by a lottery, which is known as "The Internship Lottery". In 2014 we redesigned the lottery, replacing it with a more efficient one. This paper presents the market, the redesign process, and the new mechanism, which is now in use. There are two main lessons that we have learned from this market. The first is the "Do No Harm" principle, which states that (almost) all participants should prefer the new mechanism to the old one. The second is that new approaches need to be used when dealing with two-body problems in object assignment. We focus on the second lesson, and study two-body problems in the context of the assignment problem. We show that decomposing stochastic assignment matrices into deterministic allocations is NP-hard in the presence of couples, and present a polynomial-time algorithm with the optimal worst-case guarantee. We also study the performance of our algorithm on real-world and on simulated data.
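Without couples, decomposing a stochastic assignment matrix into deterministic allocations is the classic Birkhoff-von Neumann decomposition, which the NP-hardness result above contrasts with. A minimal sketch (brute-force matching search, small instances only; the matrix is a made-up example):

```python
import itertools

def birkhoff_decompose(M, tol=1e-9):
    """Decompose a doubly stochastic matrix into a convex combination of
    permutation matrices (Birkhoff-von Neumann). Uses brute-force
    perfect-matching search, so it is for illustration on small n only."""
    n = len(M)
    M = [row[:] for row in M]  # work on a copy
    parts = []
    while True:
        # find a permutation supported on the remaining positive entries
        perm = next((p for p in itertools.permutations(range(n))
                     if all(M[i][p[i]] > tol for i in range(n))), None)
        if perm is None:
            break
        w = min(M[i][perm[i]] for i in range(n))  # largest removable weight
        parts.append((w, perm))
        for i in range(n):
            M[i][perm[i]] -= w
    return parts

M = [[0.5, 0.5, 0.0],
     [0.25, 0.25, 0.5],
     [0.25, 0.25, 0.5]]
parts = birkhoff_decompose(M)
print(parts)  # weights sum to 1; each perm is a deterministic allocation
```

With couples, a feasible decomposition must additionally keep paired rows assigned to the same hospital in every permutation, which is exactly what makes the problem hard.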

Leximin Allocations in the Real World

As part of a collaboration with a major California school district, we study the problem of fairly allocating unused classrooms in public schools to charter schools. Our approach revolves around the randomized leximin mechanism. We extend previous work to the classroom allocation setting, showing that the leximin mechanism is proportional, envy-free, efficient, and group strategyproof. We also prove that the leximin mechanism provides a (worst-case) 4-approximation to the maximum number of classrooms that can possibly be allocated. Our experiments, which are based on real data, show that a nontrivial implementation of the leximin mechanism scales gracefully in terms of running time (even though the problem is intractable in theory), and performs extremely well with respect to a number of efficiency objectives. We take great pains to establish the practicability of our approach, and discuss issues related to its deployment.
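The leximin objective maximizes the minimum utility, then (subject to that) the second-lowest, and so on. A minimal sketch of the underlying comparison, on hypothetical utility vectors rather than the paper's classroom instances:

```python
def leximin_key(utilities):
    """Sort key: the leximin-preferred vector is the one whose
    sorted-ascending utility profile is lexicographically largest."""
    return tuple(sorted(utilities))

# hypothetical allocations: utilities of three charter schools
candidates = [(1, 5, 9), (2, 2, 11), (2, 4, 6)]
best = max(candidates, key=leximin_key)
print(best)  # (2, 4, 6): ties on the minimum broken by the second-lowest
```

The mechanism itself randomizes over allocations and optimizes this ordering subject to feasibility, which is where the computational difficulty (and the paper's implementation work) comes in.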

Learning What's Going on: Reconstructing Preferences and Priorities from Opaque Transactions

Team Performance with Test Scores

Team performance is a ubiquitous area of inquiry in the social sciences, and it motivates the problem of team
selection  choosing the members of a team for maximum performance. Influential work of Hong and Page
has argued that testing individuals in isolation and then assembling the highest-scoring ones into a team is
not an effective method for team selection. For a broad class of performance measures, based on the expected
maximum of random variables representing individual candidates, we show that tests directly measuring
individual performance are indeed ineffective, but that a more subtle family of tests used in isolation can
provide a constant-factor approximation for team performance. These new tests measure the "potential" of
individuals, in a precise sense, rather than performance; to our knowledge they represent the first time
that individual tests have been shown to produce near-optimal teams for a non-trivial team performance
measure. We also show families of submodular and supermodular team performance functions for which
no test applied to individuals can produce near-optimal teams, and discuss implications for submodular
maximization via hill-climbing.
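The expected-maximum performance measure can be simulated directly, and doing so illustrates why ranking by individual performance can fail: under this measure, a candidate's upside matters more than their mean. The candidates below and the specific "potential" score (a high quantile of each individual's distribution) are assumptions of this sketch, not the paper's tests:

```python
import random

random.seed(1)

# hypothetical candidates: individual performance ~ Normal(mu, sigma)
candidates = [(5.0, 0.1), (4.0, 3.0), (4.0, 3.0), (4.9, 0.1), (4.8, 0.1)]
K = 2  # team size

def team_value(team, trials=20000):
    # team performance = expected maximum over members (as in the abstract)
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(mu, s) for mu, s in team)
    return total / trials

# naive selection: highest individual means
by_mean = sorted(candidates, key=lambda c: c[0], reverse=True)[:K]
# "potential"-style selection: reward upside, here via a high quantile
by_potential = sorted(candidates, key=lambda c: c[0] + 2 * c[1], reverse=True)[:K]
print(team_value(by_mean), team_value(by_potential))
```

The high-variance candidates have lower means but a much better expected maximum when teamed together, so the potential-based team outperforms the mean-based one in this toy instance.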

Public projects, Boolean functions and the borders of Border's theorem

Border's theorem gives an intuitive linear characterization of the feasible interim allocation rules of a Bayesian single-item environment, and it has several applications in economic and algorithmic mechanism design. All known generalizations of Border's theorem either restrict attention to relatively simple settings, or resort to approximation. This paper identifies a complexity-theoretic barrier that indicates, assuming standard complexity class separations, that Border's theorem cannot be extended significantly beyond the state-of-the-art. We also identify a surprisingly tight connection between Myerson's optimal auction theory, when applied to public project settings, and some fundamental results in the analysis of Boolean functions.
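For intuition about what Border's linear characterization says, here is a sketch of the feasibility check in the simplest case it covers: a symmetric, i.i.d., discrete-type single-item setting, where an interim rule q is feasible iff for every set S of types, n times the probability mass q wins on S is at most the probability some bidder's type lands in S. The exhaustive subset loop and the example numbers are assumptions of this sketch:

```python
from itertools import combinations

def border_feasible(f, q, n):
    """Check Border's condition for a symmetric iid single-item setting:
    for every set S of types,
        n * sum_{t in S} f[t] * q[t]  <=  1 - (1 - f(S))**n.
    Exhaustive over all subsets of types; illustration only."""
    types = range(len(f))
    for r in range(1, len(f) + 1):
        for S in combinations(types, r):
            lhs = n * sum(f[t] * q[t] for t in S)
            rhs = 1 - (1 - sum(f[t] for t in S)) ** n
            if lhs > rhs + 1e-9:
                return False
    return True

f = [0.5, 0.5]                 # two equally likely types (low, high)
n = 2                          # two iid bidders
q_second_price = [0.25, 0.75]  # interim win probabilities of a second-price auction
print(border_feasible(f, q_second_price, n))   # feasible (tight on the high type)
print(border_feasible(f, [0.9, 0.9], n))       # infeasible: the item is over-allocated
```

The paper's point is that characterizations this clean cannot extend much further: beyond such settings, deciding interim feasibility runs into complexity-theoretic barriers.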

Revenue Maximization and Ex-Post Budget Constraints

We consider the problem of a revenue-maximizing seller with $m$ items for sale to $n$ additive bidders with hard budget constraints, assuming that the seller has some prior distribution over bidder values and budgets. The prior may be correlated across items and budgets of the same bidder, but is assumed independent across bidders. We target mechanisms that are Bayesian Incentive Compatible, but that are \emph{ex-post} Individually Rational and \emph{ex-post} budget respecting. Virtually no such mechanisms are known that satisfy all these conditions and guarantee any revenue approximation, even with just a single item. We provide a computationally efficient mechanism that is a $3$-approximation with respect to all BIC, ex-post IR, and ex-post budget respecting mechanisms. Note that the problem is NP-hard to approximate better than a factor of $16/15$, even in the case where the prior is a point mass~\cite{ChakrabartyGoel}. We further characterize the optimal mechanism in this setting, showing that it can be interpreted as a \emph{distribution over virtual welfare maximizers.}

We prove our results by making use of a black-box reduction from mechanism to algorithm design developed by~\cite{CaiDW13b}. Our main technical contribution is a computationally efficient $3$-approximation algorithm for the algorithmic problem that results by an application of their framework to this problem. The algorithmic problem has a mixed-sign objective and is NP-hard to optimize exactly, so it is surprising that a computationally efficient approximation is possible at all. In the case of a single item ($m=1$), the algorithmic problem can be solved exactly via exhaustive search, leading to a computationally efficient exact algorithm and a stronger characterization of the optimal mechanism as a distribution over virtual value maximizers.
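The "virtual welfare maximizer" language refers to Myerson's virtual values. As a reminder of the standard construction (shown for a toy discrete distribution, not the paper's budgeted setting):

```python
def virtual_values(values, probs):
    """Myerson virtual value phi(v) = v - (1 - F(v)) / f(v) for a
    discrete distribution with support `values` and pmf `probs`."""
    phis = []
    cum = 0.0
    for v, f in zip(values, probs):
        cum += f  # cumulative distribution F(v)
        phis.append(v - (1 - cum) / f)
    return phis

# hypothetical bidder: value 1 or 2, equally likely
print(virtual_values([1, 2], [0.5, 0.5]))  # [0.0, 2.0]
```

An optimal single-item auction without budgets awards the item to the highest nonnegative virtual value; the characterization above says that with ex-post budgets the optimal mechanism is instead a distribution over such virtual-welfare-maximizing rules.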

Integrating Market Makers, Limit Orders, and Continuous Trade in Prediction Markets

We provide the first concrete algorithm for combining market makers and limit orders in a prediction market with continuous trade. Our mechanism is general enough to handle both bundle orders and arbitrary securities defined over combinatorial outcome spaces. We define the notion of a µ-fair trading path, a path in security space along which no order executes at a price more than µ above its limit, and every order executes when its market price falls more than µ below its limit. We show that, under a certain supermodularity condition, a µ-fair trading path exists for which the endpoint is efficient, but that under general conditions reaching an efficient endpoint via a µ-fair trading path is not possible. We develop an algorithm for operating a continuous market maker with limit orders that respects the µ-fairness conditions in the general case. We conduct simulations of our algorithm using real combinatorial predictions made during the 2008 U.S. Presidential election and evaluate it against a natural baseline according to trading volume, social welfare, and violations of the two fairness conditions.
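The market-maker side of such designs is typically a cost-function market maker. As background (this is the standard LMSR, not the paper's combined mechanism), a trader buying shares pays the change in a convex cost function, and instantaneous prices are its gradient:

```python
import math

def lmsr_cost(q, b=10.0):
    """Cost function of the Logarithmic Market Scoring Rule (LMSR):
    C(q) = b * log(sum_i exp(q_i / b)), with a log-sum-exp stabilizer."""
    m = max(x / b for x in q)
    return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

def lmsr_price(q, i, b=10.0):
    """Instantaneous price of security i: the softmax of shares outstanding."""
    m = max(x / b for x in q)
    exps = [math.exp(x / b - m) for x in q]
    return exps[i] / sum(exps)

q = [0.0, 0.0]          # two mutually exclusive outcomes, no trades yet
trade = 5.0             # buy 5 shares of outcome 0
charge = lmsr_cost([q[0] + trade, q[1]]) - lmsr_cost(q)
print(charge, lmsr_price([q[0] + trade, q[1]], 0))
```

The trade moves the price of outcome 0 above 1/2, which is the mechanism through which executed limit orders shift subsequent market prices along the trading path.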