
EECS 70 Discrete Mathematics and Probability Theory, Fall 2021, Note 4
1 The Stable Matching Problem
In the previous two notes, we discussed several proof techniques. In this note, we apply some of these techniques to analyze the solution to an important problem known as the Stable Matching Problem, which we now introduce. The Stable Matching Problem is one of the highlights of the field of algorithms.
Suppose you run an employment system1, and your task is to match up n jobs and n candidates. Each job has an ordered preference list of the n candidates, and each candidate has a similar list of the n jobs. For example, consider the case of n = 3, i.e., three jobs at Approximation Inc., Basis Co., and Control Corp., and three candidates2 Anita, Bridget, and Christine, with the following preference lists (lists are ordered from most favorable to least favorable):
Jobs | Candidates (most favorable first)
Approximation Inc. | Anita, Bridget, Christine
Basis Co. | Bridget, Anita, Christine
Control Corp. | Anita, Bridget, Christine

Candidates | Jobs (most favorable first)
Anita | Basis Co., Approximation Inc., Control Corp.
Bridget | Approximation Inc., Basis Co., Control Corp.
Christine | Approximation Inc., Basis Co., Control Corp.
What you would like to do as head of the employment system is to match each job with a candidate. For example, two possible matchings are {(Approximation Inc., Anita), (Basis Co., Bridget), (Control Corp., Christine)} and {(Approximation Inc., Bridget), (Basis Co., Christine), (Control Corp., Anita)}. However, you don’t want just any old matching! In order for your employment system to be successful, you wish the matching to make “everyone happy”, in that nobody can realistically hope to benefit by switching jobs. Can you do this efficiently?
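For concreteness, here is one possible way to write down this example in Python. This is only an illustrative sketch: the variable names (job_prefs, candidate_prefs) and the dictionary layout are our own choices, with each preference list ordered from most to least favorable and a matching stored as a dictionary from jobs to candidates.

# A hypothetical Python encoding of the example above. Preference lists are
# ordered from most favorable to least favorable; a matching maps each job
# to the candidate it is paired with.
job_prefs = {
    "Approximation Inc.": ["Anita", "Bridget", "Christine"],
    "Basis Co.": ["Bridget", "Anita", "Christine"],
    "Control Corp.": ["Anita", "Bridget", "Christine"],
}
candidate_prefs = {
    "Anita": ["Basis Co.", "Approximation Inc.", "Control Corp."],
    "Bridget": ["Approximation Inc.", "Basis Co.", "Control Corp."],
    "Christine": ["Approximation Inc.", "Basis Co.", "Control Corp."],
}

# The two example matchings mentioned above.
matching_1 = {"Approximation Inc.": "Anita", "Basis Co.": "Bridget", "Control Corp.": "Christine"}
matching_2 = {"Approximation Inc.": "Bridget", "Basis Co.": "Christine", "Control Corp.": "Anita"}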
It turns out that there is indeed an algorithm to achieve this; moreover, it’s remarkably simple, fast, and widely-used. It’s called the Propose-and-Reject algorithm (a.k.a. the Gale-Shapley algorithm), and we present it now.
2 The Propose-and-Reject Algorithm
We think of the algorithm as proceeding in “days” to have a clear unambiguous sense of discrete time.
1If you’d like, think of a hypothetical system that could be run by the EECS department for student internships.
2For clarity of exposition in written English, we will use female pronouns for candidates and neutral pronouns for jobs. If we were to use neutral pronouns for both, there might be confusion between inanimate jobs and living candidates.
Every Morning: Each job proposes (i.e. makes an offer) to the most preferred candidate on its list who has not yet rejected this job.
Every Afternoon: Each candidate collects all the offers she received in the morning; to the job offer she likes best among these, she responds “maybe” (she now has it in hand or on a string), and to the other offers she says “no” (i.e., she rejects them). (This is just a way for us to virtually model that there are no “exploding offers” and a job can’t withdraw an offer once an offer is made.)
Every Evening: Each rejected job crosses off the candidate who rejected its offer from its list.
The above loop is repeated each successive day until no offers are rejected. At that point, each candidate has a job offer in hand (i.e., on a string); on this day, each candidate accepts the job offer she has in hand and the algorithm terminates.
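Since the mornings, afternoons, and evenings above map directly onto a loop, here is a minimal Python sketch of the Propose-and-Reject Algorithm as just described. The function name and the dictionary encoding (the same one used in the earlier sketch) are our own illustrative choices, not part of the algorithm's definition.

def propose_and_reject(job_prefs, candidate_prefs):
    """Propose-and-reject (Gale-Shapley) with jobs proposing.

    job_prefs[j] and candidate_prefs[c] are preference lists ordered from
    most to least favorable. Returns a dict mapping each job to a candidate.
    """
    # rank[c][j]: position of job j on candidate c's list (0 = most preferred).
    rank = {c: {j: i for i, j in enumerate(p)} for c, p in candidate_prefs.items()}
    crossed_off = {j: 0 for j in job_prefs}   # candidates each job has crossed off so far
    on_string = {}                            # candidate -> job offer she currently holds

    while True:
        # Every morning: each job offers to its favorite candidate not yet crossed off.
        offers = {c: [] for c in candidate_prefs}
        for j, prefs in job_prefs.items():
            offers[prefs[crossed_off[j]]].append(j)

        # Every afternoon: each candidate keeps ("maybe") the offer she likes best
        # and rejects the rest.
        rejected = []
        for c, received in offers.items():
            if received:
                best = min(received, key=lambda j: rank[c][j])
                on_string[c] = best
                rejected.extend(j for j in received if j != best)

        # A day with no rejections: every candidate accepts the job she has in hand.
        if not rejected:
            return {job: cand for cand, job in on_string.items()}

        # Every evening: each rejected job crosses off the candidate who rejected it.
        for j in rejected:
            crossed_off[j] += 1

Running this sketch on the example preference lists reproduces the matching that is worked out by hand in the table below.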
Let's understand the algorithm by running it on our example above. The following table shows which jobs make offers to which candidates on each day; an asterisk (*) marks the job each candidate has "on a string" at the end of that day:

Day | Anita | Bridget | Christine
1 | Approximation Inc.*, Control Corp. | Basis Co.* | —
2 | Approximation Inc.* | Basis Co.*, Control Corp. | —
3 | Approximation Inc.* | Basis Co.* | Control Corp.*
Thus, the algorithm outputs the matching: {(Approximation Inc., Anita), (Basis Co., Bridget), (Control Corp., Christine)}.
Before analyzing the algorithm's properties further, let's take a moment to ask one of the first questions you should always ask when confronted with a new concept: Why study stable matchings in the first place? What makes this problem and its solution so interesting? To answer this, we discuss the very real impact of the Gale-Shapley algorithm on the Residency Match program.
3 The Residency Match
Perhaps the most well-known application of the Stable Matching Algorithm is the residency match program, which pairs medical school graduates and residency slots (internships) at teaching hospitals. Graduates and hospitals submit their ordered preference lists, and the stable matching produced by a computer matches students with residency programs.
The road to the residency match program was long and twisted. Medical residency programs were first introduced about a century ago. Since interns offered a source of cheap labor for hospitals, soon the number
of residency slots exceeded the number of medical graduates, resulting in fierce competition. Hospitals tried to outdo each other by making their residency offers earlier and earlier. By the mid-40s, offers for residency were being made by the beginning of junior year of medical school, and some hospitals were contemplating even earlier offers — to sophomores3! The American Medical Association finally stepped in and prohibited medical schools from releasing student transcripts and reference letters until their senior year. This sparked a new problem, with hospitals now making “short fuse” offers to make sure that if their offer was rejected they could still find alternate interns to fill the slot. Once again the competition between hospitals led to an unacceptable situation, with students being given only a few hours to decide whether they would accept an offer.
Finally, in the early 1950’s, this unsustainable situation led to the centralized system called the National Residency Matching Program (N.R.M.P.) in which the hospitals ranked the residents and the residents ranked the hospitals. The N.R.M.P. then produced a pairing between the applicants and the hospitals, though at first this pairing was not stable. It was not until 1952 that the N.R.M.P. switched to the Propose-and-Reject Algorithm, resulting in a stable matching.
Finally, if the above still hasn't convinced you of the worth of this algorithm, consider this: in 2012, Lloyd Shapley and Alvin Roth won the Nobel Prize in Economic Sciences for the theory of stable allocations and the practice of market design, work built around the Propose-and-Reject Algorithm. (The moral of the story? Careful modeling with appropriate abstractions pays off.)
4 Properties of the Propose-and-Reject Algorithm
There are two properties we wish to show about the propose-and-reject algorithm: First, that it doesn’t run forever, i.e., it halts; and second, that it outputs a “good” (i.e., stable) matching. The former is easy to show; let us do it now.
Lemma 4.1. The propose-and-reject algorithm always halts.
Proof. The argument is simple: On each day that the algorithm does not halt, at least one job must eliminate some candidate from its list (otherwise the halting condition for the algorithm would be invoked). Since each list has n elements, and there are n lists, this means that the algorithm must terminate in at most n² iterations (days).
Next, we’d like to show that the algorithm finds a “good” matching. Before we do so, let’s clarify what we mean by “good”.
4.1 Stability
What properties should a good matching have? Perhaps we’d like to maximize the number of first ranked choices? Alternatively, we could minimize the number of last ranked choices. Or maybe it would be ideal to minimize the sum of the ranks of the choices, which may be thought of as maximizing the average happiness.
3Any resemblance to the frustration experienced by Berkeley students seeking internships in the summer and having to interview in the fall is not coincidental.
In this lecture we will focus on a much more basic criterion that is rooted in the idea of autonomy, namely stability. A matching is unstable4 if there is a job and a candidate who each prefer the other to their current match. We will call5 such a pair a rogue couple. So a matching of n jobs and n candidates is stable if it has no rogue couples. Let us now recall our example from earlier:
Jobs | Candidates (most favorable first)
Approximation Inc. | Anita, Bridget, Christine
Basis Co. | Bridget, Anita, Christine
Control Corp. | Anita, Bridget, Christine

Candidates | Jobs (most favorable first)
Anita | Basis Co., Approximation Inc., Control Corp.
Bridget | Approximation Inc., Basis Co., Control Corp.
Christine | Approximation Inc., Basis Co., Control Corp.
Sanity check! Consider the following matching for the example above: {(Approximation Inc., Christine), (Basis Co., Bridget), (Control Corp., Anita)}. Why is this matching unstable? (Hint: Approximation Inc. and Bridget are a rogue couple — why?)
Here is a stable matching for our example: {(Basis Co., Anita), (Approximation Inc., Bridget), (Control Corp., Christine)}. Why is (Approximation Inc., Anita) not a rogue couple here? It's certainly true that Approximation Inc. would rather work with Anita than with its current employee Bridget. Unfortunately for it, however, Anita would rather be with her current employer (Basis Co.) than with Approximation Inc. Note also that both Control Corp. and Christine are paired with their least favorite choice in this matching. Nevertheless, this does not violate stability, since none of their more preferred choices would rather work with them than with their current match.
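The definition of a rogue couple translates directly into a mechanical check. Below is a minimal sketch (the helper name find_rogue_couples and the dictionary encoding are our own, matching the earlier sketches) that lists every rogue couple in a given matching; a matching is stable exactly when the list comes back empty.

def find_rogue_couples(matching, job_prefs, candidate_prefs):
    """Return all (job, candidate) pairs that form a rogue couple in `matching`.

    `matching` maps each job to its candidate; preference lists are ordered
    from most to least favorable. The matching is stable iff the result is empty.
    """
    job_of = {c: j for j, c in matching.items()}          # candidate -> her job
    j_rank = {j: {c: i for i, c in enumerate(p)} for j, p in job_prefs.items()}
    c_rank = {c: {j: i for i, j in enumerate(p)} for c, p in candidate_prefs.items()}

    rogue = []
    for j, matched_c in matching.items():
        for c, matched_j in job_of.items():
            # j and c are a rogue couple if each prefers the other to its own match.
            if (j_rank[j][c] < j_rank[j][matched_c]
                    and c_rank[c][j] < c_rank[c][matched_j]):
                rogue.append((j, c))
    return rogue

On the unstable matching from the sanity check above, this check reports (Approximation Inc., Bridget) as a rogue couple (along with (Approximation Inc., Anita)); on the stable matching just discussed, it returns an empty list.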
Before we discuss how to find a stable matching, let us ask a more basic question: Do stable matchings always exist? Surely the answer is yes, since we could start with any matching and seemingly make it more and more stable as follows: If there is a rogue couple, modify the current matching so that this couple is matched up. Repeat. Surely this procedure must result in a stable matching? Unfortunately this reasoning is not sound! Why? Although pairing up the rogue couple reduces the number of rogue couples by one, it may also create new rogue couples. So it is not at all clear that this procedure will terminate!
Let's further illustrate the fallacy of the reasoning above by applying it to a closely related scenario known as the Roommates Problem. In this problem, we have 2n people who must be paired up to be roommates (the difference being that, unlike in our view of stable matching with asymmetric partners of different types, a person can be paired with any of the other 2n − 1 people, i.e., there is no asymmetry in types). Now, suppose our approach of iteratively matching up rogue couples indeed did work. Since this approach does not exploit the asymmetry, we can apply it to the roommates problem. Thus, by the (flawed) reasoning above, we could conclude that there must always exist a stable matching for roommates. Wouldn't you be surprised, then, to read the following counterexample, which gives an instance of the roommates problem that does not have a stable matching!

4Why is this called "unstable"? Because in such a situation, the rogue candidate could simply renege on her official offer, and the rogue job/employer could fire the person it officially hired in order to hire its preferred rogue candidate instead. Then one job is suddenly empty and one innocent person has just been fired. This is not what we want to see happening; we want everyone to be happy enough that they all follow through on their final accepted offers.

5The choice of words for definitions requires a balance of considerations. On the one hand, definitions in mathematical English have to be precise and unambiguous, so to a computer it doesn't matter what specific English words we use. But our definitions are going to be used by humans, and we would like to allow people to leverage their intuition; at the same time, we want the definitions to not be too cumbersome to use. There is often a balancing act between over-triggering the incredibly nuanced set of human intuitions and keeping the wording concise. In this case, human intuition might suggest "potentially rogue couple" instead of "rogue couple," to reflect that the pair is not actually matched up. However, since potentiality isn't something we want to model more fully, we just go with the more concise wording.
Roommates | Preference list (most favorable first)
A | B, C, D
B | C, A, D
C | A, B, D
D | – (D's list does not matter)
We claim that in this instance, there always exists a rogue couple for any matching. For example, the matching {(A,B), (C,D)} contains the rogue couple B and C. How about {(B,C), (A,D)}? This matching is unstable because now A and C are a rogue couple.
Sanity check! Verify that in this example there is no stable matching!
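This claim can be checked by brute force: with four people there are only three ways to split them into two pairs. The sketch below uses our own encoding of the table (D's list is filled in arbitrarily, since the table leaves it unspecified and it never matters) and confirms that none of the three matchings is stable.

from itertools import combinations

# Roommate preference lists from the table above; D's list is arbitrary.
prefs = {"A": ["B", "C", "D"], "B": ["C", "A", "D"],
         "C": ["A", "B", "D"], "D": ["A", "B", "C"]}
rank = {p: {q: i for i, q in enumerate(lst)} for p, lst in prefs.items()}

def is_stable(pairs):
    partner = {}
    for x, y in pairs:
        partner[x], partner[y] = y, x
    # A rogue couple is two people who each prefer the other to their own partner.
    return not any(rank[x][y] < rank[x][partner[x]] and rank[y][x] < rank[y][partner[y]]
                   for x, y in combinations(prefs, 2))

# The three possible ways to pair up {A, B, C, D}.
all_matchings = [[("A", "B"), ("C", "D")],
                 [("A", "C"), ("B", "D")],
                 [("A", "D"), ("B", "C")]]
print([is_stable(m) for m in all_matchings])   # -> [False, False, False]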
We conclude that any proof that there always exists a stable matching in the stable matching problem must use the fact that there are two different types6 in an essential way. In the next section we give such a proof, and a comforting one at that: We prove that there must exist a stable matching because the propose-and- reject algorithm always outputs one!
4.2 Analysis
We now prove that the propose-and-reject algorithm always outputs a stable matching. Why should this be the case? Consider the following intuitive observation:
Observation 4.1. Each job begins the algorithm with its first choice being a possibility; as the algorithm proceeds, however, its best available option can only get worse over time. In contrast, each candidate’s offers can only get better with time.
Thus, as the algorithm progresses, each job's options get worse, while each candidate's options improve; at some point, the jobs and the candidates must "meet" in the middle, and intuitively such a matching should be stable.
Let us formalize this intuition, beginning with a formal statement of Observation 4.1 via the following lemma.

Lemma 4.2 (Improvement Lemma). If job J makes an offer to candidate C on the kth day, then on every
subsequent day C has a job offer in hand (on a string) which she likes at least as much as J.
6For historical reasons that relate to how cable connectors and fasteners are described in mechanical systems, as well as how linguistics denotes grammatical types on nouns, this category of types is sometimes referred to using the technical term gender. You might see references to gender in the literature if you look up stable matchings. However, what matters here is that we need to match up two different kinds of things. It doesn't have to be jobs and candidates. It could be packets and network ports, computational jobs and servers, threads and cores, students and slots in classes, etc. As long as the two things are different in nature.
Proof. We proceed by induction on the day i, with i ≥ k.
Base case (i = k): On day k, C receives at least one offer (from J). At the end of day k, she will therefore have an offer in hand either from J or a job she likes more than J, since she chooses the best among her offers.
Inductive Hypothesis: Suppose the claim is true for some arbitrary i ≥ k.
Inductive Step: We prove the claim for day i + 1. By the Inductive Hypothesis, on day i, C had an offer from a job J′ on a string which she likes at least as much as J. (Note that J′ may be J.) According to the algorithm, J′ proposes to C again on day i + 1 (since this job offer has not yet been rejected by her and is not allowed to explode). Therefore, at the end of day i + 1, C will have on a string either J′ or an offer from a job she likes more than J′; in both cases, she likes this job at least as much as J. Hence the claim holds for day i + 1, completing the inductive step.
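The Improvement Lemma can also be watched in action. The sketch below is a variant of the earlier algorithm sketch, instrumented to record, at the end of each day, the rank of the job each candidate has on a string (0 being her favorite); the helper names run_with_history and never_worsens are ours. The lemma says that once a candidate holds an offer, this recorded rank never increases.

def run_with_history(job_prefs, candidate_prefs):
    """Day-by-day propose-and-reject; for each candidate, record the rank
    (0 = her favorite) of the job she holds at the end of each day, or None."""
    rank = {c: {j: i for i, j in enumerate(p)} for c, p in candidate_prefs.items()}
    crossed_off = {j: 0 for j in job_prefs}
    on_string = {}
    history = {c: [] for c in candidate_prefs}
    while True:
        # Morning: every job offers to its best remaining candidate.
        offers = {c: [] for c in candidate_prefs}
        for j, prefs in job_prefs.items():
            offers[prefs[crossed_off[j]]].append(j)
        # Afternoon: each candidate keeps the best offer and rejects the rest.
        rejected = []
        for c, received in offers.items():
            if received:
                best = min(received, key=lambda j: rank[c][j])
                on_string[c] = best
                rejected.extend(j for j in received if j != best)
        # Record the rank of each candidate's held offer at the end of the day.
        for c in candidate_prefs:
            held = on_string.get(c)
            history[c].append(None if held is None else rank[c][held])
        if not rejected:
            return history
        for j in rejected:
            crossed_off[j] += 1

def never_worsens(ranks):
    """True if, once a candidate holds an offer, its rank never increases."""
    held = [r for r in ranks if r is not None]
    return all(earlier >= later for earlier, later in zip(held, held[1:]))

On the running example, every candidate's history passes this check, just as the lemma guarantees.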
Detour: The Well-Ordering Principle. Let’s take a moment to consider an alternate proof of Lemma 4.2; in the process, we’ll uncover a fundamental principle which is equivalent to induction.
Alternate proof of Lemma 4.2. As in our original proof, the claim certainly holds on day i = k. Suppose now, for the sake of contradiction, that the ith day for i > k is the first counterexample, i.e., the first day on which C has either no offer or an offer from some J∗ inferior to J on a string. On day i − 1, she had an offer from some job J′ on a string, and she liked J′ at least as much as J. According to the algorithm, J′ still has an offer out to C on the ith day (offers don't explode and can't be withdrawn). So C has the choice of at least one job (J′) on the ith day; consequently, her best choice on day i is at least as good as J′, and hence at least as good as J, so it is certainly better than J∗ or than having nothing. This contradicts our assumption that day i is a counterexample.
What proof technique did we use to prove this lemma? Is it contradiction? Or some other beast entirely? It turns out that this is also a proof by induction! Why? Well, we began by establishing the base case of i = k. Next, instead of proving that for all i, P(i) =⇒ P(i + 1), we showed that the negation of this statement is false, i.e., that “there exists i such that ¬(P(i) =⇒ P(i + 1))” does not hold; thus, by the law of the excluded middle, it must indeed hold that for all i, P(i) =⇒ P(i + 1).
Sanity check! Ensure you understand why these two proofs are equivalent.
Let us now be a bit more careful — what exactly allowed us to take this alternate approach? The answer lies in a very special property of the natural numbers known as the Well-Ordering Principle. This principle says that any non-empty set of natural numbers contains a “smallest” element. Formally:
Definition 4.1 (Well-ordering principle). If S ⊆ N and S ≠ ∅, then S has a smallest element.
Sanity check! Consider the following subsets of the natural numbers: S1 = {5, 2, 11, 7, 8}, S2 = {n ∈ N : n is odd}, S3 = {n ∈ N : n is prime}. Does each of these sets have a smallest element?
Where did we use the well-ordering principle in our proof? In the line “Suppose . . . that the ith day for i > k
is the first counterexample. . . ” — specifically, if the natural numbers did not obey this principle, then the
notion of a first counterexample would not be valid. Note also that induction relies on this principle: The statement “for all i, P(i) =⇒ P(i + 1)” only makes sense if there is a well-defined order imposed on the natural numbers (i.e., that 3 comes before 4, and 4 comes before 5, etc. . . ). That the natural numbers obey this principle may be a fact you have long taken for granted; but there do exist sets of numbers which do not satisfy this property! In this sense, the natural numbers are indeed quite special.
Sanity check! Do the integers satisfy the well-ordering principle? How about the reals? How about the non-negative reals?
Back to our analysis of the Propose-and-Reject Algorithm. Returning to our analysis, we now prove that when the algorithm terminates, all n candidates have job offers. Before reading the proof, see if you can convince yourself that this is true. The proof is remarkably short and elegant, and critically uses the Improvement Lemma.
Lemma 4.3. The propose-and-reject algorithm always terminates with a matching.
Proof. We proceed by contradiction. Suppose that there is a job J that is left unpaired when the algorithm terminates. Then it must have made an offer to all n candidates on its list and been rejected by each of them. By the Improvement Lemma, each of these n candidates has had an offer in hand that she likes better than J ever since J made its offer to her. Thus, when the algorithm terminates, all n candidates hold job offers, none of which is from J; moreover, these held jobs are distinct, since a job makes an offer to only one candidate at a time. So there must be at least n + 1 jobs. But this is a contradiction, since there are only n jobs by assumption.
To complete our analysis of the propose-and-reject algorithm, we need to verify the key property that the matching produced by the algorithm is stable.
Theorem 4.1. The matching produced by the algorithm is always stable.
Proof. We give a direct proof that, in the matching output by the algorithm, no job can be involved in a rogue couple. Consider any pair (J, C) in the matching, and suppose that J prefers some candidate C∗ to C. We will show that C∗ prefers her own job to J, so that (J, C∗) cannot be a rogue couple. Since C∗ occurs before C on J's preference list, J must have made an offer to C∗ before it made an offer to C; therefore, at some point C∗ rejected J, and at that moment she had on a string a job she likes more than J. By the Improvement Lemma, from that day on she has held an offer she likes at least as much as that job, and hence she likes the job she finally accepts more than J. Thus no job can form a rogue couple with a candidate it prefers to its own match, and the matching is stable.
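Tying the sketches together, the theorem can also be confirmed empirically on the running example. This snippet assumes the hypothetical job_prefs, candidate_prefs, propose_and_reject, and find_rogue_couples defined in the earlier sketches.

# The matching produced by propose-and-reject has no rogue couples.
matching = propose_and_reject(job_prefs, candidate_prefs)
assert find_rogue_couples(matching, job_prefs, candidate_prefs) == []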
