# COMP3530 Background material

Background material
This unit of study assumes some basic knowledge of discrete mathematics, algorithm analysis, and linear algebra. This is all basic material that you should have learned in other units of study. This note is only meant to serve as a quick review of some key concepts and notation that we will be using throughout the semester.
0.1 Discrete mathematics

Graphs
A directed graph (V, E) is defined by a vertex set V and an edge set E made up of ordered pairs of vertices. An undirected graph (V, E) is similarly defined except that edges are unordered pairs of vertices. Our graphs will never have loops (an edge from and to the same vertex). If (V, E) has parallel edges (more than one edge connecting some pair of vertices), we say (V, E) is a multi-graph; otherwise, we say (V, E) is a simple graph.
For any vertex u ∈ V we denote the set of edges incident on u as δ(u) = {(u, v) ∈ E} and the set of neighbors of u as N(u) = {v ∈ V : (u, v) ∈ E}. The degree of u is deg(u) = |δ(u)|. We can extend this notation to any subset S ⊆ V:
One vertex, two vertices.
δ(S) = {(u, v) ∈ E : u ∈ S and v ∉ S}
N(S) = {v ∈ V \ S : ∃ u ∈ S such that (u, v) ∈ E}

In the above undirected graph, δ(1) = {(1, 2)} and N(1) = {2}, while δ(3) = {(2, 3), (3, 4)} and N(3) = {2, 4}. In the above directed graph, δ(1) = {(1, 2)} and N(1) = {2}, while δ(3) = {(3, 4)} and N(3) = {4}.
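These definitions translate directly into code. Here is a minimal sketch, assuming an undirected simple graph stored as a set of two-element frozensets; the vertex labels follow the path 1–2–3–4 used in the example above.

```python
# Sketch of the notation above for an undirected simple graph; edges
# are unordered pairs, stored as two-element frozensets.
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]}

def delta(u):
    """delta(u): the set of edges incident on u."""
    return {e for e in E if u in e}

def N(u):
    """N(u): the set of neighbors of u."""
    return {v for e in delta(u) for v in e if v != u}

def deg(u):
    """deg(u) = |delta(u)|."""
    return len(delta(u))

print(sorted(N(3)), deg(3))  # [2, 4] 2
```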
A path is a sequence of vertices v1, v2, . . . , vk such that every two consecutive vertices are joined by an edge; that is, (vi, vi+1) ∈ E for i = 1,…,k − 1. If in addition (vk,v1) ∈ E, we call this a cycle. An undirected graph is connected if any two vertices in the graph are connected by a path. A directed graph is connected if its underlying undirected graph is connected. A directed graph is strongly connected if any two vertices in the graph are connected by a path. A graph is acyclic if it does not contain any cycles.
A tree is a connected acyclic undirected graph. Every tree on n vertices has exactly n − 1 edges.
week 0 – background material
discrete optimization 2
A graph G = (V,E) is bipartite if the vertex set can be split into two sets A and B such that E ⊆ A × B. The sets A and B are called the shores or color classes of G.
Some notable graphs:
• Kn is the complete graph on n vertices—every possible pair of vertices is connected by an edge
• Ka,b is the complete bipartite graph with shores of cardinality a and b.
• Cn is a cycle on n vertices.
• In is the empty graph on n vertices—it has no edges.
Proof techniques
Most of the proofs we will use in this unit will follow one of the following templates:
1. Direct proof: Starting from the premises, go through a sequence of implications leading to what we want to prove.
As an example, consider the fact that in any undirected graph G = (V, E) we have ∑_{u∈V} deg(u) = 2|E|. Here is a direct proof:

∑_{u∈V} deg(u) = ∑_{u∈V} ∑_{e∈δ(u)} 1 = 2 ∑_{e∈E} 1 = 2|E|
Notice how we start from the left-hand side of the equality we want to prove and we repeatedly transform it until we get its right-hand side.
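The identity just proved is easy to sanity-check numerically. The sketch below builds a random simple undirected graph and compares the two sides (a check, not a proof):

```python
import random
from itertools import combinations

# Build a random simple undirected graph on 8 vertices: each possible
# edge is included independently with probability 0.4.
random.seed(0)
V = range(8)
E = [e for e in combinations(V, 2) if random.random() < 0.4]

# deg(u) counts the edges incident on u; the degree sum equals 2|E|.
deg = {u: sum(u in e for e in E) for u in V}
assert sum(deg.values()) == 2 * len(E)
```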
2. Proof by contradiction: We assume what we want to prove is not true, and then we arrive at a contradiction.
A classic example is Euclid’s proof¹ that there are infinitely many primes.
3. Proof by induction: Here we are interested in proving a statement that is supposed to hold for every integer n ≥ 1. First we prove the base case n = 1; then, assuming that the statement holds for some n ≥ 1, we prove it for n + 1.
As an example, consider the fact that ∑_{i=1}^{n} i = n(n+1)/2. To prove this by induction, first we note that it holds for n = 1 since 1 = (1·2)/2. Then

∑_{i=1}^{n+1} i = ∑_{i=1}^{n} i + (n + 1) = n(n+1)/2 + (n + 1) = (n+1)(n+2)/2,

where the second equality follows by the inductive hypothesis.

¹ Euclid’s theorem, Wikipedia.

The book by Graham, Knuth, and Patashnik² is a solid reference on discrete mathematics and a great read.
² R. L. Graham, D. E. Knuth, and O. Patashnik. Concrete Mathematics: A Foundation for Computer Science. Addison-Wesley, 2nd edition, 1994.
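The inductive identity ∑_{i=1}^{n} i = n(n+1)/2 can also be checked mechanically for small n; the sketch below compares the closed form against a direct summation.

```python
# Check the closed form n(n+1)/2 against a direct summation.
for n in range(1, 50):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("closed form verified for n = 1..49")
```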
0.2 Algorithm analysis
The main analytical tool for studying algorithms is asymptotic analysis. We focus on worst case analysis; that is, we want to bound the running time of our algorithm on the worst possible instance
of a fixed size n. In general we will consider an algorithm to be efficient if it runs in polynomial time.
Asymptotic analysis
Let f and g be functions ℕ → ℕ. Then we say that

1. f is big omicron³ of g, and we note it f = O(g), if
   ∃ n₀, c > 0 ∀ n > n₀ : f(n) ≤ c·g(n)
2. f is little omicron⁴ of g, and we note it f = o(g), if
   ¬∃ n₀, c > 0 ∀ n > n₀ : g(n) ≤ c·f(n)
3. f is omega of g, and we note it f = Ω(g), if
   ∃ n₀, c > 0 ∀ n > n₀ : f(n) ≥ c·g(n)
4. f is little omega of g, and we note it f = ω(g), if
   ¬∃ n₀, c > 0 ∀ n > n₀ : g(n) ≥ c·f(n)
5. f is asymptotically equivalent to g, and we note it f = Θ(g), if f = O(g) and f = Ω(g).
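To establish f = O(g) one exhibits concrete witnesses n₀ and c. The sketch below checks a pair of witnesses for the made-up example f(n) = 3n² + 10n and g(n) = n² over a finite range; this is a sanity check of the definition, not a proof.

```python
# f = O(g) via explicit witnesses: 3n^2 + 10n <= 4n^2 for all n > 10,
# since 10n <= n^2 exactly when n >= 10.
f = lambda n: 3 * n ** 2 + 10 * n
g = lambda n: n ** 2
c, n0 = 4, 10

assert all(f(n) <= c * g(n) for n in range(n0 + 1, 5000))
```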
Efficiency
Suppose we are interested in comparing three different algorithms for solving some computational problem. Imagine the algorithms run in Θ(n), Θ(n²), and Θ(2ⁿ) time, meaning that on a worst-case instance of size n each algorithm spends an amount of time that is asymptotically equivalent to n, n², and 2ⁿ, respectively.
Consider what happens when we double the size of the instance. The linear time algorithm spends twice as much time, and the quadratic time algorithm spends four times as much time. The exponential time algorithm is even worse: the amount of time spent is squared, since 2²ⁿ = (2ⁿ)²!
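The doubling argument can be made concrete. For each running time T below, the sketch computes the ratio T(2n)/T(n) at n = 16; the names are just labels for the three growth rates mentioned above.

```python
# Ratio T(2n)/T(n) at n = 16: doubling the input doubles a linear
# time, quadruples a quadratic time, and squares an exponential time
# (2^(2n) = (2^n)^2, and here 2^16 = 65536).
n = 16
ratios = {name: T(2 * n) / T(n)
          for name, T in [("n", lambda k: k),
                          ("n^2", lambda k: k ** 2),
                          ("2^n", lambda k: 2 ** k)]}
print(ratios)  # {'n': 2.0, 'n^2': 4.0, '2^n': 65536.0}
```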
In general, we will say an algorithm is efficient if it runs in polynomial time. Ideally, the degree of the polynomial should be low if we want to handle large instances.
The book of Kleinberg and Tardos5 covers many techniques for designing and analyzing efficient algorithms.
³ Or big-O for short.
⁴ Or little-o for short.
Notice that if f = Ω(g) then g = O(f).
Notice that if f = ω(g) then g = o(f).
Notice that if f = Θ(g) then g = Θ(f).
Notice that nᵃ = o(bⁿ) for all a, b > 1.
⁵ J. Kleinberg and E. Tardos. Algorithm Design. Addison-Wesley, 2005.
0.3 Linear Algebra
Let A ∈ ℝ^{n×m} be a matrix with n rows and m columns. We use a_{i,j} to denote the entry in the ith row and the jth column of A. We use A_j to denote the jth column of A. Let x ∈ ℝ^m be a vector; we use x_i to denote the ith coordinate of x. For ease of notation, we treat an m-dimensional vector as an m × 1 matrix; we call such a vector a column vector, for obvious reasons. We denote the transpose of a matrix A or a vector x by Aᵀ and xᵀ respectively.
One matrix, two matrices.
Unsurprisingly, the transpose of a column vector is called a row vector.
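In code, a matrix is naturally a list of rows, which makes the row/column conventions concrete. A minimal sketch (note that Python indexes from 0, unlike the 1-indexed convention in the text):

```python
# A as a list of rows: A[i][j] is the entry in row i, column j
# (0-indexed here, unlike the 1-indexed convention in the text).
A = [[1, 2, 3],
     [4, 5, 6]]                            # a 2x3 matrix

col = lambda M, j: [row[j] for row in M]   # the jth column A_j
A_T = [list(r) for r in zip(*A)]           # the transpose A^T

print(col(A, 1))  # [2, 5]
print(A_T)        # [[1, 4], [2, 5], [3, 6]]
```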
One basis, two bases.
Linear vector spaces
We call a set of vectors x₁, x₂, …, x_k ∈ ℝ^m linearly independent if the only scalars α₁, α₂, …, α_k that satisfy ∑_i α_i x_i = 0 are α₁ = ⋯ = α_k = 0; otherwise, we call the set linearly dependent.

The span of a set of vectors S = {x₁, x₂, …, x_k} is the vector space span(S) = {∑_i α_i x_i : α_i ∈ ℝ}. Let T be a subset of S; if span(T) = span(S) and the vectors in T are linearly independent, then T is a basis of S. The cardinality of every basis is the same and we call it the rank of S.
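Rank can be computed by Gaussian elimination. The sketch below works over exact rationals to avoid floating-point issues; it is a from-scratch illustration of the idea, not a production routine.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) by Gaussian elimination
    over the rationals."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0  # number of pivots found so far
    for col in range(len(M[0]) if M else 0):
        # Find a row at or below r with a nonzero entry in this column.
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        # Eliminate this column from every other row.
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                factor = M[i][col] / M[r][col]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# The second vector is twice the first, so the span is 2-dimensional.
print(rank([[1, 0, 1], [2, 0, 2], [0, 1, 1]]))  # 2
```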
The four fundamental spaces of a matrix
Associated with every matrix A ∈ ℝ^{n×m} there are four fundamental vector spaces:

1. span(A): the column space of A is the span of its columns.
2. Ker(A): the null space of A is {x : Ax = 0}.
3. span(Aᵀ): the row space of A is the span of its rows.
4. Ker(Aᵀ): the left null space of A is {y : yᵀA = 0ᵀ}.
There is a close relation between these linear spaces:

rank(span(A)) = rank(span(Aᵀ)).
rank(span(A)) + rank(Ker(Aᵀ)) = n.
rank(span(Aᵀ)) + rank(Ker(A)) = m.
In fact, something more interesting is going on here: span(A) and Ker(Aᵀ) are orthogonal and every vector in ℝ^n can be written as a linear combination of vectors from these spaces. The same holds for span(Aᵀ), Ker(A), and ℝ^m.
The rank of a matrix is the rank of its column space, or equiva- lently, the rank of its row space. A square matrix A ∈ Rn×n is said to be full rank if rank(A) = n. A square matrix A ∈ Rn×n has full rank if and only if its determinant does not equal 0.
Systems of linear equations
Let A ∈ ℝ^{n×m} be a matrix and b ∈ ℝ^n be a vector. The system of linear equations

Ax = b

has a solution if and only if b ∈ span(A). Furthermore, the solution is unique if and only if rank(A) = m.
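For a square full-rank A the unique solution can be written down explicitly. Here is a 2×2 sketch using Cramer's rule on a made-up system; the rule applies precisely because det(A) ≠ 0.

```python
# Solve the 2x2 system [2 1; 1 3] x = (5, 10) by Cramer's rule.
a11, a12, a21, a22 = 2, 1, 1, 3
b1, b2 = 5, 10

det = a11 * a22 - a12 * a21       # 5: nonzero, so A has full rank
x1 = (b1 * a22 - a12 * b2) / det  # replace column 1 of A by b
x2 = (a11 * b2 - b1 * a21) / det  # replace column 2 of A by b

# Check that (x1, x2) really solves the system.
assert (a11 * x1 + a12 * x2, a21 * x1 + a22 * x2) == (b1, b2)
print(x1, x2)  # 1.0 3.0
```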
Strang’s book⁶ is a great introduction to linear algebra.

Exercises
⁶ G. Strang. Linear Algebra and Its Applications. Brooks Cole, 1988.
1. Prove that in every undirected graph the number of vertices with odd degree is always even.

2. How many edges does K_n have as a function of n? What about K_{n,n}, C_n, and I_n?

3. Prove that ∑_{i=1}^{n} i² = n(n+1)(2n+1)/6.

4. Prove that n^{1/log n} = Θ(1).

5. Prove that 2^{√log n} = ω(logᵃ n) for any fixed a ≥ 1.

6. Prove that 2^{√log n} = o(n^ε) for any fixed ε > 0.

7. Argue that the rank of any matrix A ∈ ℝ^{n×m} is at most min{n, m}.

8. Find bases for the four fundamental spaces of the matrix

   [ 1 2 4 3 ]
   [ 2 2 2 3 ]

9. ∗ Let A be the square matrix with entries a_{i,j} = (i + j)!. Prove that A has full rank. Hint: You may want to review some of the properties of determinants and use induction.

Problems marked with ∗ are harder than average.
Solutions of selected exercises
1. Prove that in every undirected graph the number of vertices with odd degree is always even.
Solution. Let G = (V,E) be an undirected graph. Let A be the subset of vertices having even degree and B be the subset of vertices having odd degree. Recall that
∑_{u∈V} deg(u) = 2|E|.

This means that the sum of all degrees is an even number. Clearly the sum of even degrees, ∑_{u∈A} deg(u), is also an even number. Together with the previous observation, this means that the sum of odd degrees, ∑_{u∈B} deg(u), is also even, since

∑_{u∈B} deg(u) = ∑_{u∈V} deg(u) − ∑_{u∈A} deg(u).
The only way that a bunch of odd numbers sums up to an even number is to have an even number of them. Therefore, it follows that G has an even number of odd degree vertices.
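The claim can also be checked empirically on random graphs; a sanity check on top of the proof above:

```python
import random
from itertools import combinations

# Count odd-degree vertices in 100 random graphs on 10 vertices; the
# count should be even every single time.
random.seed(1)
for _ in range(100):
    E = [e for e in combinations(range(10), 2) if random.random() < 0.5]
    odd = sum(sum(u in e for e in E) % 2 for u in range(10))
    assert odd % 2 == 0
```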
4. Prove that n^{1/log n} = Θ(1).

Solution. Notice that n = 2^{log n}. Therefore,

n^{1/log n} = (2^{log n})^{1/log n} = 2^{(log n)·(1/log n)} = 2,

which is Θ(1).
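Numerically, n^{1/log n} is exactly 2 for every n > 1 (taking logs base 2), which the sketch below confirms up to floating-point rounding:

```python
import math

# n^(1/log2 n) equals 2 for every n > 1, up to floating-point error.
for n in [2, 10, 1000, 10 ** 6]:
    assert abs(n ** (1 / math.log2(n)) - 2) < 1e-6
print("n^(1/log n) = 2 for all tested n")
```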