
COMP-273

Slides From Patterson’s 61C 1
Decisions in C/Assembly Language


Review (1/2)
°In MIPS Assembly Language:
• Registers replace C variables
• One Instruction (simple operation) per line
• Simpler is Better
• Smaller is Faster
° Memory is byte-addressable, but lw and sw access one word at a time.
° A pointer (used by lw and sw) is just a memory address, so we can add to it or subtract from it (using offset).

Review (2/2)
add, addi, sub, lw, sw
C Variables: $s0 – $s7
Temporary Variables: $t0 – $t9
Zero: $zero

° C/Assembly Decisions: if, if-else
° C/Assembly Loops: while, do while, for
° Inequalities
°C Switch Statement

°All instructions have allowed us to manipulate data.
°So we’ve built a calculator.
°To build a computer, we need the ability to make decisions…
°Heads up: pull out some paper and a pen; you'll do some in-class exercises today!

C Decisions: if Statements
° 2 kinds of if statements in C
•if (condition) clause
•if (condition) clause1 else clause2
° Rearrange 2nd if into following:
if (condition) goto L1;
    clause2;
    goto L2;
L1: clause1;
L2:
• Not as elegant as if-else, but same meaning

MIPS Decision Instructions
°Decision instruction in MIPS:
•beq register1, register2, L1
•beq is "Branch if (registers are) equal"
Same meaning as (using C):
if (register1==register2) goto L1
°Complementary MIPS decision instruction:
•bne register1, register2, L1
•bne is "Branch if (registers are) not equal"
Same meaning as (using C):
if (register1!=register2) goto L1
°Called conditional branches

MIPS Goto Instruction
°In addition to conditional branches, MIPS has an unconditional branch:
°Called a Jump Instruction: jump (or branch) directly to the given label without needing to satisfy any condition
°Syntax: j label
°Same meaning as (using C): goto label
°Technically, it’s the same as: beq $0,$0,label
since it always satisfies the condition.

Compiling C if into MIPS (1/2)
°Compile by hand
if (i == j) f=g+h;
else f=g-h;
°Use this mapping:
f: $s0, g: $s1, h: $s2, i: $s3, j: $s4
[Flowchart: branch taken if i == j (true); fall through if i != j (false)]

Compiling C if into MIPS (2/2)
°Final compiled MIPS code (fill in the blank):

Compiling C if into MIPS (2/2)
°Final compiled MIPS code:
      beq $s3,$s4,True  # branch i==j
      sub $s0,$s1,$s2   # f=g-h (false)
      j   Fin           # goto Fin
True: add $s0,$s1,$s2   # f=g+h (true)
Fin:
Note: The compiler automatically creates labels to handle decisions (branches) appropriately; labels are generally not found in HLL code.

Loops in C/Assembly (1/3)
°Simple loop in C:
do {
    g = g + A[i];
    i = i + j;
} while (i != h);
°Rewrite this as:
Loop: g = g + A[i];
      i = i + j;
if (i != h) goto Loop;
°Use this mapping:
g: $s1, h: $s2, i: $s3, j: $s4, base of A:$s5

Loops in C/Assembly (2/3)
°Final compiled MIPS code (fill in the blank):

Loops in C/Assembly (2/3)
°Final compiled MIPS code:
Loop: sll $t1,$s3,2     # $t1 = 4*i
      add $t1,$t1,$s5   # $t1 = addr of A[i]
      lw  $t1,0($t1)    # $t1 = A[i]
      add $s1,$s1,$t1   # g = g + A[i]
      add $s3,$s3,$s4   # i = i + j
      bne $s3,$s2,Loop  # goto Loop if i != h

Loops in C/Assembly (3/3)
°There are three types of loops in C:
•while
•do… while
•for
°Each can be rewritten as either of the other two, so the method used in the previous example can be applied to while and for loops as well.
°Key Concept: Though there are multiple ways of writing a loop in MIPS, the conditional branch is key to decision making.

Inequalities in MIPS (1/5)
°Until now, we've only tested equalities (== and != in C). General programs need to test < and > as well.
°Create a MIPS Inequality Instruction:
• "Set on Less Than"
•Syntax: slt reg1,reg2,reg3
• Meaning:
if (reg2 < reg3) reg1 = 1;
else reg1 = 0;
• In computerese, "set" means "set to 1", "reset" means "set to 0".

Inequalities in MIPS (2/5)
°How do we use this?
°Compile by hand:
if (g < h) goto Less;
°Use this mapping: g: $s0, h: $s1

Inequalities in MIPS (3/5)
°Final compiled MIPS code (fill in the blank):

Inequalities in MIPS (3/5)
°Final compiled MIPS code:
slt $t0,$s0,$s1   # $t0 = 1 if g < h
bne $t0,$0,Less   # if (g < h) goto Less

Inequalities in MIPS (4/5)
°Now we can test <. What about >, <= and >= ?
°We could add 3 more instructions, but: • MIPS goal: Simpler is Better
°Can we implement <= in one or more instructions using just slt and the branches?
°What about >?
°What about >=?
°4 combinations of slt & beq/bne

Inequalities in MIPS (5/5)
°4 combinations of slt & beq/bne:

g < h:  slt $t0,$s0,$s1  # $t0 = 1 if g < h
        bne $t0,$0,Less  # if (g < h) goto Less
g > h:  slt $t0,$s1,$s0  # $t0 = 1 if g > h
        bne $t0,$0,Grtr  # if (g > h) goto Grtr
g >= h: slt $t0,$s0,$s1  # $t0 = 1 if g < h
        beq $t0,$0,Gteq  # if (g >= h) goto Gteq
g <= h: slt $t0,$s1,$s0  # $t0 = 1 if g > h
        beq $t0,$0,Lteq  # if (g <= h) goto Lteq

Immediates in Inequalities
°There is also an immediate version of slt to test against constants: slti
• Helpful in for loops
if (g >= 1) goto Loop

Immediates in Inequalities
°There is also an immediate version of slt to test against constants: slti
• Helpful in for loops
if (g >= 1) goto Loop
Loop: . . .
      slti $t0,$s0,1    # $t0 = 1 if $s0 < 1 (g < 1)
      beq  $t0,$0,Loop  # goto Loop if $t0 == 0 (i.e., if g >= 1)

What about unsigned numbers?
°There are unsigned inequality instructions:
sltu, sltiu
°which set the result to 1 or 0 depending on unsigned comparisons
° $s0 = FFFF FFFA (hex), $s1 = 0000 FFFA (hex)
°What is the value of $t0, $t1?
°slt  $t0, $s0, $s1
°sltu $t1, $s0, $s1

Example: The C Switch Statement (1/3)
°Choose among four alternatives depending on whether k has the value 0, 1, 2 or 3. Compile this C code:
switch (k) {
case 0: f=i+j; break; /* k==0 */
case 1: f=g+h; break; /* k==1 */
case 2: f=g-h; break; /* k==2 */
case 3: f=i-j; break; /* k==3 */
}

Example: The C Switch Statement (2/3)
°This is complicated, so simplify.
°Rewrite it as a chain of if-else statements, which we already know how to compile:
if(k==0) f=i+j;
else if(k==1) f=g+h;
else if(k==2) f=g-h;
else if(k==3) f=i-j;
°Use this mapping:
f: $s0, g: $s1, h: $s2, i: $s3, j: $s4, k: $s5

Example: The C Switch Statement (3/3)
°Final compiled MIPS code (fill in the blank):

Example: The C Switch Statement (3/3)
°Final compiled MIPS code:
      bne  $s5,$0,L1    # branch if k!=0
      add  $s0,$s3,$s4  # k==0 so f=i+j
      j    Exit         # end of case, so Exit
L1:   addi $t0,$s5,-1   # $t0 = k-1
      bne  $t0,$0,L2    # branch if k!=1
      add  $s0,$s1,$s2  # k==1 so f=g+h
      j    Exit         # end of case, so Exit
L2:   addi $t0,$s5,-2   # $t0 = k-2
      bne  $t0,$0,L3    # branch if k!=2
      sub  $s0,$s1,$s2  # k==2 so f=g-h
      j    Exit         # end of case, so Exit
L3:   addi $t0,$s5,-3   # $t0 = k-3
      bne  $t0,$0,Exit  # branch if k!=3
      sub  $s0,$s3,$s4  # k==3 so f=i-j
Exit:

Things to Remember (1/2)
° A Decision allows us to decide which pieces of code to execute at run-time rather than at compile-time.
°C Decisions are made using conditional statements within an if, while, do while or for.
°MIPS Decision making instructions are the conditional branches: beq and bne.
°To help the conditional branches make decisions concerning inequalities, we introduce the "Set on Less Than" instruction, slt, and its variants slti, sltu, sltiu.

Things to Remember (2/2)
°Instructions: beq, bne, slt, slti, sltu, sltiu



Econometrics Name_________________
Problem Set #1

1.) Import the “Homework #1 Airfare.csv” into R. A description of the variables is also included in the “Homework #1 Description” text file.


a. Let’s first focus on the variable year. In class we noticed that some of the observations had an incorrect value as the expected data range is between 1997 – 2000. Rather than deleting these incorrect observations (as we did in class), I would like you to convert them all to the year 2000. Hint: in class we wrote out a command that replaced these observations with the value NA. Now you must replace these observations with the value 2000. How many observations are there in each year after this conversion is made?

b. Create an indicator variable (0/1) for each year. Label these new variables “yrXX”, where XX is the last two digits of the year.

c. What was the most expensive flight in 2000? What was the least expensive flight in 2000? (Hint: one way of solving this problem is by using the summary() command).

d. If you work for an airline, you might be interested in finding flights that are expensive and average a lot of passengers. Create an indicator variable (0/1) that equals 1 when a flight averages more than 500 passengers per day AND the fare was greater than 175, and 0 otherwise. (Hint: when creating an indicator variable with two conditions, use the “&” symbol to separate each condition). How many observations have a value of 1 for this binary variable?

e. You want to know what affects airfare prices. You believe that the inflation rate may affect rates over time, but this variable is not included in your initial dataset. I’ve included a second csv file titled “Homework #1 Inflation” in your homework folder. First import this data, and then merge it with your existing dataset using the common variable year. What is the value and sign of the correlation coefficients between airfare price and yr97, yr98, yr99 and yr00 (the dummies created in part 1b)? Do these match your expectations? Why or why not?

f. Now, run a regression which includes airfare as the dependent variable and dist, passen, yr98, yr99 and yr00 (from part b) as independent variables. Do the coefficients on yr98, yr99 and yr00 match what is observed in part 1e? How are these coefficients different from the correlation coefficient in part 1e?

g. Next let’s merge in information on the origin city. Currently we only know where the flights departed from based on the origin_id variable, which is only a numerical variable. Merge in the name of the origin city from the “Homework #1 Origin Name” csv file using the common variable origin_id. How many flights departed from AKRON, OH between 1997 – 2000? How many flights departed from AUSTIN, TX in the year 1998? Hint: you may want to create dummy variables for this question and then use the function table().

h. Re-estimate your regression from part f by including dist, passen, yr98, yr99, yr00 and a dummy for whether the flight originated from Akron, Ohio as independent variables. Are flights more costly or cheaper when they depart from Akron, Ohio? How much more expensive/cheap are they?

i. Save the predicted airfare prices (fare_hat) and the residuals (e) from the regression from part h. What is highest predicted airfare value? What is the lowest predicted airfare?

j. Create a scatter plot of the actual airfare values (fare) on the Y-axis and predicted airfare values (fare_hat) on the X-axis. Copy and paste this graph into your homework.

k. Plot the value of your residuals, which were calculated in part i, on a histogram and paste this diagram into your homework. Set xlim=c(-250,250) when constructing this diagram.

l. If an observation has a positive residual value is your model overpredicting or underpredicting airfare prices? If an observation has a negative residual value is your model overpredicting or underpredicting airfare prices?

m. If you were concerned with the accuracy of your regression model, would you want larger residual values or smaller residual values?



==========
Base with the smallest quality score: T

==========


Total number of reads: 5
Smallest average quality score: 64.60
Read with the smallest average quality score:

==========
TTTGCGCATCTGTTATGAAATAGTTTTTAACTGTACTATCCATAGGAATAAAATCTTCTA
AATCAACC*CACCCTCTTGTATTTTAA**CCC
TGTAGAGAATAAAACATTAAAGTTTG**CAATGCAGAATGCATCTG
TGTGTGAATTTGGA

==========
Length of the reference sequence: 350
Number of A bases: 92
Number of C bases: 87
Number of G bases: 64
Number of T bases: 107

==========
Read: TTTGCGCATCTGTTATGAAATAGTTTTTAACTGTACTATCCATAGGAATAAAATCTTCTA
Match: TTTGCGCATCTGTTATGAAATAGTTTTTAACTGTACTATCCATAGGAATAAAATCTTCTA
Read: AATCAACC*CACCCTCTTGTATTTTAA**CCC
Match: AATCAACCCCACCCTCTTGTATTTTAACACCC
Read: TGTAGAGAATAAAACATTAAAGTTTG**CAATGCAGAATGCATCTG
Match: TTGACAGGACACGAGTAACTCGTCTATCTTCTGCAGGCTGCTTACG
Read: TGTGTGAATTTGGA
Match: AGTATAATTTTTGC
Read: TCCTG
Match: TACTG


VOLUME 19, NUMBER 4 (1998)

J. C. SPALL
An Overview of the Simultaneous Perturbation Method for Efficient Optimization
Multivariate stochastic optimization plays a major role in the analysis and control of many engineering systems. In almost all real-world optimization problems, it is necessary to use a mathematical algorithm that iteratively seeks out the solution because an analytical (closed-form) solution is rarely available. In this spirit, the “simultaneous perturbation stochastic approximation (SPSA)” method for difficult multivariate optimization problems has been developed. SPSA has recently attracted considerable international attention in areas such as statistical parameter estimation, feedback control, simulation-based optimization, signal and image processing, and experimental design. The essential feature of SPSA—which accounts for its power and relative ease of implementation—is the underlying gradient approximation that requires only two measurements of the objective function regardless of the dimension of the optimization problem. This feature allows for a significant decrease in the cost of optimization, especially in problems with a large number of variables to be optimized. (Keywords: Gradient approximation, Multivariate optimization, Simulation-based optimization, Simultaneous perturbation stochastic approximation, SPSA, Stochastic optimization.)
INTRODUCTION


This article is an introduction to the simultaneous perturbation stochastic approximation (SPSA) algorithm for stochastic optimization of multivariate systems. Optimization algorithms play a critical role in the design, analysis, and control of most engineering systems and are in widespread use in the work of APL and other organizations:
The future, in fact, will be full of [optimization] algorithms. They are becoming part of almost everything. They are moving up the complexity chain to make entire companies more efficient. They also are moving down the chain as computers spread. (USA Today, 31 Dec 1997)
Before presenting the SPSA algorithm, we provide some general background on the stochastic optimization context of interest here.
482 JOHNS HOPKINS APL TECHNICAL DIGEST, VOLUME 19, NUMBER 4 (1998)

SIMULTANEOUS PERTURBATION METHOD FOR OPTIMIZATION
The mathematical representation of most optimization problems is the minimization (or maximization) of some scalar-valued objective function with respect to a vector of adjustable parameters. The optimization algorithm is a step-by-step procedure for changing the adjustable parameters from some initial guess (or set of guesses) to a value that offers an improvement in the objective function. Figure 1 depicts this process for a very simple case of only two variables, θ1 and θ2, where our objective function is a loss function to be minimized (without loss of generality, we will discuss optimization in the context of minimization because a maximization problem can be trivially converted to a minimization problem by changing the sign of the objective function). Most real-world problems would have many more variables. The illustration in Fig. 1 is typical of a stochastic optimization setting with noisy input information because the loss function value does not uniformly decrease as the iteration process proceeds (note the temporary increase in the loss value in the third step of the algorithm). Many optimization algorithms have been developed that assume a deterministic setting and that assume information is available on the gradient vector associated with the loss function (i.e., the gradient of the loss function with respect to the parameters being optimized). However, there has been a growing interest in recursive optimization algorithms that do not depend on direct gradient information or measurements. Rather, these algorithms are based on an approximation to the gradient formed from (generally noisy) measurements of the loss function. This interest has been motivated, for example, by problems in the adaptive control and statistical identification of complex systems, the optimization of processes by large Monte Carlo simulations, the training of recurrent neural networks, the recovery of images from noisy sensor data, and the design of complex queuing and discrete-event systems. This article focuses on the case where such an approximation is going to be used as a result of direct gradient information not being readily available.
Overall, gradient-free stochastic algorithms exhibit convergence properties similar to the gradient-based stochastic algorithms [e.g., Robbins-Monro1 stochastic approximation (R-M SA)] while requiring only loss function measurements. A main advantage of such algorithms is that they do not require the detailed knowledge of the functional relationship between the parameters being adjusted (optimized) and the loss function being minimized that is required in gradient-based algorithms. Such a relationship can be notoriously difficult to develop in some areas (e.g., nonlinear feedback controller design), whereas in other areas (such as Monte Carlo optimization or recursive statistical parameter estimation), there may be large computational savings in calculating a loss function relative to that required in calculating a gradient.
Let us elaborate on the distinction between algorithms based on direct gradient measurements and those based on gradient approximations from measurements of the loss function. The prototype gradient-based algorithm is R-M SA, which may be considered a generalization of such techniques as deterministic steepest descent and Newton–Raphson, neural network backpropagation, and infinitesimal perturbation analysis–based optimization for discrete-event systems. The gradient-based algorithms rely on direct measurements of the gradient of the loss function with respect to the parameters being optimized. These measurements typically yield an estimate of the gradient because the underlying data generally include added noise. Because it is not usually the case that one would obtain direct measurements of the gradient (with or without added noise) naturally in the course of operating or simulating a system, one must have detailed knowledge of the underlying system input–output relationships to calculate the R-M gradient estimate from basic system output measurements. In contrast, the approaches based on gradient approximations require only conversion of the basic output measurements to sample values of the loss function, which does not require full knowledge of the system input–output relationships. The classical method for gradient-free stochastic optimization is the Kiefer–Wolfowitz finite-difference SA (FDSA) algorithm.2
Because of the fundamentally different information needed in implementing these gradient-based (R-M) and gradient-free algorithms, it is difficult to construct meaningful methods of comparison. As a general rule, however, the gradient-based algorithms will be faster to converge than those using loss function–based gradient approximations when speed is measured in number of iterations. Intuitively, this result is not surprising given the additional information required for the gradient-based algorithms. In particular, on the basis of asymptotic theory, the optimal rate of convergence measured in terms of the deviation of the parameter estimate from the true optimal parameter vector is of order k^(-1/2) for the gradient-based algorithms and of order k^(-1/3) for the algorithms based on gradient approximations, where k represents the number of iterations. (Special cases exist where the maximum rate of convergence for a nongradient algorithm is arbitrarily close to, or equal to, k^(-1/2).)

Figure 1. Example of stochastic optimization algorithm minimizing loss function L(θ1, θ2). [In the figure, the height of the vertical line after each optimization step denotes the loss function value, starting from an initial guess.]
In practice, of course, many other factors must be considered in determining which algorithm is best for a given circumstance for the following three reasons: (1) It may not be possible to obtain reliable knowledge of the system input–output relationships, implying that the gradient-based algorithms may be either infeasible (if no system model is available) or undependable (if a poor system model is used). (2) The total cost to achieve effective convergence depends not only on the number of iterations required, but also on the cost needed per iteration, which is typically greater in gradient-based algorithms. (This cost may include greater computational burden, additional human effort required for determining and coding gradients, and experimental costs for model building such as labor, materials, and fuel.) (3) The rates of convergence are based on asymptotic theory and may not be representative of practical convergence rates in finite samples. For these reasons, one cannot say in general that a gradient-based search algorithm is superior to a gradient approximation-based algorithm, even though the gradient-based algorithm has a faster asymptotic rate of convergence (and with simulation-based optimization such as infinitesimal perturbation analysis requires only one system run per iteration, whereas the approximation-based algorithm may require multiple system runs per iteration). As a general rule, however, if direct gradient information is conveniently and reliably available, it is generally to one's advantage to use this information in the optimization process. The focus in this article is the case where such information is not readily available.
The next section describes SPSA and the related FDSA algorithm. Then some of the theory associated with the convergence and efficiency of SPSA is summarized. The following section is an illustration of the implications of the theory in an example related to neural networks. Then practical guidelines for implementation are presented, followed by a summary of some ancillary results and some extensions of the algorithm. Not covered here are global optimization methods such as genetic algorithms and simulated annealing; Spall presents some discussion of such methods in the context of stochastic approximation.

FDSA AND SPSA ALGORITHMS

This article considers the problem of minimizing a (scalar) differentiable loss function L(θ), where θ is a p-dimensional vector and where the optimization problem can be translated into finding the minimizing θ* such that ∂L/∂θ = 0. This is the classical formulation of (local) optimization for differentiable loss functions. It is assumed that measurements of L(θ) are available at various values of θ. These measurements may or may not include added noise. No direct measurements of ∂L/∂θ are assumed available, in contrast to the R-M framework. This section will describe the FDSA and SPSA algorithms. Although the emphasis of this article is SPSA, the FDSA discussion is included for comparison because FDSA is a classical method for stochastic optimization.

The SPSA and FDSA procedures are in the general recursive SA form:

θ̂_{k+1} = θ̂_k − a_k ĝ_k(θ̂_k),   (1)
where ĝ_k(θ̂_k) is the estimate of the gradient g(θ) ≡ ∂L/∂θ at the iterate θ̂_k based on the previously mentioned measurements of the loss function. Under appropriate conditions, the iteration in Eq. 1 will converge to θ* in some stochastic sense (usually "almost surely") (see, e.g., Fabian4 or Kushner and Yin5).
The essential part of Eq. 1 is the gradient approximation ĝ_k(θ̂_k). We discuss the two forms of interest here. Let y(·) denote a measurement of L(·) at a design level represented by the dot (i.e., y(·) = L(·) + noise) and c_k be some (usually small) positive number. One-sided gradient approximations involve measurements y(θ̂_k) and y(θ̂_k + perturbation), whereas two-sided gradient approximations involve measurements of the form y(θ̂_k ± perturbation). The two general forms of gradient approximations for use in FDSA and SPSA are finite difference and simultaneous perturbation, respectively, which are discussed in the following paragraphs.
For the finite-difference approximation, each component of θ̂_k is perturbed one at a time, and corresponding measurements y(·) are obtained. Each component of the gradient estimate is formed by differencing the corresponding y(·) values and then dividing by a difference interval. This is the standard approach to approximating gradient vectors and is motivated directly from the definition of a gradient as a vector of p partial derivatives, each constructed as the limit of the ratio of a change in the function value over a corresponding change in one component of the argument vector. Typically, the ith component of ĝ_k(θ̂_k) (i = 1, 2, …, p) for a two-sided finite-difference approximation is given by

ĝ_ki(θ̂_k) = [y(θ̂_k + c_k e_i) − y(θ̂_k − c_k e_i)] / (2 c_k),   (2)

where e_i denotes a vector with a one in the ith place and zeros elsewhere (an obvious analogue holds for the one-sided version; likewise for the simultaneous perturbation form below), and c_k denotes a small positive number that usually gets smaller as k gets larger.

The simultaneous perturbation approximation has all elements of θ̂_k randomly perturbed together to obtain two measurements of y(·), but each component ĝ_ki(θ̂_k) is formed from a ratio involving the individual components in the perturbation vector and the difference in the two corresponding measurements. For two-sided simultaneous perturbation, we have

ĝ_ki(θ̂_k) = [y(θ̂_k + c_k Δ_k) − y(θ̂_k − c_k Δ_k)] / (2 c_k Δ_ki),   (3)

where the distribution of the user-specified p-dimensional random perturbation vector, Δ_k = (Δ_k1, Δ_k2, …, Δ_kp)^T, satisfies conditions discussed later in this article (superscript T denotes vector transpose).

Note that the number of loss function measurements y(·) needed in each iteration of FDSA grows with p, whereas with SPSA, only two measurements are needed independent of p because the numerator is the same in all p components. This circumstance, of course, provides the potential for SPSA to achieve a large savings (over FDSA) in the total number of measurements required to estimate θ when p is large. This potential is realized only if the number of iterations required for effective convergence to θ* does not increase in a way to cancel the measurement savings per gradient approximation at each iteration. A later section of this article will discuss this efficiency issue further, demonstrating when this potential can be realized by establishing that:

Under reasonably general conditions, SPSA and FDSA achieve the same level of statistical accuracy for a given number of iterations, even though SPSA uses p times fewer function evaluations than FDSA (because each gradient approximation uses only 1/p the number of function evaluations).

SELECTED APPLICATIONS OF SPSA

The efficiency issue mentioned in the preceding section (and treated in more detail in the next section) has profound implications for practical multivariate optimization. Many problems that formerly may have been considered intractable with conventional (say, FDSA) methods may now be solvable. In this section, we summarize three distinct examples, based on work at APL, where SPSA is providing a solution to a problem that appeared intractable using other available methods. In addition, to illustrate some of the other possible applications, we close with a brief summary of some additional projects based on SPSA, most of which have been developed at other institutions.

Signal Timing for Vehicle Traffic Control

A long-standing problem in traffic engineering is to optimize the flow of vehicles through a given road network (Fig. 2). Improving the timing of the traffic signals at intersections in a network is generally the most powerful and cost-effective means of achieving this goal. However, because of the many complex aspects of a traffic system—human behavioral considerations, vehicle flow interactions within the network, weather effects, traffic accidents, long-term (e.g., seasonal) variation, etc.—it has been notoriously difficult to determine the optimal signal timing, especially on a system-wide (multiple intersection) basis. Much of this difficulty has stemmed from the need to build extremely complex models of the traffic dynamics as a component of the control strategy. A "strategy" in this context is a set of rules providing real-time signal timing in response to minute-by-minute changes in the traffic conditions. The APL approach is fundamentally different from those in existence in that it eliminates the need for such complex models. SPSA is central to the approach by providing a means for making small simultaneous changes to all the signal timings in a network and using the information gathered in this way to update the system-wide timing strategy. By avoiding conventional "one-signal-at-a-time" changes to the signal timing strategies, the time it would take to produce an overall optimal strategy for the system is reduced from years or decades (obviously impractical!) to several months (quite reasonable). Note that, unlike the two examples that follow, the savings here is not computational per se, but is inherent in the need for data on a daily basis (and hence represents a reduction in physical experimental costs such as labor and time). This approach is described in detail in Spall and Chin6 and Chin et al.,7 including realistic simulations of a 9-intersection network within the central business district of Manhattan, , and a 12-intersection network in , Maryland.

Figure 2. Overall system-wide traffic control concept. Traffic control center provides timing information to signals in traffic network; information on traffic flow is fed back to traffic control center.

Optimal Targeting of Weapon Systems

This is an example of the use of simulations to optimize processes, something done in a wide range of DoD and non-DoD applications. More specifically, given a number of projectiles that are going to be directed at a target, the problem is to optimally select a set of aim points with the goal of maximizing damage to the target while minimizing so-called collateral damage (damage to sensitive locations not directly associated with the military mission, e.g., schools and hospitals). The projectiles have inherent random variation and may be statistically dependent. So the targeter must allow for the statistical variation between the aim point and actual impact point in developing a strategy for determining aim points. In such cases it is desirable to use patterning of multiple projectiles. "Patterning" in this context means aiming the projectiles at a set of points that may not overlap each other or be within the target boundaries. Figure 3 illustrates the concept for one target and five projectiles; a bias away from the "stay-out zone" (e.g., a civilian facility) is apparent in this case where it is desired to destroy the target while minimizing the chances of producing damage to the stay-out zone. For scenarios with many projectiles that are independently targeted, the damage function—which must be evaluated to find the best aim point pattern—is likely to be analytically unavailable and will require estimation by Monte Carlo simulation. In particular, to evaluate the effectiveness of a given set of aim points, it is necessary to run one or more simulations (recall that there is random variation in the outcome of one "volley" of projectiles corresponding to one simulation). Commonly used techniques for solving the optimization problem by simulation are computationally intensive and prohibitively


GBA468

Why Build Models?
• Represent (and often simplify) business situations
• Develop insight into the dynamics of complicated operations
• Identify patterns in empirical data
• Use historical data to make future predictions


• Analyze potential outcomes of decision alternatives
• Build proof of concept and hypothetical scenarios
• Visualize risk & uncertainty
“All models are wrong, but some are useful.” –

Decision Analysis

Summary of Decision Rules Under Conditions of Uncertainty
Maximax rule
Identify best outcome for each possible decision & choose decision with maximum payoff.
Maximin rule
Identify worst outcome for each decision & choose decision with maximum worst payoff.
Minimax regret rule
Determine the worst potential regret associated with each decision, where the potential regret for any decision and state of nature is the improvement in payoff the manager could have received had the decision been the best one for the state of nature that actually occurred. The manager chooses the decision with the minimum worst potential regret.
Expected Monetary Value Rule (EMV)
Choose decision with maximum expected value (start by assuming all outcomes equally likely)
Expected Opportunity Loss (EOL)
Choose decision with the minimum expected regret (expected opportunity loss).
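These rules can be made concrete with a short Python sketch. The payoff table below (three decisions, two equally likely states of nature) and the function names are hypothetical illustrations, not part of the course materials.

```python
# Hypothetical payoff table: keys = decisions, values = payoff per state of nature.
payoffs = {
    "A": [50, 20],
    "B": [80, -10],
    "C": [30, 30],
}

def maximax(p):
    # Best outcome per decision; choose the decision with the maximum best payoff.
    return max(p, key=lambda d: max(p[d]))

def maximin(p):
    # Worst outcome per decision; choose the decision with the maximum worst payoff.
    return max(p, key=lambda d: min(p[d]))

def minimax_regret(p):
    # Regret = best payoff achievable in that state minus the payoff received.
    states = range(len(next(iter(p.values()))))
    best = [max(p[d][s] for d in p) for s in states]
    worst_regret = {d: max(best[s] - p[d][s] for s in states) for d in p}
    return min(worst_regret, key=worst_regret.get)

def emv(p):
    # Expected monetary value, assuming all states equally likely.
    return max(p, key=lambda d: sum(p[d]) / len(p[d]))

print(maximax(payoffs), maximin(payoffs), minimax_regret(payoffs), emv(payoffs))
```

Note how the three non-probabilistic rules can each recommend a different decision for the same table, which is exactly why the rules are presented as a menu rather than a single prescription.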
Mohr – GBA468

Summary of Decision Rules Under Conditions of Risk
Expected value rule
Choose decision with maximum expected value
Mean-variance rules
Given two risky decisions A & B:
• If A has higher expected outcome & lower variance than B, choose decision A
• If A & B have identical variances (or standard deviations), choose decision with higher expected value
• If A & B have identical expected values, choose decision with lower variance (standard deviation)
Coefficient of variation rule
Choose decision with smallest coefficient of variation
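A minimal sketch of the mean-variance and coefficient-of-variation rules, using two hypothetical decisions with equal expected profit (the outcome lists are illustrative only):

```python
from statistics import mean, pstdev

# Two hypothetical risky decisions: equally likely profit outcomes.
A = [90, 100, 110]   # mean 100, low spread
B = [40, 100, 160]   # mean 100, high spread

def coefficient_of_variation(outcomes):
    # Risk per unit of expected return.
    return pstdev(outcomes) / mean(outcomes)

# Mean-variance rule: identical expected values, so choose the lower variance.
choice = "A" if pstdev(A) < pstdev(B) else "B"
print(choice, round(coefficient_of_variation(A), 3), round(coefficient_of_variation(B), 3))
```

Here both rules agree (A has the smaller spread and the smaller coefficient of variation); they can disagree when expected values differ.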
Mohr – GBA468

Manager’s Attitude Toward Risk
• Risk averse
 If faced with two risky decisions with equal expected profits, the less risky decision is chosen
 Diminishing marginal utility of profit (MUprofit)
• Risk loving
 If expected profits are equal, the more risky decision is chosen
 Increasing MUprofit
• Risk neutral
 Indifferent between risky decisions that have equal expected profit
 Constant MUprofit

Roll Back Analysis for Decision Tree Model
• First step
• Calculate cumulative cash flow for each node as sum of all cash flow along the branch ending at that node.
• Second step (roll back, or backward induction)
• Starting at the leaf and working backwards (right to left), calculate EMV for each node
• For Event node:
• EMV = expected cumulative cash flow for branches extending from that node
• For Decision node:
• EMV = maximum cumulative cash flow for all branches extending from that node
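The two roll-back steps above can be sketched as a small recursive function. The tree below is a hypothetical miniature (not the figure's exact numbers): a decision to pay 5 for a 50/50 chance at 30, versus doing nothing.

```python
# A node is a dict. Leaves carry cash; event nodes take a probability-weighted
# average of their branches; decision nodes take the best branch. Working from
# the leaves back to the root is exactly backward induction.

def rollback(node):
    """Return the EMV of a node via backward induction."""
    cash = node.get("cash", 0)
    kind = node["type"]
    if kind == "leaf":
        return cash
    if kind == "event":
        return cash + sum(p * rollback(child) for p, child in node["branches"])
    if kind == "decision":
        return cash + max(rollback(child) for child in node["branches"])
    raise ValueError(kind)

tree = {
    "type": "decision",
    "branches": [
        {"type": "event", "cash": -5,   # e.g. submit a proposal at a cost of 5
         "branches": [(0.5, {"type": "leaf", "cash": 30}),
                      (0.5, {"type": "leaf", "cash": 0})]},
        {"type": "leaf", "cash": 0},    # e.g. don't submit
    ],
}
print(rollback(tree))  # 10.0: submitting has EMV -5 + 0.5*30 = 10 > 0
```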

[Decision tree figure: the OSHA grant example. Decision 1: Submit Proposal (−5000) vs. Don't Submit (0); if submitted, an event node splits into Receive Grant and Don't Receive, and subsequent branches carry High vs. Low R&D cost probabilities (60%/40%, 20%/80%, 10%/90%) with the associated cash flows.
Black: Additional cash flow at each node
Red: Cumulative cash flow at each node
Blue: Rollback EMV calculation at each node
Green: Choice of next node @ Decision points]

Decision Analysis Examples
• Single Stage Decision examples
• Non-probabilistic examples
• Magnolia Inn facility location choice
• Fish House product sourcing
• Probabilistic examples
• Auction sales contract (self-paced)
• Atlanta-Boston-Cleveland comparison
• Expected utility examples
• Atlanta-Boston-Cleveland comparison
• Insurance example
• Multi-stage Decision examples
• OSHA grant decision
• Game theory strategic competitor interaction examples
• QC testing sequence

Constrained Optimization & Linear Programming

Mathematical Programming/Optimization
• MP is a field of management science that finds the optimal, or most efficient, way of using limited resources to achieve the objectives of an individual or a business.
• Applications
 Determining Product Mix
 Manufacturing
 Routing and Logistics
 Financial Planning

Linear Programming (LP) Problems
MAX (or MIN): c1X1 + c2X2 + … + cnXn
Subject to:
a11X1 + a12X2 + … + a1nXn <= b1
…
ak1X1 + ak2X2 + … + aknXn >= bk
…
am1X1 + am2X2 + … + amnXn = bm
• Decisions
• Constraints
• Objectives

Linear Optimization Applications
• Resource Allocation (Amount Used <= Amount Available)
 Product Mix Problems
 Capital Budgeting/Portfolio Design Problems
• Cost-Benefit-Tradeoff (Amount Achieved/Produced >= Minimum Amount Required)
 Advertising Mix Problems
 Scheduling
• Mixed Constraints: Resource (<=), Benefit (>=), and Fixed Constraints (=)
 Logistics Optimizations
 Blending Problems
 Assignment Problems
 Etc.

Shadow Prices
Amounts by which the objective function value will change given a one-unit change in the right-hand side of the constraint (all other values held constant).
[Excel Solver sensitivity report for the hot tub model (Opt_HotTubs_Sensitivity.xlsx, created 2/18/2018). The variable cells (three production levels, including Hydro-Lux and Typhoon) list the final value (122 for the first product, 0 for Typhoon), reduced cost (0 and −13.33), objective coefficient (350, 300, 320), and allowable increase/decrease. The constraints (Pumps Used, Labor Used, Tubing Used) list the final value (200, 1566, 2712), shadow price (e.g. 16.67 for labor, 0 for tubing), constraint R.H. side (200, 1566, 2880), and allowable increase/decrease. A similar report can be produced with Python PuLP.]
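As a concrete sketch, a two-variable product-mix LP can be solved without any solver library by enumerating the corner points of the feasible region. The coefficients below are the commonly used Blue Ridge Hot Tubs textbook numbers and are an assumption here, not taken from the report above; the shadow price of a constraint is then estimated by re-solving with its right-hand side increased by one.

```python
from itertools import combinations

# Assumed Blue Ridge Hot Tubs data (two products, so corners can be enumerated):
# maximize 350*x1 + 300*x2 subject to
#   1*x1 +  1*x2 <= 200    (pumps)
#   9*x1 +  6*x2 <= 1566   (labor hours)
#  12*x1 + 16*x2 <= 2880   (feet of tubing)
profit = (350, 300)
constraints = [((1, 1), 200), ((9, 6), 1566), ((12, 16), 2880)]

def feasible(x):
    return (x[0] >= -1e-9 and x[1] >= -1e-9 and
            all(a[0] * x[0] + a[1] * x[1] <= b + 1e-9 for a, b in constraints))

def best_corner():
    # An LP optimum lies at a corner: the intersection of two boundary lines
    # (the constraints themselves plus the axes x1 = 0 and x2 = 0).
    lines = constraints + [((1, 0), 0), ((0, 1), 0)]
    corners = []
    for (a1, b1), (a2, b2) in combinations(lines, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) > 1e-9:  # skip parallel boundary lines
            x = ((b1 * a2[1] - b2 * a1[1]) / det,
                 (a1[0] * b2 - a2[0] * b1) / det)
            if feasible(x):
                corners.append(x)
    return max(corners, key=lambda x: profit[0] * x[0] + profit[1] * x[1])

best = best_corner()
value = profit[0] * best[0] + profit[1] * best[1]
print(best, value)      # (122.0, 78.0) 66100.0

# Shadow price of the pump constraint: profit gain from one extra pump.
constraints[0] = ((1, 1), 201)
best2 = best_corner()
value2 = profit[0] * best2[0] + profit[1] * best2[1]
print(value2 - value)   # 200.0 per extra pump (all other values held constant)
```

The one-unit re-solve reproduces what the sensitivity report states directly: the objective improves by the shadow price when a binding constraint's right-hand side is relaxed within its allowable range.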

Linear Programming Examples
• Blue Ridge hot tubs (resource allocation)
• Phone survey (cost-benefit)
• Lunch menu (blending)
• School assignment (assignment + blending) • Logistics optimization (assignment)
• Optimal route
• Leases (set covering) (self-paced)
• Fire stations (set covering)
• Transmitters (set covering)
• Production planning (resource allocation & cost-benefit)
• with sensitivity analysis

Simulation Models

Simulation & Modeling
• In many business models, the value for one or more cells representing independent variables is unknown or uncertain.
• As a result, there is uncertainty about the value the dependent variable will assume:
Y = f(X1, X2, …, Xk)
• Simulation can be used to analyze these types of models.

Simulation
• To properly assess the risk inherent in the model we need to use simulation.
• Simulation is a 4 step process:
1) Identify the uncertain cells in the model.
2) Implement appropriate RNGs for each uncertain cell.
3) Replicate the model n times, and record the value of the bottom-line performance measure.
4) Analyze the sample values collected on the performance measure.
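The four steps can be sketched end-to-end in a few lines of Python. The profit model and the demand distribution below are hypothetical stand-ins for the "uncertain cell":

```python
import random
import statistics

random.seed(42)  # reproducible replications

def one_replication():
    # Steps 1-2: the uncertain cell is demand; model it with an RNG.
    demand = random.gauss(1000, 200)          # hypothetical distribution
    price, unit_cost, fixed_cost = 10.0, 6.0, 2500.0
    # Bottom-line performance measure: profit = f(demand).
    return (price - unit_cost) * demand - fixed_cost

# Step 3: replicate the model n times and record the performance measure.
profits = [one_replication() for _ in range(10_000)]

# Step 4: analyze the sample values collected on the performance measure.
print(round(statistics.mean(profits), 1), round(statistics.stdev(profits), 1))
print(sum(p < 0 for p in profits) / len(profits))   # estimated risk of a loss
```

The point of step 4 is that simulation yields a whole distribution, so beyond the mean we can read off risk measures (here, the fraction of replications that lose money) that a single-point model hides.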

Simulation Examples
• Coin Flip Game
• Example distributions
• Endowment with Scholarship • Reservation Management
• Queuing model



Problem Hamiltonian Cycle

A cycle C in a graph G is a Hamiltonian cycle if it visits each vertex exactly once.

Problem statement: Given an undirected graph G, is there a Hamiltonian cycle in G?

Show that the Hamiltonian Cycle Problem is NP-complete.

(⇐) Suppose that G has a vertex cover S = {u1, …, uk} of size k. For each cover vertex ui we identify the gadget nodes corresponding to the neighbours of ui, as shown here. Form a Hamiltonian cycle in G′ as follows: start at s1 and follow, in order, the gadget nodes for the edges incident to u1; then go to s2 and follow the nodes for the edges incident to u2; continue in this way for each ui, and after uk return back to s1.

(⇒) Suppose G′ has a Hamiltonian cycle C. Then the set S = {uj : (sk, uj) ∈ C for some k, j}, i.e. the vertices whose edge gadgets C enters from a selector node sk, will be a vertex cover set in G.

Theorem: If P ≠ NP, then for any constant ρ ≥ 1 there is no polynomial-time approximation algorithm with approximation ratio ρ for the general TSP.

Plan: we will assume that such an approximation algorithm exists; we will then use it to solve the HC problem in polynomial time.

Given an instance of the HC problem on a graph G, we construct G′ as follows: G′ has the same set of nodes as G, and G′ is a fully connected graph. Edges in G′ that are also in G have a cost of 1; the other edges have a cost of ρ|V| + 1.
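The construction can be sketched directly (the function name and the exact cost ρ|V| + 1 for non-edges follow the standard version of this proof and are filled in here as assumptions, since the notes are partly illegible):

```python
# Build the TSP cost function for the gap construction: G' is complete,
# edges of G cost 1, and every other pair costs rho*|V| + 1.
def tsp_gap_instance(n, edges, rho):
    expensive = rho * n + 1
    in_g = {frozenset(e) for e in edges}
    return {frozenset((u, v)): (1 if frozenset((u, v)) in in_g else expensive)
            for u in range(n) for v in range(u + 1, n)}

# If G has a Hamiltonian cycle, the optimal tour costs exactly n. Any tour
# using even one non-edge costs more than rho*n, so a rho-approximation
# returns a tour of cost n if and only if G is Hamiltonian.
cost = tsp_gap_instance(4, [(0, 1), (1, 2), (2, 3), (3, 0)], rho=2)
print(cost[frozenset((0, 1))], cost[frozenset((0, 2))])  # 1 9
```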

Discussion 11
1. In the Min-Cost Fast Path problem, we are given a directed graph G=(V,E) along with positive integer times te and positive costs ce on each edge. The goal is to determine if there is a path P from s to t such that the total time on the path is at most T and the total cost is at most C (both T and C are parameters to the problem). Prove that this problem is NP-complete.
2. We saw in lecture that finding a Hamiltonian Cycle in a graph is NP-complete. Show that finding a Hamiltonian Path (a path that visits each vertex exactly once, but is not required to return to its starting point) is also NP-complete.
3. Some NP-complete problems are polynomial-time solvable on special types of graphs, such as bipartite graphs. Others are still NP-complete.
Show that the problem of finding a Hamiltonian Cycle in a bipartite graph is still NP-complete.



CSCI-UA.0201-001
Computer Systems Organization
Exam 1 Fall 2018 (time: 60 minutes)
Last name: First name: NetID:


• If you perceive any ambiguity in any of the questions, state your assumptions clearly.
• Questions vary in difficulty; it is strongly recommended that you do not spend too much time
on any one question.
• The exam consists of 5 pages, 5 questions, and a total of 50 points. Last paper is left
intentionally blank, for you to use if you want.
• You answer on the question sheet.
1. (5 points) Circle the correct answer among the choices given. If you circle more than one answer, you will lose the grade of the corresponding question.
(A) If we write a C program that consists of 5 C files. The output of the assembler will be: 1. five files 2. one file 3. depends on the type of the compiler 4. depends on OS
(B) By seeing the number: 0xFFFF700B we know for sure that it is a: 1. negative number 2. positive number
3. unsigned number 4. We do not know for sure
(C) Suppose we have a 32-bit machine. The size of “long int *” is: 1. 4 bytes 2. 8 bytes
3. 2 bytes 4. Depends on the OS.
(D) Suppose we have a 64-bit machine. The size of “long int *” is: 1. 4 bytes 2. 8 bytes
3. 2 bytes 4. Depends on the OS.
(E) If we write a C program that includes a parenthesis that we opened but forgot to close. Then:
1. the compiler will complain 2. the assembler will complain 3. the linker will complain 4. the loader will complain

2. [4 points] We have seen that the floating point representation has normalized encoding and denormalized encoding. State two reasons we need denormalized encoding. Every reason must not be more than one sentence.
• To be able to represent 0.
• To represent very small numbers.
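Both reasons can be checked directly, assuming IEEE 754 doubles (which Python floats are on all common platforms):

```python
import struct
import sys

# Reason 1: zero itself is a denormalized value (exponent bits all zero,
# fraction bits all zero), so +0.0 encodes as eight zero bytes.
assert struct.pack('>d', 0.0) == b'\x00' * 8

# Reason 2: gradual underflow lets us represent values below the smallest
# normalized double, down to about 5e-324.
min_normal = sys.float_info.min   # smallest positive normalized double
min_subnormal = 5e-324            # smallest positive denormalized double
assert 0.0 < min_subnormal < min_normal
assert min_subnormal / 2 == 0.0   # below this we finally round to zero
print(min_normal, min_subnormal)
```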
3. [8 points] Suppose you want to include this condition in your C code: if( x & mask)
x is a char. You want the condition to be true if the third bit from the left of x is set to 1.
• What value mask must have in binary? 00100000
• What value mask must have in hexadecimal? 0x20
• Suppose x = 5, will the condition be true? Show the value of x & mask in binary to justify.
00000101 & 00100000 = 00000000→condition is false
• What if x = -5? Show the value if x in binary to justify.
11111011 & 00100000 = 00100000→condition is true

4. Suppose we have the following piece of C code (%p in printf just prints the address in hex):
void foo(int i) {
    char a[2];
    double d = 3.14;
    a[i] = 0xFFFFABCD;
    printf("%d", d);
}
a. [2 points] How many bytes does array a require?
b. [2 points] Suppose array a is stored at address A1. What is the address of a[0]? What is the address of a[1]?
a[0] will be stored in A1 a[1] will be stored in A1+1
c. [3 points] Suppose array a is stored at address A1. Will variable d always be stored in memory after array a? Justify.
Not necessarily. It depends on whether the OS will find an empty spot, big enough, for d after the array a.
d. [3 points] Something is wrong with the line: a[i] = 0xFFFFABCD; What is it?
a is an array of characters. The number assigned to a[i] is outside the range that a character can represent.
e. If we call the function foo as follows: foo(2);
[3 points] What will happen?
You will overwrite whatever is stored after array a in memory. The compiler may not complain especially if foo() is called at runtime with a number not known during compilation.
[3 points] Will this affect the value of d? Justify
If d is stored just after array a, it will be affected. Otherwise, no.

5. Given the following C code:
int compute(int a, int b) {
    int c;
    while (b != 0) {
        c = (a & b) << 1;
        a = a ^ b;
        b = c;
    }
    return a;
}

a. [4 points] Suppose the two inputs to the above function are: a = 10 and b = 20. What will the function return (both in decimal and in binary)?
It will return 0000…011110 → 30 in decimal.

b. [3 points] Given the above two inputs (a = 10, b = 20), how many times will the above loop execute? For each iteration, write down the value of c.
The loop will be executed once. c will have the value 0.

c. [4 points] Assume x = 5 and we call compute(~x, 1), what will be the value returned (in binary and decimal). [Hint, you can use dots “…” to represent repeated bits when you state the binary result]
5 → 00000…0101, so ~5 → 11111…1010. The output will be 11111…1011 → −5.

d. [6 points] Suppose we have two pointers int *k and int *m declared inside compute. Write three statements: one to make k point to a, the second to make m point to b, and the third to execute the statement a = a ^ b; using only k and m.
k = &a;
m = &b;
*k = (*k) ^ (*m);
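The exam's function is carry-propagation addition: XOR produces the sum bits, AND-shift produces the carries. A Python cross-check (ours, not part of the exam) has to mask to 32 bits, since Python integers are unbounded:

```python
# Emulate the exam's 32-bit two's-complement compute() in Python.
M = 0xFFFFFFFF

def compute(a, b):
    a &= M
    b &= M
    while b:
        c = ((a & b) << 1) & M   # carry bits, shifted into place
        a = (a ^ b) & M          # sum without carries
        b = c
    return a if a < 0x80000000 else a - 0x100000000

print(compute(10, 20), compute(~5, 1))  # 30 -5
```

This confirms parts a and c above: 10 + 20 = 30, and ~5 + 1 = −5 (two's-complement negation).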


Algorithms and Data Structures
Course notes by NDERSON
Last updated September 22, 2021


Analysis of Algorithms
1 Analysis of Algorithms: Part I 1
1.1 ProgramVerification……………………………………… 1 1.1.1 Arguingcorrectness …………………………………. 2 1.1.2 Arguingtermination…………………………………. 3
1.2 ComplexityAnalysis ……………………………………… 4 1.2.1 Solutionstocommonrecurrencerelations ………………….. 5
1.3 AsymptoticNotation……………………………………… 6
1.4 MeasuresofComplexity …………………………………… 7
2 Analysis of Algorithms: Part II 9
2.1 QuadraticSortingAlgorithms ……………………………….. 9 2.1.1 Selectionsort……………………………………… 10 2.1.2 Insertionsort……………………………………… 12
2.2 PropertiesofAlgorithms …………………………………… 14
Sorting and Selecting
3 Fast Sorting Algorithms 15
3.1 Heapsort(Revision) ……………………………………… 16
3.2 MergeSort(Revision) …………………………………….. 19
3.3 Quicksort…………………………………………….. 20
3.4 ComplexityLowerBoundsforSorting ………………………….. 27
3.5 SortingIntegersFast……………………………………… 28
3.5.1 Countingsort……………………………………… 29 3.5.2 Radixsort ……………………………………….. 30
4 Order Statistics and Selection 35
4.1 OrderStatisticsandtheSelectionProblem……………………….. 35
4.2 TheQuickselectAlgorithm …………………………………. 37
4.3 RandomisedPivotSelection ………………………………… 37
4.4 MedianofMedians(NotexaminableinSemesterOne,2020). . . . . . . . . . . . . . . 39
Dynamic Programming
5 Dynamic Programming: Part I 43
5.1 TheKeyElementsofDynamicProgramming ……………………… 44 5.1.1 Memoisation-ComputingFibonaccinumbers ……………….. 44
5.1.2 Optimalsubstructure-Thecoinchangeproblem . . . . . . . . . . . . . . . . . . 46

5.2 Top-downvs.Bottom-upDynamicProgramming ………………….. 48 5.3 Reconstructing Optimal Solutions to Optimisation Problems . . . . . . . . . . . . . . . 49
6 Dynamic Programming: Part II 51
6.1 TheUnboundedKnapsackProblem …………………………… 51
6.2 The0-1KnapsackProblem …………………………………. 53
6.3 TheEditDistanceProblem …………………………………. 55
6.4 Matrix Multiplication (Not examinable in Semester One, 2020) . . . . . . . . . . . . . 58
6.5 TheSpace-SavingTrick……………………………………. 61
Data Structures
7 Hashing and Hashtables 63
7.1 Hashtables(Revision) …………………………………….. 64 7.2 WhatisaGoodHashFunction?………………………………. 66 7.3 HashingIntegers………………………………………… 67 7.4 HashingStrings ………………………………………… 68 7.5 UniversalHashing ………………………………………. 70 7.6 CuckooHashing………………………………………… 71
8 Balanced Binary Search Trees 73
8.1 AVLTrees…………………………………………….. 73 8.1.1 Definitions……………………………………….. 74 8.1.2 Rebalancing………………………………………. 75
9 Prefix Tries and Suffix Trees 79
9.1 ThePrefixTrie/RetrievalTreeDataStructure …………………….. 80 9.1.1 Applicationsoftries …………………………………. 81 9.2 SuffixTrees……………………………………………. 83 9.2.1 Buildingasuffixtree…………………………………. 85 9.2.2 Applicationsofsuffixtrees …………………………….. 85
10 Suffix Arrays 87
10.1 BuildingaSuffixArray…………………………………….. 89
10.1.1 Thenaiveapproach …………………………………. 89
10.1.2 Theprefixdoublingalgorithm ………………………….. 89
10.1.3 Faster suffix array algorithms (Not examinable in Semester One, 2020) . . 92
10.2 ApplicationsofSuffixArrays ………………………………… 93 10.2.1 Patternmatchingwithsuffixarrays……………………….. 93
11 The Burrows- 95
11.1 TheBurrows-WheelerTransform……………………………… 95 11.1.1 UsefulpropertiesoftheBWT …………………………… 96 11.1.2 ComputationoftheBWT ……………………………… 97

11.2TheInverseBurrows-WheelerTransform………………………… 98 11.2.1 Thenaiveinversionmethod ……………………………. 98 11.2.2 EfficientinversionoftheBWT…………………………… 100
11.3 PatternMatchingwiththeBurrows-WheelerTransform . . . . . . . . . . . . . . . . . . . 104
Graph and Network Algorithms
12 Graph Basics 109
12.1 ModellingwithGraphs ……………………………………. 109
12.2 RepresentationandStorageofGraphs ………………………….. 113
12.3 GraphTraversalandApplications …………………………….. 116
12.3.1 Depth-firstsearch ………………………………….. 116 12.3.2 Findingconnectedcomponents …………………………. 118 12.3.3 Cyclefinding ……………………………………… 119 12.3.4 Breadth-firstsearch …………………………………. 119 12.3.5 Unweightedshortestpaths…………………………….. 121
13 Shortest Paths 123
13.1 PropertiesofShortestPaths…………………………………. 124
13.2 Distanceestimatesandtherelaxationtechnique …………………… 127
13.3 TheBellman-FordAlgorithm………………………………… 127
13.4 Dijkstra’sAlgorithm………………………………………. 129
13.5 TheAll-PairsShortestPathProblem……………………………. 133
13.5.1 TheFloyd-Warshallalgorithm…………………………… 134 13.5.2 Transitiveclosure…………………………………… 136
14 Incremental Graph Connectivity 137
14.1 ConnectivityinGraphs ……………………………………. 137 14.2 TheIncrementalConnectivityProblem …………………………. 138 14.3 TheUnion-FindDisjoint-SetDataStructure………………………. 139
15 Minimum Spanning Trees 143
15.1 Prim’sAlgorithm………………………………………… 144 15.2 Kruskal’s Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
16 Directed Acyclic Graphs 149
16.1 TheTopologicalSortingProblem……………………………… 150 16.2 TheCriticalPathProblem ………………………………….. 152
17 Network Flow 155
17.1 TheFord-FulkersonAlgorithm ………………………………. 158 17.1.1 Theresidualnetwork ………………………………… 158 17.1.2 Augmentingpaths ………………………………….. 159 17.1.3 Implementationusingdepth-firstsearch……………………. 160
17.2 TheMinimumCutProblem…………………………………. 162 17.2.1 Themin-cutmax-flowtheorem …………………………. 163
17.3 BipartiteMatching ………………………………………. 165

List of Algorithms
1 BinarySearch………………………………………….. 2
2 Selectionsort………………………………………….. 10
3 Insertionsort ………………………………………….. 12
4 Heapsort …………………………………………….. 16
5 Heapify ……………………………………………… 17
6 Heap:Insert…………………………………………… 17
7 Heap:Delete ………………………………………….. 18
8 Heap:Rise ……………………………………………. 18
9 Heap:Fall…………………………………………….. 18
10 Merge ………………………………………………. 19
11 Mergesort ……………………………………………. 19
12 Naivepartitioning……………………………………….. 21
13 Hoarepartitioning ………………………………………. 21
14 Dutchnationalflagpartitioning ……………………………… 23
15 Quicksort…………………………………………….. 23
16 Countingsort………………………………………….. 29
17 Radixsort…………………………………………….. 31
18 Selectminimum………………………………………… 36
19 Selectminimumandmaximum ……………………………… 36
20 Quickselect……………………………………………. 37
21 Medianofmedians………………………………………. 40
22 RecursiveFibonaccinumbers ……………………………….. 44
23 MemoisedFibonaccinumbers ………………………………. 45
24 Bottom-upFibonaccinumbers ………………………………. 46
25 Top-downcoinchange ……………………………………. 47
26 Bottom-upcoinchange …………………………………… 48
27 Coinchangesolutionreconstructionusingbacktracking . . . . . . . . . . . . . . . . . . 49
28 Bottom-up coin change with solution reconstruction using decision table . . . . . 50
29 Bottom-upunboundedknapsack …………………………….. 53
30 Bottom-up0-1knapsack…………………………………… 55
31 Bottom-upeditdistance …………………………………… 57
32 Optimalsequencealignment………………………………… 58
33 Optimalmatrixmultiplication……………………………….. 61
34 Cuckoohashing:Lookup ………………………………….. 71
35 Cuckoohashing:Deletion………………………………….. 71
36 Cuckoohashing:Insertion …………………………………. 72
37 AVLtree:Rightrotation……………………………………. 75
38 AVLtree:Leftrotation…………………………………….. 76
39 AVLtree:Double-rightrotation………………………………. 76
40 AVLtree:Double-leftrotation ……………………………….. 76
41 AVLtree:Rebalance ……………………………………… 77
42 Prefixtrie:Insertion ……………………………………… 81
43 Prefixtrie:Lookup ………………………………………. 82
44 Prefixtrie:Stringsorting …………………………………… 82

45 Naivesuffixarrayconstruction ………………………………. 89
46 Prefix-doublingsuffixarrayconstruction ………………………… 92
47 Suffixarray:Patternmatching……………………………….. 94
48 Burrows-Wheelertransform ………………………………… 98
49 InverseBurrows-Wheelertransform …………………………… 104
50 ComputationoftheBWTstatistics…………………………….. 107
51 PatternmatchingusingtheBWT ……………………………… 108
52 Genericdepth-firstsearch………………………………….. 117
53 Findingconnectedcomponentsusingdepth-firstsearch . . . . . . . . . . . . . . . . . . 119
54 Cycle detection in an undirected graph using depth-first search . . . . . . . . . . . . . 120
55 Genericbreadth-firstsearch ………………………………… 121
56 Single-sourceshortestpathsinanunweightedgraph . . . . . . . . . . . . . . . . . . . . . 121
57 Reconstructshortestpath ………………………………….. 122
58 Edgerelaxation…………………………………………. 127
59 Bellman-Ford………………………………………….. 128
60 Bellman-Ford:Handlingnegativecycles ………………………… 129
61 Dijkstra’salgorithm………………………………………. 131
62 ImprovedDijkstra’salgorithm ……………………………….. 134
63 Floyd-Warshall………………………………………….135
64 Warshall’stransitiveclosurealgorithm………………………….. 136
65 Connectivitycheck……………………………………….138
66 Union-find using disjoint-set forests (without optimisations) . . . . . . . . . . . . . . . 140
67 TheFINDoperationusingpathcompression ……………………… 140
68 TheUNIONoperationusingunionbyrank……………………….. 141
69 Prim’salgorithm…………………………………………145
70 Kruskal’salgorithm………………………………………. 146
71 TopologicalsortingusingKahn’salgorithm ………………………. 151
72 TopologicalsortingusingDFS……………………………….. 151
73 Bottom-uplongestpathinaDAG …………………………….. 152
74 RecursivelongestpathinaDAG ……………………………… 153
75 TheFord-Fulkersonmethod ………………………………… 160
76 Ford-Fulkersonimplementedusingdepth-firstsearch . . . . . . . . . . . . . . . . . . . . 162

Analysis of Algorithms: Part I
When we write programs, we usually like to test them to give us confidence that they are correct. But testing a program can only ever demonstrate that it is wrong (if we find a case that breaks it). To prove that a program always works, we need to be more mathematical. The two major focuses of algorithm analysis are proving that an algorithm is correct, and determining the amount of resources (time and space) used by the algorithm.
Summary: Analysis of Algorithms : Part I
In this chapter, we cover:
• What it means to analyse an algorithm
• Proving correctness of an algorithm
• Analysing the time complexity of an algorithm
• An analysis of binary search as an example
• Revision of recurrence relations, asymptotic notation, and complexity
Recommended Resources: Analysis of Algorithms : Part I
• http://www.csse.monash.edu.au/~lloyd/tildeAlgDS/Math/
• CLRS, Introduction to Algorithms, Pages 17-19, Section 2.1
1.1 Program Verification
Program verification or correctness hinges on two main principles. We must on one hand prove that our algorithm, if it terminates, always produces the correct result. On the other, we must prove that it does indeed always terminate. We will explore these issues by analysing one of the most well-known fundamental search algorithms, binary search. An implementation of binary search is shown in Algorithm 1.
Binary search is an easy algorithm to get wrong. The most common mistakes are “off-by-one” errors in which the final index reached by the search is one away from where it should have been. For example, what if line 5 had read key > array[mid] instead? This would be a bug, but given how similar it is to the correct code, it would be easily missed. So how then do we reason that this algorithm is in fact the correct one? We will reason the correctness by establishing an invariant in the algorithm. The useful invariant for binary search is shown below.

Algorithm 1 Binary Search
1: function BINARY_SEARCH(array[1..n], key)
2:     lo = 1 and hi = n + 1
3:     while lo < hi − 1 do
4:         mid = ⌊(lo + hi)/2⌋
5:         if key ≥ array[mid] then lo = mid
6:         else hi = mid
7:     if array[lo] = key then return lo    // key is located at array[lo]
8:     else return null                     // key not found

Invariant: Binary Search
If key ∈ array, then at each iteration:
1. array[lo] ≤ key,
2. if hi ≠ n + 1, then array[hi] > key.
Combined with the sortedness of array, this implies that key (if it exists) lies within the range [lo..hi).
We can now re-interpret the binary search algorithm such that every action it takes is simply aiming to maintain these invariants. This observation can lead us to prove that the algorithm is indeed correct and that it terminates.
1.1.1 Arguing correctness
When we wish to prove an algorithm’s correctness via invariants, we are required to prove three things:
1. Initialisation:Wemustprovethattheinvariantsholdatthebeginning
2. Maintenance:Wemustprovethattheinvariantsremaintruethroughoutthealgorithm
3. Termination: We must prove that the invariants at termination imply correctness of the algorithm
We will argue the proof in two parts. First we will show that binary search is correct if the key that we are searching for is in the array. Then we will prove that it is correct when the key is not in the array.
Case 1: key ∈ array
We must first argue that the invariants established above are correct when the algorithm begins. Once established, we subsequently argue that the invariants are maintained throughout the algorithm. When we first begin, we initialise lo = 1, hi = n + 1, so:
1. If key ∈ array, since the array is sorted, lo = 1 implies that array[lo] is the minimum element, and hence array[lo] ≤ key.
2. Initially, hi = n + 1, so part 2 is satisfied vacuously.

Therefore the invariant is true at the beginning of the binary search algorithm. As we perform iterations of the main loop of the binary search, what happens to this invariant? At each step, we compute mid = ⌊(lo + hi)/2⌋ and compare array[mid] to key. The conditional statement in the loop then enforces the invariant.

1. If key ≥ array[mid], then after setting lo = mid, we will still have array[lo] ≤ key.
2. If key < array[mid], then after setting hi = mid, we will still have array[hi] > key.

Hence the invariant holds throughout the iterations of the loop. Therefore, if the loop terminates, it is true that array[lo] ≤ key < array[hi] (or hi = n + 1). Since the condition for the loop to terminate is that lo ≥ hi − 1, combined with the previous inequality, it must be true that lo = hi − 1. Therefore if key exists in array, it must exist at position lo, and hence the binary search algorithm correctly identifies whether or not key is an element of array.

Case 2: key ∉ array

Since key is not in the array, provided that the algorithm terminates, regardless of the value of lo, array[lo] ≠ key, and hence the algorithm correctly identifies that key is not an element of array.

1.1.2 Arguing termination

We have argued that binary search is correct if it terminates, but we still need to prove that the algorithm does not simply loop forever, otherwise it is not correct. Let us examine the loop condition closely. The loop condition for binary search reads lo < hi − 1, hence while the algorithm is still looping, this inequality holds. We now recall that mid = ⌊(lo + hi)/2⌋.

Theorem: Finiteness of Binary Search
If lo < hi − 1, then lo < mid < hi.

Since mid is the floor of (lo + hi)/2, it is true that (lo + hi)/2 − 1 < mid ≤ (lo + hi)/2. Multiplying by two, we find lo + hi − 2 < 2 × mid ≤ lo + hi. Since lo < hi − 1, we have lo ≤ hi − 2; replacing hi − 2 with lo in the left-most term gives 2 × lo < 2 × mid ≤ lo + hi. Next, we add 1 to the right-most term; since lo + 1 ≤ hi − 1, we have lo + hi + 1 ≤ 2 × hi, and so 2 × lo < 2 × mid < lo + hi + 1 ≤ 2 × hi. Dividing by two yields lo < mid < hi. Hence each iteration strictly shrinks the interval [lo, hi), so the loop can only execute a finite number of times.

1.2 Complexity Analysis

Each iteration of binary search performs a constant amount of work and halves the size of the remaining search range, so its running time satisfies the recurrence relation

T(n) = T(n/2) + a if n > 1,
T(n) = b if n = 1, (1.1)

where a is some constant number of operations that we perform each step, and b is some constant number of operations required at the end of the algorithm.

Theorem: Time Complexity of Binary Search
The recurrence relation (1.1) has the explicit solution T(n) = a log2(n) + b.

We will prove that T(n) = a log2(n) + b by induction on n.

Base case:
Suppose n = 1, then a log2(1) + b = b, which agrees with our definition of T.


# Getting Started

1. [Capturing One Yellow Bird](#capturing-one-yellow-bird)


2. [Capturing Many Yellow Birds](#capturing-many-yellow-birds)
3. [Competing With an Adversary](#competing-with-an-adversary)

We will start this Assignment by having you solve three tasks _manually_.

## Capturing One Yellow Bird

Open up a console and navigate to the assignment folder and type the following

python red_bird.py -l search_layouts/anuSearch.lay

and the screen below should pop up:

![ANU Search initial state](images/getting_started_anu_search.png)

The first task you’ll encounter during the assignment is a simple navigation
problem. That is, how to get the red bird (the red circle) to the position of
the yellow bird (the yellow triangle) as fast as possible. Here “as fast as
possible” means that the agent is to _minimize the number of moves done_ to
reach the yellow bird.

You can move the agent using the following controls:

– `↑` or `w` to execute the action _Move North_
– `↓` or `s` to execute the action _Move South_
– `←` or `a` to execute the action _Move West_
– `→` or `d` to execute the action _Move East_

You will see that the _SCORE_ readout goes negative as you move, at the rate of
one unit per move done. The only way to get that score to be greater than zero
is to reach the _yellow bird_ in less than 100 steps, since doing so adds _100_
points to the score.

When you get the agent onto the yellow bird position, the screen will vanish
after a brief pause and the following information will be reported on the

The game is over. Score: 55
Average Score: 55.0
Scores: 55.0
Win Rate: 1/1 (1.00)
Record: Win

The only bit relevant to the current task is the “Score”.

You may be wondering by now how this fits into the characterization of Search
Problems you have seen already in class. Let us walk you through the details:

– The **state space** is the set of cells the agent (the _red circle_) can be
in at any point in time. These cells are identified by their coordinates and
represented as a pair `(x,y)`.
– The **goal state** is given by the cell where the yellow bird is located.
– The **initial state** is the cell where the agent starts.
– The **actions** are given by the set of possible moves (discussed above).
– The set of actions _A_ _applicable_ in a given state _s_ is given by the
following rules:

1. The agent cannot exit the map.
2. The agent cannot move through walls.
3. The agent can move in any of the 4 compass directions: North, East, South,
   and West.

– The _cost function_ is given by the expression
cost = number_of_steps
The **optimal** solution is the one minimizing cost, i.e. it
**minimizes the number of steps taken**. The **score** of the agent is a
_different_ measure that is given by:
score = value_of_captured_yellow_bird - number_of_steps
so obtaining the **maximum** score means that the red agent is following the
shortest path to the goal.

We have provided several _instances_ of this task inside the folder
`search_layouts`:

– `aiSearch.lay`
– `anuSearch.lay`
– `mazeSearch.lay`

Feel free to get familiar with them by solving them manually.

## Capturing Many Yellow Birds

Open up a console, navigate to the assignment folder, and type the following command:

python red_bird.py -l search_layouts/anuMultiSearch.lay

and the screen below should pop up:

![ANU Search initial state](images/getting_started_anu_multi_search.png)

We can see that several yellow birds are now scattered all over the place. If
you try to move the red bird around, you’ll see that although the agent can do
the **same set of actions** (moving in one of four possible compass
directions), now it is not enough just to get to **one** yellow bird. Rather we
need to move through **each** of the yellow bird locations.

When you complete the task (did it require **more effort** than the previous
one?) you will be prompted with a similar message as before:

The game is over. Score: 481
Average Score: 481.0
Scores: 481.0
Win Rate: 1/1 (1.00)
Record: Win

As before, capturing each yellow bird is rewarded with 100 points, while each
move reduces your score by 1.

Intuitively, the task is pretty much the same. The environment is still fully
observable and the actions are deterministic. As a side note, this relates to
[this very famous problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem).

Formally, what really changes here is the _definition_ of a _state_:

– A _state_ is made up of **pairs** `(p, Y)` where
1. `p = (x, y)` the pair representing the _coordinates_ of the cell occupied by the agent,
2. `Y` is the tuple of coordinates `(x_i, y_i)` of the cells where
the `n` yellow birds still to collect are located
– The _initial_ state is `(p_0, Y_0)` where `p_0 = (x_0, y_0)` is the initial position of
the agent. Initially `Y_0 = ((x_1, y_1), …, (x_i, y_i) ,…, (x_n, y_n))`.
– A _goal_ state is any state `(p, Y’)` where `Y’ = ()`
(it’s an **empty tuple**).

– The _cost function_ remains the same:
cost = number_of_steps
while the score now involves summing the values of all captured yellow birds:
score = total_value_of_captured_yellow_birds – number_of_steps
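The changed state definition can be sketched as follows; the helper names and coordinates are illustrative, not part of the assignment code.

```python
# Sketch of the multi-bird state (p, Y) described above.

def make_state(pos, birds):
    """A state pairs the agent position with the tuple of remaining birds."""
    return (pos, tuple(birds))

def capture(state):
    """Remove a bird from Y when the agent stands on its cell."""
    pos, birds = state
    return (pos, tuple(b for b in birds if b != pos))

def is_goal(state):
    """Goal: every yellow bird collected, i.e. Y is the empty tuple."""
    return state[1] == ()
```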

The folder `search_layouts` includes a variety of scenarios that you can try:

– `aiMultiSearch.lay`
– `anuMultiSearch.lay`
– `mazeMultiSearch.lay`
– `smallMultiSearch.lay`

## Competing With an Adversary

Let’s go back to the console and now type the following command:

python red_bird.py -l adv_search_layouts/aiAdversarial.lay -n 3

this window should pop up:

![Fun And Games](images/fun_and_games.png)

We now introduce a new character: the black agent, who is also interested in
collecting yellow birds. You still control the red agent (use the same controls
as before). The goal now is for you to capture more yellow birds than the black
agent. You can play as many matches as you want by using the `-n` option from
the shell. For example, the example command above allows you to play 3 matches.

The following shows the output over 3 matches:

All Birds EATEN
The game is over. Score: 86.87458127689783
All Birds EATEN
The game is over. Score: 86.87458127689783
All Birds EATEN
The game is over. Score: 83.45137614500875
Average Score: 85.73351289960146
Scores: 86.87458127689783, 86.87458127689783, 83.45137614500875
Win Rate: 3/3 (1.00)
Record: Win, Win, Win

The average score is the mean of the three match scores.

In this example the black agent is choosing its moves at **random**. You
can try to capture the black agent by moving onto its position. Beware! The
black agent could also do the same to you, if you happen to be on its path.

Not shown in the interface is the fact that the value of the yellow birds
decays at a constant rate after each player moves. In effect, this is
equivalent to having a **time out** to “complete the level”.

Now try this command:

python red_bird.py -l adv_search_layouts/aiDenseAdversarial.lay -n 3 -b GreedyBlackBirdAgent

If you go through the three games, you might get:

All Birds Eaten
The game is over. Score: -234.0304162711067
All Birds Eaten
The game is over. Score: -127.85748997030285
All Birds Eaten
The game is over. Score: -770.8716639224533
Average Score: -377.5865233879543
Scores: -234.0304162711067, -127.85748997030285, -770.8716639224533
Win Rate: 0/3 (0.00)
Record: Loss, Loss, Loss

With the second command, the black agent is no longer choosing its moves
randomly. Instead it is following a very simple rule:

next_action = first_move_in_shortest_path_to_nearest_yellow_bird

It will always try to move towards the closest yellow bird via the shortest
route, ignoring the red bird.
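The greedy rule above can be sketched with a breadth-first search. This is only an illustration: the `passable` predicate standing in for the real map, and the function names, are assumptions of this sketch.

```python
from collections import deque

# Hedged sketch of the greedy black agent: take the first move on a
# shortest path to the nearest yellow bird, ignoring the red agent.

MOVES = {"North": (0, 1), "South": (0, -1), "West": (-1, 0), "East": (1, 0)}

def greedy_move(start, birds, passable):
    """BFS from `start`; return the first action of a shortest path to any
    bird in `birds`. `passable(cell)` says whether a cell can be entered."""
    frontier = deque([(start, None)])   # (cell, first action taken)
    seen = {start}
    while frontier:
        cell, first = frontier.popleft()
        if cell in birds and first is not None:
            return first
        for name, (dx, dy) in MOVES.items():
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt not in seen and passable(nxt):
                seen.add(nxt)
                # Remember the first step taken off the start cell.
                frontier.append((nxt, first or name))
    return None  # no bird reachable
```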

A _state_ now is the tuple `(pR, pB, Y, v)` where

– `pR = (xR, yR)` is the pair representing the _coordinates_ of the cell
occupied by the **red** agent,
– `pB = (xB, yB)` is the pair representing the _coordinates_ of the cell
occupied by the **black** agent,
– `Y` is the tuple of coordinates `(x_i, y_i)` of the cells where the `n` yellow
birds are located, and is initially
`Y_0 = ((x_1, y_1), …, (x_i, y_i),…, (x_n, y_n))`,
– `v` is the current value of the yellow birds.

The _initial_ state is `(pR_0, pB_0, Y_0, 100)` where `pR_0 = (xR_0, yR_0)` is
the initial position of the red agent, and `pB_0 = (xB_0, yB_0)` is
the initial position of the black agent. Instead of goal states, we now have
_terminal states_: the tuples `(pR’, pB’, Y’, v’)` such that

– `Y’ = ()` (**the empty tuple**), or
– `pR’ = pB’`, or
– `v’`, the current value of the yellow birds, is less than 0.5.

Terminal states are **winning states** for the red bird when the scoring
function is _positive_:

score = total_value_of_yellow_birds_captured_by_red_agent – total_value_of_yellow_birds_captured_by_black_agent + capture_value

where `capture_value` is `250` if the red agent captures the black agent, and
`-250` if the black agent captures the red agent.
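Put together, the terminal test and the scoring rule can be sketched as follows; the function and argument names are illustrative.

```python
# Sketch of terminal states and scoring for the adversarial game above.

def is_terminal(state, min_value=0.5):
    """state = (pR, pB, Y, v): the game ends when the birds are gone,
    the agents collide, or the bird value has decayed below min_value."""
    red_pos, black_pos, birds, value = state
    return birds == () or red_pos == black_pos or value < min_value

def final_score(red_total, black_total,
                red_captured_black=False, black_captured_red=False):
    """score = red captures - black captures + capture_value."""
    capture_value = 250 if red_captured_black else -250 if black_captured_red else 0
    return red_total - black_total + capture_value
```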

## And that’s all folks!

Thanks for having made it here, now you can move to the
[next section](2_implementation_notes.md) or go back to the [index](README.md).


Computer Architecture Project 2 – Cache Simulation
Spring 2022, 10% of Course Grade
Completion Date: March 24 (But working on this before the midterm will help a lot!) 10% deduction for each week after that. Not accepted more than two weeks late. Submit via Blackboard.

**********************************************************************
* SHARE IDEAS, BUT NOT CODE. NOT ONE LINE! BASING YOUR WORK ON       *
* SOMEONE ELSE'S WILL LIKELY GET YOU AN "F" FOR THE COURSE. DON'T    *
* USE OTHERS' WORK "TO GET AN IDEA OF HOW TO DO IT" OR FOR ANY OTHER *
* REASON. NO EXCUSES. NO EXCEPTIONS. NO POSTING SOLUTIONS, EVER!     *
**********************************************************************
1) OVERVIEW
You are to design and implement a software simulation of a cache memory subsystem. You can use any language. Note that any real cache is invisible to software. This will be a simulation of the way that a real cache works. Your “cache” will be a software structure/class that will contain information similar (but on a smaller and more simplified scale) to what would be in a real cache. Your “main memory” will be a 2K array. You can make it an array of integers or shorts (16-bits), but because it is simulating a byte-addressable memory, you won’t be putting any value larger than 0xFF (255 decimal or 11111111 binary) in it.
You will provide for these two areas by defining an array for main memory and a structure/class/record (or array of these) for the cache. For example,
short Main_mem[2048];
So, in effect, your Main_mem array will be the “pretend” main memory area, your cache structure/class which you define will be the “pretend” cache, your program will pretend to be the intelligence of the memory sub-system, and requests that you type in will be the equivalent of a processor making requests of the memory subsystem.
You will implement a direct-mapped, write-back cache. The block size will be 16 bytes and you’ll have 16 slots. Both of these numbers are smaller than a typical real cache, but make for a more manageable simulation. You should use a structure/class to manage each slot, where the structure consists of the appropriate information (e.g., a valid flag, tag, etc.) and you eventually have an array of these structures. (I’m not dictating that you MUST implement the cache in this exact way, but something close to this use of structures should make the most sense.)

2) INITIALIZATION
You MUST initialize your main memory (MM) in such a way that you'll be able to know whether or not you have the original value and also such that you'll be able to tell one byte from another. To achieve this, I want you to assign the value 0 to MM[0], 1 to MM[1], and so on until 0xFF to MM[0xFF]. 0xFF is the biggest value you can put in a byte, so then you should resume putting 0, 1, 2, and so on into MM elements x100, x101, and x102 (MM[x100] = 0, MM[x101] = 1, and MM[x102] = 2) until you've initialized the entire main memory array (MM[0x7FF] = 0xFF).
You should also initialize your cache to all zeros so that it will be easy (from a debugging and correcting point of view) to see which slots in the cache have something in them and which are still empty.
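As a concrete illustration (the assignment allows any language), this initialization can be done in a few lines of Python. The `dirty` flag is one possible piece of extra bookkeeping for a write-back cache, not something the spec mandates by name.

```python
# Initialization as described: byte i of main memory holds i mod 0x100,
# and every cache slot starts zeroed. Sizes follow the project spec:
# 2K byte-addressable main memory, 16 slots of 16 bytes each.

MEM_SIZE, NUM_SLOTS, BLOCK_SIZE = 2048, 16, 16

main_mem = [i & 0xFF for i in range(MEM_SIZE)]   # i & 0xFF == i mod 256

cache = [{"valid": 0, "dirty": 0, "tag": 0, "data": [0] * BLOCK_SIZE}
         for _ in range(NUM_SLOTS)]
```

A quick sanity check: after this, address x7AE holds xAE and address x100 holds 0, exactly the pattern the spec asks for.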
USE HEX FOR ALL ADDRESSES AND VALUES. This is primarily just an issue for input/output and an occasional constant. (A variable that has the decimal value 10 ALSO has the hexadecimal value 0A and the binary value 00001010 — they are all the same number. It’s just a matter of whether you input/output it as a decimal, hex or binary number.)
Using hex numbers will actually make your life easier. Until you perform a write to an address, you will know that a read of address x7AE should get the value xAE, a read of the address x422 should get the value x22, and so on. You wouldn’t be able to immediately know if you had the correct value if you were typing in or printing out decimal values. (For example, x7AE is 1966 in decimal and xAE is x174 in decimal. It’ll be obvious to you that address x7AE must contain xAE in it after Main_Mem has been initialized, but there’d be no way of telling that address 1966 decimal should have a decimal value of 174.)
Ideally, you can make an input file of the test case listed below and read that in, but if you aren’t comfortable with that, feel free to again embed the data inside the program itself (even though that isn’t good programming practice). Remember the lessons you learned in project one – you don’t have to “convert” numbers to hex; just use the appropriate I/O or assignment statements.
3) BITWISE ANDS
I do NOT want you using the modulo function to extract the appropriate fields from the addresses. I want you using bitwise operations and shifts instead since they are much closer to what actually happens in hardware. Although you used bitwise ANDs and shifts in the first project, I offer the following example in C. It is based on a 32-bit address, a 32-byte block size (32 = 2 to the 5th) and 8K slots (8K = 2 to the 13th). This gives a 5-bit block offset in the least significant bits of the address, a 13-bit slot number in the next-least-significant bits and a 14-bit tag in the remaining most significant bits. You'll need to adjust the numbers for the specifics of this project, but the example combined with your experience in the first project should make it clear how to do that.

Block_offset = address & 0x0000001F;     /* Zero out all but the last 5 bits */
Block_begin_addr = address & 0xFFFFFFE0; /* Zero out last 5 bits to find the block's first address */
Tag = address >> 18;                     /* Shift off the block offset and slot # bits, moving the tag
                                            into the least significant bit positions */
Slot_num = (address & 0x0003FFE0) >> 5;  /* First zero out the tag and block offset bits, leaving
                                            behind just the bits for the slot number. Then shift the
                                            slot number bits into the least significant bit positions. */
Again, your project's characteristics differ from this example, so you'll have to change some of the values. (I want you to think this through and understand what you're doing rather than just copy something that I've done.) But this should be a big head start for you.
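For readers more comfortable in Python, the same extraction reads as follows. Note this still uses the example's parameters (5-bit offset, 13-bit slot, 14-bit tag), not this project's, for the reason just stated.

```python
# Field extraction mirroring the C example above (32-bit address,
# 32-byte blocks, 8K slots). These are the example's numbers, NOT
# the numbers you need for this project.

def split_address(address):
    block_offset = address & 0x0000001F           # low 5 bits
    block_begin_addr = address & 0xFFFFFFE0       # zero the offset bits
    slot_num = (address & 0x0003FFE0) >> 5        # next 13 bits
    tag = address >> 18                           # remaining high 14 bits
    return block_offset, block_begin_addr, slot_num, tag
```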
4) SUPPORTED OPERATIONS
You will need to support three types of requests from the screen once you have initialized your "main memory" and "cache": Read byte, Write byte, and Display cache.
If a Read is requested, then you should ask what “address” in your “main memory” the Read is for. Note that references to a memory location 7ae, for example, are not to the physical memory location 7ae in your computer’s memory, but rather to byte 7ae in your Main_mem array.
You should then simulate a real-life cache by checking to see if that address is in your cache. If it is, display the value you found for it in cache AND indicate that a cache hit took place. If not, go to your main memory instead, bring the entire block into your cache, display the value found AND indicate a cache miss. You MUST show whether it was a cache hit or miss and (on reads) what value you read out of the cache to receive full credit.
In other words, act like a cache acts. So, for example, the screen could look like the following, where the second and fourth lines are typed by the user (or provided by an input file or embedded in the program).
(R)ead, (W)rite, or (D)isplay Cache? R
What address would you like read? 7ae
At that byte there is the value ae (Cache Hit)
Writes would be very similar, except that the user must also indicate what the data to be written is. For example, (lines 2, 4, and 6 are typed by the user)
(R)ead, (W)rite, or (D)isplay Cache? W
What address would you like to write to? 562
What data would you like to write at that address? 2f
Value 2f has been written to address 562. (Cache Hit)
(The previous two examples indicated a cache hit, but if there was a cache miss, you’d output “Cache Miss” instead of “Cache Hit” while still showing the value.) Note on writes: If a write is to be performed on a block that is not already in the cache, you must bring the entire block into the cache, perform the write and consider it a cache miss.
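One possible sketch of the read path for this cache geometry: 16-byte blocks give a 4-bit offset, 16 slots give a 4-bit slot number, and the tag is what remains after shifting those off. This is a sketch, not the required design; in particular, the dirty-victim write-back shown is one assumption about how writes could be tracked.

```python
# Hedged sketch of a read in a direct-mapped, write-back cache with
# 16-byte blocks and 16 slots (so: 4-bit offset, 4-bit slot, tag = addr >> 8).

BLOCK_SIZE, NUM_SLOTS = 16, 16
main_mem = [i & 0xFF for i in range(2048)]
cache = [{"valid": 0, "dirty": 0, "tag": 0, "data": [0] * BLOCK_SIZE}
         for _ in range(NUM_SLOTS)]

def read_byte(address):
    offset = address & 0xF          # low 4 bits: byte within the block
    slot = (address >> 4) & 0xF     # next 4 bits: slot number
    tag = address >> 8              # remaining bits: tag
    entry = cache[slot]
    if entry["valid"] and entry["tag"] == tag:
        return entry["data"][offset], "Cache Hit"
    # Miss: write back a dirty victim, then fill the slot from main memory.
    if entry["valid"] and entry["dirty"]:
        old_base = (entry["tag"] << 8) | (slot << 4)
        main_mem[old_base:old_base + BLOCK_SIZE] = entry["data"]
    base = address & ~0xF           # first address of the block
    entry.update(valid=1, dirty=0, tag=tag,
                 data=main_mem[base:base + BLOCK_SIZE])
    return entry["data"][offset], "Cache Miss"
```

With the initialization pattern above, a first read of 7AE misses and returns AE; a second read within the same block, say 7A2, then hits.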
Display Cache will display the contents of each slot in the cache. For example, if the first few reads were for hex addresses 7A2, 2E, 2F and 3D5, those addresses would map to slot numbers A, 2, 2 and D, respectively. (Work out the numbers so you see why that is.) So you would display the following:
(R)ead, (W)rite, or (D)isplay Cache? D
Slot  Valid  Tag  Data
 0      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 1      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 2      1     0   20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F
 3      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 4      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 5      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 6      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 7      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 8      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 9      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 A      1     7   A0 A1 A2 A3 A4 A5 A6 A7 A8 A9 AA AB AC AD AE AF
 B      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 C      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 D      1     3   D0 D1 D2 D3 D4 D5 D6 D7 D8 D9 DA DB DC DD DE DF
 E      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 F      0     0   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
This does not include any information that might be needed to handle writes. If any such information is also needed in the cache, add it and show it when you perform a Display Cache.
Of course, as you perform more operations, more of the cache will be filled in.
Note that the data in the cache really is data, not addresses. We are not bringing addresses into the data part of the cache, but rather the values at those addresses. I've merely supplied a pattern to those values so that you (and I) can easily determine if the correct data is in the cache.

For example, at address 5AE the value is not 5AE, it is AE. And AE is the value (along with the rest of the bytes in that block) that gets copied into the cache. Of course, your program shouldn't depend on that pattern, but should actually get the value out of Main_mem; otherwise it would break as soon as a new pattern was chosen.
IMPORTANT: Use the sequence just mentioned (reads for addresses 7A2, 2e, 2f and 3d5) as a test of your simulation before you move on to the rest of the actual test you will turn in. If a Display Cache following those reads doesn't look EXACTLY like what is above (except for any extra information to handle writes that may or may not be needed), then you have a problem that you'll need to fix before moving on to the actual test to be turned in. (It's okay if the formatting is a little different, but otherwise your values should be consistent with the above.)
5) TESTING AND WHAT TO TURN IN
You should turn in a copy of your source code and also an output file which shows the results of the following test. (After you are finished debugging, generate your output file, which you can then turn in with the source. It doesn't matter how you generate the output file. Do a screen capture if you must.)
IF THERE ARE ANY DISCREPANCIES BETWEEN YOUR SOURCE CODE AND THE OUTPUT, YOU WILL BE EXPECTED TO LET ME OR THE TA PERSONALLY TEST YOUR PROGRAM.
Don't bother with paper copies of your project. Just email the files to the addresses specified on page 1.
You should perform exactly the following sequence of tests on your cache simulator and turn in the output with your source code. Please no variations to cover up bugs in your program. 🙂
Keep in mind that this is not a programming course, but one in Computer Architecture. You are assumed to already know programming before you entered this course. As a result, the focus of attention will not be on how clean your code is, but on how accurately it acts like a cache. If you perfectly program a flawed understanding of how caches work, you’ll be graded according to your flawed understanding.
Note also that you have ONE chance to turn the program in. You should act as your own tester, making sure that the correct results are the ones you have in fact achieved. Don’t expect to resubmit a program after the TA or I have pointed out its errors. If, however, you find errors before the TA has corrected your project, you are welcome to resubmit.
I may request a resubmission for errors not having to do with your understanding of caches, such as a failure to follow these directions (e.g., not following the test exactly, not using hex for the operations). For any of those resubmissions that are allowed, you will be deducted 10 points. Take pride in your work.

In the following test, I add some comments to provide clarity but, of course, you’ll just enter the operations. Note that I’ve shown the numbers in hex. You must do the same.
Be sure that after each read you output what value your “simulated memory subsystem” is providing as a result of the read, either from your simulated cache or your simulated main memory. It should also indicate if the read resulted in a cache hit or miss.
(Feel free to make an input file out of this or build it into your program so you don’t have to retype it each time.)
Operation   Comment
R 14c
R 14d
R 14e
R 14f
R 150
R 151
R 3A6
R 4C3
D           Display the cache. Be sure that all the blocks you have brought in show up and be especially careful that ALL of the bytes for that block are in the cache.
W 14C 99    Write the value 99 hex to address 14C. If the block is not in the cache, bring it in on writes as you would for a read.
W 63B 7
D           Display the cache.
R 348
R 3F
D           Display the cache.
R 14b
R 14c
R 63F
R 83
D           Display the cache one last time, making sure that the correct blocks are there.
