Tutorial questions: topic 3
1. Write each of the following statements as a temporal logic formula.
(a) If a person never dies then they are immortal. (Use the predicates dies(person) and immortal(person).)
(b) Unless you study agents, you won’t know how great they are. (Use the predicates study(agents, you) and knowGreat(agents, you).)

Answer

Note, as is often the case with logic, there may be other correct answers.

(a) If a person never dies then they are immortal. (Use the predicates dies(person) and immortal(person).)
(■¬dies(person) ∧ □¬dies(person)) → immortal(person)
This says “If it is true at every point in the past that a person never dies and it is true now and at all points in the future that a person never dies, then the person is immortal”.
Here are some other answers that students have suggested in previous years that aren’t quite right.
□¬dies(person) → immortal(person)
This says “If there’s no point in the future where a person dies then they are now immortal”. This isn’t quite right, since this means that someone who died last year is considered immortal.
immortal(person) W dies(person)
This says “A person is immortal at every point in the future unless they die”. This isn’t quite right, since this means that someone who dies next year is immortal until the point that they die.
Version 2 of solutions update:
A question was raised in the online tutorial: is the following formula correct, i.e. do we need the □ on the right of the implication?
(■¬dies(person) ∧ □¬dies(person)) → □immortal(person)
This says “If it is true that the person never dies in the past, the person doesn’t die now and never dies in the future, then it is true that now and at all points in the future the person is immortal.” This is correct, but the □ on the right of the implication (in front of immortal(person)) isn’t necessary here.
To understand why the □ isn’t necessary on the right of the implication, consider again our original solution:
(■¬dies(person) ∧ □¬dies(person)) → immortal(person)
At any point where the left side of the implication is true, the right side of the implication is true. And if the left side of the implication is true at any point, then it must be true at all points: the left side essentially says that ¬dies(person) holds at all points in the past and future, and if this is true at any time point then it must be true at all time points.
According to the formula above, if there is ever a point where the person never dies in the past and never dies now or in the future, then they are immortal at that point. But we have just seen that if the left side of the implication holds at one point then it holds at every point, so the implication fires at every time point. Hence the formula above is already sufficient to infer that, if there is ever a point where the person never dies in the past and never dies now or in the future, then the person is immortal at all time points in the future (and in fact in the past), and this is why we don’t need to add the □ in front of immortal(person).
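To see this concretely, here is a small finite-trace sketch in Python (the helper names are ours, not part of the course material). Real temporal models are infinite, so a finite list of time points is only an approximation, but it is enough to show that the left side of the implication is either true at every point of a trace or at none of them:

# dies[t] is True iff the person dies at time t. The left side of the
# implication at point t says: dies is false strictly before t (always in
# the past) and false at t and ever after (always now and in the future),
# i.e. dies is false everywhere on the trace.

def antecedent(dies, t):
    never_died_before = not any(dies[:t])   # always-in-the-past ¬dies(person)
    never_dies_now_on = not any(dies[t:])   # now-and-always-in-future ¬dies(person)
    return never_died_before and never_dies_now_on

for trace in ([False, False, False], [False, True, False]):
    print([antecedent(trace, t) for t in range(len(trace))])
# [True, True, True]    -- true at every point, so immortal(person) is forced
#                          at every point without a box on the right
# [False, False, False] -- true nowhere, so the implication holds trivially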
(b) Unless you study agents, you won’t know how great they are. (Use the predicates study(agents, you) and knowGreat(agents, you).)
¬knowGreat(agents, you) W study(agents, you)
This says “it will always be true that you don’t know how great agents are, unless there comes a point in the future where you study agents”.
Version 2 of solutions update:
The answer given above is not correct! It does say “it will always be true that you don’t know how great agents are, unless there comes a point in the future where you study agents” but this doesn’t allow for the fact that someone might have studied agents in the past.
Correction:
⧫(■¬knowGreat(agents, you) ∧ (¬knowGreat(agents, you) W study(agents, you)))
This says “It is true at some point in the past that: (1) at every point before that you didn’t know how great agents were, and (2) it will always be true going forward that you don’t know how great agents are, unless there comes a point in the future where you study agents”.
Here are some other answers that students have suggested in previous years that aren’t quite right.
♦(knowGreat(agents, you) S study(agents, you))
This says “It will be true at some point in the future that at some point in the past you studied agents and you have known how great they are ever since”. This isn’t quite right, since it doesn’t allow for the fact that you might never study agents.
¬study(agents, you) → □¬knowGreat(agents, you)
This says “If it’s not true (now) that you study agents, then you will never know how great they are”. This isn’t quite right, since it doesn’t allow for the fact that you might study agents in the future.
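For part (b), the following finite-trace sketch (again with our own helper names, on the same finite-trace approximation as before) implements the W operator and the corrected formula, and shows how the first answer wrongly fails for someone who studied agents in the past:

def weak_until(p, q, t):
    """p W q at point t: p holds from t onward unless q becomes true first."""
    for i in range(t, len(p)):
        if q[i]:
            return True        # q releases the obligation on p
        if not p[i]:
            return False       # p failed before q ever held
    return True                # p held for the whole (finite) trace; unlike
                               # "until", W does not require q ever to hold

def corrected(know, study, t):
    """The corrected answer at point t, reading "some point in the past" as
    strictly before t: there is an s < t such that know was false at every
    point strictly before s and (not-know W study) held at s."""
    not_know = [not k for k in know]
    return any(not any(know[:s]) and weak_until(not_know, study, s)
               for s in range(t))

# Someone who studied agents at time 0 and has known how great they are since:
study = [True, False, False]
know = [False, True, True]
not_know = [not k for k in know]

print(weak_until(not_know, study, 0))  # True: fine at the very start of the model
print(weak_until(not_know, study, 1))  # False: the first answer wrongly fails
                                       # at a "now" after the studying happened
print(corrected(know, study, 1))       # True: the corrected formula accepts it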
2. The following specification is adapted from a famous “Snow White” example in Concurrent MetateM:
snowWhite(ask(X))[give(X)]:
    ⊙ask(X) → ♦give(X)
    start → □¬(give(X) ∧ give(Y) ∧ X ≠ Y)
eager(give(eager))[ask(eager)]:
    start → ask(eager)
    ⊙give(eager) → ask(eager)

greedy()[ask(greedy)]:
    start → □ask(greedy)
[Note that X and Y are variables, and ⊙P is read “P was true in the previous state”. An agent that listens for messages of type give(X) will hear all messages of the form give(X), where X can be instantiated to anything; the same goes for an agent that listens for messages of type ask(X). For example, Snow White will listen to the messages ask(eager) and ask(greedy).]
Trace a run of this system (i.e. give a run of the system) for the first 6 time steps (use a table like the one we saw in the example from the core material).
Answer
Note, this is only one possible run of the system. Others are possible, depending on the order that snowWhite chooses to satisfy her commitments.
(snowWhite’s commitment □¬(give(X) ∧ give(Y) ∧ X ≠ Y) and greedy’s commitment □ask(greedy) are both adopted at timestep 1 and persist at every subsequent timestep; they are omitted from the table below.)

Time | snowWhite commitments        | snowWhite propositions                | eager       | greedy
 1   | (none)                       | (none)                                | ask(eager)  | ask(greedy)
 2   | (none)                       | ask(eager), ask(greedy)               | (none)      | ask(greedy)
 3   | ♦give(eager), ♦give(greedy)  | ask(greedy), give(eager)              | (none)      | ask(greedy)
 4   | ♦give(greedy)                | ask(greedy), give(greedy)             | give(eager) | ask(greedy)
 5   | ♦give(greedy)                | ask(greedy), give(greedy)             | ask(eager)  | ask(greedy)
 6   | ♦give(greedy)                | ask(greedy), ask(eager), give(greedy) | (none)      | ask(greedy)
Timestep 1:
snowWhite’s second rule fires, causing her to add a commitment: from now and every point forward, she cannot give to two different agents at the same time point.
Eager’s first rule fires, and so it updates its propositions to include ask(eager) (which is broadcast, since this is something that Eager broadcasts).
Greedy’s rule fires, and so it updates its commitments to include that it will always ask, and adds ask(greedy) to its propositions to satisfy this commitment (which is broadcast, since this is a message type that Greedy broadcasts).
Timestep 2:
SnowWhite’s environment propositions are updated to reflect that she has received the broadcasts of eager’s and greedy’s asks in the previous timestep. None of her rules fire, since no ask(X) was true in her propositions at the previous timestep.
Eager: only listens out for give(eager) messages, and none of these were broadcast in the previous timestep. None of its rules fire, since give(eager) does not appear in its propositions for the previous timestep.
Greedy: doesn’t listen for any messages, and no rules fire. It broadcasts an ask(greedy) message to satisfy its commitment from the first timestep (note that this commitment persists, since it is a □ commitment).
Timestep 3:
SnowWhite’s environment propositions are updated to reflect that she has received the broadcast of greedy’s ask in the previous timestep; her first rule fires twice, generating the commitment to give to eager now or at some point in the future and the commitment to give to greedy now or at some point in the future. SnowWhite cannot satisfy both of these commitments at once, because of her commitment from the first timestep. Since neither commitment is older than the other, she is free to select either. Let’s say she selects to give to eager, and so she adds give(eager) to her propositions, satisfying the commitment to give to eager now or at some point in the future.
Eager: no messages received, no rules fire.
Greedy: no rules fire, broadcasts an ask message to satisfy its commitment from first timestep.
Timestep 4:
SnowWhite’s environment propositions are updated to reflect that she has received the broadcast of greedy’s ask in the previous timestep; her first rule fires, generating the commitment to give to greedy now or at some point in the future. SnowWhite also sends a give(greedy) message to satisfy both her commitment from timestep 3 and her new commitment from this timestep.
Eager: updates its propositions to reflect that it received the give(eager) that snowWhite broadcast in the previous timestep. No rules fire.
Greedy: no rules fire, broadcasts an ask message to satisfy its commitment from first timestep.
Timestep 5:
SnowWhite’s environment propositions are updated to reflect that she has received the broadcast of greedy’s ask in the previous timestep; her first rule fires, generating the commitment to give to greedy now or at some point in the future; she sends a give(greedy) message to satisfy this commitment.
Eager: no new messages received. Eager’s second rule fires and so it broadcasts an ask(eager) message.
Greedy: no rules fire; broadcasts an ask message to satisfy its commitment from timestep 1.
Timestep 6:
SnowWhite’s environment propositions are updated to reflect that she has received the broadcast of greedy’s ask and the broadcast of eager’s ask in the previous timestep; her first rule fires, generating the commitment to give to greedy now or at some point in the future, and she broadcasts a give(greedy) message to satisfy this commitment.
Eager: no messages received and no rules fire.
Greedy: no rules fire; broadcasts an ask message to satisfy its commitment from timestep 1.
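As a cross-check, here is a small Python sketch that replays this run. It is not a general Concurrent MetateM interpreter: the three agents’ rules are hard-coded, broadcasts are assumed to arrive one timestep after they are sent (as in the trace above), and snowWhite’s free choice at timestep 3 is resolved towards eager so that the output matches the table.

STEPS = 6
AGENTS = ["snowWhite", "eager", "greedy"]

# props[a][t]: propositions true for agent a at timestep t (1-based; one
# spare slot so broadcasts made at the final step have somewhere to land)
props = {a: [set() for _ in range(STEPS + 2)] for a in AGENTS}
pending = []                 # snowWhite's unsatisfied commitments: (created, X)
FREE_CHOICE = {3: "eager"}   # equally old commitments: this run picks eager

def broadcast(sender, msg, t):
    """msg becomes true for the sender now; listeners hear it at t + 1."""
    props[sender][t].add(msg)
    if msg.startswith("ask("):               # snowWhite listens for ask(X)
        props["snowWhite"][t + 1].add(msg)
    if msg == "give(eager)":                 # eager listens for give(eager)
        props["eager"][t + 1].add(msg)
    # greedy listens for no message types

for t in range(1, STEPS + 1):
    if t == 1:
        broadcast("eager", "ask(eager)", t)  # eager: start -> ask(eager)
    broadcast("greedy", "ask(greedy)", t)    # greedy: start -> always ask(greedy)
    if t > 1 and "give(eager)" in props["eager"][t - 1]:
        broadcast("eager", "ask(eager)", t)  # eager: given to last state -> ask again
    if t > 1:
        for x in ("eager", "greedy"):        # snowWhite: asked in last state ->
            if f"ask({x})" in props["snowWhite"][t - 1]:
                pending.append((t, x))       #   commit to eventually give(X)
    if pending:                              # give to at most one agent per step,
        pending.sort()                       # serving the oldest commitment first
        oldest = pending[0][0]
        tied = [x for (ts, x) in pending if ts == oldest]
        x = FREE_CHOICE.get(t, tied[0]) if len(tied) > 1 else tied[0]
        pending = [p for p in pending if p[1] != x]  # one give discharges every
        broadcast("snowWhite", f"give({x})", t)      # pending give-to-x commitment

for t in range(1, STEPS + 1):
    print(t, {a: sorted(props[a][t]) for a in AGENTS})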
Version 2 of solutions update:
A couple of students had questions about whether it is possible with this example that snowWhite never ends up giving to eager.
Consider the run of the system below. This is the same as the run given above, but extended by three timesteps.
(As before, snowWhite’s commitment □¬(give(X) ∧ give(Y) ∧ X ≠ Y) and greedy’s commitment □ask(greedy) are adopted at timestep 1, persist throughout, and are omitted from the table.)

Time | snowWhite commitments        | snowWhite propositions                | eager       | greedy
 1   | (none)                       | (none)                                | ask(eager)  | ask(greedy)
 2   | (none)                       | ask(eager), ask(greedy)               | (none)      | ask(greedy)
 3   | ♦give(eager), ♦give(greedy)  | ask(greedy), give(eager)              | (none)      | ask(greedy)
 4   | ♦give(greedy)                | ask(greedy), give(greedy)             | give(eager) | ask(greedy)
 5   | ♦give(greedy)                | ask(greedy), give(greedy)             | ask(eager)  | ask(greedy)
 6   | ♦give(greedy)                | ask(greedy), ask(eager), give(greedy) | (none)      | ask(greedy)
 7   | ♦give(greedy), ♦give(eager)  | ask(greedy), give(greedy)             | (none)      | ask(greedy)
 8   | ♦give(greedy)                | ask(greedy), give(eager)              | (none)      | ask(greedy)
 9   | ♦give(greedy)                | ask(greedy), give(greedy)             | give(eager) | ask(greedy)

Timestep 7:
SnowWhite’s environment propositions are updated to reflect that she has received the broadcast of greedy’s ask made in the previous timestep. Her first rule fires twice, giving two new commitments at this timestep: to give to greedy now or at some point in the future and to give to eager now or at some point in the future. She cannot satisfy both of these commitments at once, and since neither is older than the other she selects one at random to satisfy, deciding to give to greedy at this timestep. (Note that at this timepoint all of the earlier ♦ commitments are satisfied, so she can choose either ♦give(greedy) or ♦give(eager) to satisfy at this timestep.)
Eager: no messages received and no rules fire.
Greedy: no rules fire; broadcasts an ask message to satisfy its commitment from timestep 1.
Timestep 8:
SnowWhite’s environment propositions are updated to reflect that she has received the broadcast of greedy’s ask made in the previous timestep. Her first rule fires, generating a commitment to give to greedy now or at some point in the future. It is important to note that snowWhite has an older unsatisfied commitment: the commitment from the previous timestep to give to eager now or at some point in the future. Since agents are obliged to give precedence to older unsatisfied commitments, snowWhite has no choice here: she has to give to eager at this timestep in order to satisfy this older commitment.
Eager: no messages received and no rules fire.
Greedy: no rules fire; broadcasts an ask message to satisfy its commitment from timestep 1.
Timestep 9:
SnowWhite’s environment propositions are updated to reflect that she has received the broadcast of greedy’s ask made in the previous timestep. Her first rule fires, generating a commitment to give to greedy now or at some point in the future. Her oldest unsatisfied commitment at this point is the ♦give(greedy) from the previous timestep, so she must now give to greedy (which also satisfies the ♦give(greedy) from the current timestep).
Eager: updates propositions to reflect receipt of the give(eager) message that was broadcast in previous timestep; no rules fire.
Greedy: no rules fire; broadcasts an ask message to satisfy its commitment from timestep 1.
I hope you can see from the above why snowWhite will have to give to eager at some point. (Note that eager will always broadcast an ask in the first timestep, so snowWhite will always have a commitment at the third timestep to give to eager now or at some point in the future. This will, either in the third or fourth timestep, mean that snowWhite gives to eager to satisfy this commitment. This will trigger eager’s rule to ask, and so on.)
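Under the same assumptions, the simulation sketch given after question 2’s trace reproduces this extended run if its horizon and free-choice table are changed as follows. Timestep 8 is then forced, because the ♦give(eager) from timestep 7 is the oldest unsatisfied commitment:

STEPS = 9
FREE_CHOICE = {3: "eager", 7: "greedy"}  # timestep 7's tie is resolved towards
# greedy, as in the run above; at timestep 8 the oldest-first rule forces
# snowWhite to give to eager before the newer give-to-greedy commitment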