CS 162 HW 5

Example MapReduce job
TABLE OF CONTENTS
1 Worker registration
2 Job submission
3 Map task execution
4 Reduce task execution
5 Job completion
This section walks through an example of running word count on the MapReduce cluster you will implement in this assignment. This is only an example; don’t hardcode any of the numbers listed here. MapReduce may run differently depending on whether and when failures happen.
The order in which workers poll for tasks, and the time it takes them to complete those tasks, are non-deterministic. The specifics of which worker received which task are not important for your implementation.
Worker registration
1 The coordinator ( mr-coordinator ) is started.
2 After a few seconds, 3 workers ( mr-worker ) are started.
3 The workers register with the coordinator, and receive unique worker IDs. We’ll refer to the workers as worker 1, worker 2, and worker 3.
4 The workers begin sending regular heartbeats to the coordinator to indicate that they are still running.
Job submission
1 A client ( mr-client ) submits this job via the SubmitJob RPC:
“data/gutenberg/p.txt”,
“data/gutenberg/q.txt”,
“data/gutenberg/r.txt”,
“data/gutenberg/s.txt”
output_dir = “/tmp/hw-map-reduce/gutenberg”
app = “wc”
n_reduce = 2
args = [1, 2, 3]
Note: The args are not used by the word count map and reduce functions, but they are included to show you how they should be used for other applications that may depend on args .
Since there are 4 input files, there are 4 map tasks (one per input file). Since n_reduce is 2, there are two reduce tasks.
2 The coordinator accepts the job, assigns it an ID of 1, and returns this ID to the client. The job is added to the coordinator’s queue.
3 The client periodically polls the status of the job using the PollJob RPC with job_id = 1 to see when it completes or fails.
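The submit-then-poll flow above can be sketched as a simple loop. The RPC names (SubmitJob, PollJob) come from the text, but the client stub, its fields, and the function names below are hypothetical stand-ins for illustration (the real assignment is in Rust), not the assignment's API:

```python
import time
from dataclasses import dataclass

@dataclass
class JobStatus:
    done: bool = False
    failed: bool = False

class FakeClient:
    """Hypothetical stand-in for the mr-client RPC stub: the job
    finishes after a fixed number of polls."""
    def __init__(self, polls_until_done: int):
        self.remaining = polls_until_done

    def poll_job(self, job_id: int) -> JobStatus:
        self.remaining -= 1
        return JobStatus(done=self.remaining <= 0)

def poll_until_finished(client, job_id: int, interval: float = 0.0) -> JobStatus:
    # Poll PollJob with the job_id returned by SubmitJob until the job
    # completes or fails.
    while True:
        status = client.poll_job(job_id)
        if status.done or status.failed:
            return status
        time.sleep(interval)

status = poll_until_finished(FakeClient(3), job_id=1)
```

In the real system the poll interval would be non-zero, and a failed job would surface an error to the client rather than a status flag alone.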
Map task execution
1 Worker 1 polls the coordinator for a task, and is assigned map task 0 for job 1.
2 Worker 2 polls the coordinator for a task, and is assigned map task 1 for job 1. Similarly, worker 3 is
assigned map task 2 for job 1.
3 Each worker executes its assigned map task:
• They create n_reduce (2 in this case) temporary buffers in memory. We’ll refer to these buffers as buckets.
• They read the input file into memory (e.g., map task 0 would read data/gutenberg/p.txt ).
• They call the map function corresponding to the wc application. The key is the input filename; the value is the file’s contents. The auxiliary args [1, 2, 3] are also passed to the map function.
• They iterate over the resulting key-value pairs. For each KV pair:
• The key is hashed using the ihash function in src/lib.rs .
• A reduce bucket is selected by computing ihash(key) % n_reduce .
• The KV pair is written into the corresponding buffer using codec::LengthDelimitedWriter . The key is sent first, then the value.
• The worker saves all the reduce buffers in memory for later.
4 Workers 1, 2, and 3 finish their map tasks and notify the coordinator. However, immediately after
notifying the coordinator, worker 3 fails (crashes).
5 The coordinator assigns the final map task (task 3) to worker 1. Worker 2 sits idle, since there are no available tasks. It periodically polls the coordinator to see if new tasks are available.
6 After some time, the coordinator realizes it hasn’t received a heartbeat from worker 3 and assumes that worker 3 has crashed. Since worker 3 ran map task 2 and its results were stored only in worker 3’s memory, those results have been lost, so the coordinator notes that map task 2 will need to be re-executed.
7 Worker 2 polls the coordinator for tasks, and is assigned map task 2.
8 Workers 1 and 2 finish their map tasks and notify the coordinator. All map tasks are now complete.
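The per-pair bucketing in step 3 can be sketched as follows. The real assignment is in Rust ( ihash in src/lib.rs, codec::LengthDelimitedWriter ); this Python sketch only illustrates the partitioning logic, with a stand-in hash function and illustrative names:

```python
import hashlib

def ihash(key: str) -> int:
    # Hypothetical stand-in for the assignment's ihash; any deterministic
    # hash gives the same routing property.
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

def map_wc(filename: str, contents: str, args):
    # Word-count map function: emit (word, 1) for every word; wc ignores args.
    return [(word, 1) for word in contents.split()]

def run_map_task(filename: str, contents: str, n_reduce: int, args):
    # One in-memory buffer ("bucket") per reduce task.
    buckets = [[] for _ in range(n_reduce)]
    for key, value in map_wc(filename, contents, args):
        buckets[ihash(key) % n_reduce].append((key, value))
    return buckets  # held in memory until reduce tasks fetch them

buckets = run_map_task("data/gutenberg/p.txt", "a b a c", n_reduce=2, args=[1, 2, 3])
```

The important property is that every occurrence of the same key lands in the same bucket index on every worker, so each reduce task sees all values for its keys.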
Reduce task execution
1 Worker 1 polls the coordinator and is assigned reduce task 0. Immediately after this, worker 2 crashes.
2 Worker 1 begins executing reduce task 0. It reads bucket 0 of map task 0 and map task 3 from its own in-memory buckets. It then tries to contact worker 2 to read bucket 0 of map task 1 and map task 2, but fails to connect. Worker 1 notifies the coordinator that it cannot complete the reduce task.
3 A new worker joins the cluster and is assigned ID 4.
4 Workers 1 and 4 continually poll the coordinator for tasks. Depending on your implementation, you may have the coordinator tell the workers to idle, to retry the failed reduce task, or to re-execute the necessary map tasks. We’ll assume that your coordinator is not particularly sophisticated, and just tells them to retry the failed task.
5 After some time passes without a heartbeat from worker 2, the coordinator will realize that worker 2 has crashed. Since worker 2 executed map tasks 1 and 2, the coordinator will schedule map tasks 1 and 2 for re-execution.
6 The next time worker 1 polls the coordinator, it is told to execute map task 1. Similarly, worker 4 is told to execute map task 2.
7 The map tasks complete successfully, and the coordinator is notified. All map tasks are now done, and reduce tasks become eligible for scheduling again.
8 The next time worker 1 polls the coordinator, it is assigned reduce task 0. Worker 4 receives reduce task 1.
9 Worker 1 executes reduce task 0, reading bucket 0 of map tasks 0, 1, and 3 from memory. It reads bucket 0 of map task 2 via an RPC to worker 4.
10 Worker 1 concatenates all the key-value pairs it obtains, and then sorts the pairs by key.
11 For each run of key-value pairs corresponding to the same key K, worker 1 does the following:
• Calls the word count reduce function with key K , the list of values corresponding to K , and auxiliary args [1, 2, 3] . The reduce function returns a single value V .
• Writes (K, V) to the output file /tmp/hw-map-reduce/gutenberg/mr-out-0 .
12 Similarly, worker 4 executes reduce task 1, reading bucket 1 of map tasks 0, 1, and 3 from worker 1 via RPC and bucket 1 of map task 2 from memory.
13 The workers notify the coordinator that they completed the reduce tasks.
14 The coordinator notes that all tasks for job 1 have been completed.
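The reduce-side concatenate / sort / group flow in steps 9 through 11 can be sketched as below. Function names are illustrative stand-ins, not the assignment's Rust API:

```python
from itertools import groupby

def reduce_wc(key, values, args):
    # Word-count reduce: sum the 1s emitted by map; wc ignores args.
    return sum(values)

def run_reduce_task(fetched_buckets, args):
    # `fetched_buckets` holds this reduce task's bucket from every map task
    # (some read locally, some fetched over RPC).
    pairs = [kv for bucket in fetched_buckets for kv in bucket]  # concatenate
    pairs.sort(key=lambda kv: kv[0])                             # sort by key
    output = []
    for key, run in groupby(pairs, key=lambda kv: kv[0]):
        values = [v for _, v in run]
        output.append((key, reduce_wc(key, values, args)))
    return output  # written to mr-out-<reduce task number> in the real system

result = run_reduce_task([[("b", 1), ("a", 1)], [("a", 1)]], args=[1, 2, 3])
# result == [("a", 2), ("b", 1)]
```

Note that groupby only groups consecutive equal keys, which is why the sort must come first.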
Job completion
1 On the next PollJob RPC issued by the MapReduce client, the coordinator notifies the client that the job was completed.
2 The client runs some post-processing on the MapReduce output files to convert them to a human-readable format. Our autograder will inspect this final output. Your output files must be named mr-out-i , where 0 <= i < n_reduce is the reduce task number.

Copyright © 2022 CS 162 staff.


Integrated Electronics & Design
nMOS logic IC design
https://www.xjtlu.edu.cn/en/departments/academic-departments/electrical-and-electronic-engineering/staff/chun-zhao


⚫ NMOS logic (examples)
⚫ Calculation
⚫ Design Exercise

Transistor in Linear Mode
Assuming VDS < VGS − VT (channel not pinched off):
ID = b0 (W/L) [(VGS − VT)VDS − VDS²/2]
where b0 = μn·Cox
R = VDS / ID

Transistor in Saturation Mode
Assuming VDS > VGS − VT (pinch-off):
ID = (b0/2) (W/L) (VGS − VT)²

NMOS Logic (Inverter): Example 1
Calculate W/L with the following specification:
1) RL = 5 kΩ. 2) b = b0(W/L), and b0 = 1.8×10⁻⁴ A V⁻². 3) VT = 0.3 V. 4) VDD = 5 V.
The aspect ratio, W/L, is ??
Layout

NMOS Logic (Inverter): Example 1 (solution)
Calculate W/L with the following specification:
1) RL = 5 kΩ. 2) b = b0(W/L), and b0 = 1.8×10⁻⁴ A V⁻². 3) VT = 0.3 V. 4) VDD = 5 V.
If Vin = VDD, let VOut = 0.1 V << VT.
Potential divider: RD/(RD+RL) = VOut/VDD = 0.1/5 = 0.02 → RD ≈ 100 Ω
ID = b[(VG − VT)VD − VD²/2] ≈ b[(VG − VT)VD]
RD = VOut/ID = {b[(VDD − VT)]}⁻¹ = 1/[b(5 − 0.3)] = 100 → b ≈ 20×10⁻⁴
Therefore, the aspect ratio, W/L, is 12.

NMOS Logic (Inverter): Example 2
Calculate W/L of Load with the following specification:
1) The aspect ratio of D is 12 (W = 24, L = 2). 2) b = b0(W/L), and b0 = 1.8×10⁻⁴ A V⁻². 3) VT = 0.3 V. 4) VDD = 5 V.
If Vin = VDD, let VOut = 0.1 V:
ID = bD[(Vin − VT)VOut − VOut²/2]
RD = VOut/ID = (12b0[(VDD − VT)])⁻¹ = 100 Ω
[RD/(RD+RL)] = VOut/VDD = 0.1/5 = 0.02
Load in saturation: ID = bL(VDD − VT)²/2
RL = (VDD − VOut)/ID = 4.9×2/[bL(5 − 0.3)²] = 5 kΩ
→ bL = 8.9×10⁻⁵ → aspect ratio of Load, W/L = 0.5

NMOS Logic (NOR): Example 3
Calculate W/L of Load with the following specification:
1) The aspect ratio of D is 12. 2) b = b0(W/L), and b0 = 1.8×10⁻⁴ A V⁻². 3) VT = 0.3 V. 4) VDD = 5 V.
Solution: let VOut = 0.1 V:
ID = bD[(VinA − VT)VOut − VOut²/2]
RD = VOut/ID = (bD[(VDD − VT)])⁻¹ = 100 Ω
[0.5RD/(0.5RD+RL)] = VOut/VDD = 0.1/5 = 0.02 → RL = 2.5 kΩ
Load in saturation: ID = bL(VDD − VT)²/2
RL = (VDD − VOut)/ID = 4.9×2/[bL(5 − 0.3)²] = 2.5 kΩ
→ bL = 1.8×10⁻⁴ → aspect ratio of Load = 1.
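As a quick sanity check on the inverter sizing arithmetic in Examples 1 and 2, the same numbers can be reproduced in a few lines. The small-RD approximation (RD ≈ 0.02·RL) is the one the slides use; variable names are ours:

```python
beta0 = 1.8e-4   # A/V^2
VT, VDD = 0.3, 5.0
RL, Vout = 5e3, 0.1

# Example 1: driver W/L from the potential divider.
RD = (Vout / VDD) * RL                 # 0.02 * 5k = 100 ohm (approximation)
beta = 1.0 / (RD * (VDD - VT))         # linear region, VDS^2/2 term neglected
WL_driver = beta / beta0               # ~12

# Example 2: load W/L from the saturated-load current.
ID = (VDD - Vout) / RL                 # 4.9 V dropped across the 5k load
beta_L = 2.0 * ID / (VDD - VT) ** 2    # from ID = beta_L (VDD - VT)^2 / 2
WL_load = beta_L / beta0               # ~0.5
```

Rounding beta ≈ 20×10⁻⁴ and bL ≈ 8.9×10⁻⁵ recovers the aspect ratios 12 and 0.5 quoted on the slides.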
Example: Design Exercise 1
➢ It consists of an n-channel NOR gate feeding an inverter.
➢ The transistors A and B are termed the driver MOSFETs.
➢ The process parameters are defined:
➢ b0 = 1.8×10⁻⁴ A V⁻²
➢ VT = 0.3 V
➢ VDD = 5 V
➢ RS = 100 Ω/sq
W/L = 12/1

Example: Design Exercise 2
Layout design of the nMOS IC shown in Fig. 1 (same circuit and process parameters as Design Exercise 1).

NMOS Logic (NOR): Example 4
Calculate W/L with the following specification:
1) RL = 5 kΩ. 2) b = b0(W/L), and b0 = 1.8×10⁻⁴ A V⁻². 3) VT = 0.3 V. 4) VDD = 5 V.
If VA = VB = VDD, let VOut = 0.1 V << VT.
Potential divider: 0.5RD/(0.5RD+RL) = VOut/VDD = 0.1/5 = 0.02 → RD ≈ 200 Ω
ID = b[(VG − VT)VD − VD²/2] ≈ b[(VG − VT)VD]
RD = VOut/ID = {b[(VDD − VT)]}⁻¹ = 1/[b(5 − 0.3)] = 200 → b ≈ 10×10⁻⁴
Therefore, the aspect ratio, W/L, is 6.

NMOS Logic (Inverter): Example 5
Calculate W/L of Load with the following specification:
1) The aspect ratio of D is 6. 2) b = b0(W/L), and b0 = 1.8×10⁻⁴ A V⁻². 3) VT = 0.3 V. 4) VDD = 5 V.
If Vin = VDD, let VOut = 0.1 V:
ID = bD[(Vin − VT)VOut − VOut²/2]
RD = VOut/ID = (6b0[(VDD − VT)])⁻¹ = 200 Ω
[RD/(RD+RL)] = VOut/VDD = 0.1/5 = 0.02 → RL ≈ 10 kΩ
Load in saturation: ID = bL(VDD − VT)²/2
RL = (VDD − VOut)/ID = 4.9×2/[bL(5 − 0.3)²] = 10 kΩ
→ bL = 4.4×10⁻⁵ → aspect ratio of load = 0.25
The process parameters are defined: b0 = 1.8×10⁻⁴ A V⁻², VT = 0.3 V, VDD = 5 V, RS = 100 Ω/sq. Load geometry: W = 4, L = 16.

⚫ NMOS logic (examples)
⚫ Calculation
⚫ Design Exercise

NMOS Logic (Inverter): Example 1, layout of the 5 kΩ load resistor
For small LR/WR (e.g. LR/WR = 9) and for large LR/WR (e.g. LR/WR = 20), squares are used to calculate the length, L. Every square must disappear when drawing your layout.
Active mask for the n+ region.
With RL = 5 kΩ and RS = 100 Ω/sq: RL = RS(LR/WR) → LR/WR = 50.

NMOS Logic (Inverter): Example 2, layout labels (from the layout figure):
VDD line (Aluminium); load device via to VDD; polysilicon gate of load MOSFET with a via to Aluminium and VDD; drain of driver with a via to output; source via to load MOSFET; polysilicon gate as input to circuit; source of driver with a via; ground line (Aluminium). The layout is drawn as four masks (Masks 1 to 4).

NMOS Logic (NOR): Example 3 (layout figure; labels not recoverable).

Example: Design Exercise 2
Layout design of the nMOS IC shown in Fig. 1 (NOR gate feeding an inverter, as above).
⚫ Design rules:
⚫ The driver transistors should have channel length L equal to the minimum feature size m.
The width of the drivers, W, must always be a whole number (n) of minimum feature sizes (n·m), and the overall value of W must be chosen to give the required output voltage. This must be significantly less than the threshold voltage of the third gate C if this transistor is to stay off.
⚫ The layouts must take account of the alignment accuracy a.
W/L = 6

Example: Design Exercise 2
1) The design involves producing the patterns corresponding to each of the stages of the process already discussed.
2) Each of the patterns should be drawn on graph paper with a stipulated scale (e.g. 1 μm per cm).
3) The patterns would be transferred at a later stage to glass masks, as opaque regions.
There are 4 masks:
M1. define the device area (active)
M2. define the gate stripe (poly)
M3. define the contacts (contact)
M4. define the metal pattern (metal)

Example: Design Exercise 1
It consists of an n-channel NOR gate feeding an inverter. The transistors A and B are termed the driver MOSFETs. The process parameters are defined: b0 = 1.8×10⁻⁴ A V⁻², VT = 0.3 V, VDD = 5 V, RS = 100 Ω/sq, W/L = 12/1.
HINTS: Liverpool notes.


Academic Skills Unit

Strategies in writing


Academic Skills
TIP: Use a process to produce work
1. Determine the task type
What type of writing is it? What is the structure? What am I expected to do?
2. Carefully analyse the task
Highlight key information
Lit Review
TIP: Make sure this highlighted language appears in your response

The goal: critically assess the effectiveness of various Machine Learning classification algorithms on the problem of determining a tweeter’s location, and to express the knowledge that you have gained in a technical report.
Anonymised report of 2000 words in length (+/-10%), including in-text references.
Introduction: short problem description and data set, and the research question
Literature review: a short summary of some related (relevant) literature, including the data set reference and at least two additional relevant research papers of your choice
Method: identify the newly engineered feature(s), and the rationale behind including them
Results: in terms of evaluation metric(s) and examples
Discussion / Critical Analysis: 2 areas – contextualise the system’s behaviour, i.e. reasons for the relative performance of different methods, & discuss any ethical issues
Conclusion: demonstrate identified knowledge
Bibliography: incl. Blodgett et al. (2016) + other related work (min. 2) – APA 7 recommended
What do others say?
How did I do this? Why?
What did I find?
What does it mean?
What do I now know?

Strategies in writing
Academic Skills
3. Plan / organise ideas
Based on analysis, generate a ‘sectioned plan’ on your computer
Allocate word counts
TIP: method, results, discussion are substantive
4. Research
Find info and read it
As you read, put bullet points in your plan
– Definition of blah (Williams p44) –
– Good idea (Long p15) –
– Great thought (Ng p1) –
– Possible idea (Ng p3) –
TIP: Research using https://unimelb.libguides.com/

Strategies in writing
Academic Skills
5. Draft – start writing
Start anywhere in the body
Start where you feel confident
TIP: write Intro / Conclusion last (5% of words)
6. Finalise then submit
Do final check on hard copy
Read out loud
– Definition of Blah (Williams p44) –
– Good idea (Long p15) –
– Great thought (Ng p1) –
– Possible idea (Ng p3) –
Conclusion

Academic Skills

Two key elements to critical literacy
1. A reporting / describing element: The ‘catalogue’
You find information about the topic and report, describe what it says, what they did, found, claim etc.
This frames the second part: important but don’t stop here.
2. An interpretive / response element: The ‘dialogue’
You interpret and respond to what you have read – critically engage with the readings and the topic

Cohesion – linking
2 types of language in writing:
Content – from reading and learning, e.g. tweet, machine, baseline, Functional – cohesive-linking-highlighting
Therefore, however, first, next, for example, though, and, which
This is important because … This shows that… This tells us …
https://m.eliteediting.com.au/50-linking-words-to-use-in-academic-writing/ http://www.phrasebank.manchester.ac.uk/

Interpretive language
This shows* that … (*suggests / implies / gives the impression that …) This is important / significant because…
This is worth noting as / because it …
This calls attention to …
This can be illustrated by …
What this means* is … (*shows / tells us / reveals / highlights / points to / implies) … tells us that …
… importantly* suggests that … (*crucially, significantly)
… which points to / suggests the need for …
… which is vital / crucial as it …
… which shows / illustrates that …
… which is significant as it …
… is illustrative because it …
… meaning that …
… illustrating / pointing to the need for …
In doing so, it points to … / In so doing, tells us that …
Use this language! It moves your thinking from the descriptive to the interpretive

Report writing Tips
Be on task – show you are doing so with key language
This paper explores / examines / identifies… (intent / position) – TIP: frame this in present simple
This suggested that… (discussion / analysis)
Having carried out … we conclude / find that … (concluding)
Use numbered headings
Be aware of the functionality of the report sections
Be interpretive and analytical, not just descriptive, of both text and data
Clear graphics – simple is best; refer to the graphics

Report writing Tips
Edit before submitting
Formal language: full forms (didn’t vs did not); avoid emotive language
‘frustrating’, ‘disappointing’, ‘obvious’, ‘good’, ‘bad’
Be aware of tense and time: e.g. ‘… a second experiment is performed’ vs ‘… a second experiment was performed.’
Tense use TIPs:
Past tense for finished action; did, found, discovered, proved, showed
Present simple for fact, current observation or current feeling; ‘… it refers to … it signals that … we can see that … it shows…
Lee (2010) proposes that this is …

Cohesion – consider link & transition between ideas
Short, join:
A majority class baseline was used for this experiment. It is based on the ‘Zero Rule’. This rule classifies all tweets according to labels with the greatest training set ratio.
A majority class baseline was used for this experiment which is based on the ‘Zero Rule’ classifying all tweets according to labels with the greatest training set ratio. (28w) OR
The majority class baseline used for this experiment was based on the ‘Zero Rule’ classifying all tweets according to labels with the greatest training set ratio. (26w)
Long; cut:
By analysing the training set, it was somewhat surprising to find a number of feature values equal to 0, especially as many samples have 0-value for all their attributes, which means that none of the feature terms ever occurs in them. (41w)
Analysing the training set, we found a somewhat surprising number of feature values equal to 0, especially as many samples have 0-value for all their attributes. This means that none of the feature terms ever occurs in them. (26w & 12w)

The impact of sentence length
If the sentences are short and related, then join them
If the sentence is +40 words (not a list), consider breaking it up

Key sequence points
Think about how information is ordered within a paragraph.
How does each sentence link to the one before it and after it – how are you showing this? Is the sequence logical?
Same for paragraphs – how does the paragraph relate to the next one?

Make time to edit
Editing attitude: task focus – Do I/we need it? Is it relevant? No? Get rid of it.
Big edits: possible removal of whole sections, paragraphs or sentences
Small edits: word, sentence, text level
Compression editing:
… which enabled us to make clear recommendations (7 words) vs … which enabled clear recommendations … (4 words) OR
… enabling clear recommendations (3 words)
Edit when fresh – get distance from the paper
Final edit on hard copy
Read aloud – you or MS Word’s Read Aloud function
Reflexive repetition: removal of unnecessary repetition
This was evidence of the program’s three features. These features address … vs This evidenced the program’s three features which address …

APA In text – choices
Academic Skills
“Direct quotations” – Author’s exact words – do not overuse
Cheng et al. (2010) proposed a framework to estimate user location, “based purely on the content of the user’s tweets, even in the absence of any other geospatial cues” (p. 759).
Paraphrasing (Indirect quoting) or summarising
– present the idea in your words – still need to cite
TIP: More of this
A framework was proposed whereby location of the user was estimated solely on tweet content even where geospatial data is not present (Cheng et al., 2010).
Author vs idea focus in citations
Citation is a writing skill; making a Reference list is a technical skill

Always need the year with the author(s) in text, not the initials (that’s for the Ref List)
Avoid starting or finishing a paragraph with a citation – try to start and finish on your words.
Et al. (three or more authors, shortened from the first citation):
– always takes a full stop, in or out of the brackets
– always a plural; it means ‘and others’, e.g. Letts et al. (2015) argue that … (not ‘argues’)
– don’t possess et al.:
Cheng et al.’s (2010) paper proposed … VS … the paper by Cheng et al. (2010) proposed …
Always comment on direct quotes, e.g.
… part of the data set” (Lee, 2017, p. 5). The implications of this are …

Stage II task
Write 200-400 words total per review, responding to three ’questions’:
• Briefly summarise what the author has done in one paragraph (50-100 words)
• Indicate what you think that the author has done well, and why in one paragraph
(100-200 words)
The strengths of the writing to me are … What is clear about the paper is …
You have … and this is evident in the way you … because …
• Indicate what you think could have been improved, and why in one paragraph (50- 100 words)
The writing could improve in the following areas … You could … It needs more … You could try to …Think about having more of … / less of…
What could we look for?
Message clarity / language / content / citation & Ref List / link / flow / accuracy / headings / relevance / critique / interpretation / use of data / figures / charts

Semester planners: Any library or Stop 1 or https://students.unimelb.edu.au/academic-skills/explore-our-resources/time-management/semester-planner
Finalise Stage 1
Read Write
Submit Stage 1
Submit Stage 2
Finalise Stage 2

Writing resources
Developing a research question


COSC1076 Assignment 1 Marking Rubric (Semester 1 2022)
Grade bands: HD (100%), DI (80%), CR (65%), PA (55%), PA- (30%), NN (0%)


Tests – 4 marks (15%)
HD: Comprehensive Tests for Milestones 2 & 3 & 4, covering the majority of common use cases, and most edge cases. At least four test cases.
DI: Sufficient Tests for Milestones 2 & 3, covering the majority of common use cases. At least four test cases.
CR: Comprehensive Tests for Milestone 2, covering the majority of common use cases, and most edge cases. At least four test cases.
PA: Sufficient Tests for Milestone 2, covering the majority of common use cases.
PA-: Poor (or missing) Unit tests for any part of the assignment, with poor coverage.
NN: Not Completed.
Software Implementation – 18 marks (60%)
HD: Complete and error-free implementation of Milestones 2, 3 & 4. Complies with the Milestone 2 & 4 mandatory requirements and restrictions.
DI: Mostly complete and mostly error-free implementation of Milestones 2 & 3. Complies with the Milestone 2 mandatory requirements and restrictions.
CR: Complete and error-free implementation of Milestone 2. Complies with the Milestone 2 mandatory requirements and restrictions.
PA: Mostly complete and mostly error-free implementation of Milestone 2. Complies with the Milestone 2 mandatory requirements and restrictions.
PA-: Incomplete or error-ridden implementation of Milestone 2. Failure to comply with the Milestone 2 mandatory requirements and restrictions.
NN: Not Completed.
Code Style, Documentation & Code Description – 8 marks (25%)
HD: Exceptional and clear software design. Exceptional coding style and suitably documented code; no input from the developers is required to comprehend the code. Excellent use of permitted C++11/14 language features. Exceptional code description for all attempted Milestones 2, 3 & 4, with good analysis.
DI: Good and clear software design. Good and clear coding style and suitably documented code. Excellent use of permitted C++11/14 language features. Good code description for all attempted Milestones, with some analysis.
CR: Suitable consideration given to quality software design. Suitable coding style and suitably documented code; code is readable, but parts of the software may not be clear. Excellent use of permitted C++11/14 language features. Suitable code description for all attempted Milestones, with no analysis.
PA: Some consideration given to quality software design. Suitable coding style and suitably documented code, but code is confusing to comprehend without further explanation. Sufficient use of permitted C++11/14 language features. Code description is provided, and is a minimal outline of the code.
PA-: Poor and messy software design. Poor and messy coding style with minimal documented code. Poor use of permitted C++11/14 language features. No code description.
NN: Not Completed. No code description.



School of Computing and Information Systems The University of Melbourne
COMP90049 Introduction to Machine Learning (Semester 1, 2022)
Week 5: Sample Solutions
1. How is holdout evaluation different to cross-validation evaluation? What are some reasons we would prefer one strategy over the other?


In a holdout evaluation strategy, we partition the data into a training set and a test set: we build the model on the former and evaluate on the latter.
In a cross-validation evaluation strategy, we do the same as above, but a number of times, where each iteration uses one partition of the data as a test set and the rest as a training set (and the partition is different each time).
Why do we prefer cross-validation over a holdout strategy? Holdout is subject to some random variation, depending on which instances are assigned to the training data and which are assigned to the test data. Any instance that forms part of the model is excluded from testing, and vice versa. This could mean that our estimate of the performance of the model is way off, or changes a lot from data set to data set.
Cross-validation mostly solves this problem: we average over a number of values, so that one weird partition of the data won’t throw our estimate of performance completely off; also, each instance is used for testing, but also appears in the training data for the models built on the other partitions. It usually takes much longer to cross-validate, however, because we need to train a model for every test partition.
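The two splitting strategies can be sketched as index partitions. This is an illustrative sketch (the names are ours, not from the tutorial), with no shuffling or stratification shown:

```python
def holdout_split(n, test_fraction=0.2):
    """Single train/test partition over n instances."""
    cut = int(n * (1 - test_fraction))
    return list(range(cut)), list(range(cut, n))

def kfold_splits(n, k):
    """Yield (train, test) index lists for each of the k folds."""
    # Distribute the remainder so fold sizes differ by at most one.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

folds = list(kfold_splits(10, 5))
```

Note the property discussed above: across the k folds, every instance appears exactly once as a test instance, and in the training data of every other fold.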
2. A confusion matrix is a summary of the performance of a (supervised) classifier over a set of development (“test”) data, by counting the various instances:

                    Actual
                 a    b    c    d
Classified  a   10    2    3    1
            b    2    5    3    1
            c    1    3    7    1
            d    3    0    3    5
(i). Calculate the classification accuracy of the system. Find the error rate for the system.
In this context, Accuracy is defined as the fraction of correctly identified instances, out of all of the instances. In the case of a confusion matrix, the correct instances are the ones enumerated along the main diagonal (classified as a and actually a etc.):
Accuracy = (# of correctly identified instances) / (total # of instances)
= (10 + 5 + 7 + 5) / (10 + 2 + 3 + 1 + 2 + 5 + 3 + 1 + 1 + 3 + 7 + 1 + 3 + 0 + 3 + 5)
= 27/50 = 54%
Error rate is just the complement of accuracy:

Error Rate = (# of incorrectly identified instances) / (total # of instances) = 1 − Accuracy = 1 − 54% = 46%
(ii). Calculate the precision, recall and F-score (where β = 1), for class d.
Precision for a given class is defined as the fraction of correctly identified instances of that class, from the times that class was attempted to be classified. We are interested in the true positives (TP) where we attempted to classify an item as an instance of said class (in this case, d) and it was actually of that class (d): in this case, there are 5 such instances. The false positives (FP) are those items that we attempted to classify as being of class d, but they were actually of some other class: there are 3 + 0 + 3 = 6 of those.
Precision = TP / (TP + FP) = 5 / (5 + 3 + 0 + 3) = 5/11 ≈ 45%
Recall for a given class is defined as the fraction of correctly identified instance of that class, from the times that class actually occurred. This time, we are interested in the true positives, and the false negatives (FN): those items that were actually of class d, but we classified as being of some other class; there are 1 + 1 + 1 = 3 of those.
Recall = TP / (TP + FN) = 5 / (5 + 1 + 1 + 1) = 5/8 ≈ 62%
F-score is a measure which attempts to combine Precision (P) and Recall (R) into a single score. In general, it is calculated as:
F_β = (1 + β²) · P · R / (β² · P + R)
By far, the most typical formulation is where the parameter 𝛽 is set to 1: this means that Precision and Recall are equally important to the score, and that the score is a harmonic mean.
In this case, we have calculated the Precision of class d to be 0.45 and the Recall to be 0.62. The F-score where (𝛽 = 1) of class d is then:
F₁ = 2 · P · R / (P + R) = 2 × (5/11) × (5/8) / ((5/11) + (5/8)) ≈ 53%
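The metrics above can be recomputed mechanically from the confusion matrix (rows = classified label, columns = actual label, class order a, b, c, d), which is a useful check on the hand arithmetic:

```python
M = [
    [10, 2, 3, 1],  # classified as a
    [ 2, 5, 3, 1],  # classified as b
    [ 1, 3, 7, 1],  # classified as c
    [ 3, 0, 3, 5],  # classified as d
]
n = sum(sum(row) for row in M)
accuracy = sum(M[i][i] for i in range(4)) / n                # 27/50 = 0.54

d = 3  # index of class d
tp = M[d][d]
precision = tp / sum(M[d])                                   # TP / (TP + FP) = 5/11
recall = tp / sum(M[row][d] for row in range(4))             # TP / (TP + FN) = 5/8
f1 = 2 * precision * recall / (precision + recall)           # ~0.53
```

The same row/column sums generalise to per-class precision and recall for a, b and c, which is what the averaging methods in part (iii) combine.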
(iii). Why can’t we do this for the whole system? How can we consider the whole system?
The concept of precision and recall is defined per-class on the bases of one (interesting) class vs the rest of (not interesting) classes.
Since this system, similar to all other multiclass classifiers, considers all classes (a, b, c and d) as interesting, we need to calculate the precision and recall per each class (vs the rest) and then find the average for the whole system.
As covered in the lectures, there are multiple methods for calculating the average precision and recall for the whole system. We can use methods like Macro Averaging, Micro Averaging or Weighted Averaging.
Our choice of method depends on our goal and the domain of the system. In cases where we want to emphasise the system's behaviour on small classes, Macro averaging can be a better choice; whereas in situations where we want to evaluate the system mostly on its behaviour in detecting the large classes, Micro averaging would be a better option.
3. For the following dataset:
TRAINING INSTANCES
ID  Outl  Temp  Humi  Wind  PLAY
A   s     h     h     F     N
B   s     h     h     T     N
C   o     h     h     F     Y
D   r     m     h     F     Y
E   r     c     n     F     Y
F   r     c     n     T     N

TEST INSTANCES
G   o     c     n     T     ?
(i). Classify the test instances using the method of 0-R.
0-R is the quintessential baseline classifier: we throw away all of the attributes other than the class labels, and just predict each test instance according to whichever label is most common in the training data. (Hence, it is also commonly called the “majority class classifier”.)
In this case, the two labels Y and N are equally common in the training data — so we are required to apply a tiebreaker. Remember that we’re simply choosing one label to be representative of the entire collection, and both labels seem equally good for that here: so, let’s say N.
Consequently, both test instances are classified as N.
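The 0-R procedure can be sketched as follows; the tie-breaking rule is the one chosen in the text (prefer N), and is otherwise arbitrary:

```python
from collections import Counter

def zero_r(train_labels, tie_break=None):
    """0-R: predict the majority class; break ties with tie_break if given,
    otherwise fall back to the alphabetically first majority label."""
    counts = Counter(train_labels)
    top = max(counts.values())
    winners = sorted(c for c, n in counts.items() if n == top)
    if tie_break is not None and tie_break in winners:
        return tie_break
    return winners[0]

train = ["N", "N", "Y", "Y", "Y", "N"]  # PLAY labels for instances A-F
label = zero_r(train, tie_break="N")
print(label)  # N -- every test instance gets this label
```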
(ii). Classify the test instances using the method of 1-R.
1-R is a slightly better baseline, which requires us to choose a single attribute to represent the entire decision-making process. For example, if Outl is our preferred attribute, then we base the classification of each test instance solely on its value of Outl. (This is sometimes called a “Decision Stump”.)
Given our preferred attribute, we will label a test instance according to whichever label in the training data was most common, for training instances with the corresponding attribute value.
How do we decide which attribute to choose? Well, the most common method is simply by counting the errors made on the training instances.
Let’s say we chose Outl: first, we need to observe the predictions for the 3 values (s, o, and r), and then we will count up the errors that those predictions will make on the training data (it turns out that we can do this simultaneously):
• When Outl is s, there are two such instances in the training data. Both of these instances are labelled as N: our label for this attribute value will be N. We will therefore predict the label of both of these training instances correctly (we will predict N, and both of these instances are actually N).
• When Outl is o, there is just a single instance in the training data, labelled as Y: our label for this attribute value will be Y. We will therefore predict the label of this training instance correctly.

• When Outl is r, there are three such instances in the training data. Two of these instances are labelled as Y, the other is N: our label for this attribute value will be Y (as there are more Y instances than N instances — you can see that we’re applying the method of 0-R here). We will therefore make one error: the instance which was actually N.
In total, that is 1 error for Outl. Hopefully, you can see that this is a very wordy explanation of a very simple idea. (We will go through the other attributes more quickly.)
• When Temp is h, there are two N instances and one Y: we will make one mistake.
• When Temp is m, there is one Y instance: we won’t make any mistakes.
• When Temp is c, there is one N instance and one Y instance: we will make one mistake.
This is 2 errors in total for Temp, which means that it’s worse than Outl.
We’ll leave the other attributes as an exercise, and assume that Outl was the best attribute (it’s actually tied with Wind): how do the test instances get classified?
• For test instance G, Outl is o: the 0-R classifier over the o instances from the training data gives Y, so we predict Y for this instance.
• For test instance H, Outl is s: the 0-R classifier over the s instances from the training data gives N, so we predict N for instance H.
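The whole 1-R procedure, including the error counting used to choose the attribute, can be sketched as:

```python
from collections import Counter, defaultdict

def one_r(rows, labels, attr_names):
    """1-R: pick the attribute whose per-value majority rule makes the
    fewest training errors; return (best_attr, rules, errors)."""
    best = None
    for i, name in enumerate(attr_names):
        # Count labels per attribute value for attribute i
        groups = defaultdict(Counter)
        for row, lab in zip(rows, labels):
            groups[row[i]][lab] += 1
        # Majority label per value; errors are the non-majority counts
        rules = {v: c.most_common(1)[0][0] for v, c in groups.items()}
        errors = sum(sum(c.values()) - max(c.values()) for c in groups.values())
        if best is None or errors < best[2]:
            best = (name, rules, errors)
    return best

train = [("s", "h", "h", "F"), ("s", "h", "h", "T"), ("o", "h", "h", "F"),
         ("r", "m", "h", "F"), ("r", "c", "n", "F"), ("r", "c", "n", "T")]
labels = ["N", "N", "Y", "Y", "Y", "N"]
attr, rules, errs = one_r(train, labels, ["Outl", "Temp", "Humi", "Wind"])
print(attr, errs)              # Outl 1
print(rules["o"], rules["s"])  # Y N  (predictions for G and H)
```

On ties between attributes (here Outl and Wind both make 1 error), this sketch keeps the first attribute seen.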
Given the above dataset, we wished to perform feature selection, where the class is PLAY:
(i). Which of Humi and Wind has the greatest Pointwise Mutual Information for the class Y? What about N?
To determine Pointwise Mutual Information (PMI), we compare the joint probability to the product of the prior probabilities as follows:
PMI(A, C) = log₂( P(A ∩ C) / (P(A) · P(C)) )
Note that this formulation only really makes sense for binary attributes and binary classes (which we have here.)
PMI(Humi = h, Play = Y) = log₂( P(Humi = h ∩ Play = Y) / (P(Humi = h) · P(Play = Y)) )
= log₂( (2/6) / ((4/6) × (3/6)) ) = log₂(1) = 0

PMI(Wind = T, Play = Y) = log₂( P(Wind = T ∩ Play = Y) / (P(Wind = T) · P(Play = Y)) )
= log₂( (0/6) / ((2/6) × (3/6)) ) = log₂(0) = −∞
So, we find that Wind=T is (perfectly) negatively correlated with PLAY=Y; whereas Humi=h is (perfectly) uncorrelated.
You should compare these results with the negative class PLAY=N, where Wind=T is positively correlated, but Humi=h is still uncorrelated.
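These PMI calculations can be sketched directly from the (value, label) pairs in the table above:

```python
import math

def pmi(pairs, a_val, c_val):
    """PMI of attribute value a_val with class c_val, log base 2.
    pairs is a list of (attribute_value, class_label) tuples."""
    n = len(pairs)
    p_joint = sum(1 for a, c in pairs if a == a_val and c == c_val) / n
    p_a = sum(1 for a, _ in pairs if a == a_val) / n
    p_c = sum(1 for _, c in pairs if c == c_val) / n
    if p_joint == 0:
        return float("-inf")  # log2(0)
    return math.log2(p_joint / (p_a * p_c))

# Attribute columns for instances A-F, paired with the PLAY labels
humi = list(zip("hhhhnn", "NNYYYN"))
wind = list(zip("FTFFFT", "NNYYYN"))
print(pmi(humi, "h", "Y"))  # 0.0
print(pmi(wind, "T", "Y"))  # -inf
print(pmi(wind, "T", "N"))  # 1.0 (positively correlated with N)
```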
(ii). Which of the attributes has the greatest Mutual Information for the class, as a whole?
A general form of the Mutual Information (MI) is as follows:
MI(X, C) = Σ(x ∈ X) Σ(c ∈ {Y, N}) P(x, c) · PMI(x, c)
Effectively, we’re going to consider the PMI of every possible attribute value–class combination, weighted by the proportion of instances that actually had that combination.
To handle cases like PMI(Wind) above, we are going to equate 0 log 0 ≡ 0 (which is true in the limit anyway).
For Outl, this is going to look like:
MI(Outl) = P(s, Y) PMI(s, Y) + P(o, Y) PMI(o, Y) + P(r, Y) PMI(r, Y) +
P(s, N) PMI(s, N) + P(o, N) PMI(o, N) + P(r, N) PMI(r, N)
= (0/6) log₂ 0 + (1/6) log₂( (1/6) / ((1/6)(3/6)) ) + (2/6) log₂( (2/6) / ((3/6)(3/6)) ) +
(2/6) log₂( (2/6) / ((2/6)(3/6)) ) + (0/6) log₂ 0 + (1/6) log₂( (1/6) / ((3/6)(3/6)) )
≈ 0 + (0.1667)(1) + (0.3333)(0.4150) + (0.3333)(1) + 0 + (0.1667)(−0.5850)
≈ 0.541
It’s worth noting that while some individual terms can be negative, the sum must be at least zero. For Temp, this is going to look like:
MI(Temp) = P(h, Y) PMI(h, Y) + P(m, Y) PMI(m, Y) + P(c, Y) PMI(c, Y) + P(h, N) PMI(h, N) + P(m, N) PMI(m, N) + P(c, N) PMI(c, N)
= (1/6) log₂( (1/6) / ((3/6)(3/6)) ) + (1/6) log₂( (1/6) / ((1/6)(3/6)) ) + (1/6) log₂( (1/6) / ((2/6)(3/6)) ) +
(2/6) log₂( (2/6) / ((3/6)(3/6)) ) + (0/6) log₂ 0 + (1/6) log₂( (1/6) / ((2/6)(3/6)) )
≈ (0.1667)(−0.5850) + (0.1667)(1) + (0.1667)(0) + (0.3333)(0.4150) + 0 + (0.1667)(0)
≈ 0.208
We will leave the workings as an exercise, but the Mutual Information for Humi is 0, and for Wind is 0.459.
Consequently, Outl appears to be the best attribute (perhaps as we might expect), and Wind also seems quite good; whereas Temp is not very good, and Humi is completely unhelpful.
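The full comparison (including the Humi and Wind workings left as an exercise) can be checked mechanically:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """MI(X; C) = sum over (x, c) of P(x, c) * log2(P(x, c) / (P(x) P(c))),
    with 0 log 0 taken as 0 (zero-count pairs never appear in the Counter).
    pairs is a list of (attribute_value, class_label) tuples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    pc = Counter(c for _, c in pairs)
    mi = 0.0
    for (a, c), k in joint.items():
        p = k / n
        mi += p * math.log2(p / ((px[a] / n) * (pc[c] / n)))
    return mi

labels = "NNYYYN"  # PLAY labels for instances A-F
outl, temp = list(zip("ssorrr", labels)), list(zip("hhhmcc", labels))
humi, wind = list(zip("hhhhnn", labels)), list(zip("FTFFFT", labels))
for name, data in [("Outl", outl), ("Temp", temp), ("Humi", humi), ("Wind", wind)]:
    print(name, round(mutual_information(data), 3))
# Outl 0.541, Temp 0.208, Humi 0.0, Wind 0.459
```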

Lecture 20: Anomaly Detection
Introduction to Machine Learning Semester 1, 2022
© 2022 The University of Melbourne. Acknowledgement:


Lecture Outline
• Anomaly Detection
  • Definition
  • Importance
  • Structure
• Anomaly Detection Algorithms
  • Statistical
  • Proximity-based
  • Density-based
  • Clustering-based

What are Outliers/Anomalies?
• Anomaly: A data object that deviates significantly from the normal objects as if it were generated by a different mechanism
• Ex.: Unusual credit card purchase, sports: , …
• Anomalies are different from noise
• Anomalies are interesting:
• They violate the mechanism that generates the normal data
• translate to significant (often critical) real life entities
• Cyber intrusions
• Credit card fraud
• Noise is random error or variance in a measured variable
• Noise should be removed before anomaly detection

Importance of Anomaly Detection?
Ozone Depletion History
• In 1985 three researchers (Farman, Gardiner and Shanklin) were puzzled by data gathered by the British Antarctic Survey showing that ozone levels for Antarctica had dropped 10% below normal levels
• Why did the Nimbus 7 satellite, which had instruments aboard for recording ozone levels, not record similarly low ozone concentrations?
• The ozone concentrations recorded by the satellite were so low they were being treated as noise by a computer program and discarded!

Variants of Anomaly Detection Problem
• Variants of Anomaly/Outlier Detection Problems:
  • Given a database D, find all the data points x ∈ D with anomaly scores greater than some threshold t
  • Given a database D, find all the data points x ∈ D having the top-n largest anomaly scores f(x)
  • Given a database D, containing mostly normal (but unlabeled) data points, and a test point x, compute the anomaly score of x with respect to D

Structure of Anomalies
• Global/Point anomalies
• Contextual/Conditional anomalies
• Collective anomalies

Global/Point anomalies
• Global Anomaly (or point)
  – Object is O_g if it significantly deviates from the rest of the data set
  – Ex.: Intrusion detection in computer networks
  – Issue: Find an appropriate measurement of deviation

Contextual anomalies
• Contextual Anomaly (or conditional)
  • Object is O_c if it deviates significantly with respect to a selected context
  • Attributes of data objects should be divided into two groups:
    • Contextual attributes: define the context, e.g., time & location
    • Behavioral attributes: characteristics of the object, used in anomaly evaluation, e.g., temperature
  • Can be viewed as a generalization of local anomalies (whose density significantly deviates from its local area)
  • Issue: How to define or formulate a meaningful context?
* Song, et al, “Conditional Anomaly Detection”, IEEE Transactions on Data and Knowledge Engineering, 2006.

Example of Contextual Anomalies
Ex.: 10 °C in Paris: Is this an anomaly?

Collective anomalies
Collective Anomalies
• A subset of data objects that collectively deviate significantly from the whole data set, even if the individual data objects may not be anomalies
• E.g., intrusion detection: when a number of computers keep sending denial-of-service packets to each other
• Detection of collective anomalies:
  – Consider not only the behavior of individual objects, but also that of groups of objects
  – Requires background knowledge about the relationship among data objects, such as a distance or similarity measure on objects

Example of Collective anomalies
• Requires a relationship among data instances
• The individual instances within a collective anomaly are not anomalous by themselves
Examples: an anomalous subsequence in sequential data; anomalies in spatial data; anomalies in graph data.

Anomaly detection paradigms: supervised, semi-supervised, and unsupervised

Supervised Anomaly Detection
Supervised anomaly detection
• Labels available for both normal data and anomalies
• Samples examined by domain experts used for training & testing
• Challenges:
  – Requires labels from both the normal and anomaly classes
  – Imbalanced classes, i.e., anomalies are rare: boost the anomaly class and make up some artificial anomalies
  – Cannot detect unknown and emerging anomalies
  – Catch as many outliers as possible, i.e., recall is more important than accuracy (i.e., not mislabeling normal objects as outliers)

Semi-supervised Anomaly Detection
Semi-supervised anomaly detection
• Labels available only for normal data
• Model normal objects & report those not matching the model as anomalies
• Challenges:
  – Requires labels from the normal class
  – Possible high false alarm rate: previously unseen (yet legitimate) data records may be recognized as anomalies

Unsupervised Anomaly Detection I
Unsupervised anomaly detection
• Assume the normal objects are somewhat “clustered” into multiple groups, each having some distinct features
• An outlier is expected to be far away from any group of normal objects
General steps:
• Build a profile of “normal” behavior
  – summary statistics for the overall population
  – a model of the multivariate data distribution
• Use the “normal” profile to detect anomalies
  – anomalies are observations whose characteristics differ significantly from the normal profile

Unsupervised Anomaly Detection II
Unsupervised anomaly detection
Challenges:
• Normal objects may not share any strong patterns, but the collective outliers may share high similarity in a small area
  – Ex.: In some intrusion or virus detection, normal activities are diverse
• Unsupervised methods may have a high false positive rate and still miss many real outliers
Many clustering methods can be adapted for unsupervised methods:
• Find clusters, then outliers: points not belonging to any cluster
  – Problem 1: Hard to distinguish noise from outliers
  – Problem 2: Costly, since clustering comes first, yet there are far fewer outliers than normal objects

Unsupervised anomaly detection: Approaches
• Statistical (or: model-based)
• Assume that normal data follow some statistical model
• Proximity-based
• An object is an outlier if the nearest neighbors of the object are far away
• Density-based
• Outliers are objects in regions of low density
• Clustering-based
• Normal data belong to large and dense clusters

Statistical Anomaly detection

Statistical anomaly detection
Anomalies are objects that are fit poorly by a statistical model.
• Idea: learn a model fitting the given data set, and then identify the objects in low-probability regions of the model as anomalies
• Assumption: normal data is generated by a parametric distribution with parameter θ
  – The probability density function of the parametric distribution, f(x, θ), gives the probability that object x is generated by the distribution
  – The smaller this value, the more likely x is an outlier
• Challenges of statistical testing:
  – highly depends on whether the assumption of the statistical model holds for the real data

Visualizing the data
Graphical Approaches
• Boxplot (1-D), Scatter plot (2-D), Spin plot (3-D)
• Limitations: Time consuming, Subjective
Image: https://en.wikipedia.org/wiki/Box_plot#/media/File:Boxplot_with_outlier.png

Univariate data — General Approach
Avg. temp.: x = {24.0, 28.9, 28.9, 29.0, 29.1, 29.1, 29.2, 29.2, 29.3, 29.4}
• Use the maximum likelihood method to estimate μ and σ. For the above data x with n = 10: μ ≈ 28.61, σ ≈ 1.51.
• Decide on confidence limits, e.g., μ ± 3σ
• Then 24 is an outlier since: (24 − 28.61) / 1.51 = −3.04 < −3
Image: https://en.wikipedia.org/wiki/Standard_deviation#/media/File:Standard_deviation_diagram.svg

Multivariate Data
• Multivariate Gaussian distribution
  – Outlier defined by Mahalanobis distance
  – Grubb’s test on the distances
• Mahalanobis distance of a point x from the data mean x̄: Mahalanobis(x, x̄) = (x − x̄) S⁻¹ (x − x̄)ᵀ, where S is the covariance matrix of the data (for 2-dimensional data, a 2 × 2 matrix)

Likelihood approach
• Assume the dataset D contains samples from a mixture of two probability distributions:
  – M (majority distribution)
  – A (anomalous distribution)
• General approach:
  – Initially, assume all the data points belong to M
  – Let Lt(D) be the log likelihood of D at time t
  – For each point xt that belongs to M, move it to A; let Lt+1(D) be the new log likelihood
  – Compute the difference, Δ = Lt(D) − Lt+1(D)
  – If Δ > c (some threshold), then xt is declared as an anomaly and moved permanently from M to A
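The (squared) Mahalanobis distance can be sketched in pure Python for the 2-dimensional case, inverting the 2 × 2 covariance matrix by hand; the data values here are made up for illustration, and a real implementation would normally use numpy/scipy:

```python
def mean(v):
    return sum(v) / len(v)

def cov2(xs, ys):
    """Sample covariance matrix entries for 2-D data (divide by n - 1)."""
    mx, my = mean(xs), mean(ys)
    n = len(xs)
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return sxx, sxy, syy

def mahalanobis2(point, xs, ys):
    """Squared Mahalanobis distance of a 2-D point from the data's mean."""
    mx, my = mean(xs), mean(ys)
    sxx, sxy, syy = cov2(xs, ys)
    det = sxx * syy - sxy * sxy
    # Inverse of the 2x2 covariance matrix
    ixx, ixy, iyy = syy / det, -sxy / det, sxx / det
    dx, dy = point[0] - mx, point[1] - my
    return dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy

# Strongly correlated illustrative data: y roughly tracks x
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 1.9, 3.2, 3.8, 5.0]
print(mahalanobis2((3.0, 3.0), xs, ys))  # on the trend: small distance
print(mahalanobis2((1.0, 5.0), xs, ys))  # off the trend: large distance
```

The second point is close to the data in each coordinate separately, but far in Mahalanobis distance because it breaks the correlation structure captured by S.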

Likelihood approach
Data distribution: D = (1 − λ) M + λ A
• M is a probability distribution estimated from the data
• A is initially assumed to be a uniform distribution
• Likelihood at time t:
  Lt(D) = ∏(xi ∈ Mt) (1 − λ) P_Mt(xi) × ∏(xi ∈ At) λ P_At(xi)
  LLt(D) = |Mt| log(1 − λ) + Σ(xi ∈ Mt) log P_Mt(xi) + |At| log λ + Σ(xi ∈ At) log P_At(xi)
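The likelihood approach can be sketched as follows, under some illustrative assumptions: M is refit as a univariate Gaussian over its current members, A is uniform over (slightly more than) the data range, and a point is moved to A when doing so improves the log likelihood by more than a threshold c (the magnitude of the slide's Δ). The values λ = 0.05 and c = 1.0 are arbitrary choices for the sketch:

```python
import math

def gaussian_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma * sigma) - (x - mu) ** 2 / (2 * sigma * sigma)

def log_likelihood(M, A, lam, lo, hi):
    """LL(D) = |M| log(1-lam) + sum log P_M(x) + |A| (log lam + log P_A),
    with M modelled as a Gaussian fit to its members and A as Uniform(lo, hi)."""
    mu = sum(M) / len(M)
    sigma = max((sum((x - mu) ** 2 for x in M) / len(M)) ** 0.5, 1e-6)
    ll = len(M) * math.log(1 - lam) + sum(gaussian_logpdf(x, mu, sigma) for x in M)
    if A:
        ll += len(A) * (math.log(lam) - math.log(hi - lo))
    return ll

def likelihood_anomalies(data, lam=0.05, c=1.0):
    M, A = list(data), []
    lo, hi = min(data) - 1, max(data) + 1  # support of the uniform A
    for x in list(M):
        trial_M = [y for y in M if y != x]  # note: also drops duplicates of x
        gain = log_likelihood(trial_M, A + [x], lam, lo, hi) - log_likelihood(M, A, lam, lo, hi)
        if gain > c:  # moving x to A improves the fit substantially
            M, A = trial_M, A + [x]
    return A

data = [29.0, 29.1, 28.9, 29.2, 29.3, 24.0]
print(likelihood_anomalies(data))  # [24.0]
```

Removing 24.0 shrinks the Gaussian's variance dramatically, so the likelihood gain easily exceeds the threshold; removing any of the clustered points does not.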

Statistical Anomaly detection
• Statistical tests are well-understood and well-validated.
• Quantitative measure of degree to which an object is an outlier.
• Data may be hard to model parametrically:
  – multiple modes
  – variable density
• In high dimensions, data may be insufficient to estimate the true distribution.

Proximity-based Anomaly detection

Proximity-based Anomaly detection
Anomalies are objects far away from other objects.
• An object is an anomaly if the nearest neighbors of the object are far away, i.e., the proximity of the object significantly deviates from the proximity of most of the other objects in the same data set
• Common approach:
  – Outlier score is the distance to the kth nearest neighbor.
  – Score is sensitive to the choice of k.

Proximity-based anomaly detection

Proximity-based anomaly detection

Proximity-based outlier detection
– Easier to define a proximity measure for a data set than to determine its statistical distribution.
– Quantitative measure of degree to which an object is an outlier.
– Deals naturally with multiple modes.
– O(n²) complexity.
– Score is sensitive to the choice of k.
– Does not work well if the data has widely variable density.

Density-based Anomaly detection

Density-based outlier detection
Outliers are objects in regions of low density.
• Outlier score is the inverse of the density around a point
• Scores usually based on proximities
• Example scores:
  – # points within a fixed radius d
  – Reciprocal of the average distance to the k nearest neighbors:
    density(x, k) = ( (1/k) Σ(y ∈ N(x,k)) distance(x, y) )⁻¹
• Tend to work poorly if data has variable density.
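The reciprocal-of-average-distance score can be sketched as follows (1-D points for simplicity; the point sets are made up for illustration):

```python
def knn_density(points, x, k):
    """density(x, k): reciprocal of the average distance from x to its
    k nearest neighbours among the other points (1-D for simplicity)."""
    dists = sorted(abs(x - y) for y in points if y != x)
    avg = sum(dists[:k]) / k
    return 1.0 / avg

pts = [1.0, 1.1, 1.2, 1.3, 10.0]
print(knn_density(pts, 1.1, 2))   # dense region: high score (~10)
print(knn_density(pts, 10.0, 2))  # isolated point: low score (~0.11)
```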
Image: https://en.wikipedia.org/wiki/Local_outlier_factor#/media/File:Reachability-distance.svg

Density-based outlier detection
Relative density outlier score: Local Outlier Factor (LOF)
• Reciprocal of the average distance to the k nearest neighbors, relative to that of those k neighbors:
  relative density(x, k) = density(x, k) / ( (1/k) Σ(y ∈ N(x,k)) density(y, k) )
Image: https://en.wikipedia.org/wiki/File:LOF-idea.svg
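A simplified LOF-style score can be sketched by combining the two formulas above: the average density of x's k nearest neighbours divided by x's own density, so values well above 1 indicate an outlier. (The full LOF algorithm uses reachability distances rather than raw distances; this sketch deliberately skips that refinement.)

```python
def lof(points, x, k):
    """Simplified local outlier factor: average density of x's k nearest
    neighbours divided by the density of x (1-D points for simplicity)."""
    def density(p):
        dists = sorted(abs(p - q) for q in points if q != p)
        return 1.0 / (sum(dists[:k]) / k)
    neighbours = sorted((abs(x - q), q) for q in points if q != x)[:k]
    avg_neigh_density = sum(density(q) for _, q in neighbours) / k
    return avg_neigh_density / density(x)

pts = [1.0, 1.1, 1.2, 1.3, 5.0]
print(lof(pts, 1.1, 2))  # close to 1 inside the dense cluster
print(lof(pts, 5.0, 2))  # much greater than 1 for the outlier
```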

Density-based outlier detection
In the NN approach, o2 is not considered an outlier, while the LOF approach finds both o1 and o2 as outliers.

Density-based outlier detection
• Quantitative measure of degree to which object is an outlier.
• O(n2) complexity
• Must choose parameters
• k for nearest neighbor
• d for distance threshold
• Can work well even if data has variable density.

Cluster-based Anomaly Detection

Cluster-based outlier detection
Outliers are objects that do not belong strongly to any cluster.
Approaches:
– Assess degree to which object belongs to any cluster.
– Eliminate object(s) to improve objective function.
– Discard small clusters far from other clusters
– Outliers may affect initial formation of clusters.

Cluster-based outlier detection
Assess the degree to which an object belongs to any cluster.
• For prototype-based clustering (e.g. k-means), use distance to cluster centers.
• To deal with variable-density clusters, use the relative distance:
  relative distance(x, C) = distance(x, centroid_C) / median(x′ ∈ C) distance(x′, centroid_C)
Similar concepts for density-based or connectivity-based clusters.
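The relative-distance score can be sketched as follows for 1-D clusters. The cluster assignments are assumed to be given already (e.g. by k-means), and points are assumed distinct since they double as dict keys in this sketch:

```python
def relative_distances(clusters):
    """For each point, distance to its cluster centroid divided by the
    median such distance within the cluster (1-D clusters for simplicity)."""
    scores = {}
    for cluster in clusters:
        centroid = sum(cluster) / len(cluster)
        dists = sorted(abs(p - centroid) for p in cluster)
        n = len(dists)
        median = dists[n // 2] if n % 2 else (dists[n // 2 - 1] + dists[n // 2]) / 2
        for p in cluster:
            scores[p] = abs(p - centroid) / median
    return scores

# A tight cluster with one straggler, and a looser cluster
scores = relative_distances([[1.0, 1.1, 0.9, 3.0], [10.0, 12.0, 14.0]])
print(scores[3.0])   # large relative distance: outlier candidate
print(scores[12.0])  # sits at its centroid: 0.0
```

Dividing by the median makes the score comparable across clusters of different spread, which the raw distance to the centroid is not.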

Cluster-based outlier detection
distance of points from nearest centroid

Cluster-based outlier detection
relative distance of points from nearest centroid

Cluster-based outlier detection
Eliminate object(s) to improve objective function.
1) Form initial set of clusters.
2) Remove the object which most improves objective function.
3) Repeat step 2) until …
Discard small clusters far from other clusters.
• Need to define thresholds for “small” and “far”.

Cluster-based outlier detection
• Some clustering techniques have O(n) complexity.
• Extends concept of outlier from single objects to groups of objects.
• Requires thresholds for minimum size and distance.
• Sensitive to number of clusters chosen.
• Hard to associate outlier score with objects.
• Outliers may affect initial formation of clusters.

• Types of outliers
• Supervised, semi-supervised, or unsupervised
• Statistical, proximity-based, clustering-based approaches
• Ethics in Machine Learning
• Friday: Defining and measuring (un)fairness in ML
Next Wednesday: Guest lecture (live)

References
• Tan et al (2006) Introduction to Data Mining. Section 4.3, pp 150-171. (Chapter 10)
• V. Chandola, A. Banerjee, and V. Kumar, (2009). Anomaly detection: A survey. ACM computing surveys (CSUR), 41(3), 1-58.
• A. Banerjee, et al (2008). Tutorial session on anomaly detection. The SIAM Data Mining Conference (SDM08)



High Integrity Systems Engineering Subject Notes for SWEN90010
School of Computing and Information Systems
The University of Melbourne


1 Introduction to High Integrity Systems Engineering 7
  1.1 Subject information 7
  1.2 Introduction to high integrity systems 9
  1.3 Subject outline 10

2 Safety-critical and Security-critical systems engineering 12
  2.1 An introduction to software safety 12
  2.2 Why safety engineering? 13
  2.3 Safety engineering 15
  2.4 Accidents and safety engineering 16
    2.4.1 Case Study: The A320 Airbus 17
    2.4.2 Case study: Warsaw, Okecie – 14 September 1993 20
    2.4.3 Case Study: Boeing 737 MAX 22
    2.4.4 The role of accident analysis in safety engineering 25
  2.5 Tools and techniques for safety engineering 27
    2.5.1 Safety standards and safety lifecycles 27
    2.5.2 Preliminary hazard analysis 28
  2.6 Hazard Analysis Methods 31
    2.6.1 HAZOPS 31
    2.6.2 Fault Tree Analysis 37
    2.6.3 FMEA 39
  2.7 Threat Modelling for Security 42
    2.7.1 The STRIDE Method 42
    2.7.2 Attack Trees 45

3 Model-based specification
  3.1 What is formal specification? 49
  3.2 What is model-based specification? 50
  3.3 The costs of formal specification 51
  3.4 Logic and set theory 53
  3.5 Introduction to Alloy 55
    3.5.1 Key characteristics 55
  3.6 Alloy Logic 56
    3.6.1 Everything is a relation 56
    3.6.2 Operators 57
    3.6.3 Temporal Logic: specifying systems that change over time 61
  3.7 Alloy Language 61
    3.7.1 Modules 61
    3.7.2 Signatures 62
    3.7.3 Facts 62
    3.7.4 Predicates 63
    3.7.5 Functions 64
    3.7.6 Assertions 64
    3.7.7 The complete LastPass example 65
  3.8 Analysis with the Alloy Analyser 67
    3.8.1 Runs 67
    3.8.2 Checks 67
    3.8.3 Scope 68
  3.9 Abstraction and incremental specification of state machines using Alloy operators 69
    3.9.1 The specification process 69
  3.10 Trace-Based Modelling in Alloy 73
  3.11 Verification and validation 74

4 Introduction to Ada 78
  4.1 The history of Ada 78
  4.2 Ada for programming “in the large” 79
  4.3 Hello World 79
  4.4 Specifications and program bodies 81
  4.5 Comments 83
  4.6 Types 83
    4.6.1 The predefined Ada type system 83
    4.6.2 Arrays 85
    4.6.3 Defining new types 86
  4.7 Control Structures 87
  4.8 Procedural Abstractions and Functional Abstractions 90
    4.8.1 Parameter Modes 90
    4.8.2 Calling subprograms 90
  4.9 access Types: Passing Parameters By Reference 91
    4.9.1 Access Types to Read-Only Data 94
    4.9.2 Summary 94
  4.10 Example: Insertion sort 94
  4.11 Concurrency with tasks 95
  4.12 Further reading 98

5 Safe programming language subsets 100
  5.1 Principles of correctness by construction 100
  5.2 Safe programming language subsets 101
  5.3 SPARK 103
    5.3.1 SPARK Structure 103
    5.3.2 What is left out of Ada? 104
    5.3.3 Why bother with SPARK or other safe programming language subsets? 106
  5.4 SPARK Examiner 107
    5.4.1 Annotations 107
    5.4.2 SPARK Examiner 108

6 Design by contract 111
  6.1 Introduction to Design by Contract 111
    6.1.1 History 111
    6.1.2 Some Definitions 112
  6.2 Specifying ADT interfaces 112
    6.2.1 ADT interfaces as natural-language comments 112
    6.2.2 ADT interfaces as invariants, preconditions, and postconditions 113
    6.2.3 ADT interfaces as contracts 114
    6.2.4 Formal contracts 116
    6.2.5 Advantages and disadvantages of formal contracts 117
  6.3 Contracts in SPARK 117
    6.3.1 Syntax 117
    6.3.2 Similarity to Alloy 121
    6.3.3 Tools 121
  6.4 Additional reading 121

7 Reasoning about program correctness 122
  7.1 Introduction to reasoning about programs 123
    7.1.1 The correctness of binary search 123
  7.2 A small programming language 125
  7.3 Hoare logic 126
    7.3.1 Introduction to Hoare logic 126
    7.3.2 An introductory example — Incrementing an integer 127
  7.4 The rules of Hoare logic 127
    7.4.1 Assignment axiom 127
    7.4.2 Consequence rule 128
    7.4.3 Sequential composition rule 129
    7.4.4 Empty statement axiom 132
    7.4.5 Conditional rule 132
    7.4.6 Iteration rule 133
    7.4.7 Establishing loop invariants 139
    7.4.8 Array assignment rule 142
    7.4.9 Procedure call rule 143
    7.4.10 Other rules 146
  7.5 Mechanising Hoare logic 146
  7.6 Dijkstra’s weakest precondition calculus 147
    7.6.1 Transformers 147
    7.6.2 Program proof 147
  7.7 Additional reading 148

8 Advanced Verification 149
  8.1 Pointers and Aliasing 149
  8.2 Safe Programming with Pointers: Avoiding Unsafe Aliasing 150
    8.2.1 Unsafe Aliasing 151
    8.2.2 Avoiding Unsafe Aliasing in SPARK 151
    8.2.3 Anti-Aliasing Assumptions in SPARK 152
    8.2.4 SPARK Pointer Checks 152
    8.2.5 Other Programming Languages: Rust 153
  8.3 Separation Logic 153
    8.3.1 Pointers in C 153
    8.3.2 Separation Logic (for C) 154
    8.3.3 The Meaning of Separation Logic Statements 156
    8.3.4 The Rules of Separation Logic 157
    8.3.5 An Example Proof 159
  8.4 Security: Information Flow 161

9 Fault-tolerant design 162
  9.1 Introduction to fault-tolerance 162
  9.2 Hardware Redundancy 165
    9.2.1 Redundancy 165
    9.2.2 Static pairs 166
    9.2.3 Redundancy and voting 166
    9.2.4 N-Modular Redundancy (NMR) 170
  9.3 Software Redundancy 171
    9.3.1 N-Version Programming 171
    9.3.2 Recovery blocks 173
  9.4 Byzantine Failures 176
  9.5 Information Redundancy 176
    9.5.1 Duplication 176
    9.5.2 Parity coding 177
    9.5.3 Checksums 179
  9.6 Fault tolerance in practice 179
    9.6.1 Airbus A310–A380 179
    9.6.2 Boeing 777 180
    9.6.3 Boeing 737 MAX Flight Control Redesign 180
  9.7 Additional reading 180

Introduction to High Integrity Systems Engineering
1.1 Subject information Subject aims
The main aim of the subject is to explore the principles, techniques and tools used to analyse, design and implement high integrity systems. High integrity systems are systems that must operate with critical levels of security, safety, reliability, or performance. The engineering methods required to develop high integrity systems go well beyond the standards for less critical software.
Topics will include:
• high integrity system definition;
• programming languages for high-integrity systems; • safety and security analysis;
• modelling and analysis;
• fault tolerance; and
• proving program correctness.
Subject outcomes
On completion of this subject students should be able to:
• Demonstrate an understanding of the issues facing high integrity systems developers.
• Demonstrate the ability to analyse highly dependable systems and to synthesise requirements and operating parameters from their analysis.

• Demonstrate the ability to choose and make trade-offs between different software and systems architectures to achieve multiple objectives.
• Develop algorithms and code to meet high integrity requirements.
• Demonstrate the ability to assure high integrity systems.
Generic Skills
The subject is a technical subject. The aim is to explore the various approaches to building software with specific attributes into a system.
The subject will also aim to develop a number of generic skills.
• We encourage independent research in order to develop your ability to learn independently, assess
what you learn, and apply what you learn.
• You will be developing experience at empirical software engineering, and the interpretation and assessment of quantitative and qualitative data.
• Finally, and perhaps most importantly, you will be encouraged to develop critical analysis skills that will complement and round what you have learnt so far in your degree.
Preconditions
The reason for giving preconditions is that they specify the knowledge assumed at the start of the subject and, consequently, the knowledge and skills upon which the subject builds. Preconditions also allow students from universities other than The University of Melbourne, who do not have the formal prerequisites listed above, to judge whether or not they have the knowledge required for the subject. Specifically, the subject assumes:
1. Knowledge and experience of software engineering processes and practices, the risks associated with software and system development, quality and quality assurance, software process, and soft- ware project management.
2. Knowledge and experience of requirements elicitation and analysis techniques, requirements modelling and specification, use-case modelling, and UML or similar modelling notations.
3. Knowledge and experience of software architectures, architectural styles and design patterns, design with UML or similar graphical notation, developing code to meet architectural and detailed designs.
4. Knowledge and experience of discrete mathematics, including predicate logic and formal proof.
The LMS contains information about the subject staff, mode of delivery, and assessment criteria. It is the primary resource for the subject.

All announcements will be made via the LMS.
It also contains all assignment sheets, notes, lecture recordings, tutorial/workshop sheets and solutions.
http://www.lms.unimelb.edu.au/.
1.2 Introduction to high integrity systems What is a high integrity system?
High integrity systems are software-controlled systems whose failure could result in harm to humans (including loss of life), harm to the environment, mass destruction of property, or harm to society in general.
There are four broad areas that are commonly considered as high integrity systems:
• Safety-critical systems: Systems whose failure may result in harm to humans or the environment. Examples include aerospace systems, medical devices, automotive systems, power systems, and railway systems.
• Security-critical systems: Systems whose failure may result in breach of security. Examples in- clude defence systems, government data stores.
• Mission-critical systems: Systems whose failure may result in the failure of some deliberative missions. Examples include navigation systems on autonomous aerospace vehicles, computer- aided dispatch systems (e.g. in emergency services), and command-and-control systems in armed forces.
• Business-critical systems: Systems whose failure may result in extreme financial loss to a business or businesses. Examples include stock exchange systems, trade platforms, and even accounting systems (e.g. in a bank).
What is high integrity systems engineering?
The high impact of failure of a high integrity system means that the software controlling the system must be highly dependable. The standard software engineering methods used for constructing good software systems typically do not lead to such dependability. As such, specialised methods for engineering software are required for high integrity systems.
Avizienis et al.’s work on dependability attributes help us to understand high integrity systems based on six key attributes, shown in Figure 1.1. For a single high integrity system, one or more of the above attributes will be of high importance.
How do we achieve high-integrity?
High integrity is achieved using a combination of:
• Rigorous engineering processes, such as strict configuration management, auditing, and review process. We will not introduce any new theory on these topics in this subject.

Figure 1.1: Dependability model from “Avizienis, Laprie & Randell: Fundamental Concepts of Dependability”. The six attributes are:
• Availability: readiness for use
• Reliability: continuity of service
• Safety: no catastrophic consequences for users or the environment
• Confidentiality: no unauthorised disclosure of confidential information
• Integrity: no improper system changes
• Maintainability: the ability to undergo repairs and evolution
• Specialised analysis and design methods, such as safety & hazard analysis and fault prediction methods. These will be covered in Part II of the subject.
• Specialised modelling and analysis methods/notations, such as mathematical logic and set theory. These will be covered in Part III of the subject.
• Fault tolerant design methods, such as redundancy, design diversity, and fault detection mechanisms. These will be covered in Part IV of the subject.
• Specialised assurance methods, such as model checking, proof, and model-based testing. These will be covered in Part V of the subject.
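As a concrete illustration of the fault tolerant design methods mentioned above, the sketch below shows triple modular redundancy (TMR) in Python. This is a minimal illustration, not part of the subject's materials: the `tmr_vote` helper and the replica functions are invented for the example, and real high integrity systems typically implement voting in hardware or in a safety-certified language.

```python
# Minimal sketch of triple modular redundancy (TMR): three independent
# replicas compute the same result, and a majority voter masks a single
# faulty replica. The replica functions here are hypothetical.
from collections import Counter

def tmr_vote(replicas, *args):
    """Run all replicas and return the majority result.

    Masks one arbitrary replica fault; raises if no majority exists.
    """
    results = [f(*args) for f in replicas]
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one replica disagrees")
    return value

# One replica returns a wrong answer; the voter masks the fault.
ok = lambda x: x * 2
faulty = lambda x: x * 2 + 1
assert tmr_vote([ok, ok, faulty], 21) == 42
```

Note that TMR only masks faults; design diversity (independently developed replicas) is what reduces the chance that all three fail in the same way.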
Methods that are otherwise considered too expensive to apply to "standard" software systems become cost-effective in high integrity domains. This is because the cost of failure far outweighs the effort required to engineer a dependable product.
Importantly, high dependability cannot be demonstrated just by running a large set of tests. Although testing still plays an important role in high integrity systems, even a very large test suite usually exercises only a fraction of all possible behaviours. As such, testing alone does not demonstrate dependability.
Perhaps the most important lesson to learn from this introduction is:

"Program testing can be used to show the presence of bugs, but never to show their absence!" — Edsger W. Dijkstra [1].
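A back-of-the-envelope calculation, added here purely for illustration, shows why exhaustive testing is out of reach even for a tiny interface:

```python
# Why exhaustive testing is infeasible: a pure function taking two
# 32-bit integers already has 2**64 distinct input combinations.
inputs = 2 ** 64
tests_per_second = 10 ** 9              # an optimistic test rig
seconds_per_year = 60 * 60 * 24 * 365
years_needed = inputs / (tests_per_second * seconds_per_year)
print(round(years_needed))  # hundreds of years for one small function
```

And this ignores state, concurrency, and the environment, which multiply the behaviour space further; hence the need for the analytical methods listed above.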
1.3 Subject outline
Now that we have a better idea about high integrity systems engineering, we outline the subject:

Part I: Introduction to high integrity systems.

Part II: Safety- and security-critical systems. Topics will be drawn from: safety engineering, and accident & hazard analysis for high-integrity systems; security analysis, cryptography.

Part III: Modelling and analysis. Topics will be drawn from: discrete maths for software engineering, model-based specification and analysis, proof, and lightweight verification.

Part IV: High integrity systems design. Topics will be drawn from: reliability, fault detection, fault containment, fault tolerant design, and design by contract.

Part V: Assurance. Topics will be drawn from: proving program correctness, programming languages for high integrity systems, safe-subset programming languages, programming tools for high integrity verification, and model-based testing.
Bibliography

[1] E. W. Dijkstra. Notes on Structured Programming. T.H. Report 70-WSK-03, Technological University Eindhoven, Department of Mathematics, 1970.
Safety-critical and Security-critical systems engineering
2.1 An introduction to software safety
The trend in systems engineering is toward multi-disciplinary systems. Many of these systems are deployed outside of the standard desktop domain on which programming is typically taught. This trend has safety implications: if a piece of software is used to control a hardware system, it may have safety-related issues.
For example, consider the following list of applications that have severe safety implications if they fail:
• Computer Aided Dispatch Systems (CAD);
• The A320 Airbus Electronic Flight Control System (EFCS);
• The Boeing 737 MAX 8 MCAS system;
• Biomedical technology, such as pacemakers and surgical robots;
• Train protection systems;
• Space shuttle;
• Automated vehicle braking systems; and
• Chemical plant control systems.
Such systems have influence well beyond the desktop and an individual user. Software itself cannot harm people – it is merely an abstraction. However, software, when used to control hardware or other parts of a system (e.g. to make decisions or send orders on behalf of a human), can be a root cause of accidents and incidents.
The systems described above all rely on an integration of different technologies, and different engineering disciplines. What is perhaps new to most people in this subject is the reliance on computer systems and software to perform safety critical system functions.
The aim of the module on safety engineering is to develop an understanding of the safety engineering approach to developing safety-critical systems. Specifically:
• We will study accidents to get an idea of how they arise, how they are analysed, and what can be done to prevent or mitigate accidents.
• We will study safety engineering processes as the pathw
Virtue Ethics, Technology, and Human Flourishing
Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting
Print publication date: 2016
Print ISBN-13: 9780190498511
Published to Oxford Scholarship Online: September 2016
DOI: 10.1093/acprof:oso/9780190498511.001.0001
DOI:10.1093/acprof:oso/9780190498511.003.0002
Abstract and Keywords
Starting with an overview of virtue ethics in the philosophical tradition of the West, beginning with Aristotle, I discuss the contemporary revival of virtue ethics in the West (and its critics). In reviewing virtue ethics’ advantages over other traditional ethical approaches, especially consequentialism (such as utilitarianism) and deontology (such as Kantian ethics), I note that virtue ethics is ideally suited for managing complex, novel, and unpredictable moral landscapes, just the kind of landscape that today’s emerging technologies present. Yet I also note that an exclusively Western approach to virtue would be inadequate and provincial; moreover, emerging technologies present global problems requiring collective action across cultural and political lines. Finally, I review the various ways in which contemporary philosophers of technology have addressed the ethical dimensions of technology, the limits of those previous approaches, and the potential of a global technosocial virtue ethic to go beyond them.
Keywords: virtue ethics, Aristotle, philosophy of technology, utilitarianism, deontology, emerging technologies
TO SOME, THE title of this chapter may seem faintly anachronistic. In popular moral discourse, the term ‘virtue’ often retains lingering connotations of Victorian sexual mores, or other historical associations with religious conceptions of morality that focus narrowly on ideals of piety, obedience, and chastity. Outside of the moral context, contemporary use of the term ‘virtue’ expresses something roughly synonymous with ‘advantage’ (e.g., “the virtue of
PRINTED FROM OXFORD SCHOLARSHIP ONLINE (oxford.universitypressscholarship.com). (c) Copyright Oxford University Press, 2022. All Rights Reserved. An individual user may print out a PDF of a single chapter of a monograph in OSO for personal use. Subscriber: University of Melbourne; date: 14 March 2022
this engineering approach is that it more effectively limits cost overruns”). Neither use captures the special meaning of ‘virtue’ in the context of philosophical ethics. So what does virtue mean in this context? How does it relate to cultivating moral character? And why should virtue, a concept rooted in philosophical theories of the good life dating back to the 5th century BCE, occupy the central place in a book about how 21st century humans can seek to live well with emerging technologies?
The term ‘virtue’ has its etymological roots in the Latin virtus, itself linked to the ancient Greek term arête, meaning ‘excellence.’ In its broadest sense, the Greek concept of virtue refers to any stable trait that allows its possessor to excel in fulfilling its distinctive function: for example, a primary virtue of a knife would be the sharpness that enables it to cut well. Yet philosophical discussions of ethics by Plato and Aristotle in the 5th and 4th centuries BCE reveal a growing theoretical concern with distinctly human forms of arête, and here the concept acquires an explicitly moral sense entailing excellence of character.1 A distinct but related term de (德) appears in classical Chinese ethics from approximately the same period. De originally meant a characteristic ‘power’ or influence, but in Confucian thought it acquired the sense of a distinctly ethical power of the exemplary person, one that fosters ‘uprightness’ or ‘right-seeing.’2 Buddhist ethics makes use of a comparable concept, śīla, implying character that coordinates and (p.18) upholds right conduct.3 The perfection of moral character (śīla pāramitā) in Buddhism expresses a sense of cultivated personal excellence akin to other ethical conceptions of virtue. Thus the concept of ‘virtue’ as a descriptor of moral excellence has for millennia occupied a central place in various normative theories of human action—that is, theories that aim to prescribe certain kinds of human action as right or good.
In the Western philosophical tradition, the most influential account of virtue is Aristotle’s, articulated most fully in his Nicomachean Ethics (~350 BCE). Other notable accounts of virtue in the West include those of the Stoics, St. , , , and . Yet Aristotle remains the dominant influence on the conceptual profile of virtue most commonly engaged by contemporary ethicists, and this profile will be our starting point. While cultural and philosophical limitations of the Aristotelian model will lead us to extend and modify this profile in subsequent chapters, its basic practical commitments will remain largely intact.
Moral virtues are understood by Aristotle to be states of a person’s character: stable dispositions such as honesty, courage, moderation, and patience that promote their possessor’s reliable performance of right or excellent actions. Such actions, when the result of genuine virtue, are not only praiseworthy in themselves but imply the praiseworthiness of the person performing them. In human beings, genuine virtues of character are not gifts of birth or passive circumstance, nor can they be taught in any simple sense. They are states that
the person must cultivate in herself, and that once cultivated, lead to deliberate, effective, and reasoned choices of the good.4 The virtuous state emerges gradually from habitual and committed practice and study of right actions. Thus one builds the virtue of courage only by repeatedly performing courageous acts; first by patterning one’s behavior after exemplary social models of human courage, and later by activating one’s acquired ability to see for oneself what courage calls for in a given situation. Virtue implies an alignment of the agent’s feelings, beliefs, desires, and perceptions in ways that are appropriate to the varied practical arenas and circumstances in which the person is called to act.5 Moral virtues are conceived as personal excellences in their own right; their value is therefore not exhausted in the good actions or consequences they promote. When properly integrated, individual virtues contribute to a person’s character writ large; that is, they motivate us to describe such a person as virtuous, rather than merely noting their embodiment of a particular virtue such as courage, honesty, or justice. States of character contrary to virtue are vices, and a person whose character is dominated by these traits is therefore vicious— broadly incapable of living well.
Most understandings of virtue ethics make room for something like what Aristotle called phronēsis, variously translated as prudence, prudential reason,
(p.19) or practical wisdom.6 This virtue directs, modulates, and integrates the enactment of a person’s individual moral virtues, adjusting their habitual expression to the unique moral demands of each situation. A fully virtuous person, then, is never blindly or reactively courageous or benevolent—rather, her virtues are expressed intelligently, in a manner that is both harmonious with her overall character and appropriate to the concrete situation with which she is confronted.7 Virtues enable their possessor to strike the mean between an excessive and a deficient response, which varies by circumstance. The honest person is not the one who mindlessly spills everyone’s secrets, but the one who knows how much truth it is right to tell, and when and where to tell it, to whom, and in what manner.8 Reasoning is therefore central to virtue ethics. Yet unlike theories of morality that hinge on rationality alone, such as Kant’s, here reason must work with rather than against or independently of the agent’s habits, emotions, and desires. The virtuous person not only tends to think and act rightly, but also to feel and want rightly.9
A virtuous person is not merely conceived as good, they are also understood to be moving toward the accomplishment of a good life; that is, they are living well. In most cases, they enjoy a life of the sort that others recognize as admirable, desirable, and worthy of being chosen. Of course not every life that appears desirable or admirable is, in fact, so. Conversely, a virtuous person with the misfortune to live among the vicious is unlikely to be widely admired, although this does nothing to diminish the fact of their living well. The active flourishing of the virtuous person is not a subjective appearance; virtue just is the activity of living well. This means that while virtue ethics can allow for many different
types of flourishing lives, it is incompatible with moral relativism. There are certain biological, psychological, and social facts about human persons that constrain what it can mean for us to flourish, just as a nutrient-starved, drought-parched lawn fails to flourish whether or not anyone notices its poor condition. While the cultivation of virtue is not egoistic, since it does not aim at securing the agent’s own good independently of the good of others, a virtuous character is conceptually inseparable from the possibility of a good life for the agent.10 This is why Aristotle describes the virtuous person as objectively happy; even in misfortune they will retain more of their happiness than the vicious would.11 Although it is widely recognized that the Greek term eudaimonia, which we translate as ‘happiness,’ is far richer than the modern, psychological sense of that term (an issue to which we will return later in this book), it will serve our preliminary analysis well to note that the classical virtue ethical tradition regards virtue as a necessary, if not sufficient, condition for living well and happily.12
If thinkers in this tradition are correct, then just as in every previous human era, living well in the 21st century will demand the successful cultivation of moral
(p.20) virtue. Yet given what was noted at the beginning of this chapter— namely, that the popular understanding of virtue is largely divorced from the philosophical teachings of virtue traditions—we have to ask: how can we possibly reconnect popular ideas about living well with technology to a robust discourse about the moral virtues actually needed to achieve that end? While a satisfactory answer to this question cannot be given until later in this book, it may be helpful to briefly examine the circumstances that have led to the revival of contemporary philosophical discourse about the moral virtues and their role in the good life.
1.1 The Contemporary Renewal of Virtue Ethics
Ethical theories in which the concept of virtue plays an essential and central role are collectively known as theories of virtue ethics. Such theories treat virtue and character as more fundamental to ethics than moral rules or principles. Advocates of other types of ethical theory generally see virtues as playing a lesser and more derivative role in morality; these include the two approaches that previously dominated philosophical ethics in the modern West: consequentialist ethics (for example, and Mill’s utilitarianism) and deontological or rule-based ethics (such as ’s categorical imperative).13
Compared with these alternatives, virtue ethics stood in general disfavor in the West for much of the 19th and 20th centuries. Reasons for the relative neglect of virtue ethics in this period include its historical roots in tightly knit, premodern societies, which appeared to make the approach incompatible with Enlightenment ideals of modern cosmopolitanism. Thanks to the medieval philosopher St. ’s use of Aristotelian ideas throughout his
writings, virtue ethics had also acquired strong associations with the Thomistic moral theology of the Catholic Church. This made it an even less obvious candidate for a universal and secular ethic. Virtue ethics was seen as incompatible with evolutionary science, which denied what Aristotle and many other virtue ethicists had assumed—that human lives are naturally guided toward a telos, a single fixed goal or final purpose. Virtue ethics’ emphasis on habit and emotion was also seen as undermining rationality and moral objectivity; its focus on moral persons rather than moral acts was often conflated with egoism. Finally, virtue ethics’ eschewing of universal and fixed moral rules was thought by some to render it incapable of issuing reliable moral guidance.14
The contemporary reversal of the fortunes of virtue ethics began with the publication of G.E.M. Anscombe’s 1958 essay “Modern Moral Philosophy,” in which she sharply criticized modern deontological and utilitarian frameworks for their narrow preoccupations with law, duty, obligation, and right to the exclusion
(p.21) of considerations of character, human flourishing, and the good. Anscombe also claimed that modern moral theories of right and wrong, having detached themselves from their conceptual origins in religious law, were now crippled by vacuity or incoherence, supplying poor foundations for secular ethics. Her proposal that moral philosophers abandon such theories and revisit the conceptual foundations of virtue was the guiding inspiration for a new generation of thinkers, whose diverse works have restored the philosophical reputation of virtue ethics as a serious competitor to Kantianism, utilitarianism, and other rule- or principle-based theories of morality.15 Among Western philosophers, scholarly interest in virtue ethics continues to grow today thanks to the prominent work of neo-Aristotelian thinkers such as Alasdair MacIntyre, Dowell, , Rosalind Hursthouse, and , to name just a few.
Yet Anscombe was clear that Aristotelian virtue theory was not a satisfactory modern ethic. Even contemporary virtue ethicists who identify as neo-Aristotelian typically disavow one or more of Aristotle’s theoretical commitments, such as his view of human nature as having a natural telos or purpose, or his claims about the biological and moral inferiority of women and non-Greeks. No contemporary virtue ethicist can deny that there are significant problems, ambiguities and lacunae in Aristotle’s account; whether these can be amended, clarified, and filled in without destroying the integrity or contemporary value of his framework is a matter of ongoing discussion. As a consequence, contemporary Western virtue ethics represents not a single theoretical framework but a diverse range of approaches. Many remain neo-Aristotelian, while others are Thomistic, Stoic, Nietzschean, or Humean in inspiration, and some offer radically new theoretical foundations for moral virtue.16
In addition to internal disagreements, the contemporary renewal of virtue ethics has met with external resistance from critics who challenge the moral psychology of character upon which virtue theories rely. Using evidence from familiar studies such as the Milgram and Stanford prison experiments, along with more recent variations, these critics argue that moral behavior is determined not by stable character traits of individuals, but by the concrete situations in which moral agents find themselves.17 Fortunately, virtue ethicists have been able to respond to this ‘situationist’ challenge. First, the impact of unconscious situational influences, blind spots, and cognitive biases on moral behavior is entirely compatible with virtue ethics, which already regards human moral judgments as imperfect and contextually variable. Moreover, unconscious biases can, once discovered, be mitigated by a range of compensating moral and social techniques.18 Perhaps the most powerful response to the situationists is that robust moral virtue is by definition exemplary rather than typical; indeed, the experiments most often used as evidence against the existence of virtue consistently (p.22) reveal substantial minorities of subjects who respond with exemplary moral resistance to situational pressure—exactly what virtue ethics predicts.19 Thus despite its critics, the contemporary renewal of virtue ethics as a compelling alternative to principle- and rule-based ethics shows no sign of losing steam; if anything, intensified critical scrutiny is a healthy indicator of virtue ethics’ returning philosophical strength.
While a survey of contemporary virtue ethics in the West might stop here, it would be dangerously provincial and chauvinistic to ignore the equally rich virtue ethical traditions of East and Southeast Asia, especially Confucian and Buddhist virtue ethics. While there is important contemporary work being done in this area, relatively few Anglo-American virtue ethicists have acknowledged or attempted to engage this work.20 This is a substantial loss. To ignore the content of active and longstanding virtue traditions with related, but very distinct, conceptions of human flourishing is to forgo an opportunity to gain a deeper critical perspective on the admittedly narrow preoccupations of Aristotelian virtue theory.21
As we move beyond the realm of theory and into the domain of applied virtue ethics, Western provincialism becomes entirely unsustainable; for applied ethics —which tackles real-world moral problems through the lens of philosophy—is increasingly confronted with problems of global and collective action. Environmental ethics offers the starkest selection of practical problems demanding global cooperation and coordinated human responses that reach across national, philosophical, and ethnic lines, but this is hardly an isolated case. The expansion of global markets for new technologies is having profound and systemic moral impacts on the entire human community—primarily by strengthening the shared economic, cultural, and physical networks upon which our existence and flourishing increasingly depend. If we look at the spread of global information and communications systems, unmanned weapons systems,
Copyright © 2016 by Cathy O'Neil. All rights reserved.
Published in the United States by Crown, an imprint of the Crown Publishing Group, a division of Penguin Random House LLC.
crownpublishing.com
CROWN is a registered trademark and the Crown colophon is a trademark of Penguin Random House LLC.
Library of Congress Cataloging-in-Publication Data
Name: O'Neil, Cathy, author.
Title: Weapons of math destruction : how big data increases inequality and threatens democracy / Cathy O'Neil.
Description: First edition. | Crown Publishers, [2016]
Identifiers: LCCN 2016003900 (print) | LCCN 2016016487 (ebook) | ISBN 9780553418811 (hardcover) | ISBN 9780553418835 (pbk.) | ISBN 9780553418828 (ebook)
Subjects: LCSH: Big data–Social aspects–United States. | Big data–Political aspects–United States. | Social indicators–Mathematical models–Moral and ethical aspects. | Democracy–United States. | United States–Social conditions–21st century.
Classification: LCC QA76.9.B45 O64 2016 (print) | LCC QA76.9.B45 (ebook) | DDC 005.7–dc23
LC record available at https://lccn.loc.gov/2016003900
ISBN 9780553418811
Ebook ISBN 9780553418828
International Edition ISBN 9780451497338
Cover design by
INTRODUCTION

CHAPTER 1. BOMB PARTS: What Is a Model?
CHAPTER 2. SHELL SHOCKED: My Journey of Disillusionment
CHAPTER 3. ARMS RACE: Going to College
CHAPTER 4. PROPAGANDA MACHINE: Online Advertising
CHAPTER 5. CIVILIAN CASUALTIES: Justice in the Age of Big Data
CHAPTER 6. INELIGIBLE TO SERVE: Getting a Job
CHAPTER 7. SWEATING BULLETS: On the Job
CHAPTER 8. COLLATERAL DAMAGE: Landing Credit
CHAPTER 9. NO SAFE ZONE: Getting Insurance
CHAPTER 10. THE TARGETED CITIZEN: Civic Life

CONCLUSION
The small city of Reading, Pennsylvania, has had a tough go of it in the postindustrial era. Nestled in the green hills fifty miles west of Philadelphia, Reading grew rich on railroads, steel, coal, and textiles. But in recent decades, with all of those industries in steep decline, the city has languished. By 2011, it had the highest poverty rate in the country, at 41.3 percent. (The following year, it was surpassed, if
barely, by Detroit.) As the recession pummeled Reading's economy following the 2008 market crash, tax revenues fell, which led to a cut of forty-five officers in the police department – despite persistent crime.
Reading police chief Heim had to figure out how to get the same or better policing out of a smaller force. So in 2013 he invested in crime prediction software made by PredPol, a Big Data start-up based in Santa Cruz, California. The program processed historical crime data and calculated, hour by hour, where crimes were most likely to occur. The Reading policemen could view the program's conclusions as a series of squares, each one just the size of two football fields. If they spent more time patrolling these squares, there was a good chance they would discourage crime. And sure enough, a year later, Chief Heim announced that burglaries were down by 23 percent.
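To make the hot-spot idea concrete, here is a toy sketch of ranking grid squares by historical incident counts. This is an assumption-laden illustration, not PredPol's actual algorithm: the grid squares, the incident history, and the `top_squares` helper are all invented for the example.

```python
# Toy hot-spot ranking (not PredPol's model): count historical incidents
# per grid square near a given hour and patrol the top-ranked squares.
from collections import Counter

# Hypothetical history: (grid_square, hour_of_day) pairs of past incidents.
history = [("A3", 22), ("A3", 23), ("B1", 2), ("A3", 21), ("C4", 14), ("B1", 3)]

def top_squares(history, hour, window=2, k=2):
    """Rank squares by incidents recorded within `window` hours of `hour`.

    (Hour wrap-around at midnight is ignored for simplicity.)
    """
    counts = Counter(sq for sq, h in history if abs(h - hour) <= window)
    return [sq for sq, _ in counts.most_common(k)]

print(top_squares(history, hour=22))  # squares with the most late-night incidents
```

Even this crude counting scheme shows the essential shape of the approach: the model's only inputs are the type, place, and time of past recorded incidents.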
Predictive programs like PredPol are all the rage in budget-strapped police departments across the country. Departments from Atlanta to Los Angeles are deploying cops in the shifting squares and reporting falling crime rates. City uses a similar program, called CompStat. And Philadelphia police are using a local product called HunchLab that includes risk terrain analysis, which incorporates certain features, such as ATMs or convenience stores, that might attract crimes. Like those in the rest of the Big Data industry, the developers of crime prediction software are hurrying to incorporate any information that can boost the accuracy of their models.
If you think about it, hot-spot predictors are similar to the shifting defensive models in baseball that we discussed earlier. Those systems look at the history of each player's hits and then position fielders where the ball is most likely to travel. Crime prediction software carries out similar analysis, positioning cops where crimes appear most likely to occur. Both types of models optimize resources. But a number of the crime prediction models are more sophisticated, because they predict progressions that could lead to waves of crime. PredPol, for example, is based on seismic software: it looks at a crime in one area, incorporates it into historical patterns, and predicts when and where it might occur next. (One simple correlation it has found: if burglars hit your next-door neighbor's house, batten down the hatches.)
Predictive crime models like PredPol have their virtues. Unlike the crime-stoppers in the dystopian movie Minority Report (and some ominous real-life initiatives, which we'll get to shortly), the cops don't track down people before they commit crimes. The UCLA anthropology professor who founded PredPol stressed to me that the model is blind to race and ethnicity. And
unlike other programs, including the recidivism risk models we discussed, which are used for sentencing guidelines, PredPol doesn't focus on the individual. Instead, it targets geography. The key inputs are the type and location of each crime and when it occurred. That seems fair enough. And if cops spend more time in the high-risk zones, foiling burglars and car thieves, there's good reason to believe that the community benefits.
But most crimes aren't as serious as burglary and grand theft auto, and that is where serious problems emerge. When police set up their PredPol system, they have a choice. They can focus exclusively on so-called Part 1 crimes. These are the violent crimes, including homicide, arson, and assault, which are usually reported to them. But they can also broaden the focus by including Part 2 crimes, including vagrancy, aggressive panhandling, and selling and consuming small quantities of drugs. Many of these "nuisance" crimes would go unrecorded if a cop weren't there to see them.
These nuisance crimes are endemic to many impoverished neighborhoods. In some places police call them antisocial behavior, or ASB. Unfortunately, including them in the model threatens to skew the analysis. Once the nuisance data flows into a predictive model, more police are drawn into those neighborhoods, where they're more likely to arrest more people. After all, even if their objective is to stop burglaries, murders, and rape, they're bound to have slow periods. It's the nature of patrolling. And if a patrolling cop sees a couple of kids who look no older than sixteen guzzling from a bottle in a brown bag, he stops them. These types of low-level crimes populate their models with more and more dots, and the models send the cops back to the same neighborhood.
This creates a pernicious feedback loop. The policing itself spawns new data, which justifies more policing. And our prisons fill up with hundreds of thousands of people found guilty of victimless crimes. Most of them come from impoverished neighborhoods, and most are black or Hispanic. So even if a model is color blind, the result of it is anything but. In our largely segregated cities, geography is a highly effective proxy for race.
If the purpose of the models is to prevent serious crimes, you might ask why nuisance crimes are tracked at all. The answer is that the link between antisocial behavior and crime has been an article of faith since 1982, when a criminologist named Kelling teamed up with a public policy expert, Wilson, to write a seminal article in the Atlantic Monthly on so-called broken-windows policing. The idea was that low-level crimes and misdemeanors created an atmosphere of disorder in a neighborhood. This scared law-abiding citizens away. The dark and empty streets they left behind were breeding grounds for serious crime. The

antidote was for society to resist the spread of disorder. This included fixing broken windows, cleaning up graffiti-covered subway cars, and taking steps to discourage nuisance crimes.
This thinking led in the 1990s to zero-tolerance campaigns, most famously in New York City. Cops would arrest kids for jumping the subway turnstiles. They'd apprehend people caught sharing a single joint and rumble them around the city in a paddy wagon for hours before eventually booking them. Some credited these energetic campaigns for dramatic falls in violent crimes. Others disagreed. The authors of the bestselling book Freakonomics went so far as to correlate the drop in crime to the legalization of abortion in the 1970s. And plenty of other theories also surfaced, ranging from the falling rates of crack cocaine addiction to the booming 1990s economy. In any case, the zero-tolerance movement gained broad support, and the criminal justice system sent millions of mostly young minority men to prison, many of them for minor offenses.
But zero tolerance actually had very little to do with Kelling and Wilson's "broken-windows" thesis. Their case study focused on what appeared to be a successful policing initiative in Newark, New Jersey. Cops who walked the beat there, according to the program, were supposed to be highly tolerant. Their job was to adjust to the neighborhood's own standards of order and to help uphold them. Standards varied from one part of the city to another. In one neighborhood, it might mean that drunks had to keep their bottles in bags and avoid major streets but that side streets were okay. Addicts could sit on stoops but not lie down. The idea was only to make sure the standards didn't fall. The cops, in this scheme, were helping a neighborhood maintain its own order but not imposing their own.
You might think I'm straying a bit from PredPol, mathematics, and WMDs. But each policing approach, from broken windows to zero tolerance, represents a model. Just like my meal planning or the U.S. News Top College ranking, each crime-fighting model calls for certain input data, followed by a series of responses, and each is calibrated to achieve an objective. It's important to look at policing this way, because these mathematical models now dominate law enforcement. And some of them are WMDs.
That said, we can understand why police departments would choose to include nuisance data. Raised on the orthodoxy of zero tolerance, many have little more reason to doubt the link between small crimes and big ones than the correlation between smoke and fire. When police in the British city of Kent tried out PredPol, in 2013, they incorporated nuisance crime data into their model. It seemed to work. They found that the PredPol squares were ten times as efficient as random patrolling and twice as precise as analysis delivered by police intelligence. And what type of crimes did the model best predict? Nuisance crimes. This makes all the sense in the world. A drunk will pee on the same wall, day in and day out, and a junkie will stretch out on the same park bench, while a car thief or a burglar will move about, working hard to anticipate the movements of police.
Even as police chiefs stress the battle against violent crime, it would take remarkable restraint not to let loads of nuisance data flow into their predictive models. More data, it's easy to believe, is better data. While a model focusing only on violent crimes might produce a sparse constellation on the screen, the inclusion of nuisance data would create a fuller and more vivid portrait of lawlessness in the city.
And in most jurisdictions, sadly, such a crime map would track poverty. The high number of arrests in those areas would do nothing but confirm the broadly shared thesis of society¡¯s middle and upper classes: that poor people are responsible for their own shortcomings and commit most of a city¡¯s crimes.
But what if police looked for different kinds of crimes? That may sound counterintuitive, because most of us, including the police, view crime as a pyramid. At the top is homicide. It's followed by rape and assault, which are more common, and then shoplifting, petty fraud, and even parking violations, which happen all the time. Prioritizing the crimes at the top of the pyramid makes sense. Minimizing violent crime, most would agree, is and should be a central part of a police force's mission.
But how about crimes far removed from the boxes on the PredPol maps, the ones carried out by the rich? In the 2000s, the kings of finance threw themselves a lavish party. They lied, they bet billions against their own customers, they committed fraud and paid off rating agencies. Enormous crimes were committed there, and the result devastated the global economy for the best part of five years. Millions of people lost their homes, jobs, and health care.
We have every reason to believe that more such crimes are occurring in finance right now. If we've learned anything, it's that the driving goal of the finance world is to make a huge profit, the bigger the better, and that anything resembling self-regulation is worthless. Thanks largely to the industry's wealth and powerful lobbies, finance is underpoliced.
Just imagine if police enforced their zero-tolerance strategy in finance. They would arrest people for even the slightest infraction, whether it was chiseling investors on 401ks, providing misleading guidance, or committing petty frauds. Perhaps SWAT teams would descend on Greenwich, Connecticut. They'd go undercover in the taverns around Chicago's Mercantile Exchange.
Not likely, of course. The cops don't have the expertise for that kind of work. Everything about their jobs, from their training to their bullet-proof vests, is adapted to the mean streets. Clamping down on white-collar crime would require people with different tools and skills. The small and underfunded teams who handle that work, from the FBI to investigators at the Securities and Exchange Commission, have learned through the decades that bankers are virtually invulnerable. They spend heavily on our politicians, which always helps, and are also viewed as crucial to our economy. That protects them. If their banks go south, our economy could go with them. (The poor have no such argument.) So except for a couple of criminal outliers, such as Ponzi-scheme master Bernie Madoff, financiers don't get arrested. As a group, they made it through the 2008 market crash practically unscathed. What could ever burn them now?
My point is that police make choices about where they direct their attention. Today they focus almost exclusively on the poor. That's their heritage, and their mission, as they understand it. And now data scientists are stitching this status quo of the social order into models, like PredPol, that hold ever-greater sway over our lives.
The result is that while PredPol delivers a perfectly useful and even high-minded software tool, it is also a do-it-yourself WMD. In this sense, PredPol, even with the best of intentions, empowers police departments to zero in on the poor, stopping more of them, arresting a portion of those, and sending a subgroup to prison. And the police chiefs, in many cases, if not most, think that they're taking the only sensible route to combating crime. That's where it is, they say, pointing to the highlighted ghetto on the map. And now they have cutting-edge technology (powered by Big Data) reinforcing their position there, while adding precision and "science" to the process.
The result is that we criminalize poverty, believing all the while that our tools are not only scientific but fair.
One weekend in the spring of 2011, I attended a data "hackathon" in New York City. The goal of such events is to bring together hackers, nerds, mathematicians, and software geeks and to mobilize this brainpower to shine light on the digital systems that wield so much power in our lives. I was paired up with the New York Civil Liberties Union, and our job was to break out the data on one of the NYPD's major anticrime policies, so-called stop, question, and frisk. Known simply as stop and frisk to most people, the practice had drastically increased in the data-driven age of CompStat.
The police regarded stop and frisk as a filtering device for crime. The idea is simple. Police officers stop people who look suspicious to them. It could be the way they're walking or dressed, or their tattoos. The police talk to them and size them up, often while they're spread-eagled against a wall or the hood of a car. They ask for their ID, and they frisk them. Stop enough people, the thinking goes, and you'll no doubt stop loads of petty crimes, and perhaps some big ones. The policy, implemented by Mayor Michael Bloomberg's administration, had loads of public support. Over the previous decade, the number of stops had risen by 600 percent, to nearly seven hundred thousand incidents. The great majority of those stopped were innocent. For them, these encounters were highly unpleasant, even infuriating. Yet many in the public associated the program with the sharp decline of crime in the city. New York, many felt, was safer. And statistics indicated as much. Homicides, which had reached 2,245 in 1990, were down to 515 (and would drop below 400 by 2014).
Everyone knew that an outsized proportion of the people the police stopped were young, dark-skinned men. But how many did they stop? And how often did these encounters lead to arrests or stop crimes? While this information was technically public, much of it was stored in a database that was hard to access. The software didn't work on our computers or flow into Excel spreadsheets. Our job at the hackathon was to break open that program and free the data so that we could all analyze the nature and effectiveness of the stop-and-frisk program.
What we found, to no great surprise, was that an overwhelming majority of these encounters (about 85 percent) involved young African American or Latino men. In certain neighborhoods, many of them were stopped repeatedly. Only 0.1 percent, or one of one thousand stopped, was linked in any way to a violent crime. Yet this filter captured many others for lesser crimes, from drug possession to underage drinking, that might have otherwise gone undiscovered. Some of the targets, as you might expect, got angry, and a good number of those found themselves charged with resisting arrest.
The NYCLU sued the Bloomberg administration, charging that the stop-and-frisk policy was racist. It was an example of uneven policing, one that pushed more minorities into the criminal justice system and into prison. Black men, they argued, were six times more likely to be incarcerated than white men and twenty-one times more likely to be killed by police, at least according to the available data (which is famously underreported).
Stop and frisk isn't exactly a WMD, because it relies on human judgment and is not formalized into an algorithm. But it is built upon a simple and destructive calculation. If police stop one thousand people in certain neighborhoods, they'll uncover, on average, one significant suspect and lots of smaller ones. This isn't so different from the long-shot calculations used by predatory advertisers or spammers. Even when the hit ratio is minuscule, if you give yourself enough chances you'll reach your target. And that helps to explain why the program grew so dramatically under Bloomberg's watch. If stopping six times as many people led to six times the number of arrests, the inconvenience and harassment suffered by thousands upon thousands of innocent people was justified. Weren't they interested in stopping crime?
Aspects of stop and frisk were similar to WMDs, though. For example, it had a nasty feedback loop. It ensnared thousands of black and Latino men, many of them for committing the petty crimes and misdemeanors that go on in college frats, unpunished, every Saturday night. But while the great majority of university students were free to sleep off their excesses, the victims of stop and frisk were booked.



THE HISTORY OF ARTIFICIAL INTELLIGENCE
Tim Miller
School of Computing and Information Systems
Co-Director, Centre for AI & Digital Ethics
The University of Melbourne
@tmiller_unimelb


This material has been reproduced and communicated to you by or on behalf of the University of Melbourne pursuant to Part VB of the Copyright Act 1968 (the Act).
The material in this communication may be subject to copyright under the Act.
Any further copying or communication of this material by you may be the subject of copyright protection under the Act.
Do not remove this notice

LEARNING OUTCOMES
Discuss key eras of the development of artificial intelligence
Apply lessons from the past to discussions in the present.
Make new links between the way we currently build AI, and the effects it has on the people that use it

A BRIEF HISTORY OF ARTIFICIAL INTELLIGENCE

IN WHAT YEAR WAS THE FIRST PROGRAMMABLE COMPUTER BUILT?

19th century: Charles Babbage invents the Analytical Engine. Ada Lovelace recognises it is a general-purpose calculating machine.
1939-1945: British mathematicians & engineers build the Bombe and Colossus for cracking codes during World War II.
1950: Alan Turing proposes the Turing Test as a goal for artificial intelligence, rather than defining 'intelligence'.
1956: The Dartmouth Conference is held. The proposal coins the term 'artificial intelligence'.
1956-1974: The Golden Era. Foundational search algorithms and artificial neural networks are invented.
1974-1980: The first AI winter. The hype of the golden era did not eventuate. Funding dried up and excitement dwindled.
1980-1987: The Knowledge Era. Knowledge-based systems, such as Prolog and expert systems, are invented.
1987-1993: The second AI winter. The hype of the knowledge era did not eventuate. Funding dried up and excitement dwindled.
1994-now: The Revival Era. Intelligent agents, computational power, the Internet, and machine learning.

THE ERAS OF
ARTIFICIAL INTELLIGENCE

THE TURING TEST (1950)
“The new form of the problem can be described in terms of a game which we call the ‘imitation game’”
A. M. Turing. Computing Machinery and Intelligence, Mind,
LIX(236):433–460, 1950. https://doi.org/10.1093/mind/LIX.236.433

DARTMOUTH WORKSHOP (1956)
THE FOUNDERS OF AI
“We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” – Dartmouth Summer School on AI Proposal
JOHN McCARTHY
MARVIN MINSKY
CLAUDE SHANNON
RAY SOLOMONOFF
NATHANIEL ROCHESTER
ALLEN NEWELL
TRENCHARD MORE
HERBERT SIMON
ARTHUR SAMUEL
OLIVER SELFRIDGE

DARTMOUTH OUTCOMES
Perception
Knowledge Representation
The divide and conquer model of artificial intelligence

THE GOLDEN AGE OF AI
(1956-1974)

GOLDEN AGE: REASONING AS SEARCH
(Figure: a search tree expanded from a Start node; numbered nodes show the order in which states are visited.)

GOLDEN AGE: REASONING AS SEARCH Initial state: At(A), Level(low), BoxAt(C), BananasAt(B)
Goal state: Have(bananas)
// move from X to Y
_Move(X, Y)_
Preconditions: At(X), Level(low) Postconditions: not At(X), At(Y)
// climb up on the box
_ClimbUp(Location)_
Preconditions: At(Location), BoxAt(Location), Level(low) Postconditions: Level(high), not Level(low)
// climb down from the box
_ClimbDown(Location)_
Preconditions: At(Location), BoxAt(Location), Level(high) Postconditions: Level(low), not Level(high)
// move monkey and box from X to Y
_MoveBox(X, Y)_
Preconditions: At(X), BoxAt(X), Level(low) Postconditions: BoxAt(Y), not BoxAt(X), At(Y), not At(X)
// take the bananas
_TakeBananas(Location)_
Preconditions: At(Location), BananasAt(Location), Level(high)
Postconditions: Have(bananas)
Stanford Research Institute Problem Solver (STRIPS) — Fikes and Nilsson (1971)
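The STRIPS action schemas above can be driven by a plain state-space search. Below is an illustrative Python sketch (my own encoding, not part of the original STRIPS system): states are sets of ground atoms, actions delete and add atoms exactly as the pre/postconditions specify, and breadth-first search finds a shortest plan for the monkey-and-bananas problem.

```python
from collections import deque

# States are frozensets of ground atoms; the locations mirror the slide.
LOCS = ["A", "B", "C"]

def actions(state):
    """Yield (name, next_state) pairs applicable in `state`."""
    here = next(l for l in LOCS if f"At({l})" in state)
    if "Level(low)" in state:
        for y in LOCS:
            if y != here:
                # Move(X, Y)
                yield (f"Move({here},{y})",
                       state - {f"At({here})"} | {f"At({y})"})
                if f"BoxAt({here})" in state:
                    # MoveBox(X, Y)
                    yield (f"MoveBox({here},{y})",
                           state - {f"At({here})", f"BoxAt({here})"}
                                 | {f"At({y})", f"BoxAt({y})"})
        if f"BoxAt({here})" in state:
            yield (f"ClimbUp({here})",
                   state - {"Level(low)"} | {"Level(high)"})
    else:  # Level(high)
        if f"BoxAt({here})" in state:
            yield (f"ClimbDown({here})",
                   state - {"Level(high)"} | {"Level(low)"})
        if f"BananasAt({here})" in state:
            yield (f"TakeBananas({here})", state | {"Have(bananas)"})

def plan(initial, goal):
    """Breadth-first search over states; returns a shortest action sequence."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, nxt in actions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [name]))

initial = frozenset({"At(A)", "Level(low)", "BoxAt(C)", "BananasAt(B)"})
print(plan(initial, {"Have(bananas)"}))
```

Running it recovers the familiar four-step plan: walk to the box, push it under the bananas, climb up, and take them.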
Shakey the robot and A* — Hart, Nilsson, and Raphael (1968)
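A*, mentioned above in connection with Shakey, adds a heuristic estimate of the remaining cost to guide the search. A minimal illustrative sketch (a toy grid world of my own, not Shakey's actual domain), using the Manhattan distance as an admissible heuristic:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; '#' cells are obstacles."""
    def h(p):  # Manhattan distance never overestimates on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, pos, path)
    best = {start: 0}
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if (0 <= r < len(grid) and 0 <= c < len(grid[0])
                    and grid[r][c] != '#' and g + 1 < best.get((r, c), float('inf'))):
                best[(r, c)] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))

grid = ["....",
        ".##.",
        "...."]
path = astar(grid, (0, 0), (2, 3))
print(len(path) - 1)  # 5 moves
```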

GOLDEN AGE: PERCEPTRONS AND NEURAL NETWORKS
Single-layer perceptron – Rosenblatt (1958)
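Rosenblatt's learning rule fits in a few lines. The following is an illustrative sketch (function and variable names are my own), trained on the linearly separable AND function:

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Rosenblatt's rule: nudge weights toward each misclassified point."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:          # target is 0 or 1
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out             # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND is linearly separable, so the perceptron convergence theorem applies.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print([predict(x) for x, _ in AND])  # [0, 0, 0, 1]
```

The limitation flagged in the next section shows up immediately if you swap AND for XOR: no single-layer weights can fit it, because XOR is not linearly separable.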

THE FIRST AI WINTER
(1974-1980)

WHAT WENT WRONG?
Outcomes failed to live up to the hype
SCALABILITY
COMMONSENSE KNOWLEDGE
PERCEPTRON LIMITATIONS
MORAVEC'S PARADOX

WHAT WAS THE RESULT? Lack of progress meant:
FUNDING DRIED UP
GLOBAL INTEREST IN AI DIED DOWN
CRITICISM FROM PHILOSOPHERS AND COGNITIVE SCIENTISTS

THE KNOWLEDGE ERA
(1980-1987)

KNOWLEDGE ERA: KNOWLEDGE-BASED SYSTEMS
Employee Customer
Casual Fixed- Permanent term
Prolog – Colmerauer and Roussel (1972). Formal ontologies.
mother_child(trude, sally). father_child(tom, sally). father_child(tom, erica). father_child(mike, tom).
sibling(X, Y) :- parent_child(Z, X),
parent_child(Z, Y).
parent_child(X, Y) :- father_child(X, Y). parent_child(X, Y) :- mother_child(X, Y).
?- sibling(sally, erica).
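The same facts and rule can be mirrored in ordinary Python to show what the query is computing (an illustrative translation of the logic, not how a Prolog engine actually executes):

```python
# Facts from the slide's Prolog program.
mother_child = {("trude", "sally")}
father_child = {("tom", "sally"), ("tom", "erica"), ("mike", "tom")}

def parent_child(x, y):
    # parent_child(X, Y) :- father_child(X, Y). / :- mother_child(X, Y).
    return (x, y) in father_child or (x, y) in mother_child

def siblings(x, y):
    # sibling(X, Y) :- parent_child(Z, X), parent_child(Z, Y).
    people = {p for pair in father_child | mother_child for p in pair}
    return any(parent_child(z, x) and parent_child(z, y) for z in people)

print(siblings("sally", "erica"))  # True, via their shared father tom
```

Note that, exactly like the Prolog rule above, this version also reports that sally is her own sibling; ruling that out would need an explicit X \= Y check.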

KNOWLEDGE ERA:
EXPERT SYSTEMS
MYCIN expert system for diagnosis of blood diseases – Shortliffe, Buchanan, and Cohen (1970s)

THE SECOND AI WINTER
(1987-1993)

WHAT WENT WRONG?
Outcomes failed to live up to the hype
SCALABILITY
MAINTENANCE
THE QUALIFICATION PROBLEM
MORAVEC'S PARADOX

WHAT WAS THE RESULT? Lack of progress meant:
FUNDING DRIED UP (DARPA DECLARED AI WAS 'NOT THE NEXT WAVE')
GLOBAL INTEREST IN AI DIED DOWN
AI COMPANIES WENT BANKRUPT
THIS SLIDE IS NOT A COPY FROM THE FIRST AI WINTER

THE AI REVIVAL
(1994-present)

CURRENT ERA: INTELLIGENT AGENTS AND DECISION THEORY
Bayesian Networks – Pearl (1988)
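Pearl-style Bayesian networks support exact inference by enumeration: multiply the conditional tables along each full assignment, sum out the hidden variables, and normalise. A toy sketch with invented numbers (a rain/sprinkler/wet-grass network of my own, purely for illustration):

```python
from itertools import product

# P(Rain), P(Sprinkler), and P(WetGrass | Rain, Sprinkler).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(rain, sprinkler, wet):
    """Full joint probability via the network's factorisation."""
    p_w = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

def p_rain_given_wet():
    """P(Rain | WetGrass): sum out Sprinkler, then normalise."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den

print(round(p_rain_given_wet(), 3))  # 0.74
```

Observing wet grass raises the probability of rain from the 0.2 prior to about 0.74; enumeration is exponential in the number of variables, which is why practical systems use smarter algorithms.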

CURRENT ERA: COMPUTATIONAL POWER Moore’s law
1980 1985 1990
NUMBER OF TRANSISTORS (000s)

CURRENT ERA: THE INTERNET AND BIG DATA
Image source: https://www.promptcloud.com/blog/want-to-ensure-business-growth-via-big-data-augment-enterprise-data-with-web-data/

CURRENT ERA: MACHINE LEARNING
Backpropagation in deep neural networks – Rumelhart, Hinton, and Williams (1986)
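Backpropagation is the chain rule applied layer by layer. The sketch below (an illustrative toy of my own, not Rumelhart et al.'s formulation) writes out the gradients of a tiny 2-2-1 sigmoid network by hand and checks them against finite differences:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    """w = [w11, w12, w21, w22, b1, b2, v1, v2, c]; returns activations."""
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[4])
    h2 = sigmoid(w[2] * x[0] + w[3] * x[1] + w[5])
    y = sigmoid(w[6] * h1 + w[7] * h2 + w[8])
    return h1, h2, y

def backprop(w, x, target):
    """Gradient of the loss 0.5*(y - target)^2 w.r.t. every weight."""
    h1, h2, y = forward(w, x)
    dy = (y - target) * y * (1 - y)      # error signal at the output unit
    dh1 = dy * w[6] * h1 * (1 - h1)      # chain rule back to hidden unit 1
    dh2 = dy * w[7] * h2 * (1 - h2)      # chain rule back to hidden unit 2
    return [dh1 * x[0], dh1 * x[1], dh2 * x[0], dh2 * x[1],
            dh1, dh2, dy * h1, dy * h2, dy]

# Sanity check: analytic gradients must match finite differences.
w = [0.1, -0.2, 0.4, 0.3, 0.0, 0.1, 0.5, -0.5, 0.2]
x, target = (1.0, 0.0), 1.0
grad = backprop(w, x, target)
eps = 1e-6
for i in range(len(w)):
    wp, wm = list(w), list(w)
    wp[i] += eps; wm[i] -= eps
    num = (0.5 * (forward(wp, x)[2] - target) ** 2
           - 0.5 * (forward(wm, x)[2] - target) ** 2) / (2 * eps)
    assert abs(num - grad[i]) < 1e-6
print("backprop gradients match finite differences")
```

The same bookkeeping, vectorised and repeated over many layers, is what made training deep networks practical in the revival era.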

THE THIRD AI WINTER?

WE’VE BEEN HERE BEFORE
“There is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035.”
— Gray Scott (2017)
“We will have fully self-driving cars on the road by 2017” — Elon Musk (2014)
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
Amara’s Law
“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, and we will have multiplied the intelligence – the human biological machine intelligence of our civilization – a billion-fold.”
— Ray Kurzweil (1999)
“In from three to eight years we will have a machine with the general intelligence of an average human being.”
— Marvin Minsky (1970)
“Machines will be capable, within twenty years, of doing any work a man can do”
— Herbert Simon (1965)
“… the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”
– The New York Times on the Perceptron (1958)

WHAT ARE SOME OF THE RISKS?
“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” — Eliezer Yudkowsky
“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.” — Pedro Domingos
The beauty of #AI and what we can do with it is currently being overshadowed by reckless #hype, monotheistic techniques, and absurd deification. Yet, it’s not the first time in 65+ years nor it’ll be the last one. We haven’t learnt a thing. ―

HISTORY AND
REPRESENTATION

UNDER-REPRESENTATION IN AI
“Computer Science Communities: Who is Speaking, and Who is Listening to the Women? Using an Ethics of Care to Promote Diverse Voices”. , , . In ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2021

EFFECTS OF UNDER-REPRESENTATION Lack of diversity means lack of:
Design decisions, data, attitudes, etc., are all influenced by who we are as individuals, as teams, and as societies
PRIVACY
FAIRNESS
ACCESSIBILITY & INCLUSION
SAFETY
TRANSPARENCY
FUNCTIONALITY
The History of AI ⊆ The History of Culture and Society

FURTHER READING

DIVERSITY IN TEAMS
Sexual orientation
Disability
Family status
Age
Class
Education
Etc.
DIVERSITY OF INPUTS
Work with users
Get out of the building!
This is not just good for the soul: it is good for business!

HISTORY OF AI: SUMMARY
Dartmouth workshop is the “birth” of artificial intelligence
AI winters caused by hyped expectations not being met
Eras of artificial intelligence
GOLDEN ERA KNOWLEDGE ERA REVIVAL ERA
Will we have another AI winter/autumn?
HISTORY AND REPRESENTATION
Artificial intelligence (including ethics of AI) has been driven mostly by male, western culture
Important contributions from non-male, non- Western culture, but still marginalised
Culture (and therefore its history) influences design decision of software systems
DIVERSE TEAMS DIVERSE INPUTS

School of Computing and Information Systems Co-Director, Centre for AI & Digital Ethics
The University of Melbourne
@tmiller_unimelb
