
Computer Vision (7CCSMCVI / 6CCS3COV)
Recap
• Image formation
● Low-level vision
● Mid-level vision
● grouping and segmentation of image elements
● Biological
● bottom-up influences (Gestalt cues)
● top-down influences (knowledge, expectation, etc.)
● Artificial
● Thresholding, Region-based, Clustering, Fitting
● Multi-View Vision
● Stereo and Depth
● Video and Motion ←Today
● High-level vision

Today
• Optic flow
– motion fields
– measurement
» correspondence problem
» aperture problem
– applications
» depth
» time-to-collision
• Tracking
• Segmentation
– image differencing
– background subtraction

Video
Video is a series of N images, or frames, acquired at discrete time instants tk = t0 + kΔt, where Δt is a fixed time interval and k = 0, 1, …, N−1.
In static images
The intensity of a pixel can be seen as a function of its spatial coordinates (x,y):
i.e. an image is I(x,y)
In video
The intensity of a pixel can be seen as a function of its spatial coordinates (x,y) and time (t):
i.e. a video is I(x,y,t)
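To make the I(x,y,t) notation concrete, here is a minimal sketch (not from the lecture; it assumes OpenCV and NumPy are installed, and "video.mp4" is a placeholder file name) that loads a video into a 3D intensity array indexed by frame and pixel coordinates:

```python
import cv2
import numpy as np

def load_video_as_volume(path):
    """Read a video file and return it as a 3D array I[k, y, x],
    i.e. intensity as a function of spatial coordinates and time."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Convert each frame to a single-channel intensity image.
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    return np.stack(frames, axis=0)  # shape: (N, height, width)

# I[k, y, x] is I(x, y, t_k), with t_k = t_0 + k*dt.
I = load_video_as_volume("video.mp4")  # "video.mp4" is a placeholder path
print(I.shape)
```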

Video and Stereo
Similar to stereo in that we are dealing with more than one image
Stereo
» multiple cameras
» one time (images taken simultaneously)
Video
» one camera
» multiple times (images taken at different times)
[Figure: stereo geometry with two cameras OL and OR projecting a scene point to (uL,vL) and (uR,vR) at one time; video geometry with a single camera projecting the point to (u,v) at successive times t.]

Video Enables:

• inference of 3D structure (as with stereo)
• segmentation of objects from background without recovery of depth (unlike stereo)
• inference of self and object motion (unlike stereo)
– essential for some applications, e.g. robot navigation, driver assistance, surveillance

Motion Analysis
Projection of a scene point changes with:
1. object motion
2. camera motion, or “ego motion”
Both types of movement give rise to “optic flow”.

Optic flow (OF)
optic flow vector: the image motion of a scene point.
optic flow field: the collection of all the optic flow vectors.
Optic flow fields can be sparse (vectors defined only for specified features) or dense (vectors defined everywhere).
Optic flow vectors are analogous to disparity vectors in stereo vision: measuring optic flow requires finding correspondences between images.

Motion field (MF)
motion field: the true image motion of a scene point,
i.e. the actual projection of the relative motion between the camera and the 3D scene.
Optic flow provides an approximation to the motion field. But it is not always accurate…

Motion field (MF) ≠ Optic flow (OF)
Consider a smooth, lambertian, uniform sphere rotating around a diameter:
MF ≠ 0 as points on the sphere are moving
OF = 0 as there are no changes in the images
Consider a stationary, specular sphere and a moving light source:
MF = 0 as points on the sphere are not moving
OF ≠ 0 as there is a moving pattern in the images
Consider a barber’s pole: MF = horizontal
OF = vertical
Although MF and OF are not equal in all circumstances, MF cannot be observed directly, so we must estimate MF by observing OF.

The video correspondence problem
To measure optic flow, it is necessary to find corresponding points in different frames of the video.
To solve the video correspondence problem, we can use:
Direct methods
– directly recover image motion at each pixel from temporal variations of the image brightness (by applying spatio-temporal filters); a sketch of one dense, direct method follows this slide
– dense optic flow fields, but sensitive to appearance variations
– suitable when image motion is small
Feature-based methods
– extract descriptors from around interest points and find similar features in the next frame (identical to the method used for stereo correspondence)
– sparse optic flow fields
– suitable when image motion is large
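As an illustration, here is a minimal sketch (assuming OpenCV; the frame file names and parameter values are placeholders) that estimates a dense optic flow field between two consecutive frames using OpenCV's Farnebäck algorithm, one widely used brightness-based (direct) method:

```python
import cv2

# Two consecutive grayscale frames I(x,y,t-1) and I(x,y,t)
# (placeholder file names).
prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Dense optic flow: one (u, v) vector per pixel.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

u, v = flow[..., 0], flow[..., 1]  # horizontal and vertical flow components
```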

The video correspondence problem
To measure optic flow, it is necessary to find corresponding points in different frames of the video.
To use feature-based methods
Basic requirements to be able to solve the correspondence problem:
1. Most scene points visible in both images
2. Corresponding image regions appear “similar”

Video Constraints on Correspondence
To measure optic flow, it is necessary to find corresponding points in different frames of the video.
Constraints used to help find corresponding points:
Spatial coherence
Similar neighbouring flow vectors are preferred over dissimilar ones.
– The assumption is that the scene is made up of smooth surfaces, and hence, neighbouring points in the scene typically belong to the same surface, and hence, typically have similar motions and induce similar optic flow.
Small motion
Small optic flow vectors are preferred over large ones.
– The assumption is that relative velocities are slow compared to the frame rate, and hence, that the amount of motion between frames is small compared to the size of the image.

Aperture problem
Consider two consecutive frames showing a moving rectangle.
The image patch marked in the first frame could match any of the patches marked in the second frame.
Because the intensity varies across the edge but not along the edge, motion parallel to the edge can never be recovered.
The inability to determine optic flow along the direction of the brightness pattern is known as the “aperture problem”.

Aperture problem
The brain is also faced with the aperture problem, as each motion-sensitive neuron sees only a small spatial region (i.e. its receptive field).
A demonstration.
What is the direction of motion here?

Aperture problem
This is what we perceive:
These motions are also possible:
Any movement with a component perpendicular to the edge is possible.
Locally the direction of motion is ambiguous.
We may see motion perpendicular to the edge because:
• it is the average of all possibilities
• it predicts the slowest movement

Aperture problem: solutions
Local motion measurements are combined across space.
i.e. more than one local measurement is used to resolve the ambiguity and to accurately compute the direction of global motion.
Locations where the direction of motion is unambiguous are used.
i.e. corners
Note: SIFT and Harris corner detectors are commonly used for finding interest points when solving stereo correspondence. The same features are also good for solving video correspondence (i.e. calculating optic flow); a sketch of tracking such corner features follows below.
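Here is a minimal sketch of the feature-based approach just described (assuming OpenCV; frame file names and parameters are placeholders). Corner-like points are detected with cv2.goodFeaturesToTrack (a Harris-style detector) in the first frame, where motion is locally unambiguous, and tracked into the next frame with pyramidal Lucas–Kanade:

```python
import cv2
import numpy as np

prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Corner-like points have intensity variation in two directions,
# so their motion is locally unambiguous (no aperture problem).
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade: find each point's position in the next frame.
new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

# Sparse optic flow vectors at the successfully tracked corners.
good_old = pts[status.ravel() == 1].reshape(-1, 2)
good_new = new_pts[status.ravel() == 1].reshape(-1, 2)
flow_vectors = good_new - good_old
```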

Optic flow applications
Optic flow can be used in various ways.
• To estimate the layout of the environment
– depths and orientations of surfaces.
• To estimate ego motion
– the camera velocity relative to a visual frame of reference.
• To estimate object motions
– relative to the visual frame of reference, or relative to an environmental frame of reference.
• To obtain predictive information for the control of action. This information need not make layout or motion explicit.

Depth from optic flow and known ego-motion
Simple case 1:
• direction of motion is perpendicular to optical axis
• velocity Vx of camera is known
Given two images taken at times 1 and 2, and using perspective projection $x = fX/Z$ (the camera translates by $V_x t$ perpendicular to the optical axis, so $Z$ is unchanged and $X_2 = X_1 - V_x t$):

$$Z = \frac{fX_1}{x_1} = \frac{fX_2}{x_2} \quad\Rightarrow\quad X_1 x_2 = X_2 x_1 = (X_1 - V_x t)\,x_1$$
$$X_1 (x_2 - x_1) = -V_x t\, x_1 \quad\Rightarrow\quad X_1 = -\frac{V_x x_1}{\dot{x}}, \quad \text{where } \dot{x} = \frac{x_2 - x_1}{t}$$
$$Z = \frac{fX_1}{x_1} = -\frac{f V_x}{\dot{x}}$$

Hence, by measuring the velocity of an image point, we can recover its depth.
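A quick numerical check of this result, with all numbers hypothetical:

```python
# Hypothetical numbers: focal length f = 800 px, lateral camera speed Vx = 1.5 m/s.
# A point's image position moves from x1 = 120 px to x2 = 114 px over dt = 0.04 s.
f, Vx = 800.0, 1.5
x1, x2, dt = 120.0, 114.0, 0.04
x_dot = (x2 - x1) / dt          # image velocity in px/s (here -150 px/s)
Z = -f * Vx / x_dot             # depth of the scene point (= 8 m here)
print(Z)
```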

Depth from optic flow and known ego-motion
Simple case 2:
• direction of motion is along camera optical axis
• velocity Vz of camera is known
Given two images taken at times 1 and 2, and using perspective projection $x = fX/Z$ (the camera translates along the optical axis by $V_z t$, so $X$ is unchanged and $Z_1 = Z_2 + V_z t$):

$$fX = x_1 Z_1 = x_2 Z_2 \quad\Rightarrow\quad x_1 (Z_2 + V_z t) = x_2 Z_2 \quad\Rightarrow\quad x_1 V_z t = (x_2 - x_1)\, Z_2$$
$$Z_2 = \frac{V_z x_1}{\dot{x}}, \quad \text{where } \dot{x} = \frac{x_2 - x_1}{t}$$
Hence, by measuring the velocity of an image point, we can recover its depth.
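Again, a quick numerical check with hypothetical numbers:

```python
# Hypothetical numbers: the camera moves forward at Vz = 2 m/s.
# A point's image position moves outward from x1 = 100 px to x2 = 104 px
# over dt = 0.1 s.
Vz = 2.0
x1, x2, dt = 100.0, 104.0, 0.1
x_dot = (x2 - x1) / dt      # image velocity, 40 px/s
Z2 = Vz * x1 / x_dot        # depth at time 2 (= 5 m here)
print(Z2)
```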

Time-to-collision from optic flow
Simple case 2:
• direction of motion is along camera optical axis
• velocity Vz of camera is unknown
From the previous result:

$$Z_2 = \frac{V_z x_1}{\dot{x}} \quad\Rightarrow\quad \frac{Z_2}{V_z} = \frac{x_1}{\dot{x}}$$

$Z_2 / V_z$ = time-to-collision (if the camera velocity is constant), and $x_1 / \dot{x}$ can be measured purely from the image.

Time-to-collision from optic flow
$$\text{Time-to-collision} = \frac{Z_2}{V_z} = \frac{x_1}{\dot{x}} = \frac{\alpha_1}{\dot{\alpha}} = \frac{2 A_1}{\dot{A}}$$
where:
α is the angle subtended by the object, and
A is the area of the object's image.
Hence, time-to-collision can be calculated without knowing anything about the speed of the camera, the size of, or distance from, the object.
Used in nature by birds and insects for catching prey, landing on a surface, etc.
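A minimal numerical sketch of the area-based time-to-collision estimate (all measurements are hypothetical, and the object is assumed to lie near the optical axis so the formula above applies):

```python
# Hypothetical measurements from two frames dt = 0.1 s apart:
# the object's image area grows from A1 = 900 px^2 to A2 = 1000 px^2.
A1, A2, dt = 900.0, 1000.0, 0.1
A_dot = (A2 - A1) / dt       # rate of change of image area (px^2 / s)
ttc = 2.0 * A1 / A_dot       # time-to-collision in seconds (= 1.8 s here)
print(ttc)
```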

Ego-motion from optic flow
Camera translations induce characteristic patterns of optic flow (for a static scene).
Parallel Optic Flow Field (Vz = 0, e.g. turning/translating left or right)
• all optic flow vectors are parallel
• direction of camera movement is opposite to the direction of the optic flow field
• speed of camera movement is proportional to the length of the optic flow vectors
Radial Optic Flow Field (Vz ≠ 0, e.g. moving forward or backward)
• all optic flow vectors point towards/away from a vanishing point (p0)
• direction of camera movement determined by whether p0 is a FOE (focus of expansion) or a FOC (focus of contraction)
• destination of movement is the FOE (a least-squares sketch for locating p0 follows below)
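As a sketch of how p0 might be located in practice (not from the lecture; assumes NumPy): in a radial flow field every flow vector lies on a line through the FOE, so the FOE can be estimated by least squares from a set of flow measurements:

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares estimate of the focus of expansion (FOE) from a
    radial optic flow field (assumes pure camera translation along the
    optical axis, so every flow vector points away from / towards the FOE).

    points: (N, 2) array of image positions (x, y)
    flows:  (N, 2) array of optic flow vectors (u, v) at those positions
    """
    # Each flow vector defines a line through its point; the FOE should lie
    # on all of them. n_i is the normal to the flow direction at point i.
    n = np.stack([-flows[:, 1], flows[:, 0]], axis=1)
    b = np.sum(n * points, axis=1)
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe  # (x0, y0), the vanishing point p0 of the flow field

# Tiny synthetic example: flow radiating from (100, 50).
pts = np.array([[120.0, 50.0], [100.0, 80.0], [60.0, 10.0]])
flw = pts - np.array([100.0, 50.0])   # vectors point away from the FOE
print(estimate_foe(pts, flw))         # approximately [100, 50]
```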

Relative depth from optic flow
Parallel Optic Flow Field (Vz= 0)
• depth inversely proportional to magnitude of optic flow vector
• This is the same as motion parallax with fixation on infinity
Radial Optic Flow Field (Vz ≠ 0)
● depth of point p inversely proportional to magnitude of optic flow vector, and also proportional to distance from p to p0

Segmentation from optic flow
Discontinuities in optic flow field indicate different depths, and hence, different objects.
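A minimal sketch of this idea (assuming OpenCV and NumPy; frame file names and the threshold are placeholders): compute a dense flow field and mark pixels where the flow changes abruptly, which are candidate object/depth boundaries:

```python
import cv2
import numpy as np

# Two consecutive frames (placeholder file names).
prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Dense flow field: one (u, v) vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
u, v = flow[..., 0], flow[..., 1]

# Motion boundaries = large spatial gradients of the flow field.
gu = np.hypot(cv2.Sobel(u, cv2.CV_64F, 1, 0), cv2.Sobel(u, cv2.CV_64F, 0, 1))
gv = np.hypot(cv2.Sobel(v, cv2.CV_64F, 1, 0), cv2.Sobel(v, cv2.CV_64F, 0, 1))
boundaries = (gu + gv) > 2.0   # the threshold of 2.0 is an arbitrary guess

# Pixels within a region enclosed by 'boundaries' move coherently and are
# likely to belong to the same surface/object.
```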

Optic flow applications
Using optic flow you can:
• get to a destination by moving so that the destination is the FOE
• judge relative depths by relative magnitudes of optic flow vectors (points closer to the camera move more quickly across the image plane)
• measure absolute depths (with knowledge of camera velocity)
• judge camera speed by the rates of expansion/contraction
• measure time-to-collision
• judge direction of ego-motion
• judge directions and speeds of object motions
• segment objects at different depths / determine orientation of surfaces
