DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Amendment
This Office action is responsive to the amendment received 02/25/2026.
In response to the Non-Final Office Action mailed 11/26/2025, the applicant has amended claims 1, 5, 7-9, 13, and 15-17 and cancelled claim 20. In summary, claims 1-19 are pending in the current application.
Response to Arguments
Applicant's arguments filed 02/25/2026 have been fully considered but they are not persuasive.
Regarding claims 1, 9, and 17, the applicant argues that none of the cited references teaches or remotely suggests the feature "store labeling information including the first position and a first parameter indicating that the first position corresponds to the first image, the second position and a second parameter indicating that the second position corresponds to the second image, and one or more third positions and one or more third parameters respectively correspond to the one or more third images," recited in amended independent claim 1 and similarly recited in amended independent claims 9 and 17. The arguments have been fully considered, but they are not persuasive. The examiner cannot concur with the applicant for the following reasons:
Marman discloses “store labeling information including the first position and a first parameter indicating that the first position corresponds to the first image, the second position and a second parameter indicating that the second position corresponds to the second image, and one or more third positions and one or more third parameters respectively correspond to the one or more third images”. For example, in paragraph [0030], Marman teaches an object indexing module 250 is connected to a data storage system 255 for storing image data and other information; Marman further teaches the signatures are stored in data storage system 255 and act as index elements that enable retrieval of video clips of the objects. In paragraph [0031], Marman teaches searching through metadata signatures or index elements stored in data storage system 255; Marman further teaches metadata signatures and index elements are stored in data storage system 255; Marman furthermore teaches the metadata includes location information. In paragraph [0036], Marman teaches data is stored at a location remote from the cameras. In paragraph [0043], Marman teaches video analytics 120 automatically track the person of interest and generate metadata, e.g., location information, unique identifier label, corresponding to the person of interest. In paragraph [0044], Marman teaches the first set of image data representing zoomed-out images of the scene is retained in data storage system 255 to allow the user to review, e.g., play back, video and zoom in on different parts of the scene captured at different times. In paragraph [0046], Marman teaches tagging those images so that corresponding cropped close-ups of those clear images are saved as snapshots. In paragraph [0049], Marman teaches the first set of image data is modified before being stored in storage device 395. In paragraph [0055], Marman teaches the two sets of image data and the metadata, i.e., location information, may also be stored in storage device 390. In Fig. 6 and paragraph [0058], Marman teaches the second set of image data is stored in storage device 395 to enable the user to later access and play back the cropped close-up images. In Fig. 8 and paragraph [0065], Marman teaches calculating an area 910 of image 800 that is localized to and encompasses a bounding box 920 of object 820. In Fig. 9 and paragraph [0068], Marman teaches that if object 810 moves sufficiently close to objects 830 and 840, display management module 340 automatically collapses windows 970 and 980 together; Marman further teaches splitting window 980 into two separate windows when objects 830 and 840 diverge in their paths or rates of walking.
Regarding claims 5 and 11, the applicant argues that none of the cited references teaches or remotely suggests the amended claim limitations of claim 5 and claim 11. The arguments have been fully considered, but they are not persuasive. The examiner cannot concur with the applicant for the following reasons:
Marman discloses “based on identifying that a time difference between the second image and the first image is longer than a threshold interval after the first timing”. For example, in paragraph [0025], Marman teaches object tracking module 220 predicts the location and size of an object in a new frame based upon its previously estimated trajectory and velocity. In paragraph [0038], Marman teaches tracking the object as it moves through the scene. In paragraph [0043], Marman teaches tracking objects by motion and also appearance, i.e., a feature; video analytics 120 automatically track the person of interest and generate metadata, e.g., location information, unique identifier label, corresponding to the person of interest. In paragraph [0053], Marman teaches tracking and matching, frame by frame, the moving locations of the objects of interest.
Stokking discloses “determining the third positions by changing a fourth positions, which are obtained by interpolation of the first position and the second position, using at least one feature point included in the one or more third images”. For example, in paragraph [0191], Stokking teaches the transition animation is performed using an animation function a(t) which interpolates the p values over the animation duration d. In Fig. 5A, Fig. 5B, and paragraph [0198], Stokking teaches a movement trajectory 370 is determined from the transition data in various ways; Stokking further teaches the movement trajectory 370 is determined as a series of intermediate positions for the avatar 310 by linear interpolation between the coordinates of the first viewing position and the second viewing position as a function of time;
[image: media_image1.png].
Claims 2-4, 6-8, 10, 12-16, and 18-19 are not allowable for reasons similar to those discussed above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Marman (US 20120062732 A1) in view of Stokking (US 20200202597 A1).
Regarding claim 1 (Currently amended), Marman discloses an electronic device (Fig. 1; [0020]: system 100 includes a video camera 110 and various components for processing image data produced by a video camera 110; [0023]: enable detection, classification, and tracking of objects present in the scene based on analysis of first set of the image data; [0030]: a data storage system 255 stores image data and other information; [0045]: a colored bounding box 380 is generated and superimposed over the image of the person when video analytics 120 detect and track the person; as the person moves through the scene, the cropped close-up images presented in zoomed-in tracking window 355 automatically track the movement of the person), comprising:
memory ([0022]: video buffer memory; [0036]: data storage system 255 stores image data and metadata created by system 100; one or more storage devices; non-volatile memory and volatile memory); and
a processor, wherein the processor is configured to ([0022]: a specialized video processor; [0035]: a processor of computer 320; Fig. 4; [0048]: a processor of a remote server; a processor of computer 320 of user station 265):
identify, from the memory, a first position associated with an external object, among a plurality of images for a video and a first image at a first timing from the plurality of images ([0021]: imager 115 of video camera 110 captures multiple images, e.g., video frames, of the field of view and produces a first set of image data; [0023]: video analytics 120 use the first set of image data to carry out various functions such as, object location detection and identification, classification, tracking, indexing, and search; [0025]: the object tracking module 206 uses object motion between frames as a cue to tracking; [0028]: object tracking module 220 identifies and tracks an object for multiple frames, i.e., multiple images; Fig. 3; [0045]: as the person enters the scene and video analytics 120 detect, identify, and track the person, and identify first position of the person; a colored bounding box 380 is generated and superimposed over the image of the person when video analytics 120 detect and track the person;
[image: media_image2.png]
; Fig. 7; [0064]: display management module 340 identifies the X-Y coordinate information of bounding boxes 740 and 750 and calculates an area 760 of image 700 that includes both bounding boxes 740, i.e. label, and 750;
[image: media_image3.png]
; Fig. 9; [0068]:
[image: media_image4.png]
);
identify a first position related to an external object, in a first image among the plurality of images at a first timing (Fig. 3; [0045]: as the person enters the scene and video analytics 120 detect, identify, and track the person, and identify first position of the person; a colored bounding box 380 is generated and superimposed over the image of the person when video analytics 120 detect and track the person;
[image: media_image2.png]
; Fig. 7; [0064]: display management module 340 identifies the X-Y coordinate information of bounding boxes 740 and 750 and calculates an area 760 of image 700 that includes both bounding boxes 740, i.e. label, and 750;
[image: media_image3.png]
);
identify a second position related to the external object in a second image among the plurality of images at a second timing different from the first timing ([0025]: the object tracking module 206 uses object motion between frames as a cue to tracking; [0026]: track a particular object in the field of view where many objects are present; [0033]: a colored bounding box, i.e. labels, that surrounds and is superimposed over the image of the object; tracking information of the object includes the location and size of the object, the size of the bounding box surrounding the object; Fig. 3; [0045]: a colored bounding box 380 is generated and superimposed over the image of the person when video analytics 120 detect and track the person; as the person moves through the scene, the cropped-close up images presented in zoomed-in tracking window 355 automatically track the movement of the person and identify second positions of the person in the movement;
[image: media_image2.png]
; Fig. 9; [0068]: if object 810 moves sufficiently close to objects 830 and 840, display management module 340 automatically collapses windows 970 and 980 together; when objects 830 and 840 diverge in their paths or rates of walking, split window 980 into two separate windows;
[image: media_image4.png]
);
obtain, based on the first position and the second position, one or more third positions related to the external object (Fig. 3; [0045]: as the person moves through the scene, the cropped-close up images automatically track the movement of the person; dashed outline box 370 moves relative to scene viewing window 365 in unison with movement of the person; Fig. 7; [0064]: display management module 340 identifies the X-Y coordinate information of bounding boxes 740 and 750 and calculates an area 760 of image 700 that includes both bounding boxes 740 and 750; Fig. 8; [0065]: calculates an area 910 of image 800 that is localized to and encompasses a bounding box 920 of object 820; Fig. 9; [0068]: if object 810 moves sufficiently close to objects 830 and 840, display management module 340 automatically collapses windows 970 and 980 together; split window 980 into two separate windows when objects 830 and 840 diverge in their paths or rates of walking); and
store labeling information including the first position and a first parameter indicating that the first position corresponds to the first image, the second position and a second parameter indicating that the second position corresponds to the second image, and one or more third positions and one or more third parameters respectively correspond to the one or more third images ([0030]: an object indexing module 250 is connected to a data storage system 255 for storing image data and other information; the signatures are stored in data storage system 255 and act as index elements that enable retrieval of video clips of the objects; [0031]: search through metadata signatures or index elements stored in data storage system 255; [0036]: data is stored at a location remote from the cameras; [0043]: Video analytics 120 automatically track the person of interest and generate metadata, e.g., location information, unique identifier label, corresponding to the person of interest; [0044]: the first set of image data representing zoomed-out images of the scene is retained in data storage system 255 to allow the user to review, e.g., play back, video and zoom in on different parts of the scene captured at different times; [0046]: tag those images so that corresponding cropped close-ups of those clear images are saved as snapshots; [0049]: the first set of image data is modified before being stored in storage device 395; [0055]: the two sets of image data and the metadata may also be stored in storage device 390; Fig. 6; [0058]: the second set of image data is stored in storage device 395 to enable the user to later access and play back the cropped close-up images; Fig. 7; [0064]: display management module 340 identifies the X-Y coordinate information of bounding boxes 740 and 750 and calculates an area 760 of image 700 that includes both bounding boxes 740 and 750; Fig. 8; [0065]: calculates an area 910 of image 800 that is localized to and encompasses a bounding box 920 of object 820; Fig. 9; [0068]: if object 810 moves sufficiently close to objects 830 and 840, display management module 340 automatically collapses windows 970 and 980 together; split window 980 into two separate windows when objects 830 and 840 diverge in their paths or rates of walking).
Marman fails to explicitly disclose:
in the one or more third images among the plurality of images in a time section between the first timing and the second timing.
In the same field of endeavor, Stokking teaches:
obtain, based on the first position and the second position, one or more third positions related to the external object in the one or more third images among the plurality of images in a time section between the first timing and the second timing ([0070]: intermediate positions are determined by the first processor system by time-based interpolation or tracing along the predetermined motion trajectory; Fig. 5A; Fig. 5B; [0198]: a movement trajectory 370 is determined from the transition data in various ways; the movement trajectory 370 is determined as a series of intermediate positions for the avatar 310 by linear interpolation between the coordinates of the first viewing position and the second viewing position as a function of time;
[image: media_image1.png]
).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Marman to include obtaining, based on the first position and the second position, one or more third positions related to the external object in the one or more third images among the plurality of images in a time section between the first timing and the second timing, as taught by Stokking. The motivation for doing so would have been to maintain or improve the second user's sense of immersion and/or to avoid confusion of the second user; to determine intermediate viewing positions by the first processor system by time-based interpolation or tracing along the predetermined motion trajectory; and to determine a series of intermediate positions for the avatar by linear interpolation between the coordinates, as taught by Stokking in Fig. 5A and paragraphs [0055], [0070], and [0198].
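To illustrate the interpolation concept relied upon in the combination above, the following is a minimal Python sketch (not code from Marman or Stokking; the frame indices, coordinate values, function name, and labeling-information layout are hypothetical) of obtaining one or more third positions in a time section between a first timing and a second timing by linear interpolation, and of storing labeling information that associates each position with the image to which it corresponds.

# Illustrative sketch only; not code taken from Marman or Stokking.
# Frame indices, coordinates, and the labeling-information layout are hypothetical.

def interpolate_positions(first_pos, second_pos, first_frame, second_frame):
    """Linearly interpolate (x, y) positions for the frames strictly between
    first_frame and second_frame, as a function of time (frame index)."""
    length = second_frame - first_frame  # length of the time section
    third_positions = {}
    for frame in range(first_frame + 1, second_frame):
        t = (frame - first_frame) / length  # 0 < t < 1 within the time section
        x = first_pos[0] + t * (second_pos[0] - first_pos[0])
        y = first_pos[1] + t * (second_pos[1] - first_pos[1])
        third_positions[frame] = (x, y)
    return third_positions

# First and second positions of the external object at the first and second timings.
first_frame, first_pos = 10, (100.0, 200.0)
second_frame, second_pos = 14, (140.0, 180.0)
third_positions = interpolate_positions(first_pos, second_pos, first_frame, second_frame)

# Labeling information: each stored position is paired with a parameter (here, the
# frame index) indicating which image the position corresponds to.
labeling_information = {first_frame: first_pos, second_frame: second_pos, **third_positions}
print(labeling_information)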
Regarding claim 2 (Original), Marman in view of Stokking discloses the electronic device of claim 1, wherein the processor is configured to:
obtain the one or more third positions by interpolating, using a length between the first timing and the second timing, a first coordinate indicating the first position within the first image at the first timing, and a second coordinate indicating the second position within the second image at the second timing (Stokking; [0070]: determine intermediate viewing positions by the first processor system by time-based interpolation or tracing along the predetermined motion trajectory; Fig. 5A; Fig. 5B; [0198]: a movement trajectory 370 is determined from the transition data in various ways; the movement trajectory 370 is determined as a series of intermediate positions for the avatar 310 by linear interpolation between the coordinates of the first viewing position and the second viewing position as a function of time;
[image: media_image1.png]
).
The same motivation as in claim 1 applies here.
Regarding claim 3 (Original), Marman in view of Stokking discloses the electronic device of claim 2, wherein the processor is configured to:
obtain the one or more third positions by interpolating the first coordinate and the second coordinate based on timings within the time section of the one or more images (Stokking; [0191]: the transition animation is performed using an animation function a(t) which interpolates the p values over the animation duration d; Fig. 5A; Fig. 5B; [0198]: a movement trajectory 370 is determined from the transition data in various ways; the movement trajectory 370 is determined as a series of intermediate positions for the avatar 310 by linear interpolation between the coordinates of the first viewing position and the second viewing position as a function of time;
[image: media_image1.png]
).
The same motivation as in claim 1 applies here.
Regarding claim 4 (Original), Marman in view of Stokking discloses the electronic device of claim 2, wherein the processor is configured to:
identify, by comparing one or more feature points included in a portion of the first image including the first position corresponding to the external object, and one or more feature points included in a second image corresponding to the external object, the second position within the second image (Marman; [0027]: match classifier 225 determines which features, distance measures, and discriminant functions enable the most accurate and quickest match classification; using match classifier 225 for tracking enables accurate tracking; [0033]: a confidence level of an object match between frames of video; [0038]: track the object as it moves through the scene; [0053]: track and match, frame by frame, the moving locations of the objects of interest).
Regarding claim 5 (Currently amended), Marman in view of Stokking discloses the electronic device of claim 1, wherein the processor is configured to:
based on identifying that a time difference between the second image and the first image is longer than a threshold interval after the first timing (Marman; [0025]: object tracking module 220 predicts the location and size of an object in a new frame based upon its previously estimated trajectory and velocity; [0038]: track the object as it moves through the scene; [0043]: track objects by motion and also appearance, i.e., a feature; video analytics 120 automatically track the person of interest and generate metadata, e.g., location information, unique identifier label, corresponding to the person of interest; [0053]: track and match, frame by frame, the moving locations of the objects of interest), determining the third positions by changing a fourth positions, which are obtained by interpolation of the first position and the second position, using at least one feature point included in the one or more third images (Stokking; [0191]: the transition animation is performed using an animation function a(t) which interpolates the p values over the animation duration d; Fig. 5A; Fig. 5B; [0198]: a movement trajectory 370 is determined from the transition data in various ways; the movement trajectory 370 is determined as a series of intermediate positions for the avatar 310 by linear interpolation between the coordinates of the first viewing position and the second viewing position as a function of time;
[image: media_image1.png]
).
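As background for the claim 5 mapping above, the following minimal Python sketch (hypothetical names and values; not code from the cited references, and the averaging step is only one possible way of "changing" a position using a feature point) illustrates the recited idea of adjusting interpolated (fourth) positions using a feature point from each intermediate image when the time difference between the first and second images exceeds a threshold interval.

# Hypothetical sketch of the concept recited in claim 5; not code from the cited references.

THRESHOLD_FRAMES = 3  # assumed threshold interval, in frames

def refine_positions(fourth_positions, feature_points, first_frame, second_frame):
    """If the time difference exceeds the threshold, change each interpolated
    (fourth) position using a feature point detected in the corresponding
    intermediate image; otherwise keep the interpolated positions as-is."""
    if second_frame - first_frame <= THRESHOLD_FRAMES:
        return dict(fourth_positions)
    third_positions = {}
    for frame, (x, y) in fourth_positions.items():
        fx, fy = feature_points.get(frame, (x, y))  # fall back to the interpolation
        # One possible adjustment: average the interpolated position with the feature point.
        third_positions[frame] = ((x + fx) / 2.0, (y + fy) / 2.0)
    return third_positions

fourth_positions = {11: (110.0, 195.0), 12: (120.0, 190.0), 13: (130.0, 185.0)}
feature_points = {11: (112.0, 197.0), 12: (118.0, 188.0), 13: (131.0, 186.0)}
print(refine_positions(fourth_positions, feature_points, first_frame=10, second_frame=14))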
Regarding claim 6 (Original), Marman in view of Stokking discloses the electronic device of claim 1, wherein the processor is configured to:
identify the first position and the second position of the external object captured in the time section by inputting the first image and the second image to a model to recognize the external object (Marman; [0025]: object tracking module 220 predicts the location and size of an object in a new frame based upon its previously estimated trajectory and velocity; [0028]: object tracking module 220 tracks an object for multiple frames; [0034]: the metadata include a unique identifier label for each object of interest tracked by object tracking module 220; [0038]: enable display management module 340 to generate a video display window that zooms in on and tracks the object as it moves through the scene captured by camera 110; [0043]: video analytics 120 recognizes object types, e.g., people, vehicles, watercraft, and track objects by motion and also appearance.).
Regarding claim 7 (Currently amended), Marman in view of Stokking discloses the electronic device of claim 1, further comprising a display,
wherein the processor is configured to:
display a screen for reproducing the video in the display (Marman; [0034]: each object is tracked and viewed separately on a display 280; [0044]: play back video and zoom in on different parts of the scene captured at different times; Fig. 6; [0058]: the second set of image data is stored in storage device 395 to enable the user to later access and play back the cropped close-up images); and
in a state that one image among the plurality of images is displayed within the screen based on an input indicating to reproduce the video, display a visual object indicating a position of the external object superimposed on the image displayed in the display based on the labeling information (Marman; [0044]: the first set of image data representing zoomed-out images of the scene is retained in data storage system 255 to allow the user to review, e.g., play back, video and zoom in on different parts of the scene captured at different times; Fig. 3; [0045]: a colored bounding box 380 is generated and superimposed over the image of the person when video analytics 120 detect and track the person; [0101]: identify video segments having the metadata corresponding to the activities selected by the user; display management module 340 receives the pertinent video segments from data storage system 255 and sorts them by time; [0103]: display management module 340 superimposes images 1705, 1710, 1715, 1720, 1725, 1730, and 1735 over a background image of the scene).
Regarding claim 8 (Currently amended), Marman in view of Stokking discloses the electronic device of claim 7, wherein the processor is configured to:
in the state of displaying one image among the one or more third images in the display, identify an input indicating to move the visual object (Marman; [0044]: the user selects an image of the person of interest using input device 330; Fig. 3; [0045]: dashed outline box 370 moves relative to scene viewing window 365 in unison with movement of the person;
[image: media_image2.png]
); and
based on the input, adjust, based on a position of the visual object moved by the input, a position of the external object corresponding to another image different from an image displayed in the screen among the one or more third screens (Marman; [0025]: object tracking module 220 predicts the location and size of an object in a new frame based upon its previously estimated trajectory and velocity; [0038]: track the object as it moves through the scene; [0043]: track objects by motion and also appearance, i.e. a feature; Video analytics 120 automatically track the person of interest and generate metadata, e.g., location information, unique identifier label, corresponding to the person of interest; Fig. 3; [0045]: a colored bounding box 380 may be generated and superimposed over the image of the person when video analytics 120 detect and track the person; [0053]: track and match, frame by frame, the moving locations of the objects of interest).
Regarding claim 9 (Currently amended), Marman discloses a method of an electronic device (Fig. 1; [0020]: system 100 includes a video camera 110 and various components for processing image data produced by a video camera 110; [0023]: enable detection, classification, and tracking of objects present in the scene based on analysis of first set of the image data; [0030]: a data storage system 255 stores image data and other information; [0045]: a colored bounding box 380 is generated and superimposed over the image of the person when video analytics 120 detect and track the person; as the person moves through the scene, the cropped close-up images presented in zoomed-in tracking window 355 automatically track the movement of the person), comprising:
The remaining claim limitations are similar to the claim limitations recited in claim 1. Therefore, the same rationale used to reject claim 1 is also used to reject claim 9.
Regarding claim 10 (Original), Marman in view of Stokking discloses the method of claim 9, wherein the obtaining comprises:
The remaining claim limitations are similar to the claim limitations recited in claim 2. Therefore, the same rationale used to reject claim 2 is also used to reject claim 10.
Regarding claim 11 (Original), Marman in view of Stokking discloses the method of claim 10, wherein the obtaining comprises:
The remaining claim limitations are similar to the claim limitations recited in claim 3. Therefore, the same rationale used to reject claim 3 is also used to reject claim 11.
Regarding claim 12 (Original), Marman in view of Stokking discloses the method of claim 10, wherein the identifying the second position (same as rejected in claim 9) comprises:
The remaining claim limitations are similar to the claim limitations recited in claim 4. Therefore, the same rationale used to reject claim 4 is also used to reject claim 12.
Regarding claim 13 (Currently amended), Marman in view of Stokking discloses the method of claim 9, wherein the obtaining comprises:
The remaining claim limitations are similar to the claim limitations recited in claim 5. Therefore, the same rationale used to reject claim 5 is also used to reject claim 13.
Regarding claim 14 (Original), Marman in view of Stokking discloses the method of claim 9, wherein the identifying the second position (same as rejected in claim 9) comprises:
The remaining claim limitations are similar to the claim limitations recited in claim 6. Therefore, the same rationale used to reject claim 6 is also used to reject claim 14.
Regarding claim 15 (Currently amended), Marman in view of Stokking discloses the method of claim 9, further comprising:
The remaining claim limitations are similar to the claim limitations recited in claim 7. Therefore, the same rationale used to reject claim 7 is also used to reject claim 15.
Regarding claim 16 (Currently amended), Marman in view of Stokking discloses the method of claim 15, further comprising:
The remaining claim limitations are similar to the claim limitations recited in claim 8. Therefore, the same rationale used to reject claim 8 is also used to reject claim 16.
Regarding claim 17 (Currently amended), Marman discloses an electronic device (Fig. 1; [0020]: system 100 includes a video camera 110 and various components for processing image data produced by a video camera 110; [0023]: enable detection, classification, and tracking of objects present in the scene based on analysis of first set of the image data; [0030]: a data storage system 255 stores image data and other information; [0045]: a colored bounding box 380 is generated and superimposed over the image of the person when video analytics 120 detect and track the person; as the person moves through the scene, the cropped close-up images presented in zoomed-in tracking window 355 automatically track the movement of the person), comprising:
a display (Fig. 1; [0035]: display 280);
memory ([0022]: video buffer memory; [0036]: data storage system 255 stores image data and metadata created by system 100; one or more storage devices; non-volatile memory and volatile memory); and
a processor, wherein the processor is configured to ([0022]: a specialized video processor; [0035]: a processor of computer 320; Fig. 4; [0048]: a processor of a remote server; a processor of computer 320 of user station 265):
identify, in a state of displaying a first image of a video stored in the memory in the display, a first input indicating to select a first position associated with an external object within the first image ([0021]: imager 115 of video camera 110 captures multiple images, e.g., video frames, of the field of view and produces a first set of image data; [0023]: video analytics 120 use the first set of image data to carry out various functions such as, object location detection and identification, classification, tracking, indexing, and search; [0025]: the object tracking module 206 uses object motion between frames as a cue to tracking; [0028]: object tracking module 220 identifies and tracks an object for multiple frames, i.e., multiple images; [0044]: user station 265 allows the user to manually intervene, e.g., by selecting via input device 330 control icons presented on display 280, to engage or disengage automatic tracking; user selection; Fig. 3; [0045]: identify, and track the person, and identify first position of the person; a colored bounding box 380 is generated and superimposed over the image of the person when video analytics 120 detect and track the person; Fig. 7; [0064]: display management module 340 identifies the X-Y coordinate information of bounding boxes 740 and 750 and calculates an area 760 of image 700 that includes both bounding boxes 740, i.e. label, and 750;
[image: media_image3.png]
; Fig. 8; [0065]: the user selects object 820; Fig. 9; [0068]);
identify, by performing a first type of computation for recognizing the external object based on the first input, a second position associated with the external object within the second image, among a plurality of images for the video, after a time section beginning from a timing at the first image ([0025]: the object tracking module 206 uses object motion between frames as a cue to tracking; [0026]: track a particular object in the field of view where many objects are present; [0033]: a colored bounding box, i.e. labels, that surrounds and is superimposed over the image of the object; tracking information of the object includes the location and size of the object, the size of the bounding box surrounding the object; [0044]: user station 265 allows the user to manually intervene, e.g., by selecting via input device 330 control icons presented on display 280, to engage or disengage automatic tracking; user selection; Fig. 8; [0065]: the user selects object 820; Fig. 9; [0068]: if object 810 moves sufficiently close to objects 830 and 840, display management module 340 may automatically collapse windows 970 and 980 together; split window 980 into two separate windows when objects 830 and 840 diverge in their paths or rates of walking;
[image: media_image4.png]
);
obtain, third positions associated with the external object within one or more third images included in the time section (Fig. 7; [0064]: display management module 340 identifies the X-Y coordinate information of bounding boxes 740 and 750 and calculates an area 760 of image 700 that includes both bounding boxes 740 and 750; Fig. 8; [0065]: calculates an area 910 of image 800 that is localized to and encompasses a bounding box 920 of object 820; Fig. 9; [0068]: if object 810 moves sufficiently close to objects 830 and 840, display management module 340 automatically collapses windows 970 and 980 together; split window 980 into two separate windows when objects 830 and 840 diverge in their paths or rates of walking);
store labeling information including the first position and a first parameter indicating that the first position corresponds to the first image, the second position and a second parameter indicating that the second position corresponds to the second image, and third positions and third parameters respectively indicating that a respective third position corresponds to a respective third image ([0030]: an object indexing module 250 is connected to a data storage system 255 for storing image data and other information; the signatures are stored in data storage system 255 and act as index elements that enable retrieval of video clips of the objects; [0031]: search through metadata signatures or index elements stored in data storage system 255; [0036]: data is stored at a location remote from the cameras; [0043]: Video analytics 120 automatically track the person of interest and generate metadata, e.g., location information, unique identifier label, corresponding to the person of interest; [0044]: the first set of image data representing zoomed-out images of the scene is retained in data storage system 255 to allow the user to review, e.g., play back, video and zoom in on different parts of the scene captured at different times; [0046]: tag those images so that corresponding cropped close-ups of those clear images are saved as snapshots; [0049]: the first set of image data is modified before being stored in storage device 395; [0055]: the two sets of image data and the metadata may also be stored in storage device 390; Fig. 6; [0058]: the second set of image data is stored in storage device 395 to enable the user to later access and play back the cropped close-up images; Fig. 7; [0064]: display management module 340 identifies the X-Y coordinate information of bounding boxes 740 and 750 and calculates an area 760 of image 700 that includes both bounding boxes 740 and 750; Fig. 8; [0065]: calculates an area 910 of image 800 that is localized to and encompasses a bounding box 920 of object 820; Fig. 9; [0068]: if object 810 moves sufficiently close to objects 830 and 840, display management module 340 automatically collapses windows 970 and 980 together; split window 980 into two separate windows when objects 830 and 840 diverge in their paths or rates of walking); and
display, in response to a second input indicating to reproduce at least portion of the video included in the time section, at least one of the first image, the one or more third images and the second image in the display ([0044]: the first set of image data representing zoomed-out images of the scene is retained in data storage system 255 to allow the user to review, e.g., play back, video and zoom in on different parts of the scene captured at different times; [0045]: a colored bounding box 380 is generated and superimposed over the image of the person when video analytics 120 detect and track the person; as the person moves through the scene, the cropped-close up images presented in zoomed-in tracking window 355 automatically track the movement of the person; Fig. 6; [0058]: the second set of image data is stored in storage device 395 to enable the user to later access and play back the cropped close-up images), and display a visual object that is superimposed on an image displayed in the display and corresponds to one of the first position, the third positions and the second position (Marman; [0044]: the first set of image data representing zoomed-out images of the scene is retained in data storage system 255 to allow the user to review, e.g., play back, video and zoom in on different parts of the scene captured at different times; Fig. 3; [0045]: a colored bounding box 380 is generated and superimposed over the image of the person when video analytics 120 detect and track the person; [0101]: identify video segments having the metadata corresponding to the activities selected by the user; display management module 340 receives the pertinent video segments from data storage system 255 and sorts them by time; [0103]: display management module 340 superimposes images 1705, 1710, 1715, 1720, 1725, 1730, and 1735 over a background image of the scene).
Marman fails to explicitly disclose:
by performing a second type of computation for interpolating the first position and the second position.
In the same field of endeavor, Stokking teaches:
by performing a second type of computation for interpolating the first position and the second position (Fig. 5A; Fig. 5B; [0198]: a movement trajectory 370 is determined from the transition data in various ways; the movement trajectory 370 is determined as a series of intermediate positions for the avatar 310 by linear interpolation between the coordinates of the first viewing position and the second viewing position as a function of time;
[image: media_image1.png]
).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Marman to include performing a second type of computation for interpolating the first position and the second position, as taught by Stokking. The motivation for doing so would have been to maintain or improve the second user's sense of immersion and/or to avoid confusion of the second user; to determine intermediate viewing positions by the first processor system by time-based interpolation or tracing along the predetermined motion trajectory; and to determine a series of intermediate positions for the avatar by linear interpolation between the coordinates, as taught by Stokking in Fig. 5A and paragraphs [0055], [0070], and [0198].
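For reference, the following minimal Python sketch (hypothetical; the detector below is a placeholder rather than an API of either cited reference) illustrates the two kinds of computation recited in claim 17: a first type of computation that recognizes the external object in the first and second images, and a second type of computation that interpolates the recognized positions for the images in the time section between them.

# Hypothetical sketch; the detector below is a placeholder, not an API of any cited reference.

def detect_object(image):
    """First type of computation: recognize the external object and return its
    (x, y) position in the image (placeholder implementation)."""
    return image["object_position"]

def label_video(images, first_idx, second_idx):
    """Run recognition on the first and second images, then fill the time section
    between them with interpolated positions (second type of computation)."""
    first_pos = detect_object(images[first_idx])
    second_pos = detect_object(images[second_idx])
    labeling = {first_idx: first_pos, second_idx: second_pos}
    length = second_idx - first_idx
    for idx in range(first_idx + 1, second_idx):
        t = (idx - first_idx) / length
        labeling[idx] = tuple(a + t * (b - a) for a, b in zip(first_pos, second_pos))
    return labeling

images = {i: {"object_position": (100.0 + 5.0 * i, 200.0 - 2.0 * i)} for i in range(5)}
print(label_video(images, first_idx=0, second_idx=4))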
Regarding claim 18 (Original), Marman in view of Stokking discloses the electronic device of claim 17, wherein the processor is configured to:
repeatedly perform the first type of computation to recognize the external object based on one or more feature points, at each time section after the timing of the first image (Marman; [0027]: match classifier 225 determines which features, distance measures, and discriminant functions enable the most accurate and quickest match classification; using match classifier 225 for tracking enables accurate tracking; [0033]: a confidence level of an object match between frames of video; [0038]: track the object as it moves through the scene; [0053]: track and match, frame by frame, the moving locations of the objects of interest; [0044]: user station 265 allows the user to manually intervene, e.g., by selecting via input device 330 control icons presented on display 280, to engage or disengage automatic tracking; user selection; Fig. 8; [0065]: the user selects objects).
Regarding claim 19 (Original), Marman in view of Stokking discloses the electronic device of claim 17, wherein the processor is configured to:
perform the second type of computation to obtain the third positions, based on the first position, the second position, and timings of the one or more third images in the time section (Stokking; [0070]: determine intermediate viewing positions by the first processor system by time-based interpolation or tracing along the predetermined motion trajectory; Fig. 5A; [0198]: a movement trajectory 370 is determined from the transition data in various ways; the movement trajectory 370 is determined as a series of intermediate positions for the avatar 310 by linear interpolation between the coordinates of the first viewing position and the second viewing position as a function of time;
[image: media_image1.png]
; Fig. 5B; [0199]).
The same motivation as in claim 1 applies here.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hai Tao Sun whose telephone number is (571)272-5630. The examiner can normally be reached 9:00AM-6:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAI TAO SUN/Primary Examiner, Art Unit 2616