DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 6, 8, 10-13, 16, 18, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mirza et al. (US 20210125346 A1) in view of Foote et al. (US 20130251205 A1).
Regarding claim 1. Mirza discloses A system (abstract, A system includes sensors and a tracking subsystem) comprising:
a plurality of imaging devices comprising a first imaging device located at a first location and a second imaging device located at a second location ([0062] For larger physical spaces (e.g., convenience stores and grocery stores), additional sensors can be installed throughout the space to track the position of people and/or objects as they move about the space. For example, additional cameras can be added to track positions in the larger space; figure 1, [0072] the tracking system 100 comprises one or more sensors 108; [0067] a sensor 108 may capture one or more images of a person as they enter the space 102), wherein the first imaging device has a first field of view and the second imaging device has a second field of view (figure 1, units 108; [0075] Each frame 302 is a snapshot of the people and/or objects within the field of view of a particular sensor 108 at a particular moment in time);
a plurality of markers, wherein each of the plurality of markers corresponds to a known geospatial coordinate ([0006] the tracking system determines coefficients for a homography based on the physical location of markers in a global plane for the space and the pixel locations of the markers in an image from a sensor; figure 1, [0083] the server 106 determines coefficients for a homography 118 based on the physical location of markers in the global plane 104 and the pixel locations of the markers in an image from a sensor 108); and
a controller operably coupled to the plurality of imaging devices (figure 1, [0072] the tracking system 100 comprises one or more servers 106), wherein the controller is configured to:
receive a first image from the first imaging device, wherein the first image captures a first marker of the plurality of markers and an object (figure 4, [0075] The tracking system 100 uses pixel locations 402 to describe the location of an object with respect to pixels in a frame 302 from a sensor 108, the tracking system 100 can identify the location of different marker 304 within the frame 302 using their respective pixel locations 402);
retrieve a first geospatial coordinate of the first marker of the plurality of markers (figure 1, [0083] the server 106 determines coefficients for a homography 118 based on the physical location of markers in the global plane 104 and the pixel locations of the markers in an image from a sensor 108; [0092] the tracking system 100 receives (x,y) coordinates 306 for markers 304 in the space 102. Referring to FIG. 3 as an example, each marker 304 is an object that identifies a known physical location within the space 102; [0094]);
determine a first transform for relating a position of the first marker of the plurality of markers captured in the first image and its known geospatial coordinate (figure 1, [0083] the server 106 determines coefficients for a homography 118 based on the physical location of markers in the global plane 104 and the pixel locations of the markers in an image from a sensor 108);
estimate a first object location of the object based, at least in part, on the first transform and the first geospatial coordinate ([0076] The tracking system 100 is configured to map pixel locations 402 within each sensor 108 to physical locations in the space 102 using homographies 118. A homography 118 is configured to translate between pixel locations 402 in a frame 302 captured by a sensor 108 and (x,y) coordinates in the global plane 104 (i.e. physical locations in the space 102). The tracking system 100 uses homographies 118 to correlate between a pixel location 402 in a particular sensor 108 with a physical location in the space 102. In other words, the tracking system 100 uses homographies 118 to determine where a person is physically located in the space 102 based on their pixel location 402 within a frame 302 from a sensor 108);
receive a second image from the second imaging device, wherein the second image captures a second marker of the plurality of markers and the object ([0076] the tracking system 100 uses multiple sensors 108 to monitor the entire space 102; figure 4, [0075] The tracking system 100 uses pixel locations 402 to describe the location of an object with respect to pixels in a frame 302 from a sensor 108, the tracking system 100 can identify the location of different marker 304 within the frame 302 using their respective pixel locations 402);
retrieve a second geospatial coordinate of the second marker of the plurality of markers (figure 1, [0083] the server 106 determines coefficients for a homography 118 based on the physical location of markers in the global plane 104 and the pixel locations of the markers in an image from a sensor 108; [0092] the tracking system 100 receives (x,y) coordinates 306 for markers 304 in the space 102. Referring to FIG. 3 as an example, each marker 304 is an object that identifies a known physical location within the space 102; [0094]);
determine a second transform for relating a position of the second marker of the plurality of markers captured in the second image and its known geospatial coordinate ([0076] each sensor 108 is uniquely associated with a different homography 118 based on the sensor's 108 physical location within the space 102; figure 1, [0083] the server 106 determines coefficients for a homography 118 based on the physical location of markers in the global plane 104 and the pixel locations of the markers in an image from a sensor 108);
estimate a second object location of the object based, at least in part, on the second transform and the second geospatial coordinate ([0076] the tracking system 100 uses multiple sensors 108 to monitor the entire space 102; [0076] The tracking system 100 is configured to map pixel locations 402 within each sensor 108 to physical locations in the space 102 using homographies 118. A homography 118 is configured to translate between pixel locations 402 in a frame 302 captured by a sensor 108 and (x,y) coordinates in the global plane 104 (i.e. physical locations in the space 102). The tracking system 100 uses homographies 118 to correlate between a pixel location 402 in a particular sensor 108 with a physical location in the space 102. In other words, the tracking system 100 uses homographies 118 to determine where a person is physically located in the space 102 based on their pixel location 402 within a frame 302 from a sensor 108);
determine a first time at which the object is detected at the first object location (figure 24B, [0275] at time t.sub.1, Since the object 2402 is also within the field-of-view 2404b of the second sensor 108b at t.sub.1 (see FIG. 24A), the tracking system also detects a contour 2414 in image 2408b and determines corresponding pixel coordinates 2416a (i.e., associated with bounding box 2416b) for the object 2402. Pixel position 2416c is determined based on the coordinates 2416a; [0278] The particle filter determines several estimated subsequent positions 2506 for the object. The estimated subsequent positions 2506 are illustrated as the dots or “particles” in FIG. 25A and are generally determined based on a history of previous positions of the object; [0286]; [0292] the global particle filter tracker 2446 may generate probability-weighted estimates of subsequent global positions at subsequent times);
determine a second time at which the object is detected at the second object location (figure 24B, [0282] at time t.sub.3, the object 2402 is within the field-of-view 2404b of sensor 108b and the field-of-view 2404c of sensor 108c, a contour 2432 and corresponding pixel coordinates 2434a, pixel region 2434b, and pixel position 2434c are detected in frame 2426c from sensor 108c; [0278] The particle filter determines several estimated subsequent positions 2506 for the object. The estimated subsequent positions 2506 are illustrated as the dots or “particles” in FIG. 25A and are generally determined based on a history of previous positions of the object; [0286]; [0292] the global particle filter tracker 2446 may generate probability-weighted estimates of subsequent global positions at subsequent times);
estimate an object velocity based on changes in the first object location over time while the object is within the first field of view (figure 24B, [0275] at time t.sub.1, Since the object 2402 is also within the field-of-view 2404b of the second sensor 108b at t.sub.1 (see FIG. 24A), the tracking system also detects a contour 2414 in image 2408b and determines corresponding pixel coordinates 2416a (i.e., associated with bounding box 2416b) for the object 2402. Pixel position 2416c is determined based on the coordinates 2416a; [0277] at time t.sub.2, the object 2402 is within fields-of-view 2404a and 2404b corresponding to sensors 108a,b, a contour 2422 is detected in image 2418b and corresponding pixel coordinates 2424a, which are illustrated by bounding box 2424b, are determined. Pixel position 2424c is determined based on the coordinates 2424a; [0278] The particle filter determines several estimated subsequent positions 2506 for the object. The estimated subsequent positions 2506 are illustrated as the dots or “particles” in FIG. 25A and are generally determined based on a history of previous positions of the object; [0286]; [0292] the global particle filter tracker 2446 may generate probability-weighted estimates of subsequent global positions at subsequent times); and
determine, based on the first object location, the second object location, that the object in the first image and the second image is the same object ([0076] This configuration allows the tracking system 100 to determine where a person is physically located within the entire space 102 based on which sensor 108 they appear in and their location within a frame 302 captured by that sensor 108; [0236] the same object may be detected by two different sensors 108; [0276] the tracking subsystem 2400 may compare the distance between first and second physical positions 2412d and 2416d to a threshold distance 2448 to determine whether the positions 2412d, 2416d correspond to the same person or different people (see, e.g., step 2620 of FIG. 26, described below)).
However, Mirza does not explicitly disclose
the first field of view and the second field of view being separated by a gap such that the first field of view and the second field of view do not overlap;
determine, based on the first object location, the second object location, and the object velocity, that the object in the first image and the second image is the same object.
Foote discloses
the first field of view and the second field of view being separated by a gap such that the first field of view and the second field of view do not overlap (figure 1, [0010] two cameras have non-overlapping fields of view of a ground plane);
determine, based on the first object location, the second object location, and the object velocity, that the object in the first image and the second image is the same object ([0004] For each person or other subject represented by the trajectory information, a plurality of head position points in the first image data and a plurality of head position points in the second image data are determined. Each plurality of head position points corresponds to successive positions of a head region of a person at a corresponding plurality of timepoints spanning a time period; [0031] the trajectories can have any shape that can be mathematically characterized in a way that enables the location of a person or other subject at a future point in time to be predicted; [0042] the trajectories have any shape that can be mathematically characterized in a way that enables the location of a person or other subject at a future point in time to be predicted).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of Mirza and Foote, to determine, based on the first object location, the second object location, and the object velocity, that the object in the first image and the second image is the same object, in order to better locate/track the object in non-overlapping fields of view.
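For illustration only, the homography-based localization taught by Mirza and the velocity-based association taught by Foote, as combined above, can be sketched as follows. This is a minimal sketch under assumed data structures; the function names (pixel_to_world, estimate_velocity, same_object), the tolerance max_residual, and all numeric values are illustrative assumptions and are not taken from either reference.

```python
import numpy as np

def pixel_to_world(H, pixel_xy):
    """Map a 2D pixel location to a global-plane (x, y) coordinate using a
    3x3 homography (cf. Mirza's homography 118)."""
    u, v = pixel_xy
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # normalize homogeneous coordinates

def estimate_velocity(track):
    """Estimate object velocity from the two most recent (time, position)
    samples recorded while the object was within one field of view."""
    (t0, p0), (t1, p1) = track[-2], track[-1]
    return (np.asarray(p1) - np.asarray(p0)) / (t1 - t0)

def same_object(track_cam1, obs_cam2, t2, max_residual=0.5):
    """Decide whether a detection in the second field of view is the object
    last seen in the first field of view, using the first object location,
    the second object location, and the object velocity."""
    t1, p1 = track_cam1[-1]
    velocity = estimate_velocity(track_cam1)
    predicted = np.asarray(p1) + velocity * (t2 - t1)   # dead-reckoned across the gap
    return np.linalg.norm(predicted - np.asarray(obs_cam2)) <= max_residual

# Example with invented numbers: the object moves +10 units/s in x.
H1 = np.eye(3)                                           # placeholder homography for camera 1
track = [(0.0, pixel_to_world(H1, (100, 200))),          # first object location at t = 0 s
         (1.0, pixel_to_world(H1, (110, 200)))]          # updated location at t = 1 s
print(same_object(track, obs_cam2=(130.0, 200.0), t2=3.0))  # True: matches predicted position
```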
Regarding claim 2. Mirza discloses The system of claim 1, wherein the first and second geospatial coordinates comprise a latitude and longitude ([0092] the tracking system 100 receives (x,y) coordinates 306 for markers 304 in the space 102. Referring to FIG. 3 as an example, each marker 304 is an object that identifies a known physical location within the space 102).
Regarding claim 3. Mirza discloses The system of claim 1, wherein determining that the object in the first image and the second image is the same object comprises using multi-valued logic ([0276] the tracking subsystem 2400 may compare the distance between first and second physical positions 2412d and 2416d to a threshold distance 2448 to determine whether the positions 2412d, 2416d correspond to the same person or different people (see, e.g., step 2620 of FIG. 26, described below); [0290]).
Regarding claim 6. Mirza discloses The system of claim 1, wherein the first transform comprises a transformation matrix, wherein the transformation matrix comprises a mapping of two-dimensional (2d) image coordinates to three-dimensional (3d) real-world coordinates (figure 5A; [0075] A frame 302 may be a two-dimensional (2D) image. Referring to FIG. 4 as an example, a frame 302 comprises a plurality of pixels that are each associated with a pixel location 402 within the frame 302).
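For reference, the planar homography relied upon in the cited portions of Mirza is conventionally written as shown below. This is a standard formulation (the 3x3 matrix maps 2D pixel coordinates to coordinates on the global ground plane, i.e., a fixed-height plane within the 3D space), not language quoted from the reference.

```latex
\[
  s \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix}
    = H \begin{bmatrix} u \\ v \\ 1 \end{bmatrix},
  \qquad
  H = \begin{bmatrix}
        h_{11} & h_{12} & h_{13} \\
        h_{21} & h_{22} & h_{23} \\
        h_{31} & h_{32} & h_{33}
      \end{bmatrix},
\]
where $(u, v)$ is a pixel location in a frame, $(x_w, y_w)$ is the corresponding
physical coordinate in the global plane, and $s$ is the homogeneous scale factor.
```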
Regarding claim 8. Mirza discloses The system of claim 1, further comprising recording a plurality of locations of the object over time ([0377] At step 3414, a region-of-interest from the images may be accessed. For example, following storing the buffer frames, the tracking system 100 may determine a region-of-interest of the top-view images to retain. For example, the tracking system 100 may only store a region near the center of each view (e.g., region 3006 illustrated in FIG. 30 and described above); [0163] Once the tracking system 100 determines that the first person has left the field of view of the first sensor 108, then the tracking system 100 can stop tracking the first person 1106 using the first sensor 108 and can free up resources (e.g. memory resources) that were allocated to tracking the first person 1106).
Regarding claim 10. Mirza discloses The system of claim 1, wherein the plurality of markers are configured so that each of the plurality of imaging devices view at least eight markers of the plurality of markers (figure 3, unit 302; [0075] Each frame 302 is a snapshot of the people and/or objects within the field of view of a particular sensor 108 at a particular moment in time, the tracking system 100 can identify the location of different marker 304 within the frame 302 using their respective pixel locations 402; figure 4).
According to MPEP 2144.04 VI. B. Duplication of Parts, mere duplication of parts has no patentable significance unless a new and unexpected result is produced.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Mirza by duplicating its markers so that each of the plurality of imaging devices views at least eight markers, since such a mere duplication of parts produces no new or unexpected result.
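As additional context for this limitation, a homography of the kind relied upon in Mirza can be fit from the pixel/world correspondences of whatever markers a given imaging device views; at least four non-collinear correspondences are mathematically required, and additional markers (eight or more) merely provide redundancy for a robust fit. A minimal sketch using OpenCV, with invented marker coordinates:

```python
import numpy as np
import cv2

# Pixel locations of eight markers as seen by one imaging device (invented values).
pixel_pts = np.array([[100,  80], [620,  90], [610, 400], [110, 410],
                      [360,  85], [615, 245], [355, 405], [105, 245]], dtype=np.float32)

# Known global-plane coordinates of the same eight markers (invented values, meters).
world_pts = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 3.0], [0.0, 3.0],
                      [2.5, 0.0], [5.0, 1.5], [2.5, 3.0], [0.0, 1.5]], dtype=np.float32)

# Fit the 3x3 homography; RANSAC tolerates a few mislocated markers.
H, inliers = cv2.findHomography(pixel_pts, world_pts, cv2.RANSAC, 5.0)
print(H)
```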
Regarding claim 11. The same analysis set forth above for claim 1 applies.
Regarding claim 12. The same analysis set forth above for claim 2 applies.
Regarding claim 13. The same analysis set forth above for claim 3 applies.
Regarding claim 16. The same analysis set forth above for claim 6 applies.
Regarding claim 18. The same analysis set forth above for claim 8 applies.
Regarding claim 20. The same analysis set forth above for claim 1 applies.
Claims 5, 15 are rejected under 35 U.S.C. 103 as being unpatentable over Mirza et al. (US 20210125346 A1) in view of Foote et al. (US 20130251205 A1) as applied above in claim 1, and further in view of Wood (US 20100008661 A1).
Regarding claim 5. Wood discloses an imaging device configured to move between a first location and a different location (abstract, The camera slider system supports a camera for longitudinal sliding).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of Mirza, Foote and Wood, to comprise an imaging device configured to move between the first location and a different location, in order to better locate/track the object.
Regarding claim 15. The same analysis set forth above for claim 5 applies.
Claims 7, 9, 17, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mirza et al. (US 20210125346 A1) in view of Foote et al. (US 20130251205 A1) as applied above in claim 1, and further in view of Arbabian et al. (US 20220130109 A1).
Regarding claim 7. Arbabian discloses displaying a map including an object ([0079] An example visualization subsystem 110 is configured to display on the electronic display screen 305 of FIG. 3, a bird's eye view (BEV) map of a site, with one or more overlaid tracked object locations at the site. A sequence of object locations comprises an object path at the site).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of Mirza, Foote and Arbabian, to display a map including the object, in order to visualize the location and movement of the object.
Regarding claim 17. The same analysis set forth above for claim 7 applies.
Regarding claim 9. Arbabian discloses displaying a heat map of a plurality of locations ([0079] A visual enhancement such as a heat map can be used to indicate regions of a site that objects most frequently traverse).
The same motivation set forth above for claim 7 applies.
Regarding claim 19. The same analysis set forth above for claim 9 applies.
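As illustration of the visualizations cited from Arbabian for claims 7, 9, 17, and 19, a minimal sketch is given below; the data are invented and matplotlib is used purely for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented sequence of tracked (x, y) object locations on the site's global plane.
rng = np.random.default_rng(0)
path = np.cumsum(rng.normal(scale=0.2, size=(500, 2)), axis=0) + 5.0

fig, (ax_map, ax_heat) = plt.subplots(1, 2, figsize=(10, 4))

# Bird's-eye-view map with the tracked object path overlaid (cf. claim 7).
ax_map.plot(path[:, 0], path[:, 1], marker=".", linewidth=0.5)
ax_map.set_title("BEV map with tracked object path")
ax_map.set_xlabel("x (m)"); ax_map.set_ylabel("y (m)")

# Heat map indicating regions of the site most frequently traversed (cf. claim 9).
ax_heat.hist2d(path[:, 0], path[:, 1], bins=40)
ax_heat.set_title("Heat map of tracked locations")
ax_heat.set_xlabel("x (m)"); ax_heat.set_ylabel("y (m)")

plt.tight_layout()
plt.show()
```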
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOLAN XU whose telephone number is (571)270-7580. The examiner can normally be reached Mon. to Fri. 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH V. PERUNGAVOOR can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAOLAN XU/ Primary Examiner, Art Unit 2488