DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of claims: claims 1-20 are pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 4/8/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS has been considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over LU et al (US 2023/0252649) in view of Slama (US 11,491,923).
Claim 1, similarly claim 11:
LU et al (US 2023/0252649) teaches the following subject matter:
A vehicle control apparatus (paragraph 0001 details use for traffic flow monitoring, autonomous driving, and mobile robotics) comprising:
a sensor (paragraph 0324 details device sensors (e.g., a front-facing camera, a rear-facing camera, digital image sensors, Lidar (light detection and ranging))); and
a processor configured to (paragraphs 0033-0035 detail use of a processor):
convert a first virtual box, associated with a first object and obtained via the sensor at a first time, into a second virtual box associated with a second time that is later than the first time (figure 2 and paragraphs 0063-0080 detail an object tracker over time (the first time and the second time), where paragraph 0067 details an ROI tracker that identifies the object at K (boxing at the first time) and later again at 1-K (boxing at the second time) in a bounding box (paragraph 0069));
determine a plurality of virtual boxes, associated with the second virtual box, based on a plurality of data points obtained at the second time via the sensor (paragraphs 0063-0080, especially paragraph 0069, detail a plurality of bounding boxes 217 (third boxes) due to inner detection based on re-identification (from the second time/box));
merge at least part of the plurality of virtual boxes into a merge box based on at least one of (paragraph 0070 details merging of the tracked target object):
a distance between two or more of the plurality of virtual boxes at the second time, OR
whether at least one of the plurality of virtual boxes at the second time corresponds to a second object that is separated from a road boundary (paragraphs 0069-0071, together with paragraph 0234, detail calculating the distance between cropped images (bounding boxes) over time using a color histogram and color metric);
maintain or cancel the merging of the at least part of the plurality of virtual boxes based on at least one of: a distance between two or more data points included in the merge box, OR a type of the first object (paragraphs 0069-0074, where paragraph 0070 details merging of boxes with inner detection 217 by inner tracker 105 (1-k), and paragraph 0073 teaches overlaying (merging) with corresponding detection and tracking information (bounding boxes over time)); and
output a signal indicating a result of the maintaining or canceling of the merging (paragraph 0073 details that the real-time object tracking process is presented with a visual representation (output signal) of the object detections).
LU et al teaches the subject matter regarding a plurality of virtual boxes as noted above, but does not teach the following: determining a plurality of third virtual boxes associated with the second virtual box.
Slama (US 11,491,923) teaches the following subject matter: determine a plurality of third virtual boxes, associated with the second virtual box (column 5, line 58 to column 6, line 17 teaches a second sub-window, where third and fourth sub-windows of the same view provide an aspherical, wider view; figure 5 and column 7, line 45 to column 8, line 20 provide further teaching).
LU et al and Slama are both in the field of image analysis, particularly controlling vehicle awareness of the surrounding/exterior environment, and both detail the use of object detection with bounding boxes/windows over time, such that the combined outcome is predictable.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify LU et al with Slama to use a plurality of bounding boxes/windows for detecting objects exterior to the vehicle, in order to assist and provide the driver an understanding of the relationship of the vehicle boundaries to the surroundings, as disclosed by Slama in column 8, lines 20-35.
Regarding claim 11 (method claim), LU et al teaches corresponding method steps in paragraph 0004, and figures 5, 8, and 10-15 are all flowcharts/methods.
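For illustration only, the merge criteria mapped above (merging boxes by inter-box distance, then maintaining or canceling the merge by the spacing of data points inside the merged box) can be sketched as follows. All names, coordinates, and thresholds below are hypothetical and are not drawn from LU et al or Slama:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Box:
    cx: float  # hypothetical box center, x
    cy: float  # hypothetical box center, y

def merge_boxes(boxes, max_gap=2.0):
    """Merge (group) boxes whose centers lie within max_gap of the first box."""
    first = boxes[0]
    return [b for b in boxes if hypot(b.cx - first.cx, b.cy - first.cy) <= max_gap]

def maintain_merge(points, max_span=5.0):
    """Cancel the merge if any two data points in the merged box are too far apart."""
    for i, (x1, y1) in enumerate(points):
        for x2, y2 in points[i + 1:]:
            if hypot(x1 - x2, y1 - y2) > max_span:
                return False  # cancel merging
    return True  # maintain merging
```

For example, `merge_boxes([Box(0, 0), Box(1, 1), Box(10, 10)])` groups the first two boxes and excludes the distant third; `maintain_merge` then keeps or cancels that grouping based on the point spread.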
Claim 2, similarly claim 12:
LU et al teaches:
The vehicle control apparatus of claim 1, wherein the processor is configured to determine the plurality of third virtual boxes by determining the plurality of third virtual boxes based on at least one of:
the second virtual box overlapping each of the plurality of third virtual boxes at the second time by a proportion greater than a first threshold value, OR a distance between a plurality of first data points included in the second virtual box and a plurality of second data points included in the plurality of third virtual boxes at the second time being within a second threshold value (the citations above teach distance calculation of the tracked object over time (first, second, and a plurality of bounding boxes after 1-k), where paragraph 0105 further details comparison against a confidence threshold over a set time period. The examiner notes that, because of the claim language reciting selection of "at least one", the first and second thresholds are viewed as the same).
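The overlap-proportion condition recited in claim 2 can be illustrated with a simple intersection-over-area computation. The coordinates and threshold value below are assumptions for illustration only, not taken from the cited references:

```python
def overlap_proportion(a, b):
    """Fraction of box a's area covered by box b; boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    return inter / area_a if area_a else 0.0

def overlaps_enough(second_box, third_boxes, first_threshold=0.5):
    """Claim-2-style test: the second box overlaps each third box by more than a threshold."""
    return all(overlap_proportion(second_box, t) > first_threshold for t in third_boxes)
```

For example, a unit overlap between `(0, 0, 2, 2)` and `(1, 1, 3, 3)` yields a proportion of 0.25, which would fail a 0.5 threshold, while an identical box yields 1.0 and passes.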
Claim 3, similarly claim 13:
LU et al teaches:
The vehicle control apparatus of claim 1, wherein the processor is further configured to: determine whether the first object is stationary or moving based on at least one of:
the first object being not occluded, the type of the first object being a pedestrian, OR a number of the plurality of third virtual boxes at the second time being less than a threshold value (paragraph 0309 details tracking an object such as a pedestrian with a cellular device for accident-prevention alerts).
Allowable Subject Matter
Claims 4-10, and similarly claims 14-20, are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. At the time of examination, analogous art Alghanem et al (US 2022/0126873), also in the art of vehicle control, details the use of Lidar and a bounding box of a target, where paragraphs 0190-0192 detail the bounding box with regard to length-to-width ratio, but does not teach the details disclosed by the instant invention.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Jang et al (US 2023/0266468) teaches a track association method and system for tracking objects: generating a plurality of predicted tracks from a plurality of track information generated in a previous step; generating link relationships between a plurality of object boxes detected in a current step and the plurality of predicted tracks, based on associations between the plurality of object boxes and the plurality of predicted tracks; determining one of two or more predicted tracks based on association scores between the two or more predicted tracks and an object box having link relationships with two or more of the plurality of predicted tracks; and associating the determined predicted track with a track of the object box having the link relationships.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI whose telephone number is (571)270-1671. The examiner can normally be reached 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TSUNG YIN TSAI/Primary Examiner, Art Unit 2656