DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/14/2025 has been entered.
Response to Arguments
On pages 11–12 of the Remarks, Applicant contends the combination of Lewin and Kobayashi fails to teach or suggest the features added by way of amendment. Examiner disagrees. Applicant's newly added features merely require portions of an image to be considered when determining a region of interest (ROI), wherein the ROI is defined by a "feature amount." Kobayashi's paragraph [0044] and Fig. 4A teach a feature amount for a ROI compared to a feature amount outside the ROI. The region outside the ROI surrounds the ROI horizontally, vertically, and diagonally. Therefore, the combination of Lewin and Kobayashi would teach or suggest the features recited by Applicant. Accordingly, the rejection under 35 U.S.C. 103 is maintained.
Examiner notes the large number of prior art references cited under the Conclusion section of this Office Action as further evidence that the currently recited features are well-covered in the prior art.
Claim Objection
Claim 14 is objected to for the following informality: A portion of the amended claim language recites, "they horizontal direction." Examiner interprets the language as a typographical error requiring correction.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1–14 are rejected under 35 U.S.C. 103 as being unpatentable over Lewin (US 2023/0245414 A1) in view of Kobayashi (US 2017/0098136 A1).
Regarding claim 1, the combination of Lewin and Kobayashi teaches or suggests an object detection device comprising: a storage medium configured to store computer-readable instructions, and a processor connected to the storage medium, wherein the processor executes the computer-readable instructions to execute (Lewin, ¶ 0011: teaches a processor and memory for storing instructions) acquiring a captured image of a surface along which a mobile object is able to travel, which is captured with an inclination with respect to the surface (Lewin, ¶¶ 0042–0043: teaches various types of image sensors on a vehicle to capture the environment of the vehicle; Lewin, ¶ 0053: teaches the sensors can be configured to capture a road surface the vehicle is about to travel), generating a low-resolution image obtained by lowering image quality of the captured image (Lewin, Abstract: teaches setting a lower resolution for the field of view outside the region of interest and setting a higher resolution for the region of interest), defining a plurality of partial area sets each having partial areas in the low-resolution image, the partial areas including a first partial area and a plurality of second partial areas, the plurality of second partial areas being peripheral to the first partial area (Lewin, ¶¶ 0059–0060: teach identifying one or more regions of interest (ROIs) from the low resolution general scan to identify areas on which to focus high resolution scanning; Lewin, ¶ 0018: teaches that periphery regions, such as the side of a road or a side of the vehicle matching an anticipated turn direction, are areas of analysis and may become regions of interest or not; see also e.g.
Lewin, ¶ 0078) such that each of the plurality of second partial areas is adjacent to the first partial area in a vertical direction, a horizontal direction, and a diagonal direction, calculating a difference in feature amount between the first partial area and the plurality of second partial areas by comparing the first partial area and one of the plurality of second partial areas adjacent to the first partial area in the vertical direction, calculating a difference in feature amount between the first partial area and the plurality of second partial areas by comparing the first partial area and one of the plurality of second partial areas adjacent to the first partial area in the horizontal direction, calculating a difference in feature amount between the first partial area and the plurality of second partial areas by comparing the first partial area and one of the plurality of second partial areas adjacent to the first partial area in the diagonal direction, and based on the calculated differences by comparison in the vertical direction, the horizontal direction, and the diagonal direction, deriving a total value obtained by totalizing differences in feature amount between the first partial area and the plurality of second partial areas included in each of the plurality of partial area sets, extracting a point of interest on the basis of the total value (Lewin, ¶ 0078: teaches adapting the region of interest based on identified features; Lewin does not teach counting features to define regions of interest or segment regions of interest within an image; However, to one skilled in the art, such a feature is obvious and is well-represented in the art; Kobayashi, ¶ 0044 and Fig. 
4A: teaches that the process of tracking an object of interest within a region of interest and keeping the region of interest centered around the object of interest can be based on calculating a difference in the feature amount of a region versus a feature amount in adjacent regions (horizontally, vertically, and diagonally) to maintain a large number of feature points within the ROI versus the surrounding regions), and wherein each of the plurality of partial area sets is defined to include a plurality of partial areas in a target area of each partial area set (Lewin, Fig. 4: teaches a number of regions of interest; Examiner notes Applicant’s claimed partial areas are known in the art as regions of interest; Lewin, ¶ 0060: teaches to one skilled in the art that regions of interest can be separated or combined or extended according to desired behavior), and the target area is obtained by cutting out a part of the low-resolution image limited in a vertical direction so that at least a part thereof does not overlap with another partial area set in the vertical direction (Lewin, ¶ 0097 and Fig. 4: teaches a lower portion of the field of view being a first region corresponding to a closer position to the vehicle and another portion of the field of view corresponding to a location much further down the road).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Lewin, with those of Kobayashi, because both references are drawn to tracking objects within a region of interest such that one wishing to practice in the art would have been led to their relevant teachings, and because Kobayashi is merely explaining how regions of interest can be defined by maintaining a feature point amount within the ROI compared to outside the ROI. Thus, the combination is a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Lewin and Kobayashi used in this Office Action unless otherwise noted.
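For illustration only, the feature-amount comparison mapped above (a first partial area compared against its vertically, horizontally, and diagonally adjacent second partial areas, with the differences totalized to extract a point of interest) can be sketched as follows. This is a hypothetical sketch: the function names, the grid representation, and the choice of a pixel-value sum as the "feature amount" are assumptions of the sketch, not disclosures of Lewin or Kobayashi.

```python
# Illustrative sketch: totalize feature-amount differences between a first
# partial area and its eight adjacent partial areas (vertical, horizontal,
# diagonal), then extract points of interest from the totals.
# All names are hypothetical.

def feature_amount(area):
    """Hypothetical feature amount: here, the sum of pixel values in the area."""
    return sum(sum(row) for row in area)

def total_difference(grid, r, c):
    """Total absolute difference in feature amount between the partial area
    at (r, c) and its vertically, horizontally, and diagonally adjacent
    partial areas within the grid of partial areas."""
    center = feature_amount(grid[r][c])
    total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # skip the first partial area itself
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]):
                total += abs(center - feature_amount(grid[nr][nc]))
    return total

def extract_points_of_interest(grid, threshold):
    """Return coordinates of partial areas whose totalized difference
    exceeds the threshold."""
    return [(r, c)
            for r in range(len(grid))
            for c in range(len(grid[0]))
            if total_difference(grid, r, c) > threshold]
```

Under this sketch, a partial area whose feature amount differs sharply from all eight neighbors accumulates a large total and is extracted as a point of interest, consistent with maintaining a large number of feature points inside the ROI relative to the surrounding regions.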
Regarding claim 2, the combination of Lewin and Kobayashi teaches or suggests the object detection device according to claim 1, wherein the processor defines the plurality of partial area sets so that the number of pixels in the partial area increases as the partial area is defined to be closer to a front side of the low-resolution image among the plurality of partial area sets (Examiner notes the skilled artisan understands perspective, i.e., the correlation of foreground versus background position with distance in imaging; Lewin, Fig. 4: illustrates that partial areas (i.e., ROIs) can be bigger when closer to the imager).
Regarding claim 3, the combination of Lewin and Kobayashi teaches or suggests the object detection device according to claim 1, wherein the processor derives the total value by totalizing differences in feature amount between the partial areas included in each of the plurality of partial area sets and other vertically, horizontally, and diagonally adjacent partial areas (Kobayashi, ¶ 0044: teaches that the process of tracking an object of interest within a region of interest and keeping the region of interest centered around the object of interest can be based on calculating a difference in the feature amount of a region versus a feature amount in adjacent regions to maintain a large number of feature points within the ROI versus the surrounding regions).
Regarding claim 4, the combination of Lewin and Kobayashi teaches or suggests the object detection device according to claim 3, wherein the processor further adds, for the partial areas included in each of the plurality of partial area sets, a difference in feature amount between the vertically adjacent partial areas, a difference in feature amount between the horizontally adjacent partial areas, and a difference in feature amount between the diagonally adjacent partial areas to the total value (Kobayashi, ¶ 0044: teaches that the process of tracking an object of interest within a region of interest and keeping the region of interest centered around the object of interest can be based on calculating a difference in the feature amount of a region versus a feature amount in adjacent regions to maintain a large number of feature points within the ROI versus the surrounding regions).
Regarding claim 5, the combination of Lewin and Kobayashi teaches or suggests the object detection device according to claim 1, wherein the processor further performs high-resolution processing on the point of interest in the captured image to determine whether an object on a road is an object with which a mobile object needs to avoid contact (Lewin, ¶¶ 0019 and 0025: teaches obstacles can be avoided by detecting them in the region of interest which is subjected to higher-resolution scanning; see also Lewin, ¶¶ 0059–0061: likewise teaching low resolution scanning for areas outside the area of interest and high resolution scanning for the area of interest containing an obstacle).
Regarding claim 6, the combination of Lewin and Kobayashi teaches or suggests the object detection device according to claim 1, wherein the object detection device is mounted on a mobile object, and the processor changes an aspect ratio of the partial area on the basis of an environment in which the mobile object is placed (Lewin, ¶¶ 0023, 0054, and 0055: teaches the location, size, and shape (including width and length) of the ROI can be determined based on the vehicle’s speed or steering angle or other operating parameter or environmental condition, which teaches or suggests to the skilled artisan that the size or shape of the region of interest could depend on speed, gradient, weather condition, etc. to maintain salient features within the region of interest).
Regarding claim 7, the combination of Lewin and Kobayashi teaches or suggests the object detection device according to claim 6, wherein, when a speed of the mobile object is greater than a reference speed, the processor changes the aspect ratio of the partial area to be vertically longer than when the speed of the mobile object is equal to or less than the reference speed (Lewin, ¶¶ 0023, 0054, and 0055: teaches the location, size, and shape (including width and length) of the ROI can be determined based on the vehicle’s speed or steering angle or other operating parameter or environmental condition, which teaches or suggests to the skilled artisan that the size or shape of the region of interest could depend on speed, gradient, weather condition, etc. to maintain salient features within the region of interest).
Regarding claim 8, the combination of Lewin and Kobayashi teaches or suggests the object detection device according to claim 6, wherein, when a turning angle of the mobile object is greater than a reference angle, the processor changes the aspect ratio of the partial area to be horizontally longer than when the turning angle of the mobile object is equal to or less than the reference angle (Lewin, ¶¶ 0023, 0054, and 0055: teaches the location, size, and shape (including width and length) of the ROI can be determined based on the vehicle’s speed or steering angle or other operating parameter or environmental condition, which teaches or suggests to the skilled artisan that the size or shape of the region of interest could depend on speed, gradient, weather condition, etc. to maintain salient features within the region of interest).
Regarding claim 9, the combination of Lewin and Kobayashi teaches or suggests the object detection device according to claim 6, wherein, when the mobile object is on a road surface with an upward gradient equal to or greater than a predetermined gradient, the processor changes the aspect ratio of the partial area to be vertically longer than when the mobile object is not on a road surface with an upward gradient equal to or greater than the predetermined gradient (Lewin, ¶¶ 0023, 0054, and 0055: teaches the location, size, and shape (including width and length) of the ROI can be determined based on the vehicle’s speed or steering angle or other operating parameter or environmental condition, which teaches or suggests to the skilled artisan that the size or shape of the region of interest could depend on speed, gradient, weather condition, etc. to maintain salient features within the region of interest; Examiner notes Lewin’s terrain type teaches or suggests to the skilled artisan up-hill and down-hill region of interest adjustments).
Regarding claim 10, the combination of Lewin and Kobayashi teaches or suggests the object detection device according to claim 6, wherein, when the mobile object is on a road surface with a downward gradient equal to or greater than a predetermined gradient, the processor changes the aspect ratio of the partial area to be horizontally longer than when the mobile object is not on a road surface with a downward gradient equal to or greater than the predetermined gradient (Lewin, ¶¶ 0023, 0054, and 0055: teaches the location, size, and shape (including width and length) of the ROI can be determined based on the vehicle’s speed or steering angle or other operating parameter or environmental condition, which teaches or suggests to the skilled artisan that the size or shape of the region of interest could depend on speed, gradient, weather condition, etc. to maintain salient features within the region of interest; Examiner notes Lewin’s terrain type teaches or suggests to the skilled artisan up-hill and down-hill region of interest adjustments).
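For illustration only, the aspect-ratio adjustments recited in claims 7–10 (vertically longer at high speed or on an upward gradient; horizontally longer at a large turning angle or on a downward gradient) can be sketched as follows. This is a hypothetical sketch: the function name, parameters, and the factor of 2 are assumptions of the sketch, not disclosures of Lewin.

```python
# Illustrative sketch of the claim 7-10 aspect-ratio behavior: the partial
# area is elongated vertically when speed exceeds a reference speed or the
# road has a sufficient upward gradient, and elongated horizontally when the
# turning angle exceeds a reference angle or the road has a sufficient
# downward gradient. All names and thresholds are hypothetical.

def adjust_aspect_ratio(width, height, speed, ref_speed,
                        turn_angle, ref_angle, gradient, ref_gradient):
    """Return an adjusted (width, height) for a partial area based on the
    mobile object's speed, turning angle, and road gradient."""
    if speed > ref_speed or gradient >= ref_gradient:
        height *= 2   # vertically longer (claims 7 and 9)
    if turn_angle > ref_angle or gradient <= -ref_gradient:
        width *= 2    # horizontally longer (claims 8 and 10)
    return width, height
```

For example, with a reference speed of 100 and a current speed of 120, a 4x3 partial area would become 4x6 (vertically longer); with a turning angle above the reference angle, it would instead become 8x3 (horizontally longer).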
Regarding claim 11, the combination of Lewin and Kobayashi teaches or suggests the object detection device according to claim 1, wherein the processor defines the partial area in a horizontally long rectangular shape (Lewin, Fig. 4: teaches the region of interest being a horizontally long rectangular shape; Lewin, ¶¶ 0023, 0054, and 0055: teaches the location, size, and shape (including width and length) of the ROI can be determined based on the vehicle’s speed or steering angle or other operating parameter or environmental condition, which teaches or suggests to the skilled artisan that the size or shape of the region of interest could depend on speed, gradient, weather condition, etc. to maintain salient features within the region of interest).
Regarding claim 12, the combination of Lewin and Kobayashi teaches or suggests the object detection device according to claim 1, wherein the processor regards the total value less than a lower limit as zero and extracts the point of interest (Kobayashi, ¶ 0043: teaches the difference value between adjacent regions being below a threshold means the region is excluded as being of small visual impact, i.e. not containing an object of interest and therefore not a region of interest; see also Kobayashi, ¶ 0069: teaching a block region is extracted as a region of interest if the interest level for the region is indicated by a comparison to a predetermined threshold).
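For illustration only, the claim 12 feature (regarding total values below a lower limit as zero before extracting the point of interest) can be sketched as follows. This is a hypothetical sketch; the function name and values are assumptions, not disclosures of Kobayashi.

```python
# Illustrative sketch of the claim 12 feature: any totalized difference
# value less than a lower limit is regarded as zero, so that only regions
# with a sufficiently large total remain candidates for extraction as
# points of interest. All names are hypothetical.

def clamp_totals(totals, lower_limit):
    """Regard any total value less than the lower limit as zero."""
    return [t if t >= lower_limit else 0 for t in totals]
```

Under this sketch, totals of [5, 50, 120] with a lower limit of 100 become [0, 0, 120], paralleling Kobayashi's exclusion of regions whose difference value falls below a threshold as having small visual impact.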
Claim 13 lists the same elements as claim 1, but in method form rather than apparatus form. Therefore, the rationale for the rejection of claim 1 applies to the instant claim.
Claim 14 lists the same elements as claim 1, but in computer-readable medium (CRM) form rather than apparatus form. Therefore, the rationale for the rejection of claim 1 applies to the instant claim.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Valdmann (US 2024/0103175 A1) teaches high resolution images at a narrow field of view for highway detection of obstacles (¶ 0318).
Oblak (US 2022/0180131 A1) teaches high resolution for high-importance objects and lower resolution for unimportant objects (¶ 0063).
Chaudhuri (US 2021/0201578 A1): see Fig. 1B.
Bruflodt (US 2023/0001854 A1) teaches changing the aspect ratio of a crop window as vehicle speed or other operation conditions change (¶ 0026).
Seki (US 10,919,450 B2) teaches the aspect ratio of a part of the display changes according to vehicle speed (Claim 6).
Takanashi (US 2020/0326897 A1) teaches detecting a feature amount to choose from among a number of candidate regions of interest (e.g. ¶ 0063) and teaches combining regions of interest based on the proportion (ratio) of feature amounts in each region (¶ 0072).
Shin (US 2020/0307560 A1) teaches a threshold number of feature points and using it to determine the highest density of data to extract the region as the feature point (¶¶ 0064 and 0069).
Tudosie (US 2020/0239018 A1) teaches changing the aspect ratio by lengthening or shortening the map according to vehicle speed (¶ 0046).
Dwivedi (US 2019/0251372 A1) teaches a region of interest is defined by its number of features (¶ 0007) and teaches road markings and guardrail tracking using diagonally adjacent regions of interest (e.g. Figs. 6A–6C and 17A–17E).
Johnson (US 2018/0241953 A1) teaches regions of interest and non-interest are defined by the number of features (¶ 0040).
Kim (US 2018/0186349 A1) teaches tracking the number of feature points in a set ROI (¶ 0009).
Yatsu (US 2017/0144591 A1) teaches aspect ratio based on vehicle speed (Claim 5).
Okumura (US 2016/0283801 A1) teaches totaling the number of feature points in ROIs (¶ 0038) and creating a feature quantity map or score map (¶ 0047).
Sano (US 2016/0247022 A1) teaches an image can be segmented into ROIs based on feature amount (¶ 0027).
Cho (US 2012/0106784 A1) teaches when the number of features falls below a threshold, the processing system determines the ROI (¶ 0032).
Huang (US 2012/0093361 A1) teaches when the number of feature points falls below a threshold, the feature point detection is executed (¶ 0032).
Koitabashi (US 2008/0199050 A1) teaches feature amount calculating units and ROI setting units (e.g. ¶ 0170) wherein when the feature amounts drop below a threshold the ROIs are considered processed (¶ 0243).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael J Hess whose telephone number is (571)270-7933. The examiner can normally be reached on Mon - Fri 9:00am-5:30pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached on (571)272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MICHAEL J. HESS
Primary Examiner
Art Unit 2481
/MICHAEL J HESS/Primary Examiner, Art Unit 2481