Prosecution Insights
Last updated: April 19, 2026
Application No. 17/996,402

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Final Rejection: §101, §103
Filed: Oct 17, 2022
Examiner: ISLAM, MOHAMMAD K
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Sony Semiconductor Solutions Corporation
OA Round: 4 (Final)
Grant Probability: 83% (Favorable)
OA Rounds: 5-6
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (1070 granted / 1288 resolved), +15.1% vs TC avg (above average)
Interview Lift: +16.5% (resolved cases with interview), a strong lift
Avg Prosecution: 2y 9m (typical timeline)
Currently Pending: 83
Total Applications: 1371 (career history, across all art units)

Statute-Specific Performance

§101: 21.4% (-18.6% vs TC avg)
§103: 32.6% (-7.4% vs TC avg)
§102: 25.0% (-15.0% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
TC averages are estimates. Based on career data from 1288 resolved cases.
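The headline examiner figures can be checked directly from the raw counts quoted above; a quick sketch (using only numbers stated in this report):

```python
# Sanity-check the examiner statistics quoted above, using only
# figures that appear in this report.

granted = 1070      # career grants
resolved = 1288     # career resolved cases

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")    # ~83.1%, shown as 83%

# The report states the examiner runs +15.1% above the Tech Center average,
# which implies a TC-average allow rate of roughly:
tc_avg = allow_rate - 0.151
print(f"Implied TC average: {tc_avg:.1%}")       # ~68.0%
```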

Office Action

§101 §103
DETAILED ACTION

Final Rejection

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant’s amendments to the claims, filed 02/20/2026, are accepted. In this amendment, claims 1, 11 and 19-20 have been amended, claims 3 and 18 have been cancelled, and claims 21-22 have been added.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 4-9, 11, 13-17 and 19-22 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

Each of claims 1, 4-9, 11, 13-17 and 19-22 falls within one of the four statutory categories. See MPEP § 2106.03. For example, the apparatus claims fall within the category of machine, i.e., a “concrete thing, consisting of parts, or of certain devices and combination of devices.” Digitech, 758 F.3d at 1348-49, 111 USPQ2d at 1719 (quoting Burr v. Duryee, 68 U.S. 531, 570, 17 L. Ed. 650, 657 (1863)); the method claims fall within the category of process.

Regarding claims 1, 4-9, 13-17 and 21

Step 2A – Prong 1

Exemplary claim 1 is directed to the abstract idea of extracting sensor data corresponding to an object region.
The abstract idea is set forth or described by the following italicized limitations:

An information processing apparatus comprising: a memory storing a program, and at least one processor configured to execute the program to perform operations comprising: extracting, on a basis of an object recognized in an imaged image obtained by a camera, sensor data corresponding to an object region including the object in the imaged image among the sensor data obtained by a rangefinding sensor; setting an extraction condition of the sensor data on a basis of the object having been recognized; determining whether the sensor data corresponding to the object region is in a predetermined positional relationship with respect to the object region; in a case where the sensor data corresponding to the object region is in a predetermined positional relationship with respect to the object region, setting only the sensor data corresponding to a part of the object region as an extraction target; excluding, from the extraction target, the sensor data corresponding to a region overlapping another object region for another object in the object region; and performing an operation control based upon the sensor data.

The italicized limitations above represent a combination of mathematical concepts (i.e., processes that can be performed by mathematical relationships or rules) and mental steps (i.e., processes that can be performed mentally and/or with pen and paper). Therefore, the italicized limitations fall within the subject matter groupings of abstract ideas enumerated in Section I of the 2019 Revised Patent Subject Matter Eligibility Guidance. For example, the limitations “setting [..]; determining [..]; setting only the sensor data [..]; performing an operation control [..]” are mental steps (i.e., processes that can be performed mentally and/or with pen and paper, or by a mental judgment), see MPEP 2106.04(a)(2).
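For orientation only, the limitations quoted above amount to a region-based filtering routine over projected rangefinder points. The sketch below is purely illustrative (every name, type, and value is hypothetical; it is not the applicant's implementation or the cited art):

```python
# Illustrative sketch of the claimed extraction/exclusion steps.
# All identifiers and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Point:
    x: float   # image-plane position of a projected rangefinder point
    y: float

def in_region(p: Point, region: tuple) -> bool:
    """True if a projected point falls inside a bounding box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    return x0 <= p.x <= x1 and y0 <= p.y <= y1

def extract(points: list, obj_region: tuple, other_region: tuple) -> list:
    # "extracting ... sensor data corresponding to an object region"
    candidates = [p for p in points if in_region(p, obj_region)]
    # "excluding ... sensor data corresponding to a region overlapping
    #  another object region"
    return [p for p in candidates if not in_region(p, other_region)]

pts = [Point(1, 1), Point(2, 2), Point(5, 5)]
kept = extract(pts, obj_region=(0, 0, 3, 3), other_region=(1.5, 1.5, 3, 3))
print(len(kept))  # only Point(1, 1) survives both filters
```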
For example, the limitation “excluding, from the extraction target, the sensor data [..]” is a mathematical concept (i.e., a process that can be performed by mathematical relationships or rules), see MPEP 2106.04(a)(2). The limitations are considered together as a single abstract idea for further analysis (discussing Bilski v. Kappos, 561 U.S. 593 (2010)).

Step 2A – Prong 2

Claim 1 does not include additional elements (when considered individually, as an ordered combination, and/or within the claim as a whole) that are sufficient to integrate the abstract idea into a practical application.

The first additional element is “image obtained by a camera; extracting, on a basis of an object recognized in an imaged image obtained by a camera, sensor data corresponding to an object region including the object in the imaged image among the sensor data obtained by a rangefinding sensor.” This element is performed, at least in part, by use of a generic system with generic components and amounts to obtaining data; it appears to only add insignificant extra-solution activity (e.g., data gathering) and only generally links the abstract idea to a particular field. Therefore, this element individually does not provide a practical application. See MPEP 2106.05(g).

The second additional element is “An information processing apparatus comprising: a memory storing a program, and at least one processor configured to execute the program to perform operations comprising,” performed, at least in part, by use of a computer running software. This element amounts to mere instructions to implement the abstract idea on a computer and/or mere use of a generic computer component with a generic sensor as a tool to perform the abstract idea. Therefore, this element individually does not provide a practical application. See MPEP 2106.05(d).

The third additional element is “sensor data corresponding to the image among the sensor data obtained by a rangefinding sensor,” performed, at least in part, by use of a generic system with generic components and amounting to obtaining data; it appears to only add insignificant extra-solution activity (e.g., data gathering) and only generally links the abstract idea to a particular field. Therefore, this element individually does not provide a practical application. See MPEP 2106.05(g).

In view of the above, the three additional elements individually do not provide a practical application of the abstract idea. Furthermore, the three additional elements in combination amount to a plurality of generic devices associated with a computer with software, where such generic data-collecting devices with computers and software amount to mere instructions to implement the abstract idea on a computer(s) and/or mere use of generic computer component(s) as a tool to perform the abstract idea. Therefore, these elements in combination do not provide a practical application. The combination of additional elements does no more than generally link the use of the abstract idea to a particular technological environment, i.e., an environment of computer hardware/software in communication with one another (a network of computing devices), and for this additional reason, the combination of additional elements does not provide a practical application of the abstract idea.

Step 2B

Claim 1 does not include additional elements, when considered individually and as an ordered combination, that are sufficient to amount to significantly more than the abstract idea. For example, the claimed camera, rangefinding device, processor, and memory are generic devices, which are well understood, routine, and conventional (see the background of the current disclosure, the IDS, the Examiner-cited prior art, and MPEP 2106.05(d)).
The reasons for reaching this conclusion are substantially the same as the reasons given above in Step 2A – Prong 2. For brevity, those reasons are not repeated in this section. See MPEP §§ 2106.05(g) and 2106.05(II).

Dependent Claims 4-9, 13-17 and 21

Dependent claims 4-9, 13-17 and 21 fail to cure this deficiency of independent claim 1 (set forth above) and are rejected accordingly. Particularly, these claims recite limitations that represent (in addition to the limitations already noted above) either the abstract idea or an additional element that is merely extra-solution activity, mere use of instructions and/or generic computer component(s) as a tool to implement the abstract idea, and/or merely limits the abstract idea to a particular technological environment.

4. Excluding, from the extraction target, the sensor data in which a difference between a speed of the object having been recognized and a speed calculated on a basis of a time-series change of the sensor data is larger than a predetermined speed threshold. (Abstract idea: a combination of mathematical concepts and mental steps.)

5. Excluding, from the extraction target, the sensor data in which a distance to the object having been recognized is larger than a predetermined distance threshold. (Abstract idea: a combination of mathematical concepts and mental steps.)

6. Setting the distance threshold in accordance with the object having been recognized. (Abstract idea: a mental step.)

7. The camera and the rangefinding sensor are mounted on a moving body (generic data collection elements of a vehicle control system, which are well understood, routine, and conventional; see the background of the current disclosure, the IDS, the Examiner-cited prior art, and MPEP 2106.05(d)), and the operations further comprise changing the distance threshold in accordance with a moving speed of the moving body (abstract idea: a mental step).

8. In a case where the object region is larger than a predetermined area, setting only sensor data corresponding to a vicinity of a center of the object region as the extraction target. (Abstract idea: a mental step.)

9. In a case where the object region is smaller than a predetermined area, setting sensor data corresponding to an entirety of the object region as the extraction target. (Abstract idea: a combination of mathematical concepts and mental steps.)

13. Setting, as the extraction target, sensor data of a plurality of frames corresponding to the object region in accordance with weather. (This element limits the data collection, performed at least in part by use of a memory, and appears to only add insignificant extra-solution activity, e.g., data gathering.)

14. Comparing the sensor data with distance information obtained by sensor fusion processing based on the imaged image and other sensor data. (Abstract idea: a mental step.)

15. Performing sensor fusion processing based on the imaged image and other sensor data; and correcting distance information obtained by the sensor fusion processing on a basis of the sensor data. (This element limits the data collection and appears to only add insignificant extra-solution activity, e.g., data gathering.)

16. The rangefinding sensor includes a LiDAR, and the sensor data is point cloud data. (This element limits the data collection and appears to only add insignificant extra-solution activity, e.g., data gathering.)

17. The rangefinding sensor includes a millimeter wave radar, and the sensor data is distance information indicating a distance to the object. (Generic data collection elements of a vehicle control system, which are well understood, routine, and conventional; see MPEP 2106.05(d).)

21.
The operations further comprise: determining, for sensor data corresponding to the object region, whether a spatial correspondence between the sensor data and the object region in the imaged image satisfies a positional consistency condition; and in a case where only a portion of the sensor data corresponding to the object region satisfies the positional consistency condition, setting, as the extraction target, only the sensor data corresponding to a spatially consistent portion of the object region. (Abstract idea: a combination of mathematical concepts and mental steps.)

Regarding claim 11

Step 2A – Prong 1

Exemplary claim 11 is directed to the abstract idea of extracting sensor data corresponding to an object region. The abstract idea is set forth or described by the following italicized limitations:

11. An information processing apparatus comprising: a memory storing a program, and at least one processor configured to execute the program to perform operations comprising: extracting, on a basis of an object recognized in an imaged image obtained by a camera, sensor data corresponding to an object region including the object in the imaged image among the sensor data obtained by a rangefinding sensor; setting an extraction condition of the sensor data on a basis of the object having been recognized; determining whether the object region exists higher than a predetermined height in the imaged image; in a case where it is determined that the object region exists higher than the predetermined height in the imaged image, setting sensor data of a plurality of frames corresponding to the object region as an extraction target; in a case where a difference in speed calculated on a basis of time-series change in the sensor data between an upper part and a lower part of the object region is larger than a predetermined threshold, excluding, from the extraction target, the sensor data corresponding to an upper part of the object region; and performing an operation control based upon the sensor data.

The italicized limitations above represent a combination of mathematical concepts (i.e., processes that can be performed by mathematical relationships or rules) and mental steps (i.e., processes that can be performed mentally and/or with pen and paper). Therefore, the italicized limitations fall within the subject matter groupings of abstract ideas enumerated in Section I of the 2019 Revised Patent Subject Matter Eligibility Guidance. For example, the limitations “setting [..]; determining [..]; setting sensor data of a plurality of frames [..]; performing an operation control [..]” are mental steps (i.e., processes that can be performed mentally and/or with pen and paper, or by a mental judgment), see MPEP 2106.04(a)(2). For example, the limitation “in a case where a difference in speed calculated on a basis of time-series change in the sensor data between an upper part and a lower part of the object region is larger than a predetermined threshold, excluding, from the extraction target, the sensor data corresponding to an upper part of the object region” is a mathematical concept (i.e., a process that can be performed by mathematical relationships or rules), see MPEP 2106.04(a)(2). The limitations are considered together as a single abstract idea for further analysis (discussing Bilski v. Kappos, 561 U.S. 593 (2010)).

Step 2A – Prong 2

Claim 11 does not include additional elements (when considered individually, as an ordered combination, and/or within the claim as a whole) that are sufficient to integrate the abstract idea into a practical application.
The first additional element is “image obtained by a camera; extracting, on a basis of an object recognized in an imaged image obtained by a camera, sensor data corresponding to an object region including the object in the imaged image among the sensor data obtained by a rangefinding sensor.” This element is performed, at least in part, by use of a generic system with generic components and amounts to obtaining data; it appears to only add insignificant extra-solution activity (e.g., data gathering) and only generally links the abstract idea to a particular field. Therefore, this element individually does not provide a practical application. See MPEP 2106.05(g).

The second additional element is “An information processing apparatus comprising: a memory storing a program, and at least one processor configured to execute the program to perform operations comprising,” performed, at least in part, by use of a computer running software. This element amounts to mere instructions to implement the abstract idea on a computer and/or mere use of a generic computer component with a generic sensor as a tool to perform the abstract idea. Therefore, this element individually does not provide a practical application. See MPEP 2106.05(d).

The third additional element is “sensor data corresponding to the image among the sensor data obtained by a rangefinding sensor,” performed, at least in part, by use of a generic system with generic components and amounting to obtaining data; it appears to only add insignificant extra-solution activity (e.g., data gathering) and only generally links the abstract idea to a particular field. Therefore, this element individually does not provide a practical application. See MPEP 2106.05(g).

In view of the above, the three additional elements individually do not provide a practical application of the abstract idea. Furthermore, the three additional elements in combination amount to a plurality of generic devices associated with a computer with software, where such generic data-collecting devices with computers and software amount to mere instructions to implement the abstract idea on a computer(s) and/or mere use of generic computer component(s) as a tool to perform the abstract idea. Therefore, these elements in combination do not provide a practical application. The combination of additional elements does no more than generally link the use of the abstract idea to a particular technological environment, i.e., an environment of computer hardware/software in communication with one another (a network of computing devices), and for this additional reason, the combination of additional elements does not provide a practical application of the abstract idea.

Step 2B

Claim 11 does not include additional elements, when considered individually and as an ordered combination, that are sufficient to amount to significantly more than the abstract idea. For example, the claimed camera, rangefinding device, processor, and memory are generic devices, which are well understood, routine, and conventional (see the background of the current disclosure, the IDS, the Examiner-cited prior art, and MPEP 2106.05(d)). The reasons for reaching this conclusion are substantially the same as the reasons given above in Step 2A – Prong 2. For brevity, those reasons are not repeated in this section. See MPEP §§ 2106.05(g) and 2106.05(II).

Claims 19-20 and 22

Claims 19-20 and 22 contain language similar to claims 1, 21 and 11 as discussed in the preceding paragraphs, and for reasons similar to those discussed above, claims 19-20 and 22 are also rejected under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over KATO et al. (US 2021/0403015) in view of Liu et al. (US 2021/0181745).

Regarding Claims 1 and 19: KATO teaches an information processing apparatus comprising (figs. 17-18): a memory storing a program, and at least one processor configured to execute the program to perform operations comprising: extracting (2450a: fig. 17), on a basis of an object (P4) recognized in an imaged image (S1: fig. 18) obtained by a camera (243a: fig. 18), sensor data corresponding to an object region including the object in the imaged image among the sensor data obtained (S2: fig. 18) by a rangefinding sensor (244a: fig. 18); setting an extraction condition of the sensor data on a basis of the object having been recognized ([0410]-[0415]); and excluding, from the extraction target (P4: fig. 18), the sensor data corresponding to a region (Sy: fig. 18) overlapping another object region for another object (P3: fig. 18) in the object region (e.g., the detection area S2 excluding the overlapping areas Sx, Sy: [0411]-[0415]; [0428]-[0431]).

KATO is silent about determining whether the sensor data corresponding to the object region is in a predetermined positional relationship with respect to the object region; in a case where the sensor data corresponding to the object region is in a predetermined positional relationship with respect to the object region, setting only the sensor data corresponding to a part of the object region as an extraction target; and performing an operation control based upon the sensor data.

However, Liu teaches setting an extraction condition of the sensor data on a basis of the object having been recognized ([0139]); determining whether the sensor data corresponding to the object region is in a predetermined positional relationship with respect to the object region (the target object is a traffic control unit such as a traffic light, and the pose data for the traffic light indicates a three-dimensional orientation/position (e.g., a longitude, latitude, height, yaw, pitch, roll) of the traffic light in the environment: [0110]; the traffic light can be displayed in a landscape orientation, having a specific height above the street surface, and facing a particular direction (e.g., southeast): [0133]; determining whether an object is within a predetermined location or distance from the vehicle that allows the target object to be classified as a particular object (e.g., a traffic light): [0139]); in a case where the sensor data corresponding to the object region is in a predetermined positional relationship with respect to the object region, setting only the sensor data corresponding to a part of the object region as an extraction target ([0139]); and performing an operation control based upon the sensor data (1710-1712: figs. 17 and 13; upon a determination that the extracted subset of the LiDAR data points that corresponds to the detected target object (e.g., 1316 or 1318) satisfies registration criteria (e.g., the extracted subset of LiDAR data points has a location or distance (e.g., depth) value (e.g., relative to a location of AV 100) that is less than a location or distance threshold value), the system (e.g., 1300) automatically registers, using the processing circuit, the detected target object (e.g., a representation or indication of the detected target object (e.g., 1600a-1600e)) in map data (e.g., three-dimensional map data), including a representation of the detected target object in the map data without requiring manual annotation of the map data; the system (e.g., 1300) then navigates (e.g., controls driving of the vehicle), using a control circuit (e.g., control module 406), the vehicle in the environment according to the map data: [0137]-[0141]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of KATO by determining whether the sensor data corresponding to the object region is in a predetermined positional relationship with respect to the object region; in a case where the sensor data corresponding to the object region is in a predetermined positional relationship with respect to the object region, setting only the sensor data corresponding to a part of the object region as an extraction target; and performing an operation control based upon the sensor data, as taught by Liu, so as to perform automated object detection on the merged data to provide the map data, including an indication of the objects detected and associated positional data for the objects, which improves the accuracy and efficiency of object detection and provides additional details for the objects detected.
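As characterized above, Liu's "registration criteria" reduce to a threshold test on the distance values of the extracted LiDAR subset. A minimal hypothetical illustration (names and numbers invented for this sketch, not taken from Liu):

```python
# Hedged illustration of a "registration criteria" style gate:
# an extracted subset of points is registered for downstream control
# only if its distance values fall under a threshold. Hypothetical values.

def satisfies_registration(depths: list, threshold: float) -> bool:
    """True if every extracted point is closer than the distance threshold."""
    return all(d < threshold for d in depths)

extracted_depths = [12.4, 13.1, 12.9]   # meters, invented for illustration
print(satisfies_registration(extracted_depths, threshold=20.0))  # True
print(satisfies_registration(extracted_depths, threshold=13.0))  # False
```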
Regarding Claim 13: KATO further teaches setting, as an extraction target, sensor data of a plurality of frames corresponding to the object region in accordance with weather ([0270], [0527]-[0530]).

Regarding Claim 14: KATO further teaches comparing the sensor data with distance information obtained by sensor fusion processing based on the imaged image and other sensor data (a detection area Sf that is a combination of the detection area S1 of the camera 43a, the detection area S2 of the LiDAR unit 44a, and the detection area S3 of the millimeter wave radar 45a as shown in FIG. 4; for example, the surrounding environment information If may include information on an attribute of a target object, a position of the target object with respect to the vehicle 1, a distance between the vehicle 1 and the target object, and/or a velocity of the target object with respect to the vehicle 1: [0247]).

Regarding Claim 15: KATO further teaches performing sensor fusion processing based on the imaged image and other sensor data (as shown in FIG. 12, the surrounding environment information If may include information on a target object existing at an outside of the vehicle 101 in a detection area Sf that is a combination of the detection area S1 for the camera 143a, the detection area S2 for the LiDAR unit 144a, and the detection area S3 for the millimeter wave radar 145a; the surrounding environment information fusing module 1450a transmits the surrounding environment information If to the vehicle control unit 103: [0334]); and correcting (2460: fig. 17) distance information obtained by the sensor fusion processing on a basis of the sensor data (the recognition accuracy with which the surrounding environment of the vehicle 201 is recognized can be improved by making use of the information on the detection accuracies of the sensors: [0394]; the surrounding environment information identification module 2400a is configured to identify the surrounding environment of the vehicle 201 based on the detection data of the sensors (the camera 243a, the LiDAR unit 244a, the millimeter wave radar 245a) and the detection accuracy of the sensors: [0396]).

Regarding Claim 16: KATO further teaches that the rangefinding sensor includes a LiDAR (244a: fig. 17), and the sensor data is point cloud data ([0405]).

Regarding Claim 17: KATO further teaches that the rangefinding sensor includes a millimeter wave radar (245a: fig. 17), and the sensor data is distance information indicating a distance to the object ([0311]).

Claims 11, 13-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over KATO et al. (US 2021/0403015) in view of Honda (US 2003/0011509) and CHIBA et al. (US 2018/0361854).

Regarding Claims 11 and 20: KATO teaches an information processing apparatus comprising (figs. 17-18): a memory storing a program, and at least one processor configured to execute the program to perform operations comprising: extracting (2450a: fig. 17), on a basis of an object (P4) recognized in an imaged image (S1: fig. 18) obtained by a camera (243a: fig. 18), sensor data corresponding to an object region including the object in the imaged image among the sensor data obtained (S2: fig. 18) by a rangefinding sensor (244a: fig. 18); and setting an extraction condition of the sensor data on a basis of the object having been recognized ([0410]-[0415]).

KATO is silent about determining whether the object region exists higher than a predetermined height in the imaged image; in a case where it is determined that the object region exists higher than the predetermined height in the imaged image, setting sensor data of a plurality of frames corresponding to the object region as an extraction target; and performing an operation control based upon the sensor data.

However, Honda teaches determining whether the object region exists higher than a predetermined height in the imaged image (when it is determined from an image captured by the camera that the target is at a height higher than road level, the method determines that the target is a stationary object located above the road: abstract; [0040]); in a case where it is determined that the object region exists higher than the predetermined height in the imaged image, setting sensor data of a plurality of frames corresponding to the object region as an extraction target (S6 determines whether the height Hg of the target captured by the camera is higher than the horizon level; if the answer is Yes, the target is recognized as being an overhead structure, and the range Rr to the target, the angle θr of the beam reflected by the target, and the relative velocity Vr with respect to the target, obtained from the radar, are deleted from data concerning the vehicle traveling in front so that these data will not be treated as the data concerning the vehicle ahead: [0040]); and performing an operation control based upon the sensor data (the data are deleted from the target data to be used for collision avoidance or vehicle-to-vehicle distance control: [0040]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of KATO by determining whether the object region exists higher than a predetermined height in the imaged image; in a case where it is determined that the object region exists higher than the predetermined height in the imaged image, setting sensor data of a plurality of frames corresponding to the object region as an extraction target; and performing an operation control based upon the sensor data, as taught by Honda, so as to detect a stationary object over the road surface and control the vehicle movement.

KATO further teaches that the target speed and the target angular velocity of the vehicle 1 are calculated ([0084]). The modified KATO is silent about, in a case where a difference in speed calculated on a basis of time-series change in the sensor data between an upper part and a lower part of the object region is larger than a predetermined threshold, excluding, from the extraction target, the sensor data corresponding to an upper part of the object region.

However, CHIBA teaches, in a case where a difference in speed calculated on a basis of time-series change in the sensor data between an upper part (V3R: fig. 3; [0041], [0043]) and a lower part (V2R: fig. 3; [0041], [0043]) of the object region is larger than a predetermined threshold, excluding, from the extraction target (2: fig. 3), the sensor data corresponding to an upper part of the object region (the driving assistance control device excludes the upper structure from the object in the driving assistance control: [0014]).
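The CHIBA mechanism paraphrased above can be pictured as comparing speeds estimated from time-series range data for the upper and lower parts of an object region; a hedged sketch with invented numbers (an overhead structure closes at roughly the ego speed while a lead vehicle's range barely changes):

```python
# Hypothetical sketch of a CHIBA-style upper-structure exclusion:
# estimate speed from time-series range samples for the upper and lower
# parts of an object region; a large difference suggests the upper part
# is a stationary overhead structure whose data should be excluded.

def speed(ranges, dt):
    """Average closing speed from a series of range samples (finite differences)."""
    steps = [abs(b - a) for a, b in zip(ranges, ranges[1:])]
    return sum(steps) / (dt * len(steps))

def exclude_upper(upper_ranges, lower_ranges, dt, threshold):
    """True when the upper-part sensor data should be dropped from the extraction target."""
    return abs(speed(upper_ranges, dt) - speed(lower_ranges, dt)) > threshold

upper = [30.0, 28.0, 26.0]   # overhead sign: range closes ~20 m/s (invented)
lower = [30.0, 29.9, 29.8]   # lead vehicle: range nearly constant (invented)
print(exclude_upper(upper, lower, dt=0.1, threshold=5.0))  # True
```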
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of KATO by, in a case where a difference in speed calculated on a basis of time-series change in the sensor data between an upper part and a lower part of the object region is larger than a predetermined threshold, excluding, from an extraction target, the sensor data corresponding to the upper part of the object region, as taught by CHIBA, so as to reduce a sense of discomfort and uneasiness of the driver and further improve the reliability of the driving assistance system. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over KATO et al. (US 2021/0403015) in view of Liu, further in view of CHIBA et al. (US 2018/0361854). Regarding claim 4: KATO further teaches that the target speed and the target angular velocity of the vehicle 1 are calculated ([0084]). The modified KATO is silent about excluding, from an extraction target, the sensor data in which a difference between a speed of the object having been recognized and a speed calculated on a basis of a time-series change of the sensor data is larger than a predetermined speed threshold. However, CHIBA teaches excluding, from an extraction target, the sensor data in which a difference between a speed of the object having been recognized and a speed calculated on a basis of a time-series (fig. 6) change of the sensor data (relative speed: [0014]; [0040]-[0042], [0047]) is larger than a predetermined speed threshold (when the relative speed exceeds a threshold, the upper structure is excluded from the object: [0014]).
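A minimal sketch of the claim 4 exclusion logic, with hypothetical names (nothing in this sketch is taken from KATO or CHIBA beyond the stated condition): sensor data whose time-series-derived speed differs from the recognized object's speed by more than a threshold is dropped from the extraction target.

```python
def exclude_inconsistent_speed(sensor_points, object_speed, threshold):
    """sensor_points: list of (point_id, derived_speed) pairs, where
    derived_speed is computed from time-series change in the sensor data.
    Keep only points whose speed is consistent with the recognized
    object's speed to within the given threshold."""
    return [
        (pid, v) for pid, v in sensor_points
        if abs(v - object_speed) <= threshold
    ]
```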
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of KATO by excluding, from an extraction target, the sensor data in which a difference between a speed of the object having been recognized and a speed calculated on a basis of a time-series change of the sensor data is larger than a predetermined speed threshold, as taught by CHIBA, so as to reduce a sense of discomfort and uneasiness of the driver and further improve the reliability of the driving assistance system. Claims 5-9 and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over KATO et al. (US 2021/0403015) in view of Liu, further in view of Schamp et al. (US 2009/0010495). Regarding claim 5: the modified KATO is silent about excluding, from an extraction target, the sensor data in which a distance to the object having been recognized is larger than a predetermined distance threshold. However, Schamp teaches excluding, from an extraction target, the sensor data in which a distance to the object having been recognized is larger than a predetermined distance threshold (range-filtered image 416, produced by eliminating objects that are farther than a given maximum distance: [0098]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of KATO by excluding, from an extraction target, the sensor data in which a distance to the object having been recognized is larger than a predetermined distance threshold, as taught by Schamp, so as to identify vulnerable road users with a protection system incorporated in the vehicle to protect vulnerable road users, such as pedestrians and cyclists, from collision with the vehicle. Regarding claim 6: Schamp further teaches setting the distance threshold in accordance with the object having been recognized (a given maximum distance: [0098]). Regarding claim 7:
KATO teaches that the camera and the rangefinding sensor are mounted on a moving body (figs. 2-3). The modified KATO is silent about changing the distance threshold in accordance with a moving speed of the moving body. However, Schamp teaches changing the distance threshold in accordance with a moving speed of the moving body (fig. 10a). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of KATO by changing the distance threshold in accordance with a moving speed of the moving body, as taught by Schamp, so as to identify vulnerable road users with a protection system incorporated in the vehicle to protect vulnerable road users, such as pedestrians and cyclists, from collision with the vehicle. Regarding claim 8: the modified KATO is silent about, in a case where the object region is larger than a predetermined area, setting only sensor data corresponding to a vicinity of a center of the object region as an extraction target. However, Schamp teaches, in a case where the object region is larger than a predetermined area (fig. 46a), setting only sensor data corresponding to a vicinity of a center of the object region as an extraction target (fig. 46b; [0133]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of KATO by, in a case where the object region is larger than a predetermined area, setting only sensor data corresponding to a vicinity of a center of the object region as an extraction target, as taught by Schamp, so as to identify vulnerable road users with a protection system incorporated in the vehicle to protect vulnerable road users, such as pedestrians and cyclists, from collision with the vehicle. Regarding claim 9:
Schamp further teaches, in a case where the object region is smaller than a predetermined area (car: fig. 19a), setting sensor data corresponding to an entirety of the object region as an extraction target (figs. 19b-c; 424: fig. 25; [0105]; [0112]-[0113]). Regarding claims 21-22: the modified KATO is silent about the operations further comprising: determining, for sensor data corresponding to the object region, whether a spatial correspondence between the sensor data and the object region in the imaged image satisfies a positional consistency condition; and in a case where only a portion of the sensor data corresponding to the object region satisfies the positional consistency condition, setting, as the extraction target, only the sensor data corresponding to a spatially consistent portion of the object region. However, Schamp teaches that the operations further comprise: determining, for sensor data corresponding to the object region, whether a spatial correspondence between the sensor data and the object region in the imaged image satisfies a positional consistency condition (figs. 27-29; the composite range map image 82: [0115], [0117], [0124]); and in a case where only a portion of the sensor data corresponding to the object region satisfies the positional consistency condition, setting, as the extraction target, only the sensor data corresponding to a spatially consistent portion of the object region (figs. 27-29; the composite range map image 82: [0115], [0117], [0124]).
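A minimal sketch of the positional-consistency check recited in claims 21-22, with hypothetical names and a simple bounding-box condition assumed purely for illustration (the claims and references do not prescribe this particular condition): only sensor points whose image-plane projection falls within the object region are kept as the extraction target.

```python
def spatially_consistent_points(points, region, tol=0.0):
    """points: iterable of (x, y) image-plane projections of sensor data;
    region: (x0, y0, x1, y1) bounding box of the object region in the
    imaged image. Return only the points lying within the region,
    optionally expanded by tol pixels in every direction."""
    x0, y0, x1, y1 = region
    return [
        (x, y) for x, y in points
        if x0 - tol <= x <= x1 + tol and y0 - tol <= y <= y1 + tol
    ]
```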
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of KATO such that the operations further comprise: determining, for sensor data corresponding to the object region, whether a spatial correspondence between the sensor data and the object region in the imaged image satisfies a positional consistency condition; and in a case where only a portion of the sensor data corresponding to the object region satisfies the positional consistency condition, setting, as the extraction target, only the sensor data corresponding to a spatially consistent portion of the object region, as taught by Schamp, so as to identify vulnerable road users with a protection system incorporated in the vehicle to protect vulnerable road users, such as pedestrians and cyclists, from collision with the vehicle. Response to Arguments. Applicant's arguments with respect to the §101 rejection, specifically claims 1, 11 and 19-20: Applicant disagrees with the rejection, arguing that "These operations necessarily involve correlating camera-derived object regions with rangefinding sensor data, evaluating spatial correspondence and overlap between distinct object regions, and selectively modifying sensor inputs prior to operation control. Such processing cannot be performed mentally or with pen and paper. It requires coordinated use of physical sensors and spatial computations tied to real-world geometry. Accordingly, claim 1 is not directed to an abstract idea under Step 2A, Prong One. Even if an abstract idea were identified, the amended limitations integrate the claim into a practical application by improving the accuracy and reliability of sensor data used for operation control, satisfying Step" and "claim 11 is likewise patent-eligible.
Claim 11 recites determining whether an object region exists higher than a predetermined height in an imaged image, extracting sensor data of a plurality of frames corresponding to that object region, and further requires that, where a speed difference between an upper part and a lower part of the same object region exceeds a threshold, sensor data corresponding to the upper part is excluded. These features are grounded in image-space geometry, time-series analysis of sensor data, and intra-object-region discrimination based on measured physical behavior. The claim does not merely classify data or apply a mathematical rule, but instead controls which portions of real sensor data are relied upon for operation control based on spatial and temporal characteristics. Claim 11 therefore recites a concrete technical solution and is patent-eligible under § 101", see pages 7-8. In response, the Examiner respectfully disagrees because the current claim amendment is also directed to a combination of mathematical concepts (i.e., processes that can be performed by mathematical relationships, rules, or ideas) and mental steps (i.e., processes that can be performed mentally and/or with pen and paper). Therefore, the italicized limitations fall within the subject matter groupings of abstract ideas enumerated in Section I of the 2019 Revised Patent Subject Matter Eligibility Guidance. For example, the limitations "setting [..]; determining [..]; setting sensor data of a plurality of frames [..]; performing an operation control [..]" are mental steps (i.e., processes that can be performed mentally and/or with pen and paper, or a mental judgment), see MPEP § 2106.04(a)(2).
For example, the limitation "in a case where a difference in speed calculated on a basis of time-series change in the sensor data between an upper part and a lower part of the object region is larger than a predetermined threshold, excluding, from the extraction target, the sensor data corresponding to an upper part of the object region" is a mathematical concept (i.e., a process that can be performed by mathematical relationships or rules), see MPEP § 2106.04(a)(2). Therefore, the amended limitations fail to overcome the current §101 rejection (see the details of the current rejection above); the limitations are considered together as a single abstract idea for further analysis (discussing Bilski v. Kappos, 561 U.S. 593 (2010)). Viewed individually, the three "additional elements" do not provide a practical application of the abstract idea. Furthermore, the three "additional elements" in combination amount to a plurality of generic data-collecting devices associated with computers and software, which amount to mere instructions to implement the abstract idea on a computer and/or mere use of generic computer components as a tool to perform the abstract idea. Therefore, these elements in combination do not provide a practical application. The combination of additional elements does no more than generally link the use of the abstract idea to a particular technological environment, i.e., an environment of computer hardware/software in communication with one another (a network of computing devices), and for this additional reason the combination of additional elements does not provide a practical application of the abstract idea. As such, the rejection is maintained. Applicant's arguments with respect to the §103 rejection, specifically claims 1 and 19: Liu does not teach the amended limitations.
In response, the Examiner respectfully disagrees because Liu teaches the amended limitations. Regarding claim 1: excluding, from the extraction target (P4: fig. 18), the sensor data corresponding to a region (Sy: fig. 18) overlapping another object region for another object (P3: fig. 18) in the object region (e.g., the detection area S2 excluding the overlapping areas Sx, Sy: [0411]-[0415]; [0428]-[0431]). Regarding claim 19: CHIBA teaches, in a case where a difference in speed calculated on a basis of time-series change in the sensor data between an upper part (V3R: fig. 3; [0041], [0043]) and a lower part of the object region (V2R: fig. 3; [0041], [0043]) is larger than a predetermined threshold, excluding, from an extraction target (2: fig. 3), the sensor data corresponding to an upper part of the object region (the driving assistance control device excludes the upper structure from the object in the driving assistance control: [0014]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of KATO by, in a case where a difference in speed calculated on a basis of time-series change in the sensor data between an upper part and a lower part of the object region is larger than a predetermined threshold, excluding, from an extraction target, the sensor data corresponding to the upper part of the object region, as taught by CHIBA, so as to reduce a sense of discomfort and uneasiness of the driver and further improve the reliability of the driving assistance system. As such, the §103 rejection is maintained. Conclusion: Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. a) Shiba et al. (US 2020/0301019) disclose that the terrain map information 230 indicates a horizontal position (i.e., latitude and longitude) and a height (i.e., altitude) of each point of the road surface RS, wherein the horizontal position and the height are associated with each other. The terrain map database 30 may be stored in a predetermined memory device installed on the vehicle 1, or may be stored in a management server outside the vehicle 1. FIG. 4 is a conceptual diagram indicating objects that are recognized by the object recognition processing according to the present embodiment. In the object recognition processing, an object that is a target of tracking is recognized. Such a tracking target is exemplified by another vehicle, a pedestrian, a sign board, a pylon, a roadside structure, a fallen object, and so forth. In the present embodiment, a fallen object 3 on the road surface RS and a tracking target 4 other than the fallen object 3 are considered separately. In general, the fallen object 3 exists only in the vicinity of the road surface RS, and a height of the fallen object 3 is significantly lower than that of the tracking target 4.
The object recognition processing according to the present embodiment distinguishes between the fallen object 3 and the tracking target 4. b) Li et al. (US 2019/0180467) disclose a LiDAR device 420. During operation, the LiDAR device 420 may rotate and use the laser beam to scan the surrounding environment of the vehicle, thereby generating a LiDAR point cloud image according to the reflected laser beam. Since the LiDAR device 420 rotates and scans along limited heights of the vehicle's surrounding environment, the LiDAR point cloud image measures the 360° environment surrounding the vehicle between the predetermined heights of the vehicle. c) Pink et al. (US 2017/0023678) disclose that the lateral LIDAR sensor is tilted about its transverse axis with respect to the horizontal, so the detection area of the lateral LIDAR sensor detects the remote upper spatial area at a predefined height above the vehicle using its part which is at the front in the direction of travel. d) Neumann et al. (US 2004/0105573) disclose that the range sensor information can provide a clear footprint of a building's position and height information. This global geometric information can be used to determine a building's geo-location and isolate it from the surrounding terrain. Identification of buildings in the range sensor information can be based on height. For example, applying a height threshold to the reconstructed 3D mesh data can create an approximate building mask. The mask can be applied to filter all the mesh points, and those masked points can be extracted as building points. In addition to height, area coverage can also be taken into consideration in this identification process. Moreover, the height and area variables used can be set based on information known about the region being modeled. e) Weinberg (US 10,229,596) discloses that a lidar system can be provided for measuring a clearance of overhead infrastructure, such as a bridge or overpass.
The lidar system can alert a vehicle driver or automatically brake the vehicle if the available clearance is smaller than a height of the vehicle. The lidar system can emit rays of light over a range of angles towards a target region where the rays of light can have a vertical span. The lidar system can then receive rays of light reflected or scattered from the target region and can determine a distance traveled by the rays of light by determining a round trip travel time of the rays. A clearance of the overhead infrastructure can then be determined using geometric relationships. Contact Information: Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD K ISLAM whose telephone number is (571) 270-0328. The examiner can normally be reached M-F 9:00 a.m. - 5:00 p.m. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shelby A. Turner, can be reached at 571-272-6334. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MOHAMMAD K ISLAM/Primary Examiner, Art Unit 2857

Prosecution Timeline

Oct 17, 2022
Application Filed
Feb 21, 2025
Non-Final Rejection — §101, §103
May 14, 2025
Response Filed
Jul 26, 2025
Final Rejection — §101, §103
Sep 30, 2025
Response after Non-Final Action
Oct 30, 2025
Request for Continued Examination
Nov 05, 2025
Response after Non-Final Action
Jan 27, 2026
Non-Final Rejection — §101, §103
Feb 20, 2026
Response Filed
Mar 31, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601849
SYSTEMS AND METHODS FOR PLANNING SEISMIC DATA ACQUISITION WITH REDUCED ENVIRONMENTAL IMPACT
2y 5m to grant Granted Apr 14, 2026
Patent 12596361
FAILURE DIAGNOSIS METHOD, METHOD OF MANUFACTURING DISK DEVICE, AND RECORDING MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12596872
HOLISTIC EMBEDDING GENERATION FOR ENTITY MATCHING
2y 5m to grant Granted Apr 07, 2026
Patent 12596868
CREATING A DIGITAL ASSISTANT
2y 5m to grant Granted Apr 07, 2026
Patent 12597434
CONTROL OF SPEECH PRESERVATION IN SPEECH ENHANCEMENT
2y 5m to grant Granted Apr 07, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+16.5%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 1288 resolved cases by this examiner. Grant probability derived from career allow rate.
