Prosecution Insights
Last updated: April 19, 2026
Application No. 18/601,870

SYSTEMS FOR OBJECT DETECTION

Status: Final Rejection (§102, §103)
Filed: Mar 11, 2024
Examiner: AFRIN, NAZIA
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Qualcomm Incorporated
OA Round: 2 (Final)
Grant Probability: 40% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 2m
Grant Probability With Interview: 57%

Examiner Intelligence

Career Allow Rate: 40% of resolved cases (4 granted / 10 resolved; -12.0% vs TC avg)
Interview Lift: +16.7% for resolved cases with an interview (strong +17% lift)
Typical Timeline: 3y 2m average prosecution
Career History: 73 total applications across all art units; 63 currently pending
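The headline figures above follow from simple arithmetic on the raw counts. As a minimal sketch (not the analytics tool's actual methodology), assuming the without-interview allowance rate is roughly 40.3%, so that the displayed 57% with-interview rate yields the stated +16.7-point lift:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Lift, in percentage points, from conducting an examiner interview."""
    return rate_with - rate_without

# "4 granted / 10 resolved" from the panel above.
career = allow_rate(granted=4, resolved=10)
# 57% with interview is from the panel; 40.3% without is an assumption
# chosen to reproduce the displayed +16.7% lift.
lift = interview_lift(rate_with=57.0, rate_without=40.3)
print(f"Career allow rate: {career:.1f}%")
print(f"Interview lift: {lift:+.1f} points")
```

Rounded to whole points, this is the "+17% interview lift" shown in the panel.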

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§103: 60.7% (+20.7% vs TC avg)
§112: 6.4% (-33.6% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 10 resolved cases
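The deltas above are internally consistent: each equals the examiner's per-statute rate minus a Tech Center average that back-computes to 40.0% for every statute. A short sketch (the uniform 40% TC average is an inference from the displayed numbers, not a published USPTO figure):

```python
# Per-statute rates from the panel above.
examiner = {"101": 11.8, "102": 21.1, "103": 60.7, "112": 6.4}
# Implied Tech Center 3600 average, back-computed as rate minus delta;
# it comes out to 40.0 for every statute. An inference, not a published figure.
tc_avg = {s: 40.0 for s in examiner}

deltas = {s: round(examiner[s] - tc_avg[s], 1) for s in examiner}
print(deltas)  # {'101': -28.2, '102': -18.9, '103': 20.7, '112': -33.6}
```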

Office Action (§102, §103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are pending. No claims are amended and no claims are added.

Response to Arguments

Applicant's "Amendments and Remarks" filed on 11/24/2025 have been fully considered. Applicant's remarks are addressed in the order in which they were presented.

Applicant's arguments: Applicant submits that object detection via machine learning systems is an established technology and that claims 1 and 18 provide a particular solution to, and improvement of, this technology; Applicant therefore requests withdrawal of the rejection under 35 U.S.C. 101. Applicant further submits that Ferreira fails to teach or suggest "wherein a respective vertical field of view of each camera sensor of the one or more camera sensors is greater than a respective vertical field of view of each sensor of the one or more sensors." Applicant also argues that the other independent claims, which recite similar features, are allowable, and that the dependent claims are allowable because they depend from allowable subject matter.

Office response: Applicant's argument overcomes the 35 U.S.C. 101 rejection. Applicant's remarks and arguments regarding the 35 U.S.C. 102 rejection have been respectfully considered but are not persuasive. As to Applicant's argument on page 8 that Ferreira fails to teach or suggest "obtaining from one or more camera sensors, camera data of the environment, … is greater than a respective vertical field of view of each sensor of the one or more sensors":
Ferreira teaches in para [2312]: "In other words, both LIDAR sensor elements (also referred to as LIDAR sensors or sensors) 52 and camera sensors perform their respective Field-of-View (FoV) analysis independent from each other and come to separate (individual) measurement and analysis results, for example, two different point clouds (three-dimensional (3D) for LIDAR, two-dimensional (2D) for a regular camera, or 3D for a stereo camera), and separate object recognition and/or classification data… This leads to Results A (LIDAR) and Results B (Camera) for a system comprised of the LIDAR First Sensor System 40 and the Second LIDAR Sensor System 50, as well as a Camera sensor system 81. Depending on the used sensor systems and mathematical models, sensor point clouds may have the same or different dimensionality."

Ferreira also teaches, in para [1661], a conventional LIDAR system in which a vertical laser line is emitted to scan the scene (e.g., to scan the field of view of the LIDAR system) and only a specific vertical line on the sensor should be detected; in para [1691], that the LIDAR system (e.g., the waveguiding component) may include collection optics configured to image the vertical and horizontal field of view onto the waveguiding components of the first plurality of waveguiding components; and in para [1693], that the LIDAR system may include an array of lenses (e.g., micro-lenses) configured to image the vertical and horizontal field of view onto the optical fibers.

Ferreira further teaches that there are a number of energy-consuming devices such as sensors (LIDAR, camera) (para [0017]), and describes differences between those sensing systems related to perception range (see para [0007]) and to vertical and horizontal field of view. According to Ferreira (see para [1127]), in a conventional combination of a LIDAR sensor and a camera sensor, two separate image sensors are provided, and these are combined by means of a suitable optics arrangement (e.g., semitransparent mirrors, prisms, and the like).
As a consequence, a rather large LIDAR sensor space is required, and both partial optics arrangements of the optics arrangement and both sensors (LIDAR sensor and camera sensor) have to be aligned to each other with high accuracy… This may also incorporate the fact that the fields of view of both sensors do not necessarily coincide with each other, and that regions may exist in close proximity to the sensors in which an object cannot be detected by all of the sensors simultaneously.

Additionally, Ferreira teaches in claim 28 that the LIDAR sensor system's predefined direction is the vertical direction, and in para [2011]: "For example, a middle region that directs light into a certain vertical angular range is complemented by a horizontal side region that directs light into a larger vertical angular range. This allows the LIDAR Sensor System 10 to emit adjusted laser power into each horizontal region within the FOV, enabling longer range with a smaller vertical FOV in the central region and shorter range, but a larger vertical FOV on the sides."

The Office respectfully disagrees with the asserted allowability of the dependent claims. It is the Office's position that all of the claimed subject matter has been properly rejected, and the Office's response provides an explanation with citations; the Office therefore respectfully disagrees with Applicant's arguments. Because the independent claims remain rejected under 35 U.S.C. 102, the Examiner maintains the 35 U.S.C. 103 rejection and repeats both rejections as before, with additional citations, for convenience.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 10-16, and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20200284883 A1 to Ferreira et al. (hereinafter "Ferreira").

Regarding claim 1, Ferreira teaches an apparatus of detecting one or more objects, the apparatus comprising: at least one memory (see Ferreira para [0501]: the TIA 1600 is configured to collect the injected charge signal from the photosensitive SPAD 52 and to store it on a memory capacitor for being read out from the backend on command); and at least one processor (see Ferreira processor 62) coupled to the at least one memory and configured to: obtain point cloud data of an environment of the apparatus (see Ferreira para [2232]: Program Device, Data Storage Device, Soft- and Hardware of information about the observed object, e.g., point clouds), the point cloud data comprising one or more point clouds obtained using one or more sensors and a respective field of view of each sensor of the one or more sensors (see Ferreira para [2333]: LIDAR measurements generate 3D Point Cloud Data that can be used for object recognition and classification); obtain, from one or more camera sensors, camera data of the environment, each camera sensor of the one or more camera sensors comprising a respective field of view (see Ferreira's camera sensor), wherein a respective vertical field of view (see Ferreira claim 28: wherein the predefined direction is the vertical direction) of each camera sensor of the one or more camera sensors is greater than a respective vertical field of view of each sensor of the one or more sensors (see Ferreira para [1127], claim 28, and para [2011]: for example, a middle region that directs light into a certain vertical angular range is complemented by a horizontal side region that directs light into a larger vertical angular range; this allows the LIDAR Sensor System 10 to emit adjusted laser power into each horizontal region within the FOV, enabling longer range with a smaller vertical FOV in the central region and shorter range, but a larger vertical FOV on the sides); obtain map data of the environment, the map data comprising one or more spatial priors indicative of at least one of elevated object patterns or locations (see Ferreira's traffic map data in paras [5361]-[5365]); and determine, using a trained machine learning system (see Ferreira para [0093]: machine learning software), a location of an object based on the point cloud data, the camera data, and the map data of the environment of the apparatus (see Ferreira para [1126]: in the LIDAR Sensor System, a combination of a LIDAR sensor and a camera sensor may be desired, e.g., in order to identify an object or characteristics of an object by means of data fusion; para [5363]: a traffic control device, and the like; illustratively, the one or more traffic-related conditions may be GPS-coded, e.g., associated with a location of a vehicle (e.g., with the GPS coordinates of a vehicle); para [0073]: in some embodiments of the LIDAR Sensor System, the instructions to the LIDAR Sensor Management Software are based on measured values and/or data of any member selected from the following group or a combination thereof: vehicle (LIDAR Sensor Device) speed, distance, density, vehicle specification and class; para [0054]: the LIDAR Sensor Device can further comprise one or more LIDAR Sensor Systems as well as other sensor systems, like optical camera sensor systems (CCD; CMOS), RADAR sensing systems, and ultrasonic sensing systems).

Regarding claim 2, Ferreira remains applied as in claim 1. Ferreira teaches wherein the at least one processor is configured to determine the location of the object using the trained machine learning system (see Ferreira para [0084]: the software platform may cumulate data from one's own or other vehicles (LIDAR Sensor Devices) to train machine learning algorithms for improving surveillance and car steering strategies; para [0093]: machine learning software) further based on at least one azimuth, at least one respective radius, and at least one respective elevation of the object with respect to the apparatus (see Ferreira para [3063]: since the Retrofit LIDAR sensor device can communicate with a driver's smartphone, the camera picture as well as all relevant tilt (azimuth and elevation) and yaw angles can be transmitted and displayed).

Regarding claim 10, Ferreira remains applied as in claim 1. Ferreira teaches wherein the at least one processor is configured to determine the location of the object further based on temporal data of the environment (see Ferreira para [1551]: the sensor controller 53 may be configured to control the movement of the one or more optical components of the plurality of optical components with a same time dependency).

Regarding claim 11, Ferreira remains applied as in claim 1. Ferreira teaches further comprising the one or more sensors, the one or more sensors configured to capture the one or more point clouds (see Ferreira para [2312]: measurement and analysis results, for example, two different point clouds (three-dimensional (3D) for LIDAR, two-dimensional (2D) for a regular camera, or 3D for a stereo camera)).

Regarding claim 12, Ferreira remains applied as in claim 1. Ferreira teaches wherein the point cloud data is light detection and ranging (LiDAR) data, and wherein the one or more sensors includes one or more LiDAR sensors (see Ferreira para [2710]: another aspect of the LIDAR Sensor System deals with light detection and light ranging).

Regarding claim 13, Ferreira remains applied as in claim 1. Ferreira teaches further comprising the one or more camera sensors, the one or more camera sensors configured to capture the camera data (see Ferreira para [0054]: optical camera sensor systems (CCD; CMOS), RADAR sensing systems, and ultrasonic sensing systems).

Regarding claim 14, Ferreira remains applied as in claim 1. Ferreira teaches wherein the apparatus is part of a vehicle (see Ferreira para [5389]: in various embodiments, one or more types of traffic map information may be compiled and integrated into a traffic density probability map; by way of example, the traffic density probability map may be downloaded by a vehicle and stored in the vehicle's data storage system; para [5712]: the illumination and sensing system 17300 may be a system for a vehicle; illustratively, a vehicle may include one or more illumination and sensing systems 17300 described herein, for example arranged in different locations in the vehicle).

Regarding claim 15, Ferreira remains applied as in claims 1 and 14. Ferreira teaches wherein the at least one processor is configured to adjust an operating parameter of the vehicle based on the location of the object (see Ferreira para [5387]: the traffic map provider may be configured to adjust the traffic map to be transmitted to the vehicle according to the received information; para [5390]: the information described by the traffic density probability map may enable a driver and/or an autonomously driving vehicle to adjust its route; para [5641]: the one or more processors 12308 may be configured to provide driving commands for (and/or to) the vehicle 12300, to control (e.g., adjust) vehicle control options, to adjust (e.g., reduce) the speed of the vehicle 12300, to control the vehicle 12300 to change lane, to control the driving mode of the vehicle 12300, and the like).

Regarding claim 16, Ferreira remains applied as in claims 1, 14, and 15. Ferreira teaches wherein the operating parameter is associated with at least one of a path for the vehicle to travel (see Ferreira para [5390]: the information described by the traffic density probability map may enable a driver and/or an autonomously driving vehicle to adjust its route), an automatic braking parameter for operating one or more brakes of the vehicle, a lane change parameter for causing the vehicle to navigate from a first lane to a second lane (see para [3943]: the focus of the vehicle sensor system may be mainly on advanced driver-assistance systems (ADAS) functionalities, such as lane keeping, cruise control, emergency braking assistance, and the like), or a display parameter associated with a user interface of the vehicle (see para [0076]: the Controlled LIDAR Sensor System comprises a software user interface (UI), particularly a graphical user interface (GUI)).

Regarding claim 18, Ferreira teaches a method of detecting one or more objects at a device, the method comprising (see Ferreira para [0501]: the TIA 1600 is configured to collect the injected charge signal from the photosensitive SPAD 52 and to store it on a memory capacitor for being read out from the backend on command): obtaining point cloud data of an environment of the device, the point cloud data comprising one or more point clouds obtained using one or more sensors and a respective field of view of each sensor of the one or more sensors (see Ferreira para [2232]: Program Device, Data Storage Device, Soft- and Hardware of information about the observed object, e.g., point clouds); obtaining, from one or more camera sensors, camera data of the environment, each camera sensor of the one or more camera sensors comprising a respective field of view (see Ferreira's camera sensor), wherein a respective vertical field of view (see Ferreira claim 28: wherein the predefined direction is the vertical direction) of each camera sensor of the one or more camera sensors is greater than a respective vertical field of view of each sensor of the one or more sensors (see Ferreira para [1127]: as a consequence, a rather large LIDAR sensor space is required and both partial optics arrangements of the optics arrangement and both sensors (LIDAR sensor and camera sensor) have to be aligned to each other with high accuracy); obtaining map data of the environment, the map data comprising one or more spatial priors indicative of at least one of elevated object patterns or locations (see Ferreira's traffic map data in paras [5361]-[5365]); and determining, using a trained machine learning system (see Ferreira para [0093]: machine learning software), a location of an object based on the point cloud data, the camera data, and the map data of the environment of the device (see Ferreira paras [1126], [5363], [0073], and [0054], as quoted above for claim 1).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-6, 8-9, 17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200284883 A1 to Ferreira et al. (hereinafter "Ferreira") in view of US 20210150350 A1 to Gao et al. (hereinafter "Gao").

Regarding claim 3, Ferreira remains applied as in claim 1. Ferreira teaches data associated with a point cloud of the one or more point clouds and the camera data (see Ferreira para [2312]: measurement and analysis results, for example, two different point clouds (three-dimensional (3D) for LIDAR, two-dimensional (2D) for a regular camera, or 3D for a stereo camera)). However, Ferreira does not expressly disclose or otherwise teach wherein the at least one processor is configured to construct a plurality of graphs using a graph neural network. Nevertheless, in a related field of invention, Gao teaches wherein the at least one processor is configured to construct a plurality of graphs (see Gao para [0085]: thus, as shown in FIG. 4, the graph neural network 220 performs six sets of local operations, where each set of local operations is performed on a corresponding one of six graphs).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Ferreira's LIDAR Sensor System with Gao's graph neural network to generate graphs, in order to allow predicting the future trajectory of an agent in an environment using polylines (see Gao paras [0002] and [0085]).

Regarding claim 4, Ferreira remains applied as in claim 1. Ferreira teaches a field of view of a sensor of the one or more sensors used to capture the point cloud, and an azimuth, a radius, and an elevation of the object with respect to the apparatus for the point cloud (see Ferreira para [3063]: since the Retrofit LIDAR sensor device can communicate with a driver's smartphone, the camera picture as well as all relevant tilt (azimuth and elevation) and yaw angles can be transmitted and displayed). However, Ferreira does not expressly disclose or otherwise teach wherein the at least one processor is configured to construct a plurality of graphs using a graph neural network. Nevertheless, in a related field of invention, Gao teaches wherein the graph is further associated with a field of view (see Gao's graph neural network used to generate graphs). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Ferreira's LIDAR Sensor System with Gao's graph neural network to generate graphs, in order to allow predicting the future trajectory of an agent in an environment using polylines (see Gao paras [0002] and [0085]).

Regarding claim 5, Ferreira remains applied as in claim 1.
Ferreira teaches a camera of the one or more camera sensors (see Ferreira's camera sensor data). However, Ferreira does not expressly disclose or otherwise teach wherein the at least one processor is configured to construct a plurality of graphs using a graph neural network. Nevertheless, in a related field of invention, Gao teaches wherein the graph is further associated with a field of view (see Gao's graph neural network used to generate graphs). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Ferreira's LIDAR Sensor System with Gao's graph neural network to generate graphs using camera sensor data, in order to allow predicting the future trajectory of an agent in an environment using polylines (see Gao paras [0002] and [0085]).

Regarding claim 6, Ferreira remains applied as in claim 1. However, Ferreira does not expressly disclose or otherwise teach wherein each graph of the plurality of graphs comprises a plurality of nodes. Nevertheless, in a related field of invention, Gao teaches wherein each graph of the plurality of graphs comprises a plurality of nodes (see Gao para [0086]: in particular, the graph neural network 220 includes a sequence of one or more subgraph propagation layers that, when operating on a given polyline, each receive as input a respective input feature for each of the nodes in the graph representing the given polyline, i.e., each of the vectors in the given polyline, and generate as output a respective output feature for each of the nodes in the graph, i.e., for each of the vectors in the given polyline).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Ferreira's LIDAR Sensor System with Gao's graph neural network to generate graphs using camera sensor data, in order to allow predicting the future trajectory of an agent in an environment using polylines (see Gao paras [0002] and [0085]).

Regarding claim 8, Ferreira remains applied as in claims 1 and 6. Ferreira teaches a first value indicating whether a respective azimuth, a respective radius, and a respective elevation of each node is within the respective field of view of each camera sensor of the one or more camera sensors, and a second value indicating whether the respective azimuth, the respective radius, and the respective elevation of each node is within the respective field of view of each sensor of the one or more sensors (see Ferreira para [3063]: since the Retrofit LIDAR sensor device can communicate with a driver's smartphone, the camera picture as well as all relevant tilt (azimuth and elevation) and yaw angles can be transmitted and displayed). However, Ferreira does not expressly disclose or otherwise teach wherein each graph of the plurality of graphs comprises a plurality of nodes and the nodes indicate the data. Nevertheless, in a related field of invention, Gao teaches wherein each graph of the plurality of graphs comprises a plurality of nodes (see Gao para [0086]: in particular, the graph neural network 220 includes a sequence of one or more subgraph propagation layers that, when operating on a given polyline, each receive as input a respective input feature for each of the nodes in the graph representing the given polyline).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Ferreira's LIDAR Sensor System with Gao's graph neural network to generate graphs using camera sensor data, in order to allow predicting the future trajectory of an agent in an environment using polylines (see Gao paras [0002] and [0085]).

Regarding claim 9, Ferreira remains applied as in claims 1 and 3. Ferreira teaches using the trained machine learning system (see Ferreira para [0093]: machine learning software). However, Ferreira does not expressly disclose or otherwise teach wherein the at least one processor is configured to process the plurality of graphs to determine the location of the object. Nevertheless, Gao, in the same field of endeavor, teaches processing the plurality of graphs to determine the location of the object (see Gao para [0085]: each set of local operations is performed on a corresponding one of six graphs, each of the six graphs representing a different one of the six polylines in the representation 402). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Ferreira's LIDAR Sensor System with Gao's graph neural network to generate graphs using camera sensor data, in order to allow predicting the future trajectory of an agent in an environment using polylines (see Gao paras [0002] and [0085]).

Regarding claim 17, Ferreira remains applied as in claim 1. However, Ferreira does not expressly disclose or otherwise teach wherein the trained machine learning system is a graph neural network (GNN). Nevertheless, Gao, in the same field of endeavor, teaches wherein the trained machine learning system is a graph neural network (GNN)
(see Gao para [0085]: the graph neural network 220 is a local graph neural network, i.e., a neural network that operates on each of the polylines independently and that represents each vector in any given polyline as a node in a graph that represents the given polyline; thus, as shown in FIG. 4, the graph neural network 220 performs six sets of local operations, where each set of local operations is performed on a corresponding one of six graphs, each of the six graphs representing a different one of the six polylines in the representation 402). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Ferreira's LIDAR Sensor System with Gao's graph neural network to generate graphs using camera sensor data, in order to allow predicting the future trajectory of an agent in an environment using polylines (see Gao paras [0002] and [0085]).

Regarding claim 19, Ferreira remains applied as in claim 18. Ferreira teaches a point cloud of the one or more point clouds and the camera data, a field of view of a sensor of the one or more sensors used to capture the point cloud, an azimuth, a radius, and an elevation of the object with respect to the device for the point cloud, and a field of view of a camera of the one or more camera sensors (see Ferreira para [3063]: since the Retrofit LIDAR sensor device can communicate with a driver's smartphone, the camera picture as well as all relevant tilt (azimuth and elevation) and yaw angles can be transmitted and displayed). However, Ferreira does not expressly disclose or otherwise teach a plurality of graphs.
Nevertheless, Gao, in the same field of endeavor, teaches a plurality of graphs (see Gao para [0085]: thus, as shown in FIG. 4, the graph neural network 220 performs six sets of local operations, where each set of local operations is performed on a corresponding one of six graphs). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Ferreira's LIDAR Sensor System with Gao's graph neural network to generate graphs using camera sensor data, in order to allow predicting the future trajectory of an agent in an environment using polylines (see Gao paras [0002] and [0085]).

Regarding claim 20, Ferreira remains applied as in claim 18. Ferreira teaches whether a respective azimuth, a respective radius, and a respective elevation of each node is within the respective field of view of each camera sensor of the one or more camera sensors, and a second value indicating whether the respective azimuth, the respective radius, and the respective elevation of each node is within the respective field of view of each sensor of the one or more sensors (see Ferreira para [3063]: since the Retrofit LIDAR sensor device can communicate with a driver's smartphone, the camera picture as well as all relevant tilt (azimuth and elevation) and yaw angles can be transmitted and displayed). However, Ferreira does not expressly disclose or otherwise teach a plurality of graphs. Nevertheless, Gao, in the same field of endeavor, teaches a plurality of graphs (see Gao para [0085]: thus, as shown in FIG. 4, the graph neural network 220 performs six sets of local operations, where each set of local operations is performed on a corresponding one of six graphs).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Ferreira's LIDAR Sensor System with Gao's graph neural network to generate graphs using camera sensor data, in order to allow predicting the future trajectory of an agent in an environment using polylines (see Gao paras [0002] and [0085]).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over US 20200284883 A1 to Ferreira et al. (hereinafter "Ferreira") in view of US 20210150350 A1 to Gao et al. (hereinafter "Gao") and US 20200103894 A1 to Cella et al. (hereinafter "Cella").

Regarding claim 7, Ferreira and Gao remain applied as in claim 6. However, Ferreira does not expressly disclose or otherwise teach pruning nodes of the plurality of nodes. Nevertheless, Cella, in the same field of endeavor, teaches wherein the at least one processor is configured to prune one or more nodes of the plurality of nodes based on the one or more nodes being at least one of redundant or less informative than other nodes of the plurality of nodes with respect to the object (see Cella paras [1855], [1857]: in another general aspect, a system for modifying redundancy information associated with encoded data passing from a first node to a second node over a number of data paths includes an intermediate node configured to receive first encoded data including first redundancy information from the first node via a first channel connecting the first node and the intermediate node). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to combine Ferreira's LIDAR Sensor System with Cella's handling of redundant or less-informative nodes, in order to optimize design, development, deployment, and operation of different technologies and improve overall results (see Cella para [0004]).
Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAZIA AFRIN, whose telephone number is (703) 756-1175. The examiner can normally be reached Monday-Friday, 7:30-6:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Scott A. Browne, can be reached at (571) 270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NAZIA AFRIN/
Examiner, Art Unit 3666

/SCOTT A BROWNE/
Supervisory Patent Examiner, Art Unit 3666

Prosecution Timeline

Mar 11, 2024
Application Filed
Aug 27, 2025
Non-Final Rejection — §102, §103
Nov 24, 2025
Response Filed
Mar 15, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600603
CRANE, CRANE CHARACTERISTIC CHANGE DETERMINATION DEVICE, AND CRANE CHARACTERISTIC CHANGE DETERMINATION SYSTEM
Granted Apr 14, 2026 · 2y 5m to grant
Patent 12585271
ACTIVE GEOFENCING SYSTEM AND METHOD FOR SEAMLESS AIRCRAFT OPERATIONS IN ALLOWABLE AIRSPACE REGIONS
Granted Mar 24, 2026 · 2y 5m to grant
Patent 12560927
NAVIGATION METHOD AND ROBOT THEREOF
Granted Feb 24, 2026 · 2y 5m to grant
Study what changed to get past this examiner. Based on 3 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
40%
Grant Probability
57%
With Interview (+16.7%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.