Prosecution Insights
Last updated: April 19, 2026
Application No. 18/185,524

LOGISTICS SAFETY OPERATIONS

Non-Final OA §103
Filed: Mar 17, 2023
Examiner: KNUDSON, ELLE ROSE
Art Unit: 3667
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: DELL PRODUCTS, L.P.
OA Round: 3 (Non-Final)
Grant Probability: 73% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (11 granted / 15 resolved), +21.3% vs TC avg (above average)
Interview Lift: +44.4% across resolved cases with interview (strong)
Typical Timeline: 2y 10m avg prosecution, 27 currently pending
Career History: 42 total applications across all art units

Statute-Specific Performance

§101: 26.7% (-13.3% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)

Deltas are measured against the Tech Center average estimate • Based on career data from 15 resolved cases
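The per-statute figures above can be cross-checked with a short script. The Tech Center baseline is back-calculated from each stated delta (examiner rate minus delta); that derivation is an inference from the numbers shown, not data reported by the tool:

```python
# Cross-check of the statute-specific figures shown above. The TC
# baseline for each statute is back-calculated as (examiner rate -
# stated delta); this derivation is an assumption, not tool output.
examiner_rate = {"101": 26.7, "103": 46.2, "102": 11.1, "112": 14.1}
delta_vs_tc = {"101": -13.3, "103": 6.2, "102": -28.9, "112": -25.9}

implied_tc_avg = {
    s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate
}
for statute, avg in implied_tc_avg.items():
    print(f"§{statute}: examiner {examiner_rate[statute]}% "
          f"vs implied TC avg {avg}% ({delta_vs_tc[statute]:+.1f}%)")
```

Notably, the implied Tech Center baseline works out to 40.0% for all four statutes, consistent with a single TC-wide allowance estimate behind the deltas.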

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/23/2025 has been entered.

Response to Amendment

This non-final action is in response to the RCE filed on 10/23/2025. Claims 1-20 are pending. Claims 1-16 and 18-20 are amended. Claim 17 is previously presented.

Claim Objections

Claim 18 is objected to because of the following informalities: “predicted trajectories of the mobile notes” in lines 9-10 of claim 18 should read “predicted trajectories of the mobile nodes”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 2, 4, 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20240190473 A1 AKELLA; Abishek Krishna et al. (hereinafter Akella), in view of US 20150228066 A1 Farb; Michael Scot (hereinafter Farb).

Regarding claim 1, Akella teaches: A method (see Akella at least [0012] methods, systems, and computer-readable media for controlling how a vehicle behaves when its path intersects with a path of another object) comprising: preparing sensor data at a node in a physical environment for transmission to a central node, wherein the sensor data includes position data and inertial data (see Akella at least [0046] The vehicle 202 may include a vehicle computing device(s) 204, sensor(s) 206 and [0047] In some instances, the sensor(s) 206 may correspond to sensor(s) 114 and may include... location sensors (e.g., global positioning system (GPS), compass, etc.), inertial sensors…); sending the sensor data to a message service at the central node (see Akella at least [0047] The sensor(s) 206 may provide input to the vehicle computing device(s) 204 and/or to computing device(s) 232 and [0058] The driving log data 226 may comprise sensor data... 
the vehicle 202 may transmit the driving log data 226 to the computing device(s) 23); (see Akella at least [0075] The vehicle computing device may generate a representation or simulation of the environment including the vehicle and agent in order to determine the region of potential collision and [0065] the vehicle 202 may perform one or more of the functions associated with the computing device(s) 232, and vice versa). Akella does not teach: receiving, at the node, an event message indicating a potential collision between the node and a second node; in response to the event message: actuating, via the digital twin, a camera associated with a zone corresponding to the potential collision to initiate a live video feed showing the physical environment; and displaying, at the node, a user interface that includes the live video feed or rendered virtual view of the zone generated by the digital twin and a visual indication of the potential collision. However, Farb teaches: receiving, at the node, an event message indicating a potential collision between the node and a second node (see Farb at least [0081] The system can alert the user with a simple audible alert that can pulse (beep) at different rates, provide an escalating sound level (dB), present an audible message describing the situation, and the like in an escalating manner analogous with those described for the visual alerting system); in response to the event message: actuating, via the collision prediction physical environment (see Farb at least [0103] the system would activate the video recorder when the system determines that a distance and velocity of the approaching vehicle suggest an impending collision with the vehicle employing the system might occur within 3 seconds); and displaying, at the node, a user interface that includes the live video feed or rendered virtual view of the zone generated by the digital twin and a visual indication of the potential collision (see Farb at least [0170] The system 
controller 600 can record and present additional information to the user, including: and [0171] a. still images or real time video of the rear approaching vehicle 350 (referenced as an approaching vehicle image 650)). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella to include the acquisition of video data in response to determination of a potential collision and presentation of such information to operators of vehicles of Farb. One of ordinary skill in the art would have been motivated to make this modification because once an imminent collision is detected, it can possibly be prevented by providing details regarding the situation to relevant parties, as suggested by Farb (see Farb at least [0131] These collisions can be reduced by simply warning at least one of the bicyclist and the driver of a potential collision. If the cyclist is warned of an approaching vehicle, the cyclist can react accordingly to avoid a collision. If the driver is informed that they are approaching a cyclist, the driver can be more aware of the cyclist and avoid a collision). 
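As context for the combination, the claim 1 limitations at issue describe an event-driven flow: an event message indicating a potential collision arrives at a node, a camera for the corresponding zone is actuated via the digital twin, and a user interface showing the feed and a visual warning is displayed. A minimal sketch of that flow follows; every name in it is hypothetical and appears in neither the application nor the cited references:

```python
from dataclasses import dataclass

# Hypothetical sketch of the claim 1 flow discussed above: a node
# receives a potential-collision event message, actuates the camera for
# the corresponding zone via the digital twin, and builds a UI payload
# with the live feed and a visual collision warning. All names are
# illustrative only.

@dataclass
class CollisionEvent:
    node_id: str        # node receiving the event message
    other_node_id: str  # second node in the potential collision
    zone: str           # zone where the collision is predicted

class DigitalTwin:
    def actuate_camera(self, zone: str) -> str:
        # Stand-in for starting a live video feed from the zone camera.
        return f"live-feed://{zone}"

def handle_event(twin: DigitalTwin, event: CollisionEvent) -> dict:
    feed = twin.actuate_camera(event.zone)  # actuate via the digital twin
    return {                                # UI payload shown at the node
        "live_feed": feed,
        "warning": f"Potential collision with {event.other_node_id} in {event.zone}",
    }

ui = handle_event(DigitalTwin(), CollisionEvent("forklift-1", "forklift-2", "dock-A"))
```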
Regarding claim 2, Akella and Farb disclose: The method of claim 1, wherein the potential collision defines a safety scenario comprising a predicted intersection of a trajectory of the node and a trajectory of the second node within a predefined zone of the digital twin environment (see Akella at least [0075] The vehicle computing device may generate a representation or simulation of the environment including the vehicle and agent in order to determine the region of potential collision and [0103] a first trajectory that the autonomous vehicle is being controlled to follow; receiving, from a prediction component of the autonomous vehicle, a second trajectory that an object is predicted to follow; determining, based on a first projection of the autonomous vehicle along a path of the first trajectory and a second projection of the object along a path of the second trajectory, an area where the autonomous vehicle and the object have a possibility of collision, the area including at least one location at which the first projection touches or partially overlaps the second projections). Regarding claim 4, Akella and Farb disclose: The method of claim 1, further comprising generating, by the node, a display associated with the potential collision event that is synchronized with the zone selected for the actuated camera (see Farb at least [0170] The system controller 600 can record and present additional information to the user, including: and [0171] a. still images or real time video of the rear approaching vehicle 350 (referenced as an approaching vehicle image 650)). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella and Farb to include the acquisition of video data in response to determination of a potential collision and presentation of such information to operators of vehicles of Farb. 
One of ordinary skill in the art would have been motivated to make this modification because once an imminent collision is detected, it can possibly be prevented by providing details regarding the situation to relevant parties, as suggested by Farb (see Farb at least [0131] These collisions can be reduced by simply warning at least one of the bicyclist and the driver of a potential collision. If the cyclist is warned of an approaching vehicle, the cyclist can react accordingly to avoid a collision. If the driver is informed that they are approaching a cyclist, the driver can be more aware of the cyclist and avoid a collision). Regarding claim 5, Akella and Farb disclose: The method of claim 4, further comprising displaying the live video feed from the camera located at the node showing the physical environment (see Farb at least [0170] The system controller 600 can record and present additional information to the user, including: and [0171] a. still images or real time video of the rear approaching vehicle 350 (referenced as an approaching vehicle image 650)). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella and Farb to include the acquisition of video data in response to determination of a potential collision and presentation of such information to operators of vehicles of Farb. One of ordinary skill in the art would have been motivated to make this modification because once an imminent collision is detected, it can possibly be prevented by providing details regarding the situation to relevant parties, as suggested by Farb (see Farb at least [0131] These collisions can be reduced by simply warning at least one of the bicyclist and the driver of a potential collision. If the cyclist is warned of an approaching vehicle, the cyclist can react accordingly to avoid a collision. 
If the driver is informed that they are approaching a cyclist, the driver can be more aware of the cyclist and avoid a collision). Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Akella, in view of Farb, and further in view of US 20190243371 A1 Nister; David, et al. (hereinafter Nister). Regarding claim 3, Akella and Farb teach: The method of claim 2. Akella and Farb do not teach: wherein the event message indicates that the node is entering a virtual zone represented in the digital twin as being occupied by the second node. However, Nister teaches: wherein the event message indicates that the node is entering a virtual zone represented in the digital twin as being occupied by the second node (see Nister at least [0008] The system may then determine states and safety procedures for each object (perceived and unperceived, static and moving) in the environment, and generate a virtual representation of the points in space-time the objects will occupy (e.g., for each object, an object-occupied trajectory(ies)) when executing their respective safety procedures. The system may then monitor the vehicle-occupied trajectory(ies) in view of the object-occupied trajectories to determine if an intersection or overlap occurs). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella and Farb to include the overlapping occupancy consideration of Nister. 
One of ordinary skill in the art would have been motivated to make this modification because determining whether an area of overlap occurs makes it possible to avoid a collision within the overlap zone, as suggested by Nister (see Nister at least [0008] Once it is determined that an intersection or overlap occurs, the system may implement a pre-emptive object avoidance procedure that acts like a “safety force field” that operates by pro-actively “repels” the vehicle from the projected intersection of object(s) by implementing an action that decreases the overall likelihood or imminence of an actual collision between the vehicle and the object(s)).

Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Akella, in view of Farb, and further in view of US 20240278797 A1 Agrawal; Tushar et al. (hereinafter Agrawal).

Regarding claim 6, Akella and Farb disclose: The method of claim 5. Akella and Farb do not teach: further comprising displaying a rendered virtual view of a same zone generated by the digital twin. However, Agrawal discloses: further comprising displaying a rendered virtual view of a same zone generated by the digital twin (see Agrawal at least [0015] instead of displaying the actual surroundings as the background, an entirely artificial environment may be rendered by a virtual reality application with similar enhancements made to the objects in the field of view). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella and Farb to include the rendered display of Agrawal. 
One of ordinary skill in the art would have been motivated to make this modification because a display of a vehicle’s surroundings allows a driver to better understand predicted collisions and their implications, improving safety, as suggested by Agrawal (see Agrawal at least [0015] Such a method or system may improve existing vehicle incident avoidance systems by providing more detailed data directly to a driver that may predict incident effects based on vehicle movements and driver actions. As a result, driver and passenger safety may be improved and the efficiency of incident avoidance systems may be enhanced). Claim(s) 7, 17, 18, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Akella, in view of CN 115410354 A WANG, BIN et al. (hereinafter Wang), further in view of US 20150227862 A1 Chandrasekar; Kashyap et al. (hereinafter Chandrasekar), and further in view of Farb. Regarding claim 7, Akella discloses: A method (see Akella at least [0012] methods, systems, and computer-readable media for controlling how a vehicle behaves when its path intersects with a path of another object) comprising: receiving position data, inertial data, (see Akella at least [0046] The vehicle 202 may include a vehicle computing device(s) 204, sensor(s) 206 and [0047] In some instances, the sensor(s) 206 may correspond to sensor(s) 114 and may include... location sensors (e.g., global positioning system (GPS), compass, etc.), inertial sensors…); simulating, in a digital twin virtual environment, predicted trajectories of the nodes using the (see Akella at least [0032] The vehicle 102 may use its sensors 114 and the planner 118 to determine the path 110. 
The path 110 may be part of a trajectory determined by the planner 118 and [0076] Having modelled or simulated the vehicle and agent and the environment in which they are travelling, the simulated vehicle 402 and agent 426 may be projected along the paths); applying zone-based safety rules to the predicted trajectories to determine that a potential collision will occur within a defined virtual zone, and in response to the determined potential collision, generating a safety event corresponding to the defined virtual zone and the mobile nodes involved in the potential collision (see Akella at least [0075] The vehicle computing device may generate a representation or simulation of the environment including the vehicle and agent in order to determine the region of potential collision and [0077] As shown in representation 466, the projections may be compared to determine one or more locations at which the outlines 408, 428 touch, overlap, or are within a predetermined distance of one another and [0078] Based on the points of touching or overlap or of close proximity, the region of potential collision may be determined); Akella does not teach: Receiving identifier data from multiple mobile nodes; aggregating the received data to form an aggregated data set representing current states of the mobile nodes; actuating, via the digital twin, a camera associated with the defined virtual zone to capture live imagery of the physical environment; and publishing the safety event and the captured live imagery to the mobile nodes identified in the safety event for presentation by a virtual space service or a driver warning service at each respective node. However, Wang teaches: Receiving identifier data from multiple mobile nodes (see Wang at least [pg. 11, para. 
4, beginning with “The technical architecture”] the vehicle identification module is used for identifying the vehicle in the industrial factory); and publishing the safety event and the captured live imagery to the mobile nodes identified in the safety event for presentation by a virtual space service or a driver warning service at each respective node (see Wang at least [pg. 2, para. 5, beginning with “According to one aspect”] displaying virtual risk position and risk video capture on the display interface, the virtual risk position is the virtual position in the three-dimensional virtual model, the virtual risk position is corresponding to the actual position of risk in the industrial plant area, the risk video capture is the image of the video of the actual position with risk in the field of the industrial plant area and [pg. 10, para. 3, beginning with “The safety pre-warning”] The safety pre-warning of the vehicle may include vehicle collision warning). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella to include the warning and video display in response to vehicle risk situations of Wang. One of ordinary skill in the art would have been motivated to make this modification because doing so encourages a safer industrial environment, as suggested by Wang (see Wang at least [pg. 2, para. 1, beginning on pg. 1 with “The application claims”] if it is detected that there is risk in the industrial plant area, it can directly display the corresponding virtual risk position and risk video capture on the display interface, so it can analyze the industrial factory area, ensuring the safe production of the industrial factory area). 
Akella and Wang do not teach: aggregating the received data to form an aggregated data set representing current states of the mobile nodes; and actuating, via the digital twin, a camera associated with the defined virtual zone to capture live imagery of the physical environment. However, Chandrasekar teaches: aggregating the received data to form an aggregated data set representing current states of the mobile nodes (see Chandrasekar at least [0043] the management server 20 can operate as an aggregator of vehicular data 52 from each of the industrial vehicles 30… The vehicular data 52, which can comprise data indicative of the localized position and one or more operational characteristic… the vehicular data 52 can be indexed to allow the vehicular data 52 from each of the industrial vehicles 30 to be synchronized to collectively represent one or more states of the industrial vehicles 30). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection and response disclosed by Akella and Wang to include the data aggregation of each vehicle of Chandrasekar. One of ordinary skill in the art would have been motivated to make this modification because the aggregated data can be used for safety-related functions such as impact detection, as suggested by Chandrasekar (see Chandrasekar at least [0054] it may be desirable to implement appropriate post impact actions, such as lockout operations. In further embodiments, impact detection can be performed by the server functions 26 based upon aggregated vehicular data 52). Akella, Wang, and Chandrasekar do not teach: actuating, via the digital twin, a camera associated with the defined virtual zone to capture live imagery of the physical environment . 
However, Farb teaches: actuating, via the collision prediction (see Farb at least [0103] the system would activate the video recorder when the system determines that a distance and velocity of the approaching vehicle suggest an impending collision with the vehicle employing the system might occur within 3 seconds). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella, Wang, and Chandrasekar to include the acquisition of video data in response to determination of a potential collision and presentation of such information to operators of vehicles of Farb. One of ordinary skill in the art would have been motivated to make this modification because once an imminent collision is detected, it can possibly be prevented by providing details regarding the situation to relevant parties, as suggested by Farb (see Farb at least [0131] These collisions can be reduced by simply warning at least one of the bicyclist and the driver of a potential collision. If the cyclist is warned of an approaching vehicle, the cyclist can react accordingly to avoid a collision. If the driver is informed that they are approaching a cyclist, the driver can be more aware of the cyclist and avoid a collision). Regarding claim 17, Akella, Wang, Chandrasekar, and Farb teach: The method of claim 7, wherein the events include ultrawide band position data and/or radio frequency identifier position data and/or inertial data (see Akella at least [0046] The vehicle 202 may include a vehicle computing device(s) 204, sensor(s) 206 and [0047] In some instances, the sensor(s) 206 may correspond to sensor(s) 114 and may include... location sensors (e.g., global positioning system (GPS), compass, etc.), inertial sensors) . 
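The claim 7 steps recited above (simulate predicted trajectories of the mobile nodes, then apply zone-based safety rules to flag a potential collision inside a defined virtual zone) can be illustrated with straight-line projections. This is a hedged sketch of one possible reading, not the application's or Akella's actual algorithm; the constant-velocity projection, the zone representation, and all thresholds are invented for illustration:

```python
# Illustrative zone-based safety check for the claim 7 steps: project
# each mobile node's position forward along its current velocity, then
# generate a safety event if the projections come within a threshold of
# one another inside a defined virtual zone. Straight-line projection
# and the horizon/step/threshold values are assumptions of this sketch.

def project(pos, vel, t):
    # Position after t seconds under constant velocity.
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def in_zone(p, zone):
    # zone is an axis-aligned box: (xmin, ymin, xmax, ymax).
    return zone[0] <= p[0] <= zone[2] and zone[1] <= p[1] <= zone[3]

def potential_collision(node_a, node_b, zone, horizon=5.0, step=0.5, threshold=1.0):
    t = 0.0
    while t <= horizon:
        pa = project(*node_a, t)
        pb = project(*node_b, t)
        close = abs(pa[0] - pb[0]) + abs(pa[1] - pb[1]) <= threshold
        if close and in_zone(pa, zone):
            return {"zone": zone, "time": t}  # the generated safety event
        t += step
    return None

# Two nodes converging head-on inside the zone (4, 4)-(6, 6):
event = potential_collision(((0, 5), (1, 0)), ((10, 5), (-1, 0)), (4, 4, 6, 6))
```

Run as written, the head-on example yields a safety event at t = 4.5 s, the first projected step at which the two nodes are within the threshold inside the zone.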
Regarding claim 18, Akella discloses: A non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors operations (see Akella at least [0129] the operations represent computer-executable instructions stored on one or more non-transitory computer-readable storage media that, when executed by one or more processors, cause a computer or autonomous vehicle to perform the recited operations) to perform operations comprising: receiving position data, inertial data, (see Akella at least [0046] The vehicle 202 may include a vehicle computing device(s) 204, sensor(s) 206 and [0047] In some instances, the sensor(s) 206 may correspond to sensor(s) 114 and may include... location sensors (e.g., global positioning system (GPS), compass, etc.), inertial sensors…); executing a simulation in a digital twin environment to project predicted trajectories of the mobile nodes using the (see Akella at least [0032] The vehicle 102 may use its sensors 114 and the planner 118 to determine the path 110. 
The path 110 may be part of a trajectory determined by the planner 118 and [0076] Having modelled or simulated the vehicle and agent and the environment in which they are travelling, the simulated vehicle 402 and agent 426 may be projected along the paths); applying zone-based safety rules to the predicted trajectories to determine that a potential collision will occur within a defined virtual zone (see Akella at least [0075] The vehicle computing device may generate a representation or simulation of the environment including the vehicle and agent in order to determine the region of potential collision and [0077] As shown in representation 466, the projections may be compared to determine one or more locations at which the outlines 408, 428 touch, overlap, or are within a predetermined distance of one another and [0078] Based on the points of touching or overlap or of close proximity, the region of potential collision may be determined); and in response to the determination, generating a safety event corresponding to the defined virtual zone and the mobile nodes involved in the potential collision (see Akella at least [0080] the stopping distance and current distance may be determined based on a region of potential collision and the relative positions of the vehicle and agent). Akella does not teach: receiving identifier data from multiple mobile nodes; aggregating the received data to form an aggregated data set representing current states of the mobile nodes; actuating, via the digital twin, a physical camera associated with the defined virtual zone to initiate a live video feed showing the physical environment; and publishing the safety event and the live video feed to at least one of the mobile node identified in the safety event for presentation by a virtual space service or drive warning service at the respective node. However, Wang teaches: receiving identifier data from multiple mobile nodes (see Wang at least [pg. 11, para. 
4, beginning with “The technical architecture”] the vehicle identification module is used for identifying the vehicle in the industrial factory); and publishing the safety event and the live video feed to at least one of the mobile node identified in the safety event for presentation by a virtual space service or drive warning service at the respective node (see Wang at least [pg. 2, para. 5, beginning with “According to one aspect”] displaying virtual risk position and risk video capture on the display interface, the virtual risk position is the virtual position in the three-dimensional virtual model, the virtual risk position is corresponding to the actual position of risk in the industrial plant area, the risk video capture is the image of the video of the actual position with risk in the field of the industrial plant area and [pg. 10, para. 3, beginning with “The safety pre-warning”] The safety pre-warning of the vehicle may include vehicle collision warning). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection storage medium disclosed by Akella to include the warning and video display in response to vehicle risk situations of Wang. One of ordinary skill in the art would have been motivated to make this modification because doing so encourages a safer industrial environment, as suggested by Wang (see Wang at least [pg. 2, para. 1, beginning on pg. 1 with “The application claims”] if it is detected that there is risk in the industrial plant area, it can directly display the corresponding virtual risk position and risk video capture on the display interface, so it can analyze the industrial factory area, ensuring the safe production of the industrial factory area). 
Akella and Wang do not teach: aggregating the received data to form an aggregated data set representing current states of the mobile nodes; and actuating, via the digital twin, a physical camera associated with the defined virtual zone to initiate a live video feed showing the physical environment. However, Chandrasekar teaches: aggregating the received data to form an aggregated data set representing current states of the mobile nodes (see Chandrasekar at least [0043] the management server 20 can operate as an aggregator of vehicular data 52 from each of the industrial vehicles 30… The vehicular data 52, which can comprise data indicative of the localized position and one or more operational characteristic… the vehicular data 52 can be indexed to allow the vehicular data 52 from each of the industrial vehicles 30 to be synchronized to collectively represent one or more states of the industrial vehicles 30). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection storage medium disclosed by Akella and Wang to include the data aggregation of each vehicle of Chandrasekar. One of ordinary skill in the art would have been motivated to make this modification because the aggregated data can be used for safety-related functions such as impact detection, as suggested by Chandrasekar (see Chandrasekar at least [0054] it may be desirable to implement appropriate post impact actions, such as lockout operations. In further embodiments, impact detection can be performed by the server functions 26 based upon aggregated vehicular data 52). Akella, Wang, and Chandrasekar do not teach: actuating, via the digital twin, a physical camera associated with the defined virtual zone to initiate a live video feed showing the physical environment. 
However, Farb teaches: actuating, via the collision prediction (see Farb at least [0103] the system would activate the video recorder when the system determines that a distance and velocity of the approaching vehicle suggest an impending collision with the vehicle employing the system might occur within 3 seconds). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella, Wang, and Chandrasekar to include the acquisition of video data in response to determination of a potential collision and presentation of such information to operators of vehicles of Farb. One of ordinary skill in the art would have been motivated to make this modification because once an imminent collision is detected, it can possibly be prevented by providing details regarding the situation to relevant parties, as suggested by Farb (see Farb at least [0131] These collisions can be reduced by simply warning at least one of the bicyclist and the driver of a potential collision. If the cyclist is warned of an approaching vehicle, the cyclist can react accordingly to avoid a collision. If the driver is informed that they are approaching a cyclist, the driver can be more aware of the cyclist and avoid a collision). Regarding claim 20, Akella, Wang, Chandrasekar, and Farb teach: The non-transitory storage medium of claim 18, wherein the received data include ultrawide band position data and/or radio frequency identifier position data and/or inertial data (see Akella at least [0046] The vehicle 202 may include a vehicle computing device(s) 204, sensor(s) 206 and [0047] In some instances, the sensor(s) 206 may correspond to sensor(s) 114 and may include... location sensors (e.g., global positioning system (GPS), compass, etc.), inertial sensors…). Claim(s) 8 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Akella, in view of Wang, further in view of Chandrasekar, further in view of Farb, and further in view of Agrawal. Regarding claim 8, Akella, Wang, Chandrasekar, and Farb disclose: The method of claim 7. Akella, Wang, Chandrasekar, and Farb do not teach: wherein the virtual environment includes a virtual node for each of the nodes. However, Agrawal teaches: wherein the virtual environment includes a virtual node for each of the nodes (see Agrawal at least [0003] the display indicates the collision zone on the object and includes a virtual model of the vehicle and [0007] the display may include a second virtual model of the second vehicle that indicates a second collision zone on the second vehicle). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella, Wang, Chandrasekar, and Farb to include the augmented reality display at a relevant node of Agrawal. One of ordinary skill in the art would have been motivated to make this modification because the inclusion of virtual embodiments of multiple vehicles in the environment provides a more complete understanding of the driving circumstances, allowing for increased possibilities to predict traffic incidents, as suggested by Agrawal (see Agrawal at least [0015] Such a method or system may improve existing vehicle incident avoidance systems by providing more detailed data directly to a driver that may predict incident effects based on vehicle movements and driver actions. As a result, driver and passenger safety may be improved and the efficiency of incident avoidance systems may be enhanced). Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Akella, in view of Wang, further in view of Chandrasekar, further in view of Farb, further in view of Agrawal, and further in view of CN 114527676 A FANG, Wei-hao et al. 
(hereinafter Fang). Regarding claim 9, Akella, Wang, Chandrasekar, Farb, and Agrawal disclose: The method of claim 8. Akella, Wang, Chandrasekar, Farb, and Agrawal do not teach: wherein the digital twin virtual environment further includes one or more virtual only nodes having no corresponding physical counterpart. However, Fang teaches: wherein the digital twin virtual environment further includes one or more virtual only nodes having no corresponding physical counterpart (see Fang at least [pg. 4, para. 4, beginning with "most of the current test..."] most of the vehicle can be replaced by the vehicle in the virtual environment, the actual vehicle may only need one or two, so that the actual number of actual vehicles may be greatly reduced). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella, Wang, Chandrasekar, Farb, and Agrawal to include the virtual vehicles in the virtual environment which do not correspond to physical vehicles of Fang. One of ordinary skill in the art would have been motivated to make this modification because, when it comes to testing algorithms, safety is improved by subjecting fewer physical vehicles to potential collisions and subjecting other virtual vehicles to potential collisions wherein less harm is likely to be produced, as suggested by Fang (see Fang at least [pg. 4, para. 5, beginning with “in the actual”] in the actual multi-vehicle test process, due to some defects of the algorithm itself, may cause the collision of the vehicle, using the virtual and actual vehicle mixing test method, can reduce the number of actual vehicle, using virtual vehicle to replace the actual vehicle use, so that the vehicle is not easy to collide, improving the safety of the experiment). Claim(s) 10 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Akella, in view of Wang, further in view of Chandrasekar, further in view of Farb, further in view of Agrawal, further in view of Fang, and further in view of US 20230339459 A1 Chi-Johnston; Geoffrey Louis et al. (hereinafter Chi-Johnston). Regarding claim 10, Akella, Wang, Chandrasekar, Farb, Agrawal, and Fang disclose: The method of claim 9. Akella, Wang, Chandrasekar, Farb, Agrawal, and Fang do not teach: further comprising replaying positions of mobile nodes and the virtual only nodes in the digital twin virtual environment to verify the safety event. However, Chi-Johnston teaches: further comprising replaying positions of mobile nodes and the virtual only nodes in the digital twin virtual environment to verify the safety event (see Chi-Johnston at least [0080] the simulated number of safety critical events (e.g., collisions or near misses) can be compared to those observed on-road, grouped by “pivot” (e.g., night vs. day, etc.)... the safety proxy model can be validated by replaying simulated segments where the simulated behavior of the AV resembles the behavior of the AV observed on-road (i.e., as collected in driving data) and comparing the differences between the behavior of simulated objects and the behavior of the observed objects in the scene, as well as the simulation safety proxy's estimate of the risk and an “on-road” safety proxy estimate of risk). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella, Wang, Chandrasekar, Farb, Agrawal, and Fang to include the real and virtual replaying of safety events of Chi-Johnston. 
One of ordinary skill in the art would have been motivated to make this modification because comparing virtual and real vehicles’ actions by reviewing safety situations validates the safety models and provides for safer autonomous driving, as suggested by Chi-Johnston (see Chi-Johnston at least [0002] a simulation for AV testing, which reproduces real-world characteristics has been important in improving the safety and efficiency of AV driving). Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Akella, in view of Wang, further in view of Chandrasekar, further in view of Farb, and further in view of US 11577741 B1 Reschka; Andreas Christian et al. (hereinafter Reschka). Regarding claim 11, Akella, Wang, Chandrasekar, and Farb disclose: The method of claim 7. Akella, Wang, Chandrasekar, and Farb do not teach: further comprising testing collision models and collision prediction models in the digital twin using the aggregated data. However, Reschka teaches: further comprising testing collision models and collision prediction models in the digital twin using the aggregated data (see Reschka at least [col. 27, lines 11-19] the simulation system 502 may select a set of simulation scenarios that are configured to test collision avoidance responses of the vehicle control system 504. In examples, each simulation scenario can include simulated vehicle control data and simulated object data that are purposefully antagonistic, that is, the data purposefully results in a perceived collision or potential collision of the vehicle with an object and [col. 34, line 51 – col. 
35, line 1] determining a simulation scenario for testing a response of a simulated autonomous vehicle, the simulation scenario comprising… controlling the simulated autonomous vehicle to traverse the simulated environment according to the vehicle trajectory… determining, by a secondary system of the control system and based at least in part on the vehicle trajectory and the object data, a predicted collision and [col. 4, lines 50-53] the techniques described herein may be used with real data (e.g., captured using sensor(s)), simulated data (e.g., generated by a simulator), or any combination thereof). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the potential collision detection method disclosed by Akella, Wang, Chandrasekar, and Farb by testing collision models and collision prediction models in the virtual environment as taught by Reschka, in order to test the systems without losing material resources or endangering people (i.e., testing collision models in a simulated environment is safe and maintains the integrity of material resources -- see Reschka at least [col. 2, lines 15-22] it is difficult to replicate these events in a controlled testing environment, as, by their nature, such events have an increased likelihood of resulting in a collision (and potential damage). Systems and techniques described herein remedy these deficiencies by generating simulation scenarios designed to test collision avoidance systems (as well as vehicle actuators), such as the secondary system described above). Claim(s) 12, 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Akella, in view of Wang, further in view of Chandrasekar, further in view of Farb, and further in view of Fang. Regarding claim 12, Akella, Wang, Chandrasekar, and Farb teach: The method of claim 7. 
Akella, Wang, Chandrasekar, and Farb do not teach: further comprising actuating one or more virtual sensors in the digital twin to generate simulated sensor data. However, Fang teaches: further comprising actuating one or more virtual sensors in the digital twin to generate simulated sensor data (see Fang at least [pg. 4, para. 1, beginning with “emulator, the simulator…”] the virtual sensor is used for obtaining the state information the virtual automatic driving vehicle). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the potential collision detection method disclosed by Akella, Wang, Chandrasekar, and Farb by using virtual sensors on the virtual vehicle as taught by Fang, in order to update the virtual vehicle in the virtual environment (i.e., obtaining data from virtual sensors on the virtual vehicle enables accurate modeling of the virtual vehicle in the digital twin environment -- see Fang at least [pg. 4, para. 1, beginning with “emulator, the simulator…”] the digital twinning scene is constructed with a virtual automatic driving vehicle and a virtual sensor). Regarding claim 13, Akella, Wang, Chandrasekar, and Farb teach: The method of claim 7. Akella, Wang, Chandrasekar, and Farb do not teach: further comprising generating combined output data from physical sensors in the physical environment and virtual sensors in the digital twin. However, Fang teaches: further comprising generating combined output data from physical sensors in the physical environment and virtual sensors in the digital twin (see Fang at least [pg. 3, para. 9, beginning with “a real automatic…”] a real automatic driving vehicle, comprising a network communication module and a sensor, the sensor is used for obtaining the state information and controlling the vehicle and [pg. 3, para. 
5, beginning with “in the simulator”] virtual sensor according to the principle of the actual sensor, constructing virtual sensor comprising virtual camera virtual laser radar, the actual automatic driving vehicle is synchronously mapped in the digital twinning environment according to the position of it in the real environment and [pg. 3, para. 6, beginning with “Preferably, in the step”] the coordinate system of the actual automatic driving vehicle in the point cloud map and the coordinate system of the virtual automatic driving vehicle in the simulation map are consistent, it can be considered that all the automatic driving vehicles are mapped under the uniform coordinate system, the automatic driving vehicle can obtain the coordinate information other vehicles, The information can provide the vehicle decision module information driving network and multi-vehicle test for realizing the virtual combination). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the potential collision detection method disclosed by Akella, Wang, Chandrasekar, and Farb by obtaining information from virtual and real vehicle sensors as taught by Fang, in order to update both vehicles in the virtual environment (i.e., obtaining data from virtual sensors on the virtual vehicle enables accurate modeling of the virtual vehicle in the digital twin environment -- see Fang at least [pg. 4, para. 1, beginning with “emulator, the simulator…”] the digital twinning scene is constructed with a virtual automatic driving vehicle and a virtual sensor). Claim(s) 14-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Akella, in view of Wang, further in view of Chandrasekar, further in view of Farb, and further in view of US 20170094227 A1 WILLIAMS; KJERSTIN IRJA et al. (hereinafter Williams). Regarding claim 14, Akella, Wang, Chandrasekar, and Farb disclose: The method of claim 7. 
Akella, Wang, Chandrasekar, and Farb do not teach: further comprising defining zones in the digital twin virtual environment, each zone corresponding to a camera or sensor in the physical environment. However, Williams teaches: further comprising defining zones in the digital twin virtual environment, each zone corresponding to a camera or sensor in the physical environment (see Williams at least [0005] the rendered three-dimensional virtual environment at an approximate location corresponding to a physical location of the monitoring platform in the geographic region and the real-time video images of the scene of interest superimposed at a field of view corresponding to a respective corresponding perspective orientation relative to the rendered three-dimensional virtual representation of the monitoring platform). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella, Wang, Chandrasekar, and Farb to include the sensor-captured fields of view as they relate to virtual environments of Williams. One of ordinary skill in the art would have been motivated to make this modification because different zones in the virtual environment can be adapted according to their associated images captured by real-world cameras, as suggested by Williams (see Williams at least [0047] unrevealed video images of the virtual environment become visible as the scene of interest of the geographic region enter the respective field of view of the video camera(s) 154, and previously revealed video images of the virtual environment are replaced by the virtual environment as the respective portions of the geographic region leave the respective field of view of the video camera(s) 154). 
Regarding claim 15, Akella, Wang, Chandrasekar, Farb, and Williams teach: The method of claim 14, further comprising applying rules based on mobile nodes entering the zones to determine that the safety scenario has occurred (see Akella at least [0081] the first path 506 and the second path 508 cross and a region of potential collision 510 has been determined around the point at which the paths cross and [0086] In FIG. 5B, the first and second vehicles 502, 504 have moved along their respective paths 506, 508. The second vehicle 504 has entered the region 510. When one vehicle is in the region, the stopping distance and current distance may be determined for the other of the vehicles). Regarding claim 16, Akella, Wang, Chandrasekar, Farb, and Williams teach: The method of claim 15, further comprising causing a mobile node to display an interface that includes real data from a camera actuated via the digital twin or rendered data generated by the digital twin (see Wang at least [pg. 1, para. 1, beginning with “The application claims”] displaying the three-dimensional virtual scene of the whole industrial factory area on the display interface). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection method disclosed by Akella, Wang, Chandrasekar, Farb, and Williams to include the warning and video display in response to vehicle risk situations of Wang. One of ordinary skill in the art would have been motivated to make this modification because doing so encourages a safer industrial environment, as suggested by Wang (see Wang at least [pg. 2, para. 1, beginning on pg. 
1 with “The application claims”] if it is detected that there is risk in the industrial plant area, it can directly display the corresponding virtual risk position and risk video capture on the display interface, so it can analyze the industrial factory area, ensuring the safe production of the industrial factory area). Claim(s) 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Akella, in view of Wang, further in view of Chandrasekar, further in view of Farb, further in view of Fang, further in view of Chi-Johnston, and further in view of Reschka. Regarding claim 19, Akella, Wang, Chandrasekar, and Farb disclose: The non-transitory storage medium of claim 18, further comprising: defining zones in the digital twin virtual environment (see Akella at least [0075] determine the region of potential collision); applying rules based on the mobile nodes entering the zones to determine that a safety scenario, corresponding to a potential collision or other unsafe condition, has occurred (see Akella at least [0081] the first path 506 and the second path 508 cross and a region of potential collision 510 has been determined around the point at which the paths cross and [0086] In FIG. 5B, the first and second vehicles 502, 504 have moved along their respective paths 506, 508. The second vehicle 504 has entered the region 510. When one vehicle is in the region, the stopping distance and current distance may be determined for the other of the vehicles); and causing a mobile node identified in the safety scenario to display an interface that includes real data from a sensor in the physical environment or rendered data generated that includes data from a virtual only sensor in the digital twin (see Farb at least [0170] The system controller 600 can record and present additional information to the user, including: and [0171] a. still images or real time video of the rear approaching vehicle 350 (referenced as an approaching vehicle image 650)). 
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection storage medium disclosed by Akella, Wang, Chandrasekar, and Farb to include the acquisition of video data in response to determination of a potential collision and presentation of such information to operators of vehicles of Farb. One of ordinary skill in the art would have been motivated to make this modification because once an imminent collision is detected, it can possibly be prevented by providing details regarding the situation to relevant parties, as suggested by Farb (see Farb at least [0131] These collisions can be reduced by simply warning at least one of the bicyclist and the driver of a potential collision. If the cyclist is warned of an approaching vehicle, the cyclist can react accordingly to avoid a collision. If the driver is informed that they are approaching a cyclist, the driver can be more aware of the cyclist and avoid a collision). Akella, Wang, Chandrasekar, and Farb do not teach: wherein the digital twin virtual environment includes a virtual representation of each of the mobile nodes and at least one virtual only nodes; replaying positions of the mobile nodes and the virtual only nodes in the digital twin virtual environment to evaluate the safety; testing collision models and collision prediction models in the digital twin using aggregated data; actuating virtual sensors in the digital twin to generate simulated sensor data; generating combined output data from physical sensors in the physical environment and the actuated virtual only sensors in the digital twin. However, Fang teaches: wherein the digital twin virtual environment includes a virtual representation of each of the mobile nodes and at least one virtual only nodes (see Fang at least [pg. 4, para. 
4, beginning with "most of the current test..."] most of the vehicle can be replaced by the vehicle in the virtual environment, the actual vehicle may only need one or two, so that the actual number of actual vehicles may be greatly reduced); actuating virtual sensors in the digital twin to generate simulated sensor data (see Fang at least [pg. 4, para. 1, beginning with “emulator, the simulator…”] the virtual sensor is used for obtaining the state information the virtual automatic driving vehicle); generating combined output data from physical sensors in the physical environment and the actuated virtual only sensors in the digital twin (see Fang at least [pg. 3, para. 9, beginning with “a real automatic…”] a real automatic driving vehicle, comprising a network communication module and a sensor, the sensor is used for obtaining the state information and controlling the vehicle and [pg. 3, para. 5, beginning with “in the simulator”] virtual sensor according to the principle of the actual sensor, constructing virtual sensor comprising virtual camera virtual laser radar, the actual automatic driving vehicle is synchronously mapped in the digital twinning environment according to the position of it in the real environment and [pg. 3, para. 6, beginning with “Preferably, in the step”] the coordinate system of the actual automatic driving vehicle in the point cloud map and the coordinate system of the virtual automatic driving vehicle in the simulation map are consistent, it can be considered that all the automatic driving vehicles are mapped under the uniform coordinate system, the automatic driving vehicle can obtain the coordinate information other vehicles, The information can provide the vehicle decision module information driving network and multi-vehicle test for realizing the virtual combination). 
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection storage medium disclosed by Akella, Wang, Chandrasekar, and Farb to include the virtual vehicles in the virtual environment which do not correspond to physical vehicles of Fang. One of ordinary skill in the art would have been motivated to make this modification because, when it comes to testing algorithms, safety is improved by subjecting fewer physical vehicles to potential collisions and subjecting other virtual vehicles to potential collisions wherein less harm is likely to be produced, as suggested by Fang (see Fang at least [pg. 4, para. 5, beginning with “in the actual”] in the actual multi-vehicle test process, due to some defects of the algorithm itself, may cause the collision of the vehicle, using the virtual and actual vehicle mixing test method, can reduce the number of actual vehicle, using virtual vehicle to replace the actual vehicle use, so that the vehicle is not easy to collide, improving the safety of the experiment). Akella, Wang, Chandrasekar, Farb, and Fang do not teach: replaying positions of the mobile nodes and the virtual only nodes in the digital twin virtual environment to evaluate the safety; and testing collision models and collision prediction models in the digital twin using aggregated data. However, Chi-Johnston teaches: replaying positions of the mobile nodes and the virtual only nodes in the digital twin virtual environment to evaluate the safety (see Chi-Johnston at least [0080] the simulated number of safety critical events (e.g., collisions or near misses) can be compared to those observed on-road, grouped by “pivot” (e.g., night vs. day, etc.)... 
the safety proxy model can be validated by replaying simulated segments where the simulated behavior of the AV resembles the behavior of the AV observed on-road (i.e., as collected in driving data) and comparing the differences between the behavior of simulated objects and the behavior of the observed objects in the scene, as well as the simulation safety proxy's estimate of the risk and an “on-road” safety proxy estimate of risk). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the potential collision detection storage medium disclosed by Akella, Wang, Chandrasekar, Farb, and Fang to include the real and virtual replaying of safety events of Chi-Johnston. One of ordinary skill in the art would have been motivated to make this modification because comparing virtual and real vehicles’ actions by reviewing safety situations validates the safety models and provides for safer autonomous driving, as suggested by Chi-Johnston (see Chi-Johnston at least [0002] a simulation for AV testing, which reproduces real-world characteristics has been important in improving the safety and efficiency of AV driving). Akella, Wang, Chandrasekar, Farb, Fang, and Chi-Johnston do not teach: testing collision models and collision prediction models in the digital twin using aggregated data. However, Reschka teaches: testing collision models and collision prediction models in the digital twin using aggregated data (see Reschka at least [col. 27, lines 11-19] the simulation system 502 may select a set of simulation scenarios that are configured to test collision avoidance responses of the vehicle control system 504. In examples, each simulation scenario can include simulated vehicle control data and simulated object data that are purposefully antagonistic, that is, the data purposefully results in a perceived collision or potential collision of the vehicle with an object and [col. 34, line 51 – col. 
35, line 1] determining a simulation scenario for testing a response of a simulated autonomous vehicle, the simulation scenario comprising… controlling the simulated autonomous vehicle to traverse the simulated environment according to the vehicle trajectory… determining, by a secondary system of the control system and based at least in part on the vehicle trajectory and the object data, a predicted collision and [col. 4, lines 50-53] the techniques described herein may be used with real data (e.g., captured using sensor(s)), simulated data (e.g., generated by a simulator), or any combination thereof). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the potential collision detection storage medium disclosed by Akella, Wang, Chandrasekar, Farb, Fang, and Chi-Johnston by testing collision models and collision prediction models in the virtual environment as taught by Reschka, in order to test the systems without losing material resources or endangering people (i.e., testing collision models in a simulated environment is safe and maintains the integrity of material resources -- see Reschka at least [col. 2, lines 15-22] it is difficult to replicate these events in a controlled testing environment, as, by their nature, such events have an increased likelihood of resulting in a collision (and potential damage). Systems and techniques described herein remedy these deficiencies by generating simulation scenarios designed to test collision avoidance systems (as well as vehicle actuators), such as the secondary system described above). Response to Arguments Applicant's arguments filed 10/23/2025 have been fully considered. Applicant's amendments overcome the objections to the claims. However, the amendments necessitate a new objection to claim 18. Applicant's amendments overcome the 35 U.S.C. §112(a) rejection for claim 1. Applicant's amendments overcome the 35 U.S.C. 
§112(b) rejection for claims 2-17 and 20. Applicant's amendments overcome the 35 U.S.C. §112(d) rejection for claims 4-6 and 14. Applicant's amendments overcome the 35 U.S.C. §101 rejection for claims 1-20. Regarding the arguments provided for the 35 U.S.C. §103 rejections of claims 1 and 7 (remarks pages 12-13), the applicant's arguments have been considered but are moot because of new grounds of rejection. Regarding the arguments provided for the 35 U.S.C. §103 rejection of claim 18, the applicant's arguments have been considered but are not persuasive. (A) Applicant argues, "The combination of Akella, Shoshan, and Agrawal does not disclose or suggest these operations. The cited references rely on human-directed systems where displays are generated locally, not on automated software that triggers physical camera control through simulation-based logic. The claimed interaction between simulated and physical environments introduces a novel layer of machine control that is absent from the cited art." (from remarks page 13) As to point (A), Examiner notes that while Akella is cited in the current art rejections, newly introduced art is noted to teach actuating physical cameras based on determination of potential collision occurrence. Such art, when viewed in combination with Akella, renders obvious the claimed invention. Additionally, new art renders arguments related to previously cited art moot. (B) Applicant argues, “Even if each individual reference were considered to teach certain elements of the claims, there is no articulated rationale or motivation that would lead a person of ordinary skill to combine them in the specific manner required by the amended claims. Akella's collision prediction framework operates independently from Agrawal's augmented reality visualization, and neither reference contemplates a system that would physically actuate cameras or sensors through a simulated environment. 
Integrating those systems as claimed would fundamentally change their intended operation, requiring the digital twin to perform hardware control functions that are not hinted at in the art. Moreover, the Office Action's reasoning that such combination would "allow a driver to better understand predicted collisions" does not address the technical improvements recited in the claims. The claimed invention does not merely enhance driver understanding; it implements a new hardware-software interaction model in which the virtual simulation directly commands real-world sensors to provide live, context-specific data. Such a configuration would not be obvious to modify from the cited references, as those references operate at a purely informational level and lack any teaching or suggestion of direct physical actuation.” As to point (B), Examiner notes that new art rejections enumerate rationales for combination of prior art including safety and efficiency of operations. While rationale gathered from the prior art may differ from that disclosed in the instant application’s specification, motivation to combine is present nonetheless in the prior art and as such renders the claimed invention obvious. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. CN 114530058 A BAI, QING discloses a method of warning vehicles of upcoming potential collisions. A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. 
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELLE ROSE KNUDSON whose telephone number is (703)756-1742. The examiner can normally be reached 1000-1700 ET M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hitesh Patel can be reached on (571) 270-5442. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ELLE ROSE KNUDSON/Examiner, Art Unit 3667 /Hitesh Patel/Supervisory Patent Examiner, Art Unit 3667 1/26/26
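The safety logic the Office Action repeatedly maps to the prior art, aggregating per-node reports into a current-state set (Chandrasekar [0043]) and flagging a safety scenario when a node can no longer stop short of an occupied region of potential collision (Akella [0081], [0086]), amounts to a simple pipeline. The sketch below is purely illustrative: every identifier, the forklift data, and the stopping-distance model are hypothetical stand-ins, not taken from the cited references or the claims.

```python
from dataclasses import dataclass

# Illustrative sketch only; names, numbers, and the stopping-distance
# model are assumptions, not drawn from Akella, Chandrasekar, or the claims.

@dataclass
class NodeReport:
    node_id: str
    timestamp: float            # seconds
    dist_to_zone_m: float       # remaining distance to the defined zone
    speed_mps: float
    in_zone: bool

def aggregate(reports):
    """Index reports by node and keep the newest per node, forming an
    aggregated data set of current node states."""
    latest = {}
    for r in reports:
        if r.node_id not in latest or r.timestamp > latest[r.node_id].timestamp:
            latest[r.node_id] = r
    return latest

def stopping_distance(speed_mps, decel_mps2=4.0, reaction_s=1.0):
    # reaction-time travel plus braking distance: v*t + v^2 / (2a)
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def safety_event(states):
    """True when the zone is occupied and some other node's stopping
    distance meets or exceeds its remaining distance to the zone."""
    occupied = any(s.in_zone for s in states.values())
    return occupied and any(
        not s.in_zone and stopping_distance(s.speed_mps) >= s.dist_to_zone_m
        for s in states.values()
    )

states = aggregate([
    NodeReport("forklift-1", 10.0, 0.0, 0.5, True),    # inside the zone
    NodeReport("forklift-2", 10.0, 12.0, 5.0, False),  # needs 8.125 m to stop
    NodeReport("forklift-2", 11.0, 7.0, 5.0, False),   # newest report wins
])
print(safety_event(states))  # True: forklift-2 needs 8.125 m but has only 7 m
```

The deceleration and reaction-time constants here are arbitrary placeholders; a real system would derive them per vehicle class and load.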

Prosecution Timeline

Mar 17, 2023: Application Filed
Mar 14, 2025: Non-Final Rejection — §103
Jun 18, 2025: Response Filed
Aug 15, 2025: Final Rejection — §103
Oct 23, 2025: Applicant Interview (Telephonic)
Oct 23, 2025: Examiner Interview Summary
Oct 23, 2025: Request for Continued Examination
Nov 02, 2025: Response after Non-Final Action
Jan 24, 2026: Non-Final Rejection — §103
Apr 06, 2026: Applicant Interview (Telephonic)
Apr 06, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591241
OBJECT ENROLLMENT IN A ROBOTIC CART COORDINATION SYSTEM
2y 5m to grant; granted Mar 31, 2026
Patent 12590444
WORKING VEHICLE AND ATTACHMENT USAGE SYSTEM
2y 5m to grant; granted Mar 31, 2026
Patent 12582045
BASECUTTER AUTOMATED HEIGHT CALIBRATION FOR SUGARCANE HARVESTERS
2y 5m to grant; granted Mar 24, 2026
Patent 12558925
Method and Apparatus for Displaying Function Menu Interface of Automobile Tyre Pressure Monitoring System
2y 5m to grant; granted Feb 24, 2026
Patent 12559907
OPERATOR CONFIRMATION OF MACHINE CONTROL SCHEME
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+44.4%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
