Prosecution Insights
Last updated: April 19, 2026
Application No. 18/082,495

ATTENTION-BASED AGENT INTERACTION SYSTEM

Status: Non-Final OA, §103 (Round 5)
Filed: Dec 15, 2022
Examiner: HEIM, MARK ROBERT
Art Unit: 3668
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Toyota Research Institute, Inc.
Grant Probability: 51% (Moderate); 49% with interview
Expected OA Rounds: 5-6
Estimated Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 51% (25 granted / 49 resolved; -1.0% vs TC avg)
Interview Lift: -2.0% (minimal), among resolved cases with interview
Avg Prosecution: 3y 3m (typical timeline)
Currently Pending: 33
Total Applications: 82 (career history, across all art units)

Statute-Specific Performance

§101: 19.4% (-20.6% vs TC avg)
§103: 48.0% (+8.0% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 16.2% (-23.8% vs TC avg)
Allowance rate by statute; deltas are vs Tech Center average estimates. Based on career data from 49 resolved cases.
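As a quick self-consistency check on the statute table, the implied Tech Center baseline can be recovered from each row as rate minus delta. A short script with the values copied from the table; the ~40% baseline is implied arithmetic, not an independently sourced figure:

```python
# Allowance rates and deltas vs Tech Center (TC) average, copied from the table above.
stats = {
    "§101": (19.4, -20.6),
    "§103": (48.0, +8.0),
    "§102": (15.1, -24.9),
    "§112": (16.2, -23.8),
}

# Each row implies the same TC baseline: rate - delta.
implied_tc_avg = {k: round(rate - delta, 1) for k, (rate, delta) in stats.items()}
assert set(implied_tc_avg.values()) == {40.0}  # every statute row implies a ~40% TC average
```

All four rows resolve to the same baseline, which suggests the deltas were computed against a single TC-wide estimate rather than per-statute averages.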

Office Action (§103)
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Continued Examination Under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/20/2026 has been entered. Status of Claims Claims 1-7, 9-15, and 17-19 filed on 01/13/2026 are presently examined. Claims 8, 16, and 20 are cancelled. Claims 1, 9, and 17 are amended. Response to Arguments Regarding 112(f) interpretation, the interpretation is still invoked regardless of whether “means for” is included in the claim. There exist placeholder terms, functional language, and no structure to define the placeholder terms or how they carry out the function. The interpretation is necessary until Applicant amends the claims to clarify the structure. The interpretation is not a rejection of the claims. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a gaze tracking module to dynamically track gaze directions of a vehicle operator, an unmonitored region(s) detection module to identify operator monitored regions and operator unmonitored regions in claim 17, an ADAS resource allocation module to allocate an increased portion of ADAS perception resources in claim 17, an external environment perception module to track external road agents in claim 17. Each module lacks a recited structure and performs a function with functional language. Therefore, 112(f) is required to be invoked on the above limitations. See the Claim Interpretation section below for more information. Accordingly, the previous 35 U.S.C. 112(f) claim interpretation is maintained. Regarding the 35 USC 112(a) rejection, the amendments to claims 1, 9, and 17 result in the withdrawal of the 112(a) rejection. Regarding the 35 USC 103 rejections, the amendments to the independent claims do not overcome the previously cited art. Moncomble and Yamaoka still disclose the amended limitations. In the interest of brevity, see the rejection below. Applicant argues Yamaoka does not teach “external road agents in a forward, operator unmonitored region of the scene visible to the driver.” Examiner respectfully disagrees. The driver is able to visibly see the unmonitored, adjacent, oncoming lane, which is the scene in the unmonitored region. The phrasing of the claim is broad and/or ambiguous and can be interpreted in different ways. “Unmonitored region of the scene visible to the driver.” The scene visible to the driver can be interpreted as the portion of the scene the driver has the capability to see, such as the oncoming lane area they want to enter. 
The unmonitored region of the scene is a different portion of the scene. It is further ahead in the oncoming lane. The driver is not monitoring that region, and the vehicle detects that is the case, and causes the return of the vehicle to its original lane to avoid collision with an unseen oncoming vehicle. Thus, the 35 USC 103 rejection is maintained. Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a gaze tracking module to dynamically track gaze directions of a vehicle operator in claims 17-19. It is described in the specification [0056] as “The gaze tracking module 312 receives a data stream from the first sensor 306 and/or the second sensor 304. The data stream may include a 2D RGB image from the first sensor 306 and LIDAR data points from the second sensor 304. The data stream may include multiple frames, such as image frames of traffic data. These sensors may also include a driver facing camera to monitor the operator of the car 350.” Fig. 3: a vehicle ADAS controller (310); [0043]: The modules may be software modules running in the processor 320…hardware modules coupled to the processor 320, or some combination thereof. an unmonitored region(s) detection module to identify operator monitored regions and operator unmonitored regions in claim 17. It is described in the specification [0007] as “unmonitored region(s) detection module to identify operator monitored regions and operator unmonitored regions in the scene based on dynamically tracking the gaze directions of the vehicle operator.” And [0056] “the first sensor 306 and/or the second sensor 304. 
The data stream may include a 2D RGB image from the first sensor 306 and LIDAR data points from the second sensor 304. The data stream may include multiple frames, such as image frames of traffic data. These sensors may also include a driver facing camera to monitor the operator of the car 350.” Fig. 3: a vehicle ADAS controller (310); [0043]: The modules may be software modules running in the processor 320…hardware modules coupled to the processor 320, or some combination thereof. an ADAS resource allocation module to allocate an increased portion of ADAS perception resources in claim 17. It is described in the specification [0023] “Devoting fewer resources to tracking autonomous dynamic objects (ADOs) to which the driver is paying attention could mean using simpler models, drawing fewer samples from a sampling predictor (e.g., trajectory samples), and the like.” And [0024] “ADAS directs a greater share of computational resources (e.g., model complexity, number of samples, etc.) to perceptual tasks pertaining to region(s) of a scene to which a driver is not paying attention.” Fig. 3: a vehicle ADAS controller (310); [0043]: The modules may be software modules running in the processor 320…hardware modules coupled to the processor 320, or some combination thereof. an external environment perception module to track external road agents in claim 17. It is described in the specification [0038] “The second sensor 304 may be a ranging sensor, such as a light detection and ranging (LIDAR) sensor or a radio detection and ranging (RADAR) sensor for capturing an external vehicle environment.” Fig. 3: a vehicle ADAS controller (310); [0043]: The modules may be software modules running in the processor 320…hardware modules coupled to the processor 320, or some combination thereof. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 
103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1, 3-5, 7, 9, 11-13, 15, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Moncomble et al. (US 20230406204 A1), in view of Yamaoka et al. (US 20210070288 A1), hereinafter referred to as Moncomble and Yamaoka, respectively. Regarding claims 1, 9, and 17 Moncomble discloses A method for resource allocation of an advanced driver assistance system (ADAS), the method comprising: dynamically tracking, using a driver facing camera, gaze directions based on eyes of a vehicle operator regarding a scene surrounding an ego vehicle ([0065] “In one embodiment, the data from the sensor amongst the sensors 91, 92, 93 which are used to determine the direction of the gaze of the driver C” [also see FIG. 1]); identifying operator monitored regions and forward, operator unmonitored regions in the scene relative to the ego vehicle based on dynamically tracking the gaze directions based on the eyes of the vehicle operator captured using the driver facing camera ([0106] “step S1 for determining a monitoring area Z that the driver C is monitoring. The method continues with a step S2 for obtaining an area Z′ not monitored by the driver C” [also see FIG. 1 and FIG. 3] Z is driver monitored region and Z’ is driver non-monitored region. 
[0061] “At a given moment in time, it allows a monitored area Z to be determined during the step S1.””); allocating an increased portion of ADAS perception resources to the forward, operator unmonitored regions of the scene ([0014] “activating sensors of the vehicle relating to the unmonitored area” [0015] “A step for analyzing data captured by the sensors relating to said unmonitored area in order to detect an event to come” [0020] “Activating sensors relating to an unmonitored area offers several advantages. Firstly, this activation allows it to be guaranteed that alerts fed back by the activated sensors occur in an area not being monitored by the driver. In this way, the driver will not be inconvenienced by unnecessary alerts. Another advantage is the economy of resources and of energy consumption. The sensors are only activated in order to compensate a non-monitoring by the driver” [0084] “The activation of the sensors 91, 92, 93 relating to the unmonitored area Z′ allows it to be guaranteed that a potential alert comes from an unmonitored area and will not therefore erroneously draw the attention of the driver C and also allows the power consumption of the sensors 91, 92, 93 to be minimized.”); tracking external road agents detected in the forward operator unmonitored regions of the scene using the increased portion of ADAS perception resources ([0022] “the method will rely on predefined scenarios of road traffic events in order to determine whether events detected by the sensors in the unmonitored area effectively warrant information representative of an alert being rendered, or more simply an alert being triggered or not depending on the imminence of a categorized event. For example, an object may arrive at a road junction toward the automobile vehicle in the area not being monitored by the driver and, depending on the nature of the object and the nature of the junction, an alert could be triggered.” [see FIG. 3] sensors detect vehicles in the environment. 
[0090] “The existence of predefined scenarios allows the events detected during the data analysis step S3 to be categorized by verifying whether these events correspond to such scenarios. For example, the detection of a third-party vehicle approaching the vehicle V…”); Moncomble fails to disclose overriding an input from the vehicle operator when the input is predicted to cause a collision with one of the external road agents in a forward, operator unmonitored region of the scene visible to the driver. However, Yamaoka teaches overriding an input from the vehicle operator when the input is predicted to cause a collision with one of the external road agents in a forward, operator unmonitored region of the scene visible to the driver ([FIGs 4 and 5] driver initiates trajectory into adjacent oncoming lane and system determines whether driver is able to see the oncoming vehicle approaching in the adjacent oncoming lane. The oncoming lane is visible to the driver, but the driver does not recognize the oncoming vehicle, and therefore the vehicle overrides the driver and returns the vehicle back to the lane. [0028] “a driver drives a vehicle by manual driving, when the driver tries to pass a preceding vehicle traveling in front while the driver cannot visually recognize a situation in front”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Moncomble with Yamaoka’s teaching of detecting that a driver is initiating a bypass of a preceding vehicle and determining whether the driver can see a forward region of an oncoming lane. 
One would be motivated, with a reasonable expectation of success, to detect the bypass attempt and determining visual recognition of the oncoming region, in order to prevent collision with the oncoming vehicle by autonomously returning the vehicle to the lane when it is determined the driver cannot see the oncoming vehicle (Yamaoka [0030] “The driving assistance system 40 can suppress a collision of a vehicle with an oncoming vehicle by performing such assistance when a driver tries to pass a preceding vehicle traveling in front while the driver cannot visually recognize a situation in front of the vehicle.”). Moncomble fails to explicitly disclose controlling the ego vehicle to avoid a collision with the external road agent detected in the forward, operator unmonitored region of the scene visible to the driver. However, Yamaoka teaches controlling the ego vehicle to avoid a collision with the external road agent detected in the forward, operator unmonitored region of the scene visible to the driver ([FIGs 4 and 5] scenario of driver bypassing preceding vehicle and the driver assistance system returns the vehicle to the original lane after determining the driver cannot see the oncoming vehicle in the forward region. [0029] “the driving assistance system 40 controls a vehicle in such a way as to return to an original lane when a driver tries to pass a preceding vehicle traveling in front while the driver cannot visually recognize a situation in front.”) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Moncomble with Yamaoka’s teaching of detecting that a driver is initiating a bypass of a preceding vehicle and determining whether the driver can see a forward region of an oncoming lane. 
One would be motivated, with a reasonable expectation of success, to detect the bypass attempt and determining visual recognition of the oncoming region, in order to prevent collision with the oncoming vehicle by autonomously returning the vehicle to the lane when it is determined the driver cannot see the oncoming vehicle (Yamaoka [0030] “The driving assistance system 40 can suppress a collision of a vehicle with an oncoming vehicle by performing such assistance when a driver tries to pass a preceding vehicle traveling in front while the driver cannot visually recognize a situation in front of the vehicle.”). Regarding claims 3, 11, and 19, Moncomble discloses The method of claim 1, in which dynamically tracking comprises dynamically determining the gaze direction of the vehicle operator based on sensor data captured by the driver facing camera to monitor the vehicle operator ([see FIG. 1] [0061] “sensors 91, 92, 93 oriented toward the driver C will feed back to the module 101 data relating to the face of the driver C … At a given moment in time, it allows a monitored area Z to be determined during the step S1.” [0066] “module 101 to implement the step S1 for determining a monitoring area that the driver C is monitoring, which are in general images of the face of the driver C”). Regarding claims 4 and 12, Moncomble discloses The method of claim 1, further comprising allocating a reduced portion of the ADAS perception resources to the operator monitored regions of the scene ([0014] “activating sensors of the vehicle relating to the unmonitored area” [0015] “A step for analyzing data captured by the sensors relating to said unmonitored area in order to detect an event to come” [0020] “Activating sensors relating to an unmonitored area offers several advantages. Firstly, this activation allows it to be guaranteed that alerts fed back by the activated sensors occur in an area not being monitored by the driver. 
In this way, the driver will not be inconvenienced by unnecessary alerts. Another advantage is the economy of resources and of energy consumption. The sensors are only activated in order to compensate a non-monitoring by the driver” [0084] “The activation of the sensors 91, 92, 93 relating to the unmonitored area Z′ allows it to be guaranteed that a potential alert comes from an unmonitored area and will not therefore erroneously draw the attention of the driver C and also allows the power consumption of the sensors 91, 92, 93 to be minimized.” Moncomble only activates sensors in unmonitored regions.). Regarding claims 5 and 13, Moncomble discloses The method of claim 1, in which allocating the increased portion of ADAS perception resources comprises assigning increased external-road-agent predictor resources to track external road agents in the unmonitored regions of the scene ([0022] “the method will rely on predefined scenarios of road traffic events in order to determine whether events detected by the sensors in the unmonitored area effectively warrant information representative of an alert being rendered, or more simply an alert being triggered or not depending on the imminence of a categorized event. For example, an object may arrive at a road junction toward the automobile vehicle in the area not being monitored by the driver and, depending on the nature of the object and the nature of the junction, an alert could be triggered.” [see FIG. 3] sensors detect vehicles in the environment. [0090] “The existence of predefined scenarios allows the events detected during the data analysis step S3 to be categorized by verifying whether these events correspond to such scenarios. For example, the detection of a third-party vehicle approaching the vehicle V…”). 
Regarding claims 7 and 15, Moncomble discloses The method of claim 1, in which tracking external road agents comprises tracking autonomous dynamic objects (ADOs) identified in the operator unmonitored regions of the scene using the increased portion of ADAS perception resources ([see FIG. 3] vehicle V detects vehicle D in the crossroads, which may be an autonomous vehicle, by activating sensors in the unmonitored region where vehicle D is present. [0116] “the step S1 has determined a monitoring area Z that the driver C of the vehicle V is monitoring. The step S2 has subsequently obtained an area Z′ not monitored by the driver C of the vehicle, and, consequently, sensors 91, 92, 93, not shown in FIG. 3, relating to the unmonitored area Z′ have been activated … The step S3 for analyzing data captured by the sensors 91, 92, 93 relating to the unmonitored area Z′ will thus allow an event to come to be detected, namely the arrival at the junction of the vehicle D shown in FIG. 3, the arrow indicating its direction of travel.” [0117] “in the situation in FIG. 3, the vehicle D could be connected to the communications network N, and could receive information transmitted by the method from the vehicle V′ which would indicate a loss of attention on the part of the driver of the vehicle V′. This transmitted information could then have a direct action on the driving, for example a decrease in the speed, if the vehicle D has capacities for autonomous driving”). Claims 2, 10, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Moncomble in view of Yamaoka, further in view of Arar et al. (US 20220121867 A1), hereafter referred to as Arar. Regarding claims 2, 10, and 18 Moncomble fails to explicitly disclose The method of claim 1, in which dynamically tracking further comprises visualizing gaze-direction behavior of the vehicle operator using an operator attention heatmap, indicating where and how often the vehicle operator is focusing their gaze. 
However, Arar teaches dynamically tracking further comprises visualizing gaze-direction behavior of the vehicle operator using an operator attention heatmap, indicating where and how often the vehicle operator is focusing their gaze ([0028] “as illustrated in FIG. 2E, the gaze information of the occupant may be tracked over some period of time and used to generate a heat map 210 (e.g., with darker regions corresponding to more frequent gaze locations or directions than regions that are lighter or have less dense patterns of points) corresponding to gaze locations and directions of the occupant over time (e.g., over a thirty second period, one minute, three minutes, five minutes, etc.)”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Moncomble with Arar’s teaching of a heat map. One would be motivated, with a reasonable expectation of success, to include the driver attention heat map from Arar in order to improve determining of whether the driver has attention toward a vehicle or object on the road or not ([0048] “For example, where a driver sees a vehicle (e.g., is determined to have seen based on a comparison of the estimated field of view of the driver and the location of the vehicle) some distance in front of the ego-vehicle 500, but is determined to have a high cognitive load and/or low attentiveness (e.g., based on the heat map, fixations, etc.), the state may include aware (e.g., of the vehicle) but inattentive (e.g., potentially has not processed the presence of the vehicle)”). Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Moncomble in view of Yamaoka, further in view of Lee (US 20210122364 A1), hereafter referred to as Lee. 
Regarding claims 6 and 14, Moncomble fails to explicitly disclose The method of claim 5, in which the increased external-road-agent predictor resources comprises an increased model complexity and/or a number of samples ([0014] “activating sensors of the vehicle relating to the unmonitored area” [0015] “A step for analyzing data captured by the sensors relating to said unmonitored area in order to detect an event to come” [0020] “Activating sensors relating to an unmonitored area offers several advantages. Firstly, this activation allows it to be guaranteed that alerts fed back by the activated sensors occur in an area not being monitored by the driver. In this way, the driver will not be inconvenienced by unnecessary alerts. Another advantage is the economy of resources and of energy consumption. The sensors are only activated in order to compensate a non-monitoring by the driver” [0084] “The activation of the sensors 91, 92, 93 relating to the unmonitored area Z′ allows it to be guaranteed that a potential alert comes from an unmonitored area and will not therefore erroneously draw the attention of the driver C and also allows the power consumption of the sensors 91, 92, 93 to be minimized.”). However, Lee teaches the increased external-road-agent predictor resources comprises an increased model complexity and/or a number of samples. ([0261] “in response to a determination that a potentially threatening object is present in the point cloud map, the vehicle collision avoidance apparatus may set an area corresponding to the spatial coordinates of the potentially threatening object in the image as the region of interest. In this case, the vehicle collision avoidance apparatus may increase a frame rate by a set multiple when the camera photographs the region of interest so as to increase the number of times of identifying the type of the potentially threatening object in the received image of the region of interest”). 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Moncomble with Lee’s teaching of automatic braking for external road agents. One would be motivated, with a reasonable expectation of success, to provide the increasing of the rate of data capture (frame rate) when detecting a potential threat taught by Lee in addition to the activating of sensors only for unmonitored regions taught by Moncomble in order to increase the accuracy of recognition of the object ([0261] “increasing the accuracy of recognizing the type of potentially threatening object above a set reliability.”). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK R HEIM whose telephone number is (571)270-0120. The examiner can normally be reached M-F 9-6 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fadey Jabr can be reached at 571-272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /M.R.H./Examiner, Art Unit 3668 /Fadey S. Jabr/Supervisory Patent Examiner, Art Unit 3668
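For readers mapping the arguments above onto the claim structure, the independent claims recite a pipeline: track gaze, split the scene into monitored and unmonitored regions, shift perception resources toward the unmonitored regions, track agents there, and override an operator input predicted to cause a collision. A minimal sketch of that flow follows; every name, weight, and data shape here is hypothetical and illustrates the claim language only, not any party's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    monitored: bool      # derived from dynamically tracked gaze directions
    budget: float = 0.0  # share of ADAS perception resources

def allocate_perception(regions, unmonitored_weight=3.0):
    """Allocate an increased share of perception resources to unmonitored regions."""
    weights = [1.0 if r.monitored else unmonitored_weight for r in regions]
    for r, w in zip(regions, weights):
        r.budget = w / sum(weights)
    return regions

def should_override(planned_maneuver, tracked_agents):
    """Override the operator input if it is predicted to conflict with a tracked agent."""
    return any(a["occupies"] == planned_maneuver["target_lane"] for a in tracked_agents)

regions = allocate_perception([
    Region("forward, monitored", monitored=True),
    Region("forward, unmonitored (oncoming lane)", monitored=False),
])
assert regions[1].budget > regions[0].budget  # unmonitored region gets the larger share

# Yamaoka-style scenario: an unseen oncoming vehicle occupies the lane the driver targets.
agents = [{"occupies": "oncoming"}]
assert should_override({"target_lane": "oncoming"}, agents)  # system returns to original lane
```

The dispute in this round centers on the last two steps: whether Yamaoka's visibility check covers a "forward, operator unmonitored region of the scene visible to the driver" as claimed.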

Prosecution Timeline

Dec 15, 2022
Application Filed
Sep 06, 2024
Non-Final Rejection — §103
Nov 25, 2024
Examiner Interview Summary
Nov 25, 2024
Applicant Interview (Telephonic)
Nov 26, 2024
Response Filed
Mar 06, 2025
Final Rejection — §103
Apr 18, 2025
Applicant Interview (Telephonic)
Apr 18, 2025
Examiner Interview Summary
Apr 24, 2025
Response after Non-Final Action
Jun 05, 2025
Request for Continued Examination
Jun 10, 2025
Response after Non-Final Action
Jun 12, 2025
Non-Final Rejection — §103
Sep 03, 2025
Applicant Interview (Telephonic)
Sep 04, 2025
Response Filed
Sep 04, 2025
Examiner Interview Summary
Dec 03, 2025
Final Rejection — §103
Jan 13, 2026
Response after Non-Final Action
Feb 20, 2026
Request for Continued Examination
Feb 24, 2026
Response after Non-Final Action
Mar 04, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600382
PROCESS SCHEDULING BASED ON DATA ARRIVAL IN AN AUTONOMOUS VEHICLE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12583569
Method of Controlling Propulsion System of Marine Vehicle and Propulsion System
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12583471
VEHICLE DRIVING SUPPORT APPARATUS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586477
FLIGHT PLANNING BASED ON SOCIETAL IMPACT CONSIDERATIONS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12571638
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 51%
With Interview: 49% (-2.0%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 49 resolved cases by this examiner. Grant probability derived from career allow rate.
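The headline projection figures are straightforward arithmetic on the examiner's career counts; a short recomputation (variable names invented for illustration, counts taken from the report):

```python
granted, resolved = 25, 49                 # examiner career counts from the report
career_allow_rate = granted / resolved     # 25/49 ≈ 0.51
with_interview = career_allow_rate - 0.02  # reported interview lift of -2.0%

assert f"{career_allow_rate:.0%}" == "51%"  # headline grant probability
assert f"{with_interview:.0%}" == "49%"     # interview-adjusted figure
```

In other words, the 51% grant probability is the raw career allow rate with no application-specific adjustment, and the interview figure simply applies the measured lift on top of it.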
