Prosecution Insights
Last updated: April 19, 2026
Application No. 18/479,984

AUTONOMOUS DRIVING USING PREDICTIONS OF TRUST

Final Rejection — §102, §103
Filed
Oct 03, 2023
Examiner
DYER, ANDREW R
Art Unit
3662
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Honda Motor Co. Ltd.
OA Round
2 (Final)
Grant Probability: 60% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 6m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 60% (425 granted / 710 resolved; +7.9% vs TC avg)
Interview Lift: +38.6% (strong), based on resolved cases with interview
Typical Timeline: 3y 6m average prosecution; 50 currently pending
Career History: 760 total applications across all art units
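Read as percentage-point figures, the dashboard numbers above are mutually consistent. A quick sketch of the assumed arithmetic (the exact semantics of "lift" are inferred from the page, not documented):

```python
# Illustrative arithmetic for the dashboard figures above, assuming the career
# allow rate is granted/resolved and the interview lift is a percentage-point
# difference added to the base rate.

granted, resolved = 425, 710

career_allow_rate = 100 * granted / resolved    # ~59.9%, shown rounded as 60%
interview_lift = 38.6                           # percentage points, per the dashboard

with_interview = career_allow_rate + interview_lift  # ~98.5%, shown rounded as 98%

print(f"{career_allow_rate:.1f}% base, {with_interview:.1f}% with interview")
```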

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 43.4% (+3.4% vs TC avg)
§102: 20.2% (-19.8% vs TC avg)
§112: 20.4% (-19.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 710 resolved cases

Office Action

§102 §103
DETAILED ACTION

This is a response to the Amendment to Application No. 18/479,984 filed on January 22, 2026, in which claims 1-3, 6-13, and 16-20 were amended.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are pending, of which claims 1, 2, 4, 5, 9, 10, and 14-17 are rejected under 35 U.S.C. § 102(a)(2) and claims 3, 6-8, 11-13, and 18-20 are rejected under 35 U.S.C. § 103.

Specification

The disclosure is objected to because of the following informalities: Paragraph 37 of the present specification contains two errors. First, it includes blank lines instead of the referenced US Patent Numbers. Second, it states that U.S. Patent Number 11,332,165 was “issued” on January 27, 2020. However, this is the filing date, not the issue date, of that patent. Appropriate correction is required.

Claim Interpretation

Claims 6 and 11 include the limitation “wherein the neural network includes a self-attention layer that computes attention weights by applying Query, Key, and Value matrices to input data and applying a range mask through element-wise multiplication on the attention weights to limit attention of each signal step to neighboring steps,” or similar. (Emphasis added.) This appears to recite that the intended use of the application of the range masks is “to limit attention of each signal step to neighboring steps.” “An intended use or purpose usually will not limit the scope of the claim because such statements usually do no more than define a context in which the invention operates.” Boehringer Ingelheim Vetmedica, Inc. v. Schering-Plough Corp., 320 F.3d 1339, 1345 (Fed. Cir. 2003). Although “[s]uch statements often . . . appear in the claim’s preamble,” In re Stencel, 828 F.2d 751, 754 (Fed. Cir. 1987), a statement of intended use or purpose can appear elsewhere in a claim. Id.; Hewlett-Packard Co. v.
Bausch & Lomb Inc., 909 F.2d 1464, 1468 (Fed. Cir. 1990); see also Roberts v. Ryer, 91 U.S. 150, 157 (1875) (“The inventor of a machine is entitled to the benefit of all the uses to which it can be put, no matter whether he had conceived the idea of the use or not.”). Thus, it is usually improper to construe non-functional claim terms in system claims in a way that makes infringement or validity turn on their function. Paragon Solutions, LLC v. Timex Corp., 566 F.3d 1075, 1091 (Fed. Cir. 2009).

Claim 18 includes the limitation “wherein implementing the HMI action includes highlighting an object on a display to indicate that the autonomous driving agent is aware of the object and increase the degree of trust of the driver in the autonomous driving agent.” (Emphasis added.) This appears to recite that the intended use of the highlighting of the object on the display is to indicate that the vehicle is aware of the object and to increase the driver’s trust of the agent. “An intended use or purpose usually will not limit the scope of the claim because such statements usually do no more than define a context in which the invention operates.” Boehringer Ingelheim Vetmedica, Inc. v. Schering-Plough Corp., 320 F.3d 1339, 1345 (Fed. Cir. 2003). Although “[s]uch statements often . . . appear in the claim’s preamble,” In re Stencel, 828 F.2d 751, 754 (Fed. Cir. 1987), a statement of intended use or purpose can appear elsewhere in a claim. Id.; Hewlett-Packard Co. v. Bausch & Lomb Inc., 909 F.2d 1464, 1468 (Fed. Cir. 1990); see also Roberts v. Ryer, 91 U.S. 150, 157 (1875) (“The inventor of a machine is entitled to the benefit of all the uses to which it can be put, no matter whether he had conceived the idea of the use or not.”). Thus, it is usually improper to construe non-functional claim terms in system claims in a way that makes infringement or validity turn on their function. Paragon Solutions, LLC v. Timex Corp., 566 F.3d 1075, 1091 (Fed. Cir. 2009).
Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 4, 5, 9, 10, and 14-17 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by Sato et al., US Publication 2024/0166224 (hereinafter Sato).

Regarding claim 1, Sato discloses an autonomous driving agent for an autonomous vehicle, comprising “circuitry coupled to one or more sensors of the autonomous vehicle.” (Sato ¶ 13). Additionally, Sato discloses “wherein the circuitry is configured to: receive input data from the one or more sensors, the input data comprising driver sensor data associated with a driver of the autonomous vehicle and the input data comprising vehicle sensor data associated with the operation of the autonomous vehicle” (Sato ¶ 14) by capturing information about the emotion of the driver through biometric sensor unit 14.
Further, Sato discloses “determine, from the input data, a trust value indicating a degree of trust of the driver in the autonomous driving agent using a neural network” (Sato ¶ 15) by determining a numerical value from the biometric information representing the probability that the driver’s emotion is comfortable or uncomfortable using a neural network. This represents “trust” because it represents how comfortable the driver is with the vehicle’s autonomous control. Finally, Sato discloses “automatically modify control of one or more vehicle systems of the autonomous vehicle according to the determined trust value, wherein a level of autonomous control is increased in response to the trust value indicating increased trust of the driver in the autonomous driving agent or decreased in response to the trust value indicating decreased trust of the driver in the autonomous driving agent” (Sato ¶ 16, see also Claim 3) by increasing the amount of intervention when the comfort value is high.

Regarding claim 2, Sato discloses the limitations contained in parent claim 1 for the reasons discussed above. In addition, Sato discloses “wherein the circuitry is configured to generate and implement a human machine interface (HMI) action according to the trust value” (Sato ¶ 30), where the driving assistance function generated according to the trust value is displayed on the display screen.

Regarding claim 4, Sato discloses the limitations contained in parent claim 1 for the reasons discussed above. In addition, Sato discloses “wherein the input data includes gaze information” (Sato ¶ 35) by giving an example of the input data including when the driver is gazing at the car navigation.

Regarding claim 5, Sato discloses the limitations contained in parent claim 1 for the reasons discussed above. In addition, Sato discloses “wherein the input data includes vehicle telemetry data” (Sato ¶ 35), where the input data includes driver behavior.
A person of ordinary skill in the art would understand the plain and ordinary meaning of vehicle telemetry data to include data that “encompasses a broad spectrum, ranging from engine performance and fuel efficiency to driver behavior and environmental conditions.”1

Regarding claim 9, Sato discloses a system, comprising “one or more sensors; one or more processors; memory storing instructions.” (Sato ¶¶ 12-13). Additionally, Sato discloses the instructions “when executed by the one or more processors cause the one or more processors to: operate an autonomous vehicle using an autonomous driving agent.” (Sato ¶ 17). Further, Sato discloses “receive input data from the one or more sensors, the input data comprising driver sensor data associated with a driver of the autonomous vehicle and the input data comprising vehicle sensor data associated with the operation of the autonomous vehicle” (Sato ¶ 14) by capturing information about the emotion of the driver through biometric sensor unit 14. Moreover, Sato discloses “determine, from the input data, a trust value indicating a degree of trust of the driver in the autonomous driving agent using a neural network” (Sato ¶ 15) by determining a numerical value from the biometric information representing the probability that the driver’s emotion is comfortable or uncomfortable using a neural network. This represents “trust” because it represents how comfortable the driver is with the vehicle’s autonomous control. Finally, Sato discloses “automatically modify control of one or more vehicle systems of the autonomous vehicle according to the determined trust value, wherein a level of autonomous control is increased in response to the trust value indicating increased trust of the driver in the autonomous driving agent or decreased in response to the trust value indicating decreased trust of the driver in the autonomous driving agent” (Sato ¶ 16, see also Claim 3) by increasing the amount of intervention when the comfort value is high.
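The claimed control behavior quoted above (raise the level of autonomous control as the trust value rises, lower it as the trust value falls) can be sketched generically. The function name and thresholds below are hypothetical, purely to illustrate the claim language, and are not Sato's or Applicant's implementation:

```python
# Hypothetical sketch of trust-based control adjustment as recited in
# claims 1, 9, and 16. All names and threshold values are illustrative only.

def adjust_autonomy_level(current_level: int, trust_value: float,
                          raise_above: float = 0.7, lower_below: float = 0.3,
                          max_level: int = 5) -> int:
    """Increase autonomous control when trust is high, decrease when low."""
    if trust_value >= raise_above:
        return min(current_level + 1, max_level)   # increased trust -> more autonomy
    if trust_value <= lower_below:
        return max(current_level - 1, 0)           # decreased trust -> less autonomy
    return current_level                           # otherwise hold steady

# e.g. a trust value of 0.9 steps the autonomy level up from 2 to 3
```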
Regarding claim 10, Sato discloses the limitations contained in parent claim 9 for the reasons discussed above. In addition, Sato discloses “wherein the instructions, when executed by the one or more processors, further cause the one or more processors to generate and implement a human machine interface (HMI) action” (Sato ¶ 30), where the driving assistance function generated according to the trust value is displayed on the display screen.

Regarding claim 14, Sato discloses the limitations contained in parent claim 9 for the reasons discussed above. In addition, Sato discloses “further comprising a sensor for detecting gaze information for the driver” (Sato ¶ 35) by giving an example of the input data including when the driver is gazing at the car navigation.

Regarding claim 15, Sato discloses the limitations contained in parent claim 9 for the reasons discussed above. In addition, Sato discloses “further comprising a sensor for gathering vehicle telemetry information for the autonomous vehicle” (Sato ¶ 35), where the input data includes driver behavior. A person of ordinary skill in the art would understand the plain and ordinary meaning of vehicle telemetry data to include data that “encompasses a broad spectrum, ranging from engine performance and fuel efficiency to driver behavior and environmental conditions,” as discussed above.

Regarding claim 16, Sato discloses a computer-implemented method, comprising “operating an autonomous vehicle using an autonomous driving agent.” (Sato ¶ 17). Additionally, Sato discloses “receiving input data from one or more sensors of the autonomous vehicle, the input data comprising driver sensor data associated with a driver of the autonomous vehicle and the input data comprising vehicle sensor data associated with the operation of the autonomous vehicle” (Sato ¶ 14) by capturing information about the emotion of the driver through biometric sensor unit 14.
Further, Sato discloses “determining, from the input data, a trust value indicating a degree of trust of the driver in the autonomous driving agent using a neural network” (Sato ¶ 15) by determining a numerical value from the biometric information representing the probability that the driver’s emotion is comfortable or uncomfortable using a neural network. This represents “trust” because it represents how comfortable the driver is with the vehicle’s autonomous control. Finally, Sato discloses “automatically modifying control of one or more vehicle systems of the vehicle according to the determined trust value, wherein a level of autonomous control is increased in response to the trust value indicating increased trust of the driver in the autonomous driving agent or decreased in response to the trust value indicating decreased trust of the driver in the autonomous driving agent” (Sato ¶ 16, see also Claim 3) by increasing the amount of intervention when the comfort value is high.

Regarding claim 17, Sato discloses the limitations contained in parent claim 16 for the reasons discussed above. In addition, Sato discloses “further comprising generating and implementing a human machine interface (HMI) action” (Sato ¶ 30), where the driving assistance function generated according to the trust value is displayed on the display screen.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicants are advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

Claims 3 and 18-20 are rejected under 35 U.S.C. § 103 as being unpatentable over Sato in view of Akash et al., US Patent 11,332,165 (hereinafter Akash).

Regarding claim 3, Sato discloses the limitations contained in parent claim 2 for the reasons discussed above. In addition, Sato does not appear to explicitly disclose “wherein the HMI action includes highlighting an object on a display to indicate that the autonomous driving agent is aware of the object.” However, Akash discloses a method for determining operator trust in a vehicle including “wherein the HMI action includes highlighting an object on a display to indicate that the autonomous driving agent is aware of the object” (Akash col. 12, ll. 48-67) by displaying a cue overlaid over the object. The plain and ordinary meaning of “highlighting” is to emphasize, and the cue of Akash emphasizes the object. Sato and Akash are analogous art because they are from the “same field of endeavor,” namely that of determining a driver’s comfort level with the autonomous agent.
Prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Sato and Akash before him or her, to modify the HMI interface of Sato to include the object highlighting of Akash. The motivation/rationale for doing so would have been that of applying a known technique to a known device. See KSR Int’l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP § 2143(I)(D). Sato teaches the “base device” for detecting a driver’s comfort level with autonomous driving and displaying related data on an interface. Further, Akash teaches the “known technique” of highlighting an object on a display related to the driver’s comfort level that is applicable to the base device of Sato. One of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in an improved system because such a modification is well-known and easy to implement by a programmer of even basic skill in the art.

Regarding claim 18, Sato discloses the limitations contained in parent claim 17 for the reasons discussed above. In addition, Sato does not appear to explicitly disclose “wherein implementing the HMI action includes highlighting an object on a display to indicate that the autonomous driving agent is aware of the object and increase the degree of trust of the driver in the autonomous driving agent.” However, Akash discloses a method for determining operator trust in a vehicle including “wherein implementing the HMI action includes highlighting an object on a display to indicate that the autonomous driving agent is aware of the object and increase the degree of trust of the driver in the autonomous driving agent” (Akash col. 12, ll. 48-67) by displaying a cue overlaid over the object. The plain and ordinary meaning of “highlighting” is to emphasize, and the cue of Akash emphasizes the object.
Sato and Akash are analogous art because they are from the “same field of endeavor,” namely that of determining a driver’s comfort level with the autonomous agent. Prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Sato and Akash before him or her, to modify the HMI interface of Sato to include the object highlighting of Akash. The motivation/rationale for doing so would have been that of applying a known technique to a known device. See KSR Int’l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP § 2143(I)(D). Sato teaches the “base device” for detecting a driver’s comfort level with autonomous driving and displaying related data on an interface. Further, Akash teaches the “known technique” of highlighting an object on a display related to the driver’s comfort level that is applicable to the base device of Sato. One of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in an improved system because such a modification is well-known and easy to implement by a programmer of even basic skill in the art.

Regarding claim 19, the combination of Sato and Akash discloses the limitations contained in parent claim 18 for the reasons discussed above. In addition, the combination of Sato and Akash discloses “wherein the object is an obstacle in a roadway” (Akash col. 3, l. 49-col. 4, l. 13) where the object is a vehicle (i.e., an obstacle) in the roadway.

Regarding claim 20, the combination of Sato and Akash discloses the limitations contained in parent claim 18 for the reasons discussed above. In addition, the combination of Sato and Akash discloses “wherein the object is a pedestrian” (Akash col. 3, l. 49-col. 4, l. 13) where the object may also be a pedestrian.

Claims 6-8 and 11-13 are rejected under 35 U.S.C.
§ 103 as being unpatentable over Sato in view of Shazeer et al., US Publication 2021/0064924 (hereinafter Shazeer).

Regarding claim 6, Sato discloses the limitations contained in parent claim 1 for the reasons discussed above. In addition, Sato does not appear to explicitly disclose “wherein the neural network includes a self-attention layer that computes attention weights by applying Query, Key, and Value matrices to input data and applying a range mask through element-wise multiplication on the attention weights to limit attention of each signal step to neighboring steps.” However, Shazeer discloses an image processing neural network “wherein the neural network includes a self-attention layer that computes attention weights by applying Query, Key, and Value matrices to input data” (Shazeer ¶¶ 50, 66), where the “self-attention sub-layer” (i.e., a self-attention layer) uses the queries, keys, and values using a scaled dot-product attention mechanism (Shazeer ¶ 66) and the attention sub-layers compute a weighted sum. (Shazeer ¶ 50). Additionally, Shazeer discloses “applying a range mask through element-wise multiplication on the attention weights to limit attention of each signal step to neighboring steps” (Shazeer ¶¶ 37, 55) by masking the data so that it is within the range of the current position in the generation order only (Shazeer ¶ 37), where the masks are applied by setting (i.e., multiplying) each value that is outside the range to negative infinity and, therefore, multiplying each value that is within the range by one. (Shazeer ¶ 55).

Sato and Shazeer are analogous art because they are from the “same field of endeavor,” namely that of image processing using neural networks. Prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Sato and Shazeer before him or her, to modify the neural network of Sato to include the particular neural network of Shazeer.
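The self-attention and range-mask limitation of claims 6 and 11, as mapped above, can be illustrated with a toy single-head implementation. This is pure Python and illustrative only; the negative-infinity-before-softmax masking convention mirrors the scaled dot-product mechanism cited from Shazeer, but the code is a sketch, not any party's actual network:

```python
import math

# Toy single-head self-attention with a range mask: each signal step may only
# attend to steps within +/- `window` of itself. Out-of-range raw weights are
# set to -inf, so the softmax assigns them zero weight.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def range_masked_self_attention(x, Wq, Wk, Wv, window=1):
    T, d = len(x), len(x[0])
    Q, K, V = matmul(x, Wq), matmul(x, Wk), matmul(x, Wv)  # Query, Key, Value
    out = []
    for i in range(T):
        # Raw attention weights: scaled dot products of Query i with every Key,
        # with the range mask applied element-wise.
        scores = [
            sum(Q[i][k] * K[j][k] for k in range(d)) / math.sqrt(d)
            if abs(i - j) <= window else float("-inf")
            for j in range(T)
        ]
        m = max(scores)                                    # self is always in range
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [v / z for v in w]                             # softmax row, sums to 1
        out.append([sum(w[j] * V[j][k] for j in range(T)) for k in range(d)])
    return out
```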
The motivation/rationale for doing so would have been that of simple substitution. See KSR Int’l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP § 2143(I)(B). Sato differs from the claimed invention by including a generic image processing neural network in place of the claimed neural network. Further, Shazeer teaches that the claimed neural network was well known in the art. One of ordinary skill in the art could have predictably substituted the specific neural network of Shazeer for the generic neural network of Sato because both are merely neural networks used for image processing.

Regarding claim 7, the combination of Sato and Shazeer discloses the limitations contained in parent claim 6 for the reasons discussed above. In addition, the combination of Sato and Shazeer discloses “wherein the neural network includes a windowing attention layer that transforms step sequences output from the self-attention layer into window sequences by applying dot-product attention with a window mask indicating moving window ranges and window prompts that generate an attention matrix indicating attention weights assigned to each step by each window” (Shazeer ¶¶ 50, 63), where the attention sub-layer transforms the embeddings into embeddings of all positions in the decoder preceding that position, which is a window (Shazeer ¶ 63), and indicating that the attention sub-layers are scaled (i.e., assigned a weight) using dot products. (Shazeer ¶ 50).

Regarding claim 8, the combination of Sato and Shazeer discloses the limitations contained in parent claim 7 for the reasons discussed above.
In addition, the combination of Sato and Shazeer discloses “wherein the neural network includes a window weighting layer that applies a softmax function to each window and uses a transformation matrix to calculate a sequence embedding as a weighted sum of window embeddings.” (Shazeer ¶ 50).

Regarding claim 11, Sato discloses the limitations contained in parent claim 9 for the reasons discussed above. In addition, Sato does not appear to explicitly disclose “wherein the neural network includes a self-attention layer that computes attention weights by applying Query, Key, and Value matrices to input data and applying a range mask through element-wise multiplication on the attention weights to limit attention of each signal step to neighboring steps.” However, Shazeer discloses an image processing neural network “wherein the neural network includes a self-attention layer that computes attention weights by applying Query, Key, and Value matrices to input data” (Shazeer ¶¶ 50, 66), where the “self-attention sub-layer” (i.e., a self-attention layer) uses the queries, keys, and values using a scaled dot-product attention mechanism (Shazeer ¶ 66) and the attention sub-layers compute a weighted sum. (Shazeer ¶ 50). Additionally, Shazeer discloses “applying a range mask through element-wise multiplication on the attention weights to limit attention of each signal step to neighboring steps” (Shazeer ¶¶ 37, 55) by masking the data so that it is within the range of the current position in the generation order only (Shazeer ¶ 37), where the masks are applied by setting (i.e., multiplying) each value that is outside the range to negative infinity and, therefore, multiplying each value that is within the range by one. (Shazeer ¶ 55).

Sato and Shazeer are analogous art because they are from the “same field of endeavor,” namely that of image processing using neural networks.
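The window-weighting limitation of claims 8 and 13 (a softmax applied per window, then a sequence embedding formed as the weighted sum of window embeddings) can likewise be sketched. Every name here is hypothetical, and the claimed transformation matrix is reduced to precomputed per-window scores for brevity:

```python
import math

# Illustrative-only sketch of a window weighting step: softmax the per-window
# scores, then compute the sequence embedding as the weighted sum of window
# embeddings. Not any party's actual layer.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sequence_embedding(window_embeddings, window_scores):
    weights = softmax(window_scores)            # one weight per window, sums to 1
    dim = len(window_embeddings[0])
    return [sum(w * emb[k] for w, emb in zip(weights, window_embeddings))
            for k in range(dim)]
```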
Prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Sato and Shazeer before him or her, to modify the neural network of Sato to include the particular neural network of Shazeer. The motivation/rationale for doing so would have been that of simple substitution. See KSR Int’l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP § 2143(I)(B). Sato differs from the claimed invention by including a generic image processing neural network in place of the claimed neural network. Further, Shazeer teaches that the claimed neural network was well known in the art. One of ordinary skill in the art could have predictably substituted the specific neural network of Shazeer for the generic neural network of Sato because both are merely neural networks used for image processing.

Regarding claim 12, the combination of Sato and Shazeer discloses the limitations contained in parent claim 11 for the reasons discussed above. In addition, the combination of Sato and Shazeer discloses “wherein the neural network includes a windowing attention layer that transforms step sequences output from the self-attention layer into window sequences by applying dot-product attention with a window mask indicating moving window ranges and window prompts that generate an attention matrix indicating attention weights assigned to each step by each window” (Shazeer ¶¶ 50, 63), where the attention sub-layer transforms the embeddings into embeddings of all positions in the decoder preceding that position, which is a window (Shazeer ¶ 63), and indicating that the attention sub-layers are scaled (i.e., assigned a weight) using dot products. (Shazeer ¶ 50).

Regarding claim 13, the combination of Sato and Shazeer discloses the limitations contained in parent claim 12 for the reasons discussed above.
In addition, the combination of Sato and Shazeer discloses “wherein the neural network includes a window weighting layer that applies a softmax function to each window and uses a transformation matrix to calculate a sequence embedding as a weighted sum of window embeddings.” (Shazeer ¶ 50).

Response to Arguments

Applicant’s arguments filed January 22, 2026, with respect to the system and method for calculating a driver’s trust in an autonomous driving system using a neural network (Remarks 8) have been fully considered and are persuasive. That rejection has been withdrawn. Applicant’s arguments filed January 22, 2026, with respect to the rejections of claims 1-20 under 35 U.S.C. §§ 102 and 103, respectively (Remarks 9-10) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Sato.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure:

Hatamizadeh et al., US Publication 2024/0185396, System and method for processing images in an autonomous vehicle using neural networks including SoftMax functions and self-attention networks.

Takahashi et al., US Publication 2024/0208482, System and method for calculating a driver’s trust in an autonomous driving system using a neural network.

Hois et al., US Publication 2024/0278810, System and method for calculating a driver’s trust in an autonomous driving system using a neural network.

Alexander et al., US Publication 2025/0058787, System and method for calculating a driver’s trust in an autonomous driving system using a neural network.

Hois et al., US Publication 2025/0282395, System and method for calculating a driver’s trust in an autonomous driving system using a neural network.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 C.F.R. § 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 C.F.R. § 1.17(a)) pursuant to 37 C.F.R. § 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW R DYER whose telephone number is (571) 270-3790. The examiner can normally be reached Monday-Thursday 7:30-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad, can be reached at 571-270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW R DYER/
Primary Examiner, Art Unit 3662

1 Yevhen Fedoriuk, Investigating Vehicle Telemetry - What It Is, Why It Matters, and What Lies Ahead, May 18, 2023, indeema.com, Page 2.

Prosecution Timeline

Oct 03, 2023
Application Filed
Oct 27, 2025
Non-Final Rejection — §102, §103
Dec 11, 2025
Interview Requested
Jan 07, 2026
Examiner Interview Summary
Jan 07, 2026
Applicant Interview (Telephonic)
Jan 22, 2026
Response Filed
Mar 05, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600379
COMPUTER SYSTEM AND METHOD FOR DETERMINING RELIABLE VEHICLE CONTROL INSTRUCTIONS USING TRAFFIC SIGNAL INFORMATION
2y 5m to grant • Granted Apr 14, 2026
Patent 12583294
ACTIVE DYNAMIC SUN VISOR AND METHOD OF OPERATION THEREOF
2y 5m to grant • Granted Mar 24, 2026
Patent 12570371
Method for Determining a Driver State of a Motor-Assisted Vehicle; Method for Training a Machine Learning System; Motor-Assisted Vehicle
2y 5m to grant • Granted Mar 10, 2026
Patent 12565200
VEHICLE AND DRIVING CONTROL METHOD FOR PROVIDING GUIDE MODE ASSOCIATED WITH MISSION-BASED DRIVING TRAINING
2y 5m to grant • Granted Mar 03, 2026
Patent 12559119
INCREASING OPERATOR VIGILANCE BY MODIFYING LONGITUDINAL VEHICLE DYNAMICS
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 60%
With Interview: 98% (+38.6%)
Median Time to Grant: 3y 6m
PTA Risk: Moderate
Based on 710 resolved cases by this examiner. Grant probability derived from career allow rate.
