Prosecution Insights
Last updated: April 19, 2026
Application No. 18/197,160

THREAT DETECTION FOR A PROCESSING SYSTEM OF A MOTOR VEHICLE

Final Rejection — §103
Filed: May 15, 2023
Examiner: FISHER, PAUL R
Art Unit: 2498
Tech Center: 2400 — Computer Networks
Assignee: Elektrobit Automotive GmbH
OA Round: 2 (Final)
Grant Probability: 23% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 4m
With Interview: 47%

Examiner Intelligence

Grants only 23% of cases.
Career Allow Rate: 23% (113 granted / 487 resolved; -34.8% vs TC avg)
Interview Lift: strong, +23.6% across resolved cases with an interview
Typical Timeline: 4y 4m average prosecution; 17 applications currently pending
Career History: 504 total applications across all art units

Statute-Specific Performance

§101: 28.2% (-11.8% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 487 resolved cases

Office Action

§103
DETAILED ACTION

The applicant’s amendment filed on June 25, 2025 has been acknowledged. Claims 2-5 and 13-16 have been canceled. Claims 1, 6-12 and 17-20, as amended, are currently pending and have been considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6-12 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen et al. (WO 2022/040360 A1), hereafter Cohen, in view of Lynam et al. (WO 2010/099416 A1), hereafter Lynam, further in view of Stelzig et al. (US 2014/0195138 A1), hereafter Stelzig.
As per claim 1, Cohen discloses a method for detecting a threat for a processing system of a motor vehicle (Cohen Abstract; discloses that the method is for detecting cyber-attacks or threats on a vehicle), the method comprising: receiving driving context data associated with a driving context of the motor vehicle, wherein the driving context data are related to details of an environment where the motor vehicle is being driven (Cohen Paragraph [0038]; discloses that the method receives context data which establishes the current values of the vehicle driving environment, in this case the accelerator position, vehicle speed, torque of the engine, gear and brakes. Paragraph [0005]; discloses this can also include GPS position and positional accuracy); generating simulated network messages of the motor vehicle from at least the driving context data (Cohen Paragraph [0038]; discloses that the method generates predicted or simulated values for a message based on the current values or context data and the previously observed data); and detecting a threat by comparing the simulated network messages with actual network messages of the motor vehicle (Cohen Paragraph [0047]; discloses that the system compares the simulated values and the actual values. Paragraph [0049]; discloses that the comparison detects whether a threat is present and outputs whether an attack is present or not). Cohen further discloses wherein the simulated network messages of the motor vehicle are generated from the driving context data, wherein the driving context data includes driving pattern data (Cohen Paragraph [0038]; discloses that the method generates predicted or simulated values for a message based on the current values or context data and the previously observed data).
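The claim-1 mapping above describes a three-step pipeline: receive driving-context data, generate simulated (expected) network messages from it, and flag a threat when the simulated and actual messages diverge. A minimal sketch of that comparison logic, with all names, the toy torque model, and the fixed tolerance being hypothetical rather than taken from Cohen:

```python
# Hypothetical sketch of the claim-1 pipeline: context data -> simulated
# messages -> comparison against actual messages to flag a threat.

def simulate_messages(context: dict) -> dict:
    """Stand-in for the predictor: derive expected signal values from
    driving-context data (speed, accelerator position, etc.)."""
    # Toy model: expected engine torque scales with accelerator position.
    return {
        "speed": context["speed"],
        "torque": 3.0 * context["accel_pos"],
    }

def detect_threat(simulated: dict, actual: dict, tolerance: float = 0.1) -> bool:
    """Flag a threat when any actual signal deviates from its simulated
    value by more than the relative tolerance."""
    for key, expected in simulated.items():
        observed = actual[key]
        denom = max(abs(expected), 1e-9)
        if abs(observed - expected) / denom > tolerance:
            return True
    return False

context = {"speed": 80.0, "accel_pos": 20.0}
sim = simulate_messages(context)          # expected torque: 60.0
benign = {"speed": 80.0, "torque": 61.0}  # within 10% of expected
spoofed = {"speed": 80.0, "torque": 120.0}  # injected/spoofed value
print(detect_threat(sim, benign))   # False
print(detect_threat(sim, spoofed))  # True
```

A real implementation would replace the toy torque model with a learned predictor over CAN message streams; the tolerance-based comparison is the part claim 1 describes.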
Cohen fails to explicitly disclose wherein the details of the environment where the motor vehicle is being driven are captured by at least one camera and by at least one other environment sensor that is selected from a group comprising: at least one ultrasonic sensor, at least one laser scanner, at least one lidar sensor, and at least one radar sensor. Cohen additionally fails to explicitly disclose wherein the driving context data includes both driving pattern data and data from the at least one other environment sensor mapped onto a video stream. Lynam, which like Cohen discusses collecting vehicle driving data, teaches wherein the details of the environment where the motor vehicle is being driven are captured by at least one camera and by at least one other environment sensor that is selected from a group comprising: at least one ultrasonic sensor, at least one laser scanner, at least one lidar sensor, and at least one radar sensor (Lynam Paragraphs [0066]-[0069]; teaches it is known to include image data from a camera in the driving context data. Lynam additionally establishes additional sensors such as radar and ultrasonic sensors. Lynam establishes it is known to fuse the image data with other sensor data. This is done to enhance the processing and/or decision making and/or control of the system). Lynam additionally teaches wherein the driving context data includes both driving pattern data and data from the at least one other environment sensor (Lynam Paragraph [0069]; teaches it is known to fuse the image data with other sensor data. This is done to enhance the processing and/or decision making and/or control of the system. Since Cohen already collects various sensor data, it would have been obvious to fuse the sensor data with image data to enhance the processing and decision making abilities of the system as shown in Lynam).
Cohen discloses collecting vehicle driving context data to simulate messages and compare the expected or simulated data to determine if an attack is present on the system. As shown in Cohen, this allows the system to enhance security and report unexpected results which are recorded across the CAN bus network. Cohen however fails to establish wherein the details of the environment where the motor vehicle is being driven are captured by at least one camera and by at least one other environment sensor that is selected from a group comprising: at least one ultrasonic sensor, at least one laser scanner, at least one lidar sensor, and at least one radar sensor. Lynam, which like Cohen discusses collecting driving context data, teaches wherein the details of the environment where the motor vehicle is being driven are captured by at least one camera and by at least one other environment sensor that is selected from a group comprising: at least one ultrasonic sensor, at least one laser scanner, at least one lidar sensor, and at least one radar sensor. It would have been obvious to one of ordinary skill in the art to include in the threat detection system of Cohen the ability to capture image data using a camera as well as data from at least one other environment sensor as shown in Lynam, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same functions as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Therefore, from this teaching of Lynam, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of detecting threats to a vehicle provided by Cohen to include the ability to capture image data using a camera as well as data from at least one other environment sensor, as shown in Lynam, for the purpose of enhancing the decision making of the system. Since Cohen already collects various sensor data, it would have been obvious to fuse the sensor data with image data to enhance the processing and decision making abilities of the system as shown in Lynam. While the combination establishes the fusion of driving pattern data and sensor data, it is not explicit that data from the at least one other environment sensor is mapped onto a video stream. Stelzig, which like the combination discusses monitoring vehicle actions, teaches it is known for data from one environment sensor such as radar to be mapped onto a video stream (Stelzig Page 4, Paragraphs [0050]-[0052]; teaches that it is known to take environment sensor data from a sensor such as a radar and map it onto a video stream. Stelzig establishes that this is done to better visualize the information. Since the combination already establishes fusing different types of sensor data including camera data and radar data, it would have been obvious that one manner of combining the data is to map the radar data onto the video data as shown explicitly in Stelzig, to help visualize the data that is received). Cohen discloses collecting vehicle driving context data to simulate messages and compare the expected or simulated data to determine if an attack is present on the system. As shown in Cohen, this allows the system to enhance security and report unexpected results which are recorded across the CAN bus network.
Lynam teaches wherein the details of the environment where the motor vehicle is being driven are captured by at least one camera and by at least one other environment sensor that is selected from a group comprising: at least one ultrasonic sensor, at least one laser scanner, at least one lidar sensor, and at least one radar sensor. The combination however fails to explicitly disclose that the data from the at least one other environment sensor is mapped onto a video stream. Stelzig teaches it is known that the data from the at least one other environment sensor is mapped onto a video stream. It would have been obvious to one of ordinary skill in the art to include in the threat detection system of Cohen and Lynam the ability to map environment sensor data onto a video stream as shown in Stelzig, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same functions as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Therefore, from this teaching of Stelzig, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of detecting threats to a vehicle provided by Cohen and Lynam to include the ability to map environment sensor data onto a video stream as shown in Stelzig, for the purpose of providing higher precision information. Since the combination already establishes fusing different types of sensor data including camera data and radar data, it would have been obvious that one manner of combining the data is to map the radar data onto the video data as shown explicitly in Stelzig, to help visualize the data that is received.
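The "mapped onto a video stream" feature the combination relies on amounts to projecting another sensor's detections into the camera's pixel coordinates. A generic pinhole-projection sketch of that idea; the intrinsics, function name, and the sample radar point are illustrative assumptions, not details taken from Stelzig:

```python
# Illustrative sketch: project a radar detection (a 3D point expressed in
# the camera frame, in metres) onto the video image via a pinhole model.

def project_to_image(point_xyz, fx, fy, cx, cy):
    """Map a 3D point (x right, y down, z forward) to pixel (u, v)."""
    x, y, z = point_xyz
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# Hypothetical 640x480 camera with a 500-pixel focal length.
u, v = project_to_image((2.0, 0.5, 10.0), fx=500, fy=500, cx=320, cy=240)
print(round(u), round(v))  # 420 265
```

Overlaying the resulting (u, v) marker on each video frame is the "better visualize the information" rationale the rejection attributes to Stelzig.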
As per claim 6, the combination of Cohen, Lynam and Stelzig teaches the method according to claim 1, and Cohen further discloses wherein a threat is detected if a similarity between the simulated network messages and the actual network messages is below a threshold (Cohen Paragraphs [0044] and [0048]; discloses that the system determines the similarity between the simulated or predicted values and the actual values. If it is not what is expected, or is below the threshold, it is considered to be an attack).

As per claim 7, the combination of Cohen, Lynam and Stelzig teaches the method according to claim 1, and Cohen further discloses wherein the simulated network messages are generated by an autoencoder network (Cohen Paragraph [0046]; discloses that the simulated or predicted values are generated using an autoencoder network).

As per claim 8, the combination of Cohen, Lynam and Stelzig teaches the method according to claim 7, and Cohen further discloses wherein the autoencoder network is based on a recurrent neural network (Cohen Paragraph [0046]; discloses that the autoencoder network is based on a recurrent neural network).

As per claim 9, the combination of Cohen, Lynam and Stelzig teaches the method according to claim 8, and Cohen further discloses wherein the autoencoder network comprises a long short-term memory network with multi-encoders (Cohen Paragraph [0046]; discloses that the autoencoder network comprises long short-term memory network units).

As per claim 10, the combination of Cohen, Lynam and Stelzig teaches the method according to claim 1, and Cohen further discloses predicting a threat from the simulated network messages (Cohen Paragraph [0047]; discloses that the system compares the simulated values and the actual values. Paragraph [0049]; discloses that the comparison detects whether a threat is present and outputs whether an attack is present or not).
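Claim 6 frames detection as a similarity test: a threat is flagged when the similarity between the simulated and actual messages falls below a threshold. A minimal version using cosine similarity; the metric choice, threshold value, and signal vectors are illustrative assumptions, since the claim does not fix a particular similarity measure:

```python
# Illustrative claim-6 check: similarity between simulated and actual
# message vectors, with a threat flagged below a threshold.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length signal vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_threat(simulated, actual, threshold=0.99):
    """Per claim 6: threat if similarity drops below the threshold."""
    return cosine_similarity(simulated, actual) < threshold

simulated = [60.0, 80.0, 3.0]                     # e.g. torque, speed, gear
print(is_threat(simulated, [61.0, 80.0, 3.0]))    # False: nearly identical
print(is_threat(simulated, [200.0, 80.0, 3.0]))   # True: torque spoofed
```

In the system claims 7-9 describe, the simulated vector would come from an LSTM-based autoencoder rather than being hand-supplied; only the thresholded comparison is sketched here.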
As per claim 11, the combination of Cohen, Lynam and Stelzig teaches the method according to claim 1, and Cohen further discloses wherein the method is performed in the motor vehicle, in a remote backend communicatively coupled to the motor vehicle, or distributed between the motor vehicle and the remote backend (Cohen Paragraph [0084]; discloses that the method is performed in the motor vehicle or in a remote system or backend).

As per claim 12, Cohen discloses a motor vehicle comprising an apparatus for detecting a threat for a processing system of a motor vehicle (Cohen Abstract; discloses that the method is for detecting cyber-attacks or threats on a vehicle. Paragraph [0023]; discloses that the apparatus includes a processor, CPU or GPU), the apparatus comprising: a reception unit configured to receive (based on the applicant’s originally filed disclosure, the “reception unit” is either a dedicated hardware unit, combined with other units on a single unit, or implemented as software running on a processor, CPU or GPU; applicant’s originally filed specification Paragraph [0053]) driving context data associated with a driving context of the motor vehicle, wherein the driving context data are related to details of an environment where the motor vehicle is being driven (Cohen Paragraph [0038]; discloses that the method receives context data which establishes the current values of the vehicle driving environment, in this case the accelerator position, vehicle speed, torque of the engine, gear and brakes. Paragraph [0005]; discloses this can also include GPS position and positional accuracy. Paragraph [0023]; discloses that the apparatus includes a processor, CPU or GPU); a simulation unit configured to generate (based on the applicant’s originally filed disclosure, the “simulation unit” is either a dedicated hardware unit, combined with other units on a single unit, or implemented as software running on a processor, CPU or GPU; applicant’s originally filed specification Paragraph [0053]) simulated network messages of the motor vehicle from at least the driving context data (Cohen Paragraph [0038]; discloses that the method generates predicted or simulated values for a message based on the current values or context data and the previously observed data. Paragraph [0023]; discloses that the apparatus includes a processor, CPU or GPU); and a detection unit configured to detect (based on the applicant’s originally filed disclosure, the “detection unit” is either a dedicated hardware unit, combined with other units on a single unit, or implemented as software running on a processor, CPU or GPU; applicant’s originally filed specification Paragraph [0053]) a threat by comparing the simulated network messages with actual network messages of the motor vehicle (Cohen Paragraph [0047]; discloses that the system compares the simulated values and the actual values. Paragraph [0049]; discloses that the comparison detects whether a threat is present and outputs whether an attack is present or not. Paragraph [0023]; discloses that the apparatus includes a processor, CPU or GPU). Cohen further discloses wherein the simulated network messages of the motor vehicle are generated from the driving context data, wherein the driving context data includes driving pattern data (Cohen Paragraph [0038]; discloses that the method generates predicted or simulated values for a message based on the current values or context data and the previously observed data).
Cohen fails to explicitly disclose wherein the details of the environment where the motor vehicle is being driven are captured by at least one camera and by at least one other environment sensor that is selected from a group comprising: at least one ultrasonic sensor, at least one laser scanner, at least one lidar sensor, and at least one radar sensor. Cohen additionally fails to explicitly disclose wherein the driving context data includes both driving pattern data and data from the at least one other environment sensor mapped onto a video stream. Lynam, which like Cohen discusses collecting vehicle driving data, teaches wherein the details of the environment where the motor vehicle is being driven are captured by at least one camera and by at least one other environment sensor that is selected from a group comprising: at least one ultrasonic sensor, at least one laser scanner, at least one lidar sensor, and at least one radar sensor (Lynam Paragraphs [0066]-[0069]; teaches it is known to include image data from a camera in the driving context data. Lynam additionally establishes additional sensors such as radar and ultrasonic sensors. Lynam establishes it is known to fuse the image data with other sensor data. This is done to enhance the processing and/or decision making and/or control of the system). Lynam additionally teaches wherein the driving context data includes both driving pattern data and data from the at least one other environment sensor (Lynam Paragraph [0069]; teaches it is known to fuse the image data with other sensor data. This is done to enhance the processing and/or decision making and/or control of the system. Since Cohen already collects various sensor data, it would have been obvious to fuse the sensor data with image data to enhance the processing and decision making abilities of the system as shown in Lynam).
Cohen discloses collecting vehicle driving context data to simulate messages and compare the expected or simulated data to determine if an attack is present on the system. As shown in Cohen, this allows the system to enhance security and report unexpected results which are recorded across the CAN bus network. Cohen however fails to establish wherein the details of the environment where the motor vehicle is being driven are captured by at least one camera and by at least one other environment sensor that is selected from a group comprising: at least one ultrasonic sensor, at least one laser scanner, at least one lidar sensor, and at least one radar sensor. Lynam, which like Cohen discusses collecting driving context data, teaches wherein the details of the environment where the motor vehicle is being driven are captured by at least one camera and by at least one other environment sensor that is selected from a group comprising: at least one ultrasonic sensor, at least one laser scanner, at least one lidar sensor, and at least one radar sensor. It would have been obvious to one of ordinary skill in the art to include in the threat detection system of Cohen the ability to capture image data using a camera as well as data from at least one other environment sensor as shown in Lynam, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same functions as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Therefore, from this teaching of Lynam, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of detecting threats to a vehicle provided by Cohen to include the ability to capture image data using a camera as well as data from at least one other environment sensor, as shown in Lynam, for the purpose of enhancing the decision making of the system. Since Cohen already collects various sensor data, it would have been obvious to fuse the sensor data with image data to enhance the processing and decision making abilities of the system as shown in Lynam. While the combination establishes the fusion of driving pattern data and sensor data, it is not explicit that data from the at least one other environment sensor is mapped onto a video stream. Stelzig, which like the combination discusses monitoring vehicle actions, teaches it is known for data from one environment sensor such as radar to be mapped onto a video stream (Stelzig Page 4, Paragraphs [0050]-[0052]; teaches that it is known to take environment sensor data from a sensor such as a radar and map it onto a video stream. Stelzig establishes that this is done to better visualize the information. Since the combination already establishes fusing different types of sensor data including camera data and radar data, it would have been obvious that one manner of combining the data is to map the radar data onto the video data as shown explicitly in Stelzig, to help visualize the data that is received). Cohen discloses collecting vehicle driving context data to simulate messages and compare the expected or simulated data to determine if an attack is present on the system. As shown in Cohen, this allows the system to enhance security and report unexpected results which are recorded across the CAN bus network.
Lynam teaches wherein the details of the environment where the motor vehicle is being driven are captured by at least one camera and by at least one other environment sensor that is selected from a group comprising: at least one ultrasonic sensor, at least one laser scanner, at least one lidar sensor, and at least one radar sensor. The combination however fails to explicitly disclose that the data from the at least one other environment sensor is mapped onto a video stream. Stelzig teaches it is known that the data from the at least one other environment sensor is mapped onto a video stream. It would have been obvious to one of ordinary skill in the art to include in the threat detection system of Cohen and Lynam the ability to map environment sensor data onto a video stream as shown in Stelzig, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same functions as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Therefore, from this teaching of Stelzig, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of detecting threats to a vehicle provided by Cohen and Lynam to include the ability to map environment sensor data onto a video stream as shown in Stelzig, for the purpose of providing higher precision information. Since the combination already establishes fusing different types of sensor data including camera data and radar data, it would have been obvious that one manner of combining the data is to map the radar data onto the video data as shown explicitly in Stelzig, to help visualize the data that is received.
As per claim 17, the combination of Cohen, Lynam and Stelzig teaches the motor vehicle according to claim 12, and Cohen further discloses wherein a threat is detected if a similarity between the simulated network messages and the actual network messages is below a threshold (Cohen Paragraphs [0044] and [0048]; discloses that the system determines the similarity between the simulated or predicted values and the actual values. If it is not what is expected, or is below the threshold, it is considered to be an attack).

As per claim 18, the combination of Cohen, Lynam and Stelzig teaches the motor vehicle according to claim 12, and Cohen further discloses wherein the simulated network messages are generated by an autoencoder network (Cohen Paragraph [0046]; discloses that the simulated or predicted values are generated using an autoencoder network).

As per claim 19, the combination of Cohen, Lynam and Stelzig teaches the motor vehicle according to claim 18, and Cohen further discloses wherein the autoencoder network is based on a recurrent neural network that comprises a long short-term memory network with multi-encoders (Cohen Paragraph [0046]; discloses that the autoencoder network is based on a recurrent neural network that comprises long short-term memory network units).

As per claim 20, the combination of Cohen, Lynam and Stelzig teaches the motor vehicle according to claim 12, and Cohen further discloses predicting a threat from the simulated network messages (Cohen Paragraph [0047]; discloses that the system compares the simulated values and the actual values. Paragraph [0049]; discloses that the comparison detects whether a threat is present and outputs whether an attack is present or not).

Response to Arguments

Applicant's arguments filed June 25, 2025 have been fully considered but they are not persuasive.
In response to the applicant’s arguments on pages 7-8 regarding the art rejections, specifically that, “Cohen and Lynam do not support a proper prima facie case of unpatentability of currently amended claim 1 because these references, either alone or in combination with the other prior art of record, do not teach or fairly suggest "wherein the details of the environment where the motor vehicle is being driven are captured by at least one camera and by at least one other environment sensor that is selected from a group comprising: at least one ultrasonic sensor, at least one laser scanner, at least one lidar sensor, and at least one radar sensor ... wherein the simulated network messages of the motor vehicle are generated from the driving context data, wherein the driving context data includes both driving pattern data and data from the at least one other environment sensor mapped onto a video stream," as is explicitly recited in claim 1, as currently amended,” and that “Independent claim 12 contains limitations that are analogous to the limitations of claim 1 discussed above. Claim 12 is, therefore, also in condition for allowance over the prior art of record for reasons that correspond to the reasons set forth above in connection with claim 1”: the Examiner respectfully disagrees. As stated in the rejection, Cohen discloses wherein the simulated network messages of the motor vehicle are generated from the driving context data, wherein the driving context data includes driving pattern data. Specifically, Cohen Paragraph [0038] discloses that the method generates predicted or simulated values for a message based on the current values or context data and the previously observed data.
Lynam teaches wherein the details of the environment where the motor vehicle is being driven are captured by at least one camera and by at least one other environment sensor that is selected from a group comprising: at least one ultrasonic sensor, at least one laser scanner, at least one lidar sensor, and at least one radar sensor. Lynam Paragraphs [0066]-[0069] teach it is known to include image data from a camera in the driving context data. Lynam additionally establishes additional sensors such as radar and ultrasonic sensors. Lynam establishes it is known to fuse the image data with other sensor data. This is done to enhance the processing and/or decision making and/or control of the system. Lynam additionally teaches wherein the driving context data includes both driving pattern data and data from the at least one other environment sensor. Lynam Paragraph [0069] teaches it is known to fuse the image data with other sensor data. This is done to enhance the processing and/or decision making and/or control of the system. Since Cohen already collects various sensor data, it would have been obvious to fuse the sensor data with image data to enhance the processing and decision making abilities of the system as shown in Lynam. While the combination establishes the fusion of sensor data, it is not explicit that it maps the sensor data onto the video stream. The Examiner has provided the Stelzig reference to address this feature. Stelzig Page 4, Paragraphs [0050]-[0052] teach that it is known to take environment sensor data from a sensor such as a radar and map it onto a video stream. Stelzig establishes that this is done to better visualize the information. Since the combination already establishes fusing different types of sensor data including camera data and radar data, it would have been obvious that one manner of combining the data is to map the radar data onto the video data as shown explicitly in Stelzig, to help visualize the data that is received.
The Examiner asserts that the combination reads over the claims as currently written. Lacking any additional arguments from the applicant, the Examiner has not been persuaded and the rejections have been maintained. All rejections made towards the dependent claims are maintained due to the lack of a reply by the applicant distinctly and specifically pointing out the supposed errors in the Examiner’s prior Office Action (37 CFR 1.111). The Examiner asserts that the applicant only argues that the dependent claims should be allowable because the independent claims are unobvious and patentable over Cohen and, where appropriate, in view of Lynam.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL R FISHER whose telephone number is (571)270-5097. The examiner can normally be reached Monday - Friday, 9 am to 5:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yin-Chen Shaw, can be reached at (571)272-8878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PAUL R FISHER/
Primary Examiner, Art Unit 2498
10/8/2025

Prosecution Timeline

May 15, 2023: Application Filed
Mar 22, 2025: Non-Final Rejection — §103
Jun 25, 2025: Response Filed
Oct 08, 2025: Final Rejection — §103
Apr 06, 2026: Request for Continued Examination
Apr 15, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598182: PEER-TO-PEER SECURE MODE AUTHENTICATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12587393: SYSTEM FOR DIAGNOSIS OF A VEHICLE AND METHOD THEREOF (granted Mar 24, 2026; 2y 5m to grant)
Patent 12556384: NEW METHOD FOR PSEUDO-RANDOM NUMBER GENERATION FOR INFORMATION ENCRYPTION (granted Feb 17, 2026; 2y 5m to grant)
Patent 12554841: ELECTRONIC SYSTEM AND METHODS FOR DYNAMIC ACTIVATION OF COUNTERMEASURES (granted Feb 17, 2026; 2y 5m to grant)
Patent 12554860: DETECTING SECURITY ISSUES IN FORKED PROJECTS (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 23%
With Interview: 47% (+23.6%)
Median Time to Grant: 4y 4m
PTA Risk: Moderate
Based on 487 resolved cases by this examiner. Grant probability derived from career allow rate.
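The headline projections follow directly from the examiner's career counts reported above: 113 allowances out of 487 resolved cases, plus the stated +23.6-point interview lift. A quick check of that arithmetic (the additive-lift model is this page's stated derivation, not a general rule):

```python
# Reproduce the dashboard's headline numbers from the raw career counts.
granted, resolved = 113, 487

allow_rate = granted / resolved        # career allowance rate: 113/487
with_interview = allow_rate + 0.236    # reported +23.6-point interview lift

print(f"{allow_rate:.0%}")       # 23%
print(f"{with_interview:.0%}")   # 47%
```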
