Prosecution Insights
Last updated: April 19, 2026
Application No. 18/988,784

METHODS AND SYSTEMS FOR SETTING DYNAMIC TRIGGERS FOR EVENT RECORDINGS FOR A VEHICLE

Non-Final OA §102
Filed
Dec 19, 2024
Examiner
SHAAWAT, MUSSA A
Art Unit
3669
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Zenseact AB
OA Round
1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability with Interview: 82%

Examiner Intelligence

Career Allow Rate: 76% (665 granted / 876 resolved; +23.9% vs TC average, above average)
Interview Lift: +6.3% for resolved cases with an interview (moderate lift)
Typical Timeline: 2y 10m average prosecution; 29 applications currently pending
Career History: 905 total applications across all art units
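The headline percentages above follow from the raw counts shown. A minimal sketch checking the arithmetic, assuming (the report does not state this explicitly) that the career allow rate is simply granted / resolved and that the with-interview figure is that rate plus the reported lift:

```python
# Sanity check of the dashboard arithmetic from the raw counts above.
# Assumptions (not stated in the report): allow rate = granted / resolved,
# and the with-interview figure = allow rate + reported interview lift.
granted, resolved = 665, 876

allow_rate = granted / resolved        # ~0.759, displayed as 76%
with_interview = allow_rate + 0.063    # ~0.822, displayed as 82%

print(f"allow rate: {allow_rate:.1%}")
print(f"with interview: {with_interview:.1%}")
```

Note that the displayed 76% and 82% differ by 6 points only because both are rounded; the unrounded lift is the reported +6.3%.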

Statute-Specific Performance

§101: 18.1% (-21.9% vs TC average)
§103: 28.5% (-11.5% vs TC average)
§102: 37.5% (-2.5% vs TC average)
§112: 8.9% (-31.1% vs TC average)
Tech Center averages are estimates. Based on career data from 876 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gao et al., US Pre-Grant Pub. No. 2022/0164350, hereinafter Gao.

As per claim 1, Gao teaches a method for setting dynamic triggers for event recordings for a vehicle (see at least abstract, summary, Para 72-74, Fig. 3), the method comprising: obtaining sensor data samples captured by one or more sensors of the vehicle (see at least abstract, summary, Para 72-74, Fig. 3); generating sensor data embeddings from the sensor data samples, wherein the sensor data embeddings are generated by processing each sensor data sample through one or more sensor data embedding networks that are trained to process sensor data samples and to output a corresponding sensor data embedding for each sensor data sample in a multi-dimensional vector space (see at least abstract, summary, Para 72-74, Fig. 3); receiving a query embedding, wherein the query embedding has been generated by processing a query through one or more query embedding networks that are trained to process queries and to output a corresponding query embedding for each query in the multi-dimensional vector space, and wherein each of the one or more query embedding networks is trained in association with one or more of the sensor data embedding networks such that a query embedding of a query that is contextually related to a specific sensor data sample points towards the same point as the sensor data embedding of that sensor data sample within the multi-dimensional vector space (see at least abstract, summary, Para 72-74, Fig. 3); identifying one or more sensor data embeddings within the multi-dimensional vector space based on a proximity to the received query embedding within the multi-dimensional vector space (see at least abstract, summary, Para 65-68, Fig. 3); and storing sensor data samples represented by the identified one or more sensor data embeddings (see at least abstract, summary, Para 72-74, Fig. 3).
As per claim 2, Gao teaches a method according to claim 1, wherein the one or more sensor data embedding networks comprise a plurality of sensor data embedding networks including one sensor data embedding network for a corresponding sensor of the vehicle, wherein the plurality of sensor data embedding networks comprises a first sensor data embedding network trained to process sensor data samples of a first sensor and to output a corresponding sensor data embedding, and wherein each of the other sensor data embedding networks is trained in association with the first sensor data embedding network such that a sensor data embedding generated by the first sensor data embedding network and a sensor data embedding generated by each of the other sensor data embedding networks point towards the same point within the multi-dimensional vector space when the generated sensor data embeddings are contextually, spatially and/or temporally related (see at least abstract, summary, Para 26, 35, 72-74, Fig. 1).

As per claim 3, Gao teaches a method according to claim 1, wherein the sensor data embeddings are continuously generated and temporarily stored in a data buffer (see at least abstract, summary, Para 26, 35, 72-74, Fig. 1).

As per claim 4, Gao teaches a method according to claim 1, wherein the storing the sensor data samples comprises persistently storing the sensor data samples represented by the identified one or more sensor data embeddings in a data storage unit (see at least abstract, summary, Para 26, 35, 72-74, Fig. 1).

As per claim 5, Gao teaches a method according to claim 1, wherein the identifying of the one or more sensor data embeddings comprises identifying the one or more sensor data embeddings that are within a distance value from the obtained query embedding within the multi-dimensional vector space (see at least abstract, summary, Para 11, 26, 35, 72-74, Fig. 1).
As per claim 6, Gao teaches a method according to claim 1, further comprising: transmitting the stored sensor data samples to a remote server (see at least abstract, summary, Para 26, 35, 72-74, Fig. 1).

As per claim 7, Gao teaches a method according to claim 1, wherein the vehicle comprises an automated driving system, ADS, configured to generate ADS output data samples, the method further comprising: generating ADS data embeddings from the ADS output data samples, wherein the ADS data embeddings are generated by processing each ADS output data sample through one or more ADS data embedding networks that are trained to process ADS output data samples and to output a corresponding ADS data embedding in the multi-dimensional vector space, and wherein each of the one or more ADS data embedding networks is trained in association with one or more of the sensor data embedding networks such that an ADS data embedding of an ADS output data sample that is contextually, spatially and/or temporally related to a specific sensor data sample points towards the same point as the sensor data embedding of that sensor data sample within the multi-dimensional vector space (see at least abstract, summary, Para 26, 35, 49, 57, 72-74, Fig. 1); identifying one or more ADS data embeddings within the multi-dimensional vector space based on a proximity to the received query embedding within the multi-dimensional vector space (see at least abstract, summary, Para 26, 35, 49, 57, 72-74, Fig. 1); and storing ADS output data samples represented by the identified one or more ADS data embeddings (see at least abstract, summary, Para 26, 35, 49, 57, 72-74, Fig. 1).

As per claims 8-14, the limitations of claims 8-14 are similar to the limitations of claims 1-7; therefore, they are rejected based on the same rationale.

Conclusion

Please refer to form 892 for cited references.
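Stripped of claim language, the method the examiner maps onto Gao (claim 1, with the distance threshold of claim 5) is a nearest-neighbor trigger: embed sensor samples and a query into a shared vector space, then persist the samples whose embeddings fall within a proximity threshold of the query embedding. A minimal sketch, with toy vectors standing in for the outputs of the trained embedding networks (the function names, the cosine-similarity measure, and the threshold value are illustrative assumptions, not taken from the application):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors in the shared embedding space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def select_samples_to_store(sample_embeddings, query_embedding, threshold=0.9):
    """Return indices of sensor data samples whose embeddings lie within a
    proximity threshold of the query embedding (cf. claims 1 and 5)."""
    return [i for i, emb in enumerate(sample_embeddings)
            if cosine_similarity(emb, query_embedding) >= threshold]

# Toy 2-D embeddings standing in for embedding-network outputs.
samples = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
query = [1.0, 0.05]
print(select_samples_to_store(samples, query))  # [0, 1]
```

The first two samples point in nearly the same direction as the query and are selected for storage; the third is nearly orthogonal and is discarded, which is the "dynamic trigger" behavior the claim recites.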
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MUSSA A SHAAWAT, whose telephone number is (313) 446-6592. The examiner can normally be reached Monday-Friday, 9am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Erin Piateski, can be reached at 571-270-7429. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MUSSA A SHAAWAT/
Primary Examiner, Art Unit 3669

Prosecution Timeline

Dec 19, 2024: Application Filed
Mar 03, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595070: METHOD OF OPERATING A ROTORCRAFT IN A SINGLE ENGINE OPERATION MODE. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12595049: METHOD FOR CONTROLLING A ROTORCRAFT, ASSOCIATED ROTORCRAFT AND COMPUTER PROGRAM. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12583605: FAST THRUST RESPONSE USING OPTIMAL POWER SPLITTING IN HYBRID ELECTRIC AIRCRAFT. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12583606: SYSTEMS AND METHODS FOR AIRCRAFT ENERGY OPTIMIZATION. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12576715: INFORMATION PROCESSING DEVICE AND VEHICLE. Granted Mar 17, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76% (82% with interview, a +6.3% lift)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 876 resolved cases by this examiner. Grant probability derived from the career allow rate.
