Prosecution Insights
Last updated: April 19, 2026
Application No. 19/190,162

COLLISION RECONSTRUCTION ENGINE

Non-Final OA (§103, §112)
Filed: Apr 25, 2025
Examiner: SAXENA, AKASH
Art Unit: 2188
Tech Center: 2100 — Computer Architecture & Software
Assignee: Assured Insurance Technologies, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 49% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 10m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 49% (grants 49% of resolved cases; 256 granted / 520 resolved; -5.8% vs TC avg)
Interview Lift: +32.0% (strong lift for resolved cases with interview)
Typical Timeline: 4y 10m average prosecution; 43 currently pending
Career History: 563 total applications across all art units

Statute-Specific Performance

§101: 19.2% (-20.8% vs TC avg)
§103: 36.4% (-3.6% vs TC avg)
§102: 15.8% (-24.2% vs TC avg)
§112: 22.8% (-17.2% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 520 resolved cases
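The per-statute deltas above are internally consistent: each statute's allowance rate minus its stated delta recovers the same ~40% baseline, which suggests the dashboard compares every statute against a single Tech Center average estimate. A quick check (Python; the `stats` mapping simply restates the figures above):

```python
# Allowance rate and delta vs. Tech Center average, per statute (from the table above)
stats = {
    "101": (19.2, -20.8),
    "103": (36.4, -3.6),
    "102": (15.8, -24.2),
    "112": (22.8, -17.2),
}

# delta = rate - TC average, so the TC average estimate is rate - delta
for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: TC avg estimate = {tc_avg}%")  # each statute recovers 40.0%
```

Every row yields 40.0, so the black-line baseline appears to be one shared estimate rather than a per-statute figure.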

Office Action

§103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/17/2026 has been entered.

Claims 1-7, 9-16, and 18-20 have been presented for examination based on the amendment filed on 2/17/2026. Claims 1, 10, and 19 are amended. Claims 1-7, 9-16, and 18-20 are newly rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph. Claims 1-6, 9-15, and 18-20 remain rejected under 35 U.S.C. 103 as being unpatentable over US 9824453 B1 by Collins; Stephen M. et al., in view of US 20080052134 A1 by Nowak; Vikki et al., in view of US 20240303745 A1 by Fields; Brian et al., further in view of US PGPUB No. US 20250045491 A1 by Reschka; Andreas. Claims 7 and 16 remain rejected under 35 U.S.C. 103 as being unpatentable over Collins, in view of Nowak, in view of Fields, in view of Reschka, further in view of US 20230116639 A1 by Patt; Theo et al. This action is made Non-Final.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Response to Arguments

(Argument 1) Applicant has argued in Remarks Pg. 10: [image: media_image1.png, quoted argument from Remarks Pg. 10]

(Response 1) Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections. In this case the applicant has not defined what “vehicle incident simulation” encompasses. Applicant has not alluded to the specification to provide the scope of “vehicle incident simulation”; hence the argument of novelty over the primary reference Collins (which shows the 3D CAD model as simulation) is not persuasive. To that extent, the examiner has anticipated that if the applicant intended the “vehicle incident simulation” to be some sort of animation, such teaching is provided via Nowak. Applicant has not addressed this prior art and has not shown how the claim/scope in the specification differs from the mapping. Hence Applicant's arguments also fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Further, in response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Even if the “vehicle incident simulation” is considered as not a 3D CAD model but something else (like an animation, again not claimed or included in the specification), the combination is made at least with Collins and Nowak teaching “vehicle incident simulation”.
Applicant has performed a piecemeal analysis of only one reference.

(Argument 2) Applicant has argued in Remarks Pg. 10: [image: media_image2.png, quoted argument from Remarks Pg. 10]

(Response 2) The updated rejection shows the mapping of Reschka, where none of the mapped paragraphs [0043], [0067], [0071], [0038], [0006]-[0007], discussing filtering and generating a vehicle incident simulation using an LLM and a corpus of data, state that user intervention is needed in a process which they state is performed automatically (See [0043] "... FIG. 1 illustrates an example implementation or scenario (hereinafter “implementation”), of a computing system 102 that automatically generates seed scenarios from a corpus of data which may encompass accident reconstruction reports, accident reports, incident reports, potential or suspicious activity reports, traffic reports, and/or other reports of actual events, accidents, near-miss events, or disengagements (hereinafter “events”)...." and [0038] "... In order to improve accuracy and effectiveness of testing scenarios for autonomous and semi-autonomous vehicles, a computing system, which may include or be associated with machine learning components such as Large Language Model (LLM), may generate testing scenarios based on reports, logs, and/or other data from external databases. ...").

(Argument 3) Applicant has argued in Remarks Pg. 10: [image: media_image3.png, quoted argument from Remarks Pg. 10] 1

(Response 3) The mapping for the newly added limitation is shown in the exemplary claim 1 rejection below, with Reschka teaching the filtering aspect (Reschka: [0071]). The filtering can also be considered as data cleaning and assigning weights, as taught in Fields (Fields: [0047]), for determining relevance of data (Fields: [0091]). For at least the above reasons, the argument for the new limitation is unpersuasive. No amendment or new arguments are made for the dependent claims, and the examiner respectfully maintains the rejections.
Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-7, 9-16, and 18-20 are newly rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 10, and 19 recite a similar limitation: wherein generating the optimized AI prompt includes filtering the LLM prompt to avoid language that is likely [A] to cause LLM fixation that would otherwise reduce a relevance of an output from the LLM engine or service [B]. The claim does not recite how the filtering is performed or how effectively it is performed (see [A]). It is unclear what is done for the filtering such that the likely outcome is achieved. Further, the scope of “likely to cause fixation” is not clear: it is not shown what would and would not likely cause fixation. Further, the limitation identified in [B] above fails to further limit the claim, as no metes and bounds are presented to determine relevance of the output, and no nexus between the effectiveness of the filtering and the relevance of the prompt/input data is claimed. For claim interpretation purposes, limitation [B] is considered an expected outcome of the filtering and not further limiting. Respective dependent claims 2-7, 9, 11-16, 18, and 20 do not cure this deficiency and are therefore rejected likewise.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-6, 9-15, and 18-20 remain rejected under 35 U.S.C. 103 as being unpatentable over US 9824453 B1 by Collins; Stephen M. et al., in view of US 20080052134 A1 by Nowak; Vikki et al., in view of US 20240303745 A1 by Fields; Brian et al., further in view of US PGPUB No. US 20250045491 A1 by Reschka; Andreas.

Regarding Claims 1, 10, and 19 (Updated 2/27/2026): Collins teaches (Claim 1) A computing system (Collins: Fig.1, Col.6 Line 34-Col.8 Line 20) / (Claim 10) A non-transitory computer readable medium storing instructions that, when executed by one or more processors of a computing system (Collins: Fig.1, showing elements 103, 105, 107, 115), cause the computing system to / (Claim 19) A computer-implemented method of collision reconstruction, the method being performed by one or more processors (Collins: Fig.3): comprising: a network communication interface (Collins: Fig.1, elements 127, 123); one or more processors (Collins: Fig.1, element 103); and a memory storing instructions that, when executed by the one or more processors (Collins: Fig.1, elements 105, 107, 115), cause the computing system to: obtain an information corpus (Collins: Fig.3, Col.8 Lines 33-46) corresponding to a vehicle incident involving a vehicle over one or more sessions with a user (Collins: Fig.3, Col.9 Line 30-Col.10 Line 49); (Collins: Col.56 Line 52-Col.58 Line 22 "... In step 5406, the computing device may generate a 3D image of the vehicle based on the captured images....", where generating and comparing the 3D CAD image from captured images can be understood as the vehicle incident simulation).
Alternate Interpretation of “a vehicle incident simulation of the vehicle incident”: If this is interpreted as an actual vehicle movement simulation (like an animation) through which the vehicle was damaged, the rejection is further made below. Collins does not explicitly teach based on the information corpus, generate a vehicle incident simulation of the vehicle incident under the alternate interpretation. Nowak teaches based on the information corpus, generate a vehicle incident simulation of the vehicle incident (Nowak: Fig.2A-2B, [0030]-[0036] as "... Using an interactive help utility, such as a help wizard, a user may create and revise the animation to create a precise and accurate recreation...."). Fields teaches generate an optimized artificial intelligence (AI) prompt based on the information corpus (Fields: [0055] "... The ML chatbot may include and/or derive functionality from a Large Language Model (LLM). The ML chatbot may be trained on a server [hence optimized], such as server 105, using large training datasets of text which may provide sophisticated capability for natural-language tasks, such as answering questions and/or holding conversations. The ML chatbot may include a general-purpose pretrained LLM which, when provided with a starting set of words (prompt) as an input, may attempt to provide an output (response) of the most likely set of words that follow from the input...."); transmit, over one or more networks, the optimized AI prompt to a large language model (LLM) engine or service executing on a remote computing system (Fields: [0055]-[0057]; [0057] "...
The system and methods to generate and/or train an ML chatbot model (e.g., via the ML module 140 of the server 105) which may be used the an ML chatbot, may consists of three steps: (1) a Supervised Fine-Tuning (SFT) step where a pretrained language model (e.g., an LLM) may be fine-tuned on a relatively small amount of demonstration data curated by human labelers to learn a supervised policy (SFT ML model) which may generate responses/outputs from a selected list of prompts/inputs...."); wherein generating the optimized AI prompt includes filtering the LLM prompt (Fields: [0047] "... The server 105 may store the claim submission information in the database 126. The data may be cleaned, labeled, vectorized, weighted and/or otherwise processed, especially processing suitable for data used in any aspect of ML...."; input data being cleaned and weighted is mapped to filtering of the data provided to the LLM) to avoid language that is likely to cause LLM fixation that would otherwise reduce a relevance of an output from the LLM engine or service (Fields: as setting relevance of input data - [0091] "... In one aspect, the server 405 may analyze and/or process the claim information received by the ML chatbot 440 to interpret, understand and/or extract relevant information within one or more customer responses and/or generate additional requests via the ML chatbot 440. In one aspect, the ML chatbot 440 may use NLP for this, which may include NLU and/or NLG, e.g., via an NLP module such as NLP module 148...."); receive, over the one or more networks, an LLM summary of the vehicle incident from the LLM engine or service (Fields: [0041] "... The NLP module may include NLU processing to understand the intended meaning of utterances, among other things.
The NLP module 148 may include NLG which may provide text summarization, machine translation, and dialog where structured data is transformed into natural conversational language (i.e., unstructured) for output to the user...."). Fields teaches use of an AI/ML chatbot for generating images (Fields: [0072]-[0074]; [0073] "... Other types of generative AI/ML may use the GAN, the transformer model, and/or other types of models and/or algorithms to generate: (i) realistic images from sketches, which may include the sketch and object category as input to output a synthesized image;... With the appropriate algorithms and/or training, generative AI/ML may produce various types of multimedia output and/or content which may be incorporated into a customized presentation, e.g., via an AI and/or ML chatbot (or voice bot). ..."; [0074] "... The trained ML chatbot may generate output such as images, video, slides (e.g., a PowerPoint slide), virtual reality, augmented reality, mixed reality, multimedia, blockchain entries, metaverse content, or any other suitable components which may be used in the customized presentation...."). The Collins-and-Fields and/or Collins-Nowak-Fields combinations do not explicitly teach based on the information corpus and the LLM summary, generate a vehicle incident simulation of the vehicle incident (emphasis on the additional limitation not being taught: corpus + LLM summary -> simulation). Reschka teaches wherein generating the optimized AI prompt includes filtering the LLM prompt (Reschka: [0071] "... For example, upon determining that the raw data 1421 was incorrect or inaccurate, the logic 113 may reduce a priority or a degree of reliability of the raw data 1421. Upon determining that the raw data 221 was correct or accurate, the logic 113 may increase a priority or a degree of reliability of the raw data 221.
Through such a process, the logic 113 may more accurately assess reliability of data sources in order to better filter and extract relevant and reliable information while filtering out information of low reliability....") to avoid language that is likely to cause LLM fixation that would otherwise reduce a relevance of an output from the LLM engine or service (Reschka: [0071] "... For example, upon determining that the raw data 1421 was incorrect or inaccurate, the logic 113 may reduce a priority or a degree of reliability of the raw data 1421 [reliability as relevance of the data]. Upon determining that the raw data 221 was correct or accurate, the logic 113 may increase a priority or a degree of reliability of the raw data 221. Through such a process, the logic 113 may more accurately assess reliability of data sources in order to better filter and extract relevant and reliable information while filtering out information of low reliability...."); Reschka teaches based on the information corpus and the LLM summary, generate, without further user input (Reschka: [0071] shows no user input to filter out data, and further, in [0067] & [0043] as mapped below, there is no mention of user intervention, as indicated by use of the word “automatically”), a vehicle incident simulation of the vehicle incident (Reschka: Fig.9 element 921 & [0067] "... In some implementations, captured or obtained media from both an interior and an exterior of a vehicle may also enhance a generated scenario. For example, in FIG. 9, an image or video 921 (hereinafter “image”) may show a representation of both an interior and an exterior of one or more vehicles during an accident. The logic 113 may augment the raw data 221 with the image 921 to generate a scenario 931, which depicts information of participants and their characteristics or behaviors. ...", Fig. 1 & [0043] "... FIG.
1 illustrates an example implementation or scenario (hereinafter “implementation”), of a computing system 102 that automatically generates seed scenarios from a corpus of data which may encompass accident reconstruction reports, accident reports, incident reports, potential or suspicious activity reports, traffic reports, and/or other reports of actual events, accidents, near-miss events, or disengagements (hereinafter “events”)...." - the scenarios also cover vehicle incidents like accident reconstruction. The scenarios are generated based on an LLM report as shown in [0038] "... In order to improve accuracy and effectiveness of testing scenarios for autonomous and semi-autonomous vehicles, a computing system, which may include or be associated with machine learning components such as Large Language Model (LLM), may generate testing scenarios based on reports, logs, and/or other data from external databases. ..." [0006]-[0007]). It would have been obvious to one (e.g. a designer) of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Nowak (2008) to Collins (2017) to generate a vehicle simulation of the vehicle incident, complementing the teachings of Collins by addressing additional features taught in Collins, like gathering speed data (Nowak: [0034]-[0036]) to determine bodily injuries (Collins: See Fig.54A, elements 5422-5428). Further motivation to combine would have been that both Collins and Nowak are analogous art to the instant claim in the field of automotive damage assessment, which enables a remote user to visually illustrate damage to an item through a rich-media application (Nowak: Abstract; Collins: Abstract). It would have been obvious to one (e.g.
a designer) of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Fields to Nowak (2008) to generate the vehicle simulation animation in a more realistic manner from user input data (Nowak: [0034]-[0036]; Fields: [0055]-[0057], [0072]-[0074]). Further motivation to combine would have been that Fields, Nowak, and Collins are analogous art to the instant claim in the field of automotive damage assessment collection, which enables a remote user to visually illustrate damage to an item through a rich-media application (Fields: Fig.4-5; Nowak: Abstract; Collins: Abstract). It would have been obvious to one (e.g. a designer) of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Reschka (2023 filing) to Collins (2017) and Fields to generate a vehicle simulation/implementation/scenario such as an accident reconstruction (Reschka: [0043], [0038]) and to also use the LLM-based processing to gather raw data, with motivation "... to improve accuracy and effectiveness of testing scenarios for autonomous and semi-autonomous vehicles, ..." (Reschka: [0038]). Additional motivation to combine would be that Collins, Fields, and Reschka are analogous art to the instant claim in the field of accident reconstruction and use of an LLM (Reschka: Abstract, [0038], [0043]; Fields: [0041], [0055]-[0057], Abstract; Collins: Col.14 Lines 48-53, use of machine learning).

Regarding Claims 2, 11, and 20: Collins teaches The computing system of claim 1, wherein the information corpus includes damage inputs from the user on a damage input interface comprising a three-dimension representation of the vehicle of the user, the damage inputs identifying damage to the vehicle (Collins: Fig.4, 16-17, 33-35 showing damage input in captured images, and 3D-based processing in Col.56 Line 52-Col.58 Line 22). Motivation to combine is incorporated from the parent claim.
Regarding Claims 3 & 12: Collins teaches The computing system of claim 2, wherein the information corpus further includes damage inputs from one or more additional users on the damage input interface that identifies damage to the vehicle (Collins: Col.10 Lines 41-49, data captured by user/customer; Col.28 Lines 6-10, damage data captured by agent). Motivation to combine is incorporated from the parent claim.

Regarding Claims 4 & 13: Nowak teaches The computing system of claim 1, wherein the information corpus includes collision inputs from the user on a collision input interface that enables the user to provide one or more vehicle trajectories and an estimated travel speed of each vehicle corresponding to the one or more vehicle trajectories (Nowak: [0031]-[0033], Fig.2A-2B). Motivation to combine is incorporated from the parent claim.

Regarding Claims 5 & 14: Nowak teaches The computing system of claim 4, wherein the information corpus further includes collision inputs from one or more additional users on the collision input interface that indicates respective vehicle trajectories and estimate travels speed of one or more vehicle corresponding to the respective vehicle trajectories (Nowak: [0031]-[0033], Fig.2A-2B from the user; additional user inputs from additional vehicles as in [0036] "... Some incident animator tools accept information from measuring instruments input devices, or vehicle controllers or vehicle computers internal or external to one or more vehicles automatically...."). Motivation to combine is incorporated from the parent claim.
Regarding Claims 6 & 15: Collins teaches The computing system of claim 1, wherein the information corpus includes images of damage to the vehicle of the user captured via a guided content capture process (Collins: at least Fig.4-7 show the guided damage capture process and Fig.9 shows the flow; more details may be available in additional Figures 15-20 and the associated disclosure). Motivation to combine is incorporated from the parent claim.

Regarding Claims 9 & 18: Fields teaches wherein the executed instructions further cause the computing system to: generate a collision reconstruction interface presenting at least the LLM summary and vehicle incident simulation (Fields: [0072]-[0074]; [0073] "... Other types of generative AI/ML may use the GAN, the transformer model, and/or other types of models and/or algorithms to generate: (i) realistic images from sketches, which may include the sketch and object category as input to output a synthesized image;... With the appropriate algorithms and/or training, generative AI/ML may produce various types of multimedia output and/or content which may be incorporated into a customized presentation, e.g., via an AI and/or ML chatbot (or voice bot). ..."; [0074] "... The trained ML chatbot may generate output such as images, video, slides (e.g., a PowerPoint slide), virtual reality, augmented reality, mixed reality, multimedia, blockchain entries, metaverse content, or any other suitable components which may be used in the customized presentation...."). Motivation to combine is incorporated from the parent claim.

Claims 7 & 16 remain rejected under 35 U.S.C. 103 as being unpatentable over US 9824453 B1 by Collins; Stephen M. et al., in view of US 20080052134 A1 by Nowak; Vikki et al., in view of Fields, in view of Reschka, further in view of US 20230116639 A1 by Patt; Theo et al.

Regarding Claims 7 & 16: Teachings of Collins, Nowak, Fields, and Reschka are shown in the parent claim 1.
Nowak teaches collision path animation creation, but does not teach wherein the vehicle incident simulation is overlaid on satellite image data of a location of the vehicle incident. Patt teaches The computing system of claim 1, wherein the vehicle incident simulation is overlaid on satellite image data of a location of the vehicle incident (Patt: Figs.9R through 9X "... input can be in the form of a pin drop, with respect to highly specific geographic imagery such as provided through a satellite view of the incident location...."; [0133], [0157]). Motivation to combine Collins and Nowak is incorporated from the parent claim. It would have been obvious to one (e.g. a designer) of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Patt to Nowak (and Collins) to more accurately pinpoint the accident location based on a satellite view (Patt: [0057], [0068], accuracy in data gathering and simulation of the incident; Figs.9R through 9X; [0133], [0157]). Further motivation to combine would have been that Patt, Nowak, and Collins are analogous art to the instant claim in the field of accurate vehicle incident data gathering (Patt: Figs.9R through 9X, [0133], [0157]; Nowak: Fig.2A-2B, [0030]-[0036]; Collins: Abstract).

Communication

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AKASH SAXENA whose telephone number is (571) 272-8351. The examiner can normally be reached Mon-Fri, 7AM-3:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, RYAN PITARO, can be reached on (571) 272-4071.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AKASH SAXENA/
Primary Examiner, Art Unit 2188
Friday, February 27, 2026

1 Limitation derives priority from parent application 18/809103 ¶[0176].

Prosecution Timeline

Apr 25, 2025
Application Filed
May 29, 2025
Non-Final Rejection — §103, §112
Nov 03, 2025
Response Filed
Nov 12, 2025
Final Rejection — §103, §112
Feb 17, 2026
Request for Continued Examination
Feb 24, 2026
Response after Non-Final Action
Feb 27, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585847
SIMULATIONS FOR EVALUATING DRIVING BEHAVIORS OF AUTONOMOUS VEHICLES
2y 5m to grant Granted Mar 24, 2026
Patent 12579344
HOSTING PRE-CERTIFIED SYSTEMS, REMOTE ACTIVATION OF CUSTOMER OPTIONS, AND OPTIMIZATION OF FLIGHT ALGORITHMS IN AN EMULATED ENVIRONMENT WITH REAL WORLD OPERATIONAL CONDITIONS AND DATA
2y 5m to grant Granted Mar 17, 2026
Patent 12572711
GENERATIVE DESIGN TECHNIQUES FOR MULTI-FAMILY HOUSING PROJECTS
2y 5m to grant Granted Mar 10, 2026
Patent 12572773
AGENT INSTANTIATION AND CALIBRATION FOR MULTI-AGENT SIMULATOR PLATFORM
2y 5m to grant Granted Mar 10, 2026
Patent 12565067
METHOD FOR SIMULATING THE TEMPORAL EVOLUTION OF A PHYSICAL SYSTEM IN REAL TIME
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 49%
With Interview: 81% (+32.0%)
Median Time to Grant: 4y 10m
PTA Risk: High
Based on 520 resolved cases by this examiner. Grant probability derived from career allow rate.
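The headline projections are simple derivations from the examiner's career counts reported above. A minimal sketch (Python; it assumes the "+32.0% interview lift" is additive in percentage points, which is consistent with the 49% and 81% figures):

```python
granted, resolved = 256, 520   # career counts from the Examiner Intelligence section

allow_rate = granted / resolved        # 0.4923... -> reported as 49%
interview_lift = 0.32                  # +32.0 percentage points with an interview
with_interview = allow_rate + interview_lift

print(f"Grant probability: {allow_rate:.0%}")     # 49%
print(f"With interview:    {with_interview:.0%}")  # 81%
```

Note the derivation treats the career allow rate as the grant probability directly, as the caption above states; no case-specific adjustment is applied.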
