Prosecution Insights
Last updated: April 19, 2026
Application No. 19/178,636

METHODS AND APPARATUS FOR ESTIMATING DEPTH INFORMATION FROM THERMAL IMAGES

Non-Final OA (§112)
Filed: Apr 14, 2025
Examiner: SUMMERS, GEOFFREY E
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Rivet Industries Inc.
OA Round: 3 (Non-Final)
Grant Probability: 72% (Favorable)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Grants 72% — above average
Career Allow Rate: 72% (249 granted / 348 resolved; +9.6% vs TC avg)
Interview Lift: +35.4% (strong; based on resolved cases with vs. without an interview)
Typical Timeline: 2y 5m avg prosecution; 27 currently pending
Career History: 375 total applications across all art units
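As a quick arithmetic check, the 72% career allow rate shown above follows directly from the reported grant counts:

```python
# Career allow rate from the counts reported above:
# 249 granted out of 348 resolved cases.
granted, resolved = 249, 348
allow_rate_pct = round(100 * granted / resolved)
print(allow_rate_pct)  # 72, matching the displayed Career Allow Rate
```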

Statute-Specific Performance

§101: 9.6% (-30.4% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 28.6% (-11.4% vs TC avg)
Deltas are vs. the Tech Center average estimate. Based on career data from 348 resolved cases.

Office Action

§112
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 19, 2026, has been entered.

Response to Amendment

Claims 1 and 3-20 were previously pending. Applicant’s amendment filed February 19, 2026, has been entered in full. Claims 14, 17, and 19 are amended. New claim 21 is added. No claims are cancelled. Accordingly, claims 1 and 3-21 are now pending.

Response to Arguments

Applicant argues that the amended claims should not be interpreted under 35 U.S.C. 112(f) because they now recite sufficient structure (Remarks filed February 19, 2026, hereinafter Remarks: Page 8). Examiner agrees. The previous interpretation under 35 U.S.C. 112(f) is withdrawn.

Applicant traverses the previous rejections under 35 U.S.C. 112(a), arguing that the claims comply with the written description requirement (Remarks: Pages 8-12). Examiner respectfully disagrees for the reasons presented in the Final Rejection, the reasons presented in the rejections below, and the reasons presented in the further responses below.

Applicant presents the following argument at page 9:

[media_image1.png — Applicant's argument reproduced as an image]

Applicant has not explained how a statement that the ML models can be “transformer-based foundation models” would demonstrate to one of ordinary skill in the art how the specific configuration of encoders, decoders, and latent space recited in the claimed invention would be trained and provide the specific claimed outputs.
Applicant is invited to elaborate on what “specific type of machine learning model” and “specific training processes and specific inference processes” are indicated by this term and how use of this term would demonstrate to one of ordinary skill in the art that the inventors were in possession of the specific configuration of encoders, decoders, and latent space recited in the claimed invention.

Applicant presents the following argument at page 9:

[media_image2.png — Applicant's argument reproduced as an image]

Examiner respectfully disagrees. For example, supervised learning can also be performed using a controlled data set (e.g., a set of training data) that, once encoded and decoded (i.e., once passed through a neural network with an encoder-decoder architecture), corresponds to an expected data set of outputs (e.g., a set of ground truth data paired with corresponding training data), so the statement in the specification does not “inherently disclos[e] that the plurality of inputs is associated with unsupervised machine learning” (emphasis modified).

Applicant presents the following argument at page 10:

[media_image3.png — Applicant's argument reproduced as an image]

The nexus between this statement and the previous rejection is respectfully unclear. The claim does require a shared latent space, and the specification similarly states that the encoders place input data into the latent space. However, the rejection is based on a lack of explanation as to how the encoders generate encoded data within that latent space, and the cited portion of the specification does not provide such an explanation.
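The dispute here centers on how the encoders “generate encoded data within that latent space.” Purely as a hypothetical sketch (none of the names, weights, or dimensions below come from the application or the Remarks), the kind of concrete detail at issue — modality-specific encoders projecting differently sized inputs into one shared latent space of fixed dimension — can be illustrated in a few lines:

```python
# Hypothetical illustration only: two linear encoders that project inputs
# of different sizes (a 4-value "image" and a 2-value "thermal" reading)
# into the same 3-dimensional latent space. The weights are arbitrary toy
# values; nothing here is taken from the application at issue.

def linear_encode(x, weights):
    """Project input vector x into the latent space: one dot product per latent dimension."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

W_image = [[0.1, 0.2, 0.0, 0.3],   # 3x4: maps 4-dim image features to 3-dim latents
           [0.0, 0.5, 0.1, 0.0],
           [0.2, 0.0, 0.4, 0.1]]
W_thermal = [[0.3, 0.1],           # 3x2: maps 2-dim thermal features to 3-dim latents
             [0.2, 0.4],
             [0.0, 0.5]]

z_image = linear_encode([1.0, 2.0, 3.0, 4.0], W_image)
z_thermal = linear_encode([2.0, 1.0], W_thermal)

# Both outputs land in the same 3-dim space, so one shared decoder could
# consume either; this is the sort of mapping the rejection says the
# specification never spells out.
assert len(z_image) == len(z_thermal) == 3
```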
Applicant presents the following argument at page 10:

[media_image4.png — Applicant's argument reproduced as an image]

Applicant is apparently referring to the following box on a “downstream” flowchart in the Appendix:

[media_image5.png — flowchart box from the Appendix]

First, this “ℓ1-loss” is apparently mentioned in connection with “fine-tuning”, which is a process of further training an already-trained machine learning model. Stating that fine-tuning is performed using an ℓ1-loss does not explain how the machine learning model (including encoders, decoders, etc.) was trained in the first place. Second, as Applicant has apparently acknowledged, the Appendix to the provisional application does not appear to describe applying the ℓ1-loss (or any other loss) for training a text encoder or decoder, which are required by the claimed invention. Third, the written description rejection is not based only on failure to disclose a loss function. Instead, it is based on a comprehensive lack of detail regarding the training and inference of the claimed encoders, decoders, etc. A single reference to an ℓ1-loss in the Appendix of the provisional application does not resolve this issue.

Applicant presents the following argument at page 11:

[media_image6.png — Applicant's argument reproduced as an image]

This is respectfully non-persuasive for substantially the same reasons as discussed above with respect to the “first example” on page 9 of the Remarks.

Applicant presents the following argument at page 11:

[media_image7.png — Applicant's argument reproduced as an image]

Examiner’s discussion of the specific term “depth extractor” at page 3 of the Final Rejection concerned whether that specific term had a specific, structural meaning in the art of image analysis for purposes of interpretation under 35 U.S.C. 112(f), not whether depth extraction in general was well-known. Furthermore, the claims at issue do not recite “depth extraction” in general.
For example, claim 17 recites the depth extractor controlling a processor to “receive the output visible image from the image decoder and to output first depth information associated with the input thermal image”. Claim 14 recites a “first machine learning model configured to: receive image decoder output from the image decoder, the image decoder output including the output visible image associated with the input thermal image; output first depth information associated with the image decoder output; receive thermal encoder output from the thermal encoder, the thermal encoder output including at least one encoded thermal image; and output second depth information associated with the thermal encoder output.” The depth extraction functions recited in the claims clearly go beyond depth extraction in general, so the number of times the term “depth extraction” appears in the prior art is clearly not sufficient to prove whether the specifically claimed depth extraction functions were well-known.

Applicant presents the following argument at pages 11-12:

[media_image8.png — Applicant's argument reproduced as an image]

[media_image9.png — Applicant's argument reproduced as an image]

None of these statements from the specification explain how depth information is output by the depth extractor, which was the basis of the written description rejection. That is, even if the specification describes that the depth extractor operates “by accessing an encoded thermal image from the shared latent space”, how are values in that shared latent space transformed into output depth information? The specification does not answer this question.

Applicant presents the following argument at page 12:

[media_image10.png — Applicant's argument reproduced as an image]

Examiner acknowledges that Figs. 5B and 5C do illustrate a 2D map of “second depth information”.
However, this still does not explain how the depth extractor generates this output 2D map, nor does it answer various other questions raised in the written description rejection, such as whether the depth information is a depth map or a disparity map, how the depth information is scaled, etc. The written description rejection is not based only on failure to disclose details of the depth information’s format. Instead, it is based on a comprehensive lack of detail regarding the training and inference of the claimed depth extractor and first machine learning model. Disclosure of one aspect of the output depth information’s format does not resolve this issue.

Applicant argues that the previous rejections under 35 U.S.C. 112(b) are overcome because the claims are no longer interpreted under 35 U.S.C. 112(f) (Remarks: Page 12). Examiner agrees. The previous rejections under 35 U.S.C. 112(b) are withdrawn.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on February 19, 2026, is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1 and 3-21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 1 recites an image encoder, a text encoder, and a thermal encoder, each of which performs various functions. It is apparent from at least Fig. 1B and the associated description in the specification that the scope of each of these claim elements includes computer-implemented embodiments. MPEP 2161.01, Subsection I, includes the following instructions for determining whether there is adequate written description for a computer-implemented functional claim limitation:

“[O]riginal claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient).
In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. See MPEP §§ 2163.02 and 2181, subsection IV.”

“When examining computer-implemented functional claims, examiners should determine whether the specification discloses the computer and the algorithm (e.g., the necessary steps and/or flowcharts) that perform the claimed function in sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor possessed the claimed subject matter at the time of filing. An algorithm is defined, for example, as "a finite sequence of steps for solving a logical or mathematical problem or performing a task." Microsoft Computer Dictionary (5th ed., 2002). Applicant may "express that algorithm in any understandable terms including as a mathematical formula, in prose, or as a flow chart, or in any other manner that provides sufficient structure." Finisar Corp. v. DirecTV Grp., Inc., 523 F.3d 1323, 1340, 86 USPQ2d 1609, 1623 (Fed. Cir. 2008) (internal citation omitted). It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See, e.g., Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 681-683, 114 USPQ2d 1349, 1356, 1357 (Fed. Cir. 2015) (reversing and remanding the district court’s grant of summary judgment of invalidity for lack of adequate written description where there were genuine issues of material fact regarding "whether the specification show[ed] possession by the inventor of how accessing disparate databases is achieved").
If the specification does not provide a disclosure of the computer and algorithm in sufficient detail to demonstrate to one of ordinary skill in the art that the inventor possessed the invention, a rejection under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for lack of written description must be made.”

To provide a concise explanation, Examiner takes the “image encoder” as a representative example. Claim 1 requires that the image encoder is “configured to be trained by a plurality of visible images captured by a visible light camera” and “configured to output image encoder output” where the image encoder output, along with other encoder outputs, “collectively defin[es] a shared latent space.” Accordingly, in order to comply with the written description requirement of 35 U.S.C. 112(a), the specification should disclose an algorithm for (a) training an image encoder by a plurality of visible images captured by a visible light camera and (b) outputting image encoder output that collectively defines a shared latent space, with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed.

Regarding function (a), Examiner notes the following pertinent portions of the (as-filed) specification. [0026] restates that image encoder 132 is trained based on input files from a visible light camera, but does not describe any algorithm for performing the training. Fig. 6 and [0027] et seq. purport to disclose “an example method 600 of training a plurality of encoders”. [0028] describes that, at block 601, “input visible images 141 can be received at the image encoder 132”. [0029] describes that, at block 604, “the image encoder 132” is “configured to be trained based on the input visible images 141” and that “the image encoder 132 can be configured to output image encoder output” that “is one or more encoded visible images generated based on the input visible images 141”. Par.
[0029] then states “As such, each of the image encoder 132, the text encoder 136, and the thermal encoder 134 can be trained to collectively define the shared latent space.”

While these portions of the specification mention training and declare that the image encoder is trained, there is no description of any algorithm for training the image encoder. Instead, these sections of the specification merely identify general inputs and outputs of the image encoder and declare that it is trained, without ever explaining how the image encoder is trained. How is the image encoder “trained based on the input visible images 141”? How is the image encoder “trained to collectively define the shared latent space”? What specific sequence of steps is followed in order to train the image encoder? Is a loss function used? If so, what is the loss function and how is it used to update the encoder? Are the “input visible images 141” labeled or otherwise associated with ‘ground truth’ information to enable supervised learning? Or, is unsupervised learning used? What technique is used to ensure that the encoder’s output collectively defines a shared latent space with the text encoder’s output and the thermal encoder’s output? All of these questions are left unanswered by the specification. There is no description of any algorithm – i.e., a finite sequence of steps for solving a logical or mathematical problem or performing a task – for training the claimed image encoder.

In view of this evidence, the claimed image encoder and its training function are not described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed, and the claim therefore lacks adequate written description under 35 U.S.C. 112(a).

Regarding function (b), Examiner notes the following pertinent portions of the (as-filed) specification. Fig. 7 and [0034] et seq. are purported to describe an inference phase for image encoder 132.
[0039] describes that, “for example, the image encoder 132 can be configured to receive an input visible image and output an encoded visible image to the shared latent space 139.” As discussed above, [0029] also describes that, as part of a training process, “the image encoder 132 can be configured to output image encoder output” that “is one or more encoded visible images generated based on the input visible images 141”.

While these portions of the specification mention encoding and declare that the image encoder produces an output, there is no description of any algorithm for how the image encoder outputs image encoder output that collectively defines a shared latent space. Instead, these sections of the specification merely identify generic inputs of the image encoder and declare that it produces outputs, without ever explaining how the image encoder produces those outputs. How does the image encoder “output an encoded visible image to the shared latent space”? What specific sequence of steps is followed in order to transform an input visible image into an output encoded visible image? What is the structure of the encoder? How does it process the inputs? Is the encoder a neural network? If so, what types of layers does it use? What arrangement of layers does it use? What are the dimensions of the data it processes? What technique is used to ensure that the encoder’s output collectively defines a shared latent space with the text encoder’s output and the thermal encoder’s output? All of these questions are left unanswered by the specification. There is no description of any algorithm – i.e., a finite sequence of steps for solving a logical or mathematical problem or performing a task – for producing output from the claimed image encoder.
In view of this evidence, the claimed image encoder and its output function are not described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed, and the claim therefore lacks adequate written description under 35 U.S.C. 112(a).

As discussed above, in addition to an “image encoder”, claim 1 further recites a “text encoder” and a “thermal encoder”. These limitations also lack an adequate description of algorithms for performing their training or inference, and therefore also lack adequate written description under 35 U.S.C. 112(a), for substantially the same reasons as presented above with respect to the “image encoder” of claim 1.

Claim 7 further recites an image encoder, a text encoder, and a thermal encoder that are substantially similar to those recited in claim 1. Therefore, claim 7 also lacks adequate written description under 35 U.S.C. 112(a) for substantially the same reasons as claim 1. Claim 17 further recites a thermal encoder and an image encoder that are substantially similar to those recited in claim 1. Therefore, claim 17 also lacks adequate written description under 35 U.S.C. 112(a) for substantially the same reasons as claim 1. Claims 3-6, 8-16, and 18-21 include the limitations of one of claims 1, 7, or 17, and therefore also lack adequate written description under 35 U.S.C. 112(a) for substantially the same reasons as claims 1, 7, and 17.

Claim 1 further recites an image decoder, a text decoder, and a thermal decoder. It is apparent from at least Fig. 1B and the associated description in the specification that the scopes of the claimed decoders include computer-implemented embodiments. To provide a concise explanation, Examiner takes the “image decoder” as a representative example.
Claim 1 requires that the image decoder is “configured to be trained using the shared latent space, the image decoder configured to output a visible image.” Accordingly, in order to comply with the written description requirement of 35 U.S.C. 112(a), the specification should disclose an algorithm for (a) training an image decoder using the shared latent space and (b) outputting a visible image, with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed.

Regarding function (a), Examiner notes the following pertinent portions of the (as-filed) specification. [0031] states that, at block 608 of Fig. 6, “at least one decoder is trained based on the plurality of encoder outputs.” For example, “[a] difference between a first one of the input visible images 141 and the output visible image 151 may indicate that the image decoder 133 needs to be trained or retrained to output a visible image that better matches the first one of the visible images 141. Thus, … the image decoder 133 … can be trained or retrained based on the plurality of encoder outputs in the shared latent space 139.”

While these portions of the specification mention training and declare that the image decoder is trained, there is no description of any algorithm for training the image decoder. Instead, these sections of the specification merely provide generic inputs and outputs of the image decoder and declare that it is trained, without ever explaining how the image decoder is trained. How is the image decoder “trained based on the plurality of encoder outputs”? What specific sequence of steps is followed in order to train the image decoder? Is a loss function used? If so, what is the loss function and how is it used to update the decoder? Is there any ‘ground truth’ information to enable supervised learning? Or, is unsupervised learning used?
What technique is used to ensure that the decoder’s output is a visible image, and not a thermal image, text, or some other type of output? All of these questions are left unanswered by the specification. There is no description of any algorithm – i.e., a finite sequence of steps for solving a logical or mathematical problem or performing a task – for training the claimed image decoder.

In view of this evidence, the claimed image decoder and its training function are not described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed, and the claim therefore lacks adequate written description under 35 U.S.C. 112(a).

Regarding function (b), Examiner notes the following pertinent portions of the (as-filed) specification. [0030] states that “image decoder 133 can be configured to output an output visible image 151 based on the plurality of encoder outputs from the shared latent space 139.” [0039] states that “the image decoder 133 can be configured to generate an output visible image associated with the input text phrase 245 by accessing the encoded text phrase in the shared latent space 139” or “to generate the output visible image associated with the input thermal image by accessing the encoded thermal image from the shared latent space 139.” [0041] states that “image decoder 133 can be configured to receive the thermal encoder output 304 and output an output visible image 351.” [0042] states that “the image decoder 133 can be configured to output an output visible image based on at least one of the thermal encoder output 304 …, text encoder output, or image encoder output.”

While these portions of the specification mention decoding and declare that the image decoder outputs a visible image, there is no description of any algorithm for how the image decoder outputs a visible image.
Instead, these sections of the specification merely provide generic inputs of the image decoder and declare that it produces output visible images, without ever explaining how the image decoder produces those outputs. How does the image decoder output a visible image? What specific sequence of steps is followed in order to output a visible image? What is the structure of the decoder? How does it process inputs from the shared latent space? Is the decoder a neural network? If so, what types of layers does it use? What arrangement of layers does it use? What are the dimensions of the data it processes? What technique is used to ensure that the decoder’s output is a visible image, rather than a thermal image, text, or some other mode of output? All of these questions are left unanswered by the specification. There is no description of any algorithm – i.e., a finite sequence of steps for solving a logical or mathematical problem or performing a task – for producing a visible image from the claimed image decoder.

In view of this evidence, the claimed image decoder and its output function are not described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed, and the claim therefore lacks adequate written description under 35 U.S.C. 112(a).

As discussed above, in addition to an “image decoder”, claim 1 further recites a “text decoder” and a “thermal decoder”. These limitations also lack an adequate description of algorithms for performing their training or inference, and therefore also lack adequate written description under 35 U.S.C. 112(a), for substantially the same reasons as presented above with respect to the “image decoder” of claim 1. Claim 7 further recites an image decoder, a text decoder, and a thermal decoder that are substantially similar to those recited in claim 1. Therefore, claim 7 also lacks adequate written description under 35 U.S.C.
112(a) for substantially the same reasons as claim 1. Claim 17 further recites a thermal decoder and an image decoder that are substantially similar to those recited in claim 1. Therefore, claim 17 also lacks adequate written description under 35 U.S.C. 112(a) for substantially the same reasons as claim 1. Claims 3-6, 8-16, and 18-21 include the limitations of one of claims 1, 7, or 17, and therefore also lack adequate written description under 35 U.S.C. 112(a) for substantially the same reasons as claims 1, 7, and 17.

Claim 17 further recites a depth extractor. Claim 17 has been amended to recite that the depth extractor is executed by a processor, so it is computer-implemented. Claim 17 requires that the depth extractor causes a processor to “receive the output visible image from the image decoder and to output first depth information associated with the input thermal image”. Accordingly, in order to comply with the written description requirement of 35 U.S.C. 112(a), the specification should disclose an algorithm for outputting depth information given an output visible image.

Examiner notes the following pertinent portions of the (as-filed) specification. [0026] states that “depth extractor 138 can be configured to output depth information associated with input from at least one of the image encoder 132, the image decoder 133, the thermal encoder 134, the thermal decoder 135, the text encoder 136, and/or the text decoder 137”. [0047] describes, “[a]s shown in FIG. 5A, the depth extractor generates first depth information 404 associated with the output visible image 451.”

While these portions of the specification mention generating depth information and declare that the depth extractor outputs depth information, there is no description of any algorithm for how the depth extractor outputs depth information.
Instead, these sections of the specification merely identify general inputs of the depth extractor and declare that it produces depth information, without ever explaining how the depth extractor produces those outputs. How does the depth extractor output depth information? What specific sequence of steps is followed in order to transform an output visible image into depth information? What is the structure of the depth extractor? How does it process a visible image? Is the depth extractor a neural network? If so, what types of layers does it use? What arrangement of layers does it use? What are the dimensions of the data it processes? What format is the output depth information? For example, is it a single value, a depth map, or a disparity map? How is the depth information scaled? For example, is it metric scaled or relatively scaled? All of these questions are left unanswered by the specification. There is no description of any algorithm – i.e., a finite sequence of steps for solving a logical or mathematical problem or performing a task – for producing depth information from the claimed depth extractor.

In view of this evidence, the claimed depth extractor and its depth information output function are not described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed, and the claim therefore lacks adequate written description under 35 U.S.C. 112(a).

Claim 14 has been amended to replace the term “depth extractor” with “first machine learning model”. Nevertheless, the “first machine learning model” of claim 14 performs functions substantially similar to functions performed by the depth extractor in claim 17. Therefore, claim 14 also lacks adequate written description under 35 U.S.C. 112(a) for substantially the same reasons described above with respect to the depth extractor of claim 17.
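To make the format questions in the rejection concrete (single value vs. depth map vs. disparity map; metric vs. relative scaling), here is a hypothetical depth extractor, invented purely for illustration and not drawn from the application, that emits a relatively scaled per-pixel depth map:

```python
# Hypothetical illustration only: a toy "depth extractor" that maps a
# grayscale image (2D list of intensities) to a *relatively scaled* depth
# map in [0, 1], answering the kind of format questions the rejection
# raises. The brighter-is-nearer heuristic is an arbitrary assumption.

def extract_depth(image):
    lo = min(v for row in image for v in row)
    hi = max(v for row in image for v in row)
    span = (hi - lo) or 1          # avoid dividing by zero on flat images
    return [[(v - lo) / span for v in row] for row in image]

image = [[10, 50],
         [90, 130]]
depth_map = extract_depth(image)   # same 2x2 shape as the input image
assert depth_map[0][0] == 0.0 and depth_map[1][1] == 1.0
```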
Claims 15-16 and 18-21 include the limitations of one of claims 14 or 17, and therefore also lack adequate written description under 35 U.S.C. 112(a) for substantially the same reasons as claims 14 and 17.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEOFFREY E SUMMERS whose telephone number is (571) 272-9915. The examiner can normally be reached Monday-Friday, 7:00 AM to 3:30 PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GEOFFREY E SUMMERS/
Examiner, Art Unit 2669
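The rejections above repeatedly ask which loss function is used and how it updates the trained components. As a hypothetical sketch only (the examiner's position is that the application discloses none of this, and the parameter names below are invented), repeated subgradient steps on an ℓ1 reconstruction loss, the loss the provisional's Appendix reportedly mentions for fine-tuning, look roughly like:

```python
# Hypothetical illustration only: repeated subgradient steps on an l1
# (absolute-error) reconstruction loss for a one-parameter linear
# "decoder" w that maps a latent value z to an output value. This is the
# kind of update rule the rejection says the specification never gives.

def l1_loss(pred, target):
    return abs(pred - target)

def train_step(w, z, target, lr=0.1):
    pred = w * z                                  # decode the latent
    grad = (1.0 if pred > target else -1.0) * z   # subgradient of |w*z - target| w.r.t. w
    return w - lr * grad                          # gradient-descent update

w, z, target = 0.5, 2.0, 3.0      # initial weight, latent input, ground-truth value
for _ in range(20):
    w = train_step(w, z, target)

# The reconstruction error shrinks from 2.0 toward zero (it oscillates
# near the optimum because the step size is fixed).
assert l1_loss(w * z, target) < 0.5
```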

Prosecution Timeline

Apr 14, 2025: Application Filed
Jul 14, 2025: Non-Final Rejection (§112)
Sep 19, 2025: Examiner Interview Summary
Sep 19, 2025: Applicant Interview (Telephonic)
Oct 16, 2025: Response Filed
Oct 27, 2025: Final Rejection (§112)
Feb 19, 2026: Request for Continued Examination
Feb 23, 2026: Response after Non-Final Action
Feb 25, 2026: Non-Final Rejection (§112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586379: SYSTEM FOR DETECTING OCCURRENCE PERIOD OF CYCLICAL EVENT. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12561755: System and Method for Image Super-Resolution. Granted Feb 24, 2026 (2y 5m to grant).
Patent 12555205: METHOD AND APPARATUS WITH IMAGE DEBLURRING. Granted Feb 17, 2026 (2y 5m to grant).
Patent 12541838: INSPECTION APPARATUS AND REFERENCE IMAGE GENERATION METHOD. Granted Feb 03, 2026 (2y 5m to grant).
Patent 12536682: METHOD AND SYSTEM FOR GENERATING A DEPTH MAP. Granted Jan 27, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 99% (+35.4%)
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 348 resolved cases by this examiner. Grant probability derived from career allow rate.
