Prosecution Insights
Last updated: April 19, 2026
Application No. 18/436,509

Machine Learning Models for Image Interpolation

Status: Non-Final OA (§103)
Filed: Feb 08, 2024
Examiner: OMETZ, RACHEL ANNE
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 1 (Non-Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% (18 granted / 26 resolved; +7.2% vs Tech Center average)
Interview Lift: +30.1% in resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 24 applications currently pending
Career History: 50 total applications across all art units

Statute-Specific Performance

§101: 3.1% (-36.9% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§103: 62.1% (+22.1% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 26 resolved cases.
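The headline figures above follow from simple arithmetic on the examiner's record. A minimal sketch; the additive interview-lift model and the 99% cap are assumptions, since the tool's actual model is not disclosed:

```python
# Reproducing the dashboard's headline numbers from the record above.
# The additive interview lift and the 99% cap are assumptions; the
# dashboard's actual model is not disclosed.
granted, resolved = 18, 26        # "18 granted / 26 resolved"
interview_lift = 30.1             # percentage points, "+30.1% Interview Lift"

allow_rate = 100 * granted / resolved                     # about 69.2
with_interview = min(allow_rate + interview_lift, 99.0)

print(f"Career allow rate: {allow_rate:.0f}%")   # 69%
print(f"With interview: {with_interview:.0f}%")  # 99%
```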

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 7, 9-10, 13, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nottebaum et al., "Efficient Feature Extraction for High-resolution Video Frame Interpolation", arXiv:2211.14005.

Regarding claim 1, Nottebaum teaches: A computer system for image interpolation (“deep learning architecture for video frame interpolation,” pg. 2, Introduction), the computer system comprising: one or more processors (inherent, as deep learning requires a processor); and one or more non-transitory computer-readable media that collectively store a machine-learned image interpolation model configured to receive and process a pair of input images (Fig. 1, pg. 4, I0 and I1) having respective capture times (Fig. 1, pg. 4, “0” and “1” of I0 and I1) to generate an interpolated image having an interpolation time (“Given two images, I0 and I1, the goal of video frame interpolation is to generate several intermediate images It for t ∈ (0,1),” pg. 4, Section 4, “fLDR-Net for Video Frame Interpolation”), wherein the machine-learned image interpolation model is configured to:

extract, for each of multiple different scales (Fig. 1, from “scale 0” to “scale S”), a respective set of feature values from each of the pair of input images (Fig. 1, extract features using “Dimensionality Reduction”);

generate, for each of the multiple different scales (Fig. 1, from “scale 0” to “scale S”), a respective flow estimate for each of the pair of input images that indicates a respective flow from the interpolation time to the respective capture time (Fig. 1, “Flow Estimation”, and Equation 2, interpolation time (t) to respective capture times (0 and 1));

warp, for each of the multiple different scales (Fig. 1, from “scale 0” to “scale S”), the respective set of feature values for each of the pair of input images according to the respective flow estimate to generate respective warped sets of features (Fig. 1, “Warping”, where flows from each scale are shared before being warped at the shallowest level); and

generate the interpolated image based on the respective warped sets of features for the pair of input images and the multiple different scales (Fig. 1, It is generated based on the warping that took into account flows from all scales).

Although Nottebaum does not explicitly teach backward warping for each scale separately, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to do so, as details at both the finer and coarser levels will lead to a smoother and more realistic interpolated image regardless of whether the upscaling of the features from each scale is done after flow estimation or after warping.

Regarding claim 7, the claimed features of claim 7 are also described in claim 1. Therefore, the rejection of claim 1 is applied to claim 7.
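To make the claim-1 mapping concrete, here is a toy numpy sketch of the claimed pipeline: per-scale features, per-scale flows from the interpolation time t back to each capture time, backward bilinear warping, and fusion. It illustrates the claim language only, not Nottebaum's architecture: plain downsampling stands in for learned feature extraction, and a precomputed flow field plus a linear-motion assumption stand in for the flow-estimation network.

```python
import numpy as np

def backward_warp(img, flow):
    """Backward warping with bilinear resampling: each output pixel
    samples the source image at (x + flow_x, y + flow_y)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sx = np.clip(xs + flow[..., 0], 0, w - 1)
    sy = np.clip(ys + flow[..., 1], 0, h - 1)
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = sx - x0, sy - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def interpolate(i0, i1, flow_0to1, t=0.5, scales=3):
    """Generate the frame at time t in (0, 1) from frames at times 0 and 1,
    combining warped "features" across multiple scales."""
    out = np.zeros_like(i0, dtype=float)
    for s in range(scales):
        step = 2 ** s
        f0, f1 = i0[::step, ::step], i1[::step, ::step]   # stand-in features
        # Flows from interpolation time t back to capture times 0 and 1,
        # assuming locally linear motion (vectors shrink with resolution).
        flow_t0 = -t * flow_0to1[::step, ::step] / step
        flow_t1 = (1 - t) * flow_0to1[::step, ::step] / step
        w0, w1 = backward_warp(f0, flow_t0), backward_warp(f1, flow_t1)
        fused = (1 - t) * w0 + t * w1
        # Nearest-neighbour upsample back to full size (stand-in for the
        # learned synthesis network) and average the scales.
        up = np.kron(fused, np.ones((step, step)))
        out += up[: out.shape[0], : out.shape[1]] / scales
    return out
```

With zero flow and t = 0.5 this reduces to a simple blend of the two frames; with a real flow field, the backward warps align both frames to time t before fusing.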
Regarding claim 9, the rejection of claim 1 is incorporated herein. Nottebaum teaches the system of claim 1, wherein the pair of input images comprise near-duplicate photographs (pg. 10, Fig. 4, “(a) Overlaid inputs” are nearly identical).

Regarding claim 10, the rejection of claim 1 is incorporated herein. Nottebaum teaches the system of claim 1. Additionally, it would have been obvious to incorporate “the respective capture times for the pair of input images are at least one second apart from each other”, as Nottebaum’s network is specifically designed to handle large motion, which is common in image sets that are taken at least one second apart from each other (“We achieve highly competitive results and even outperform existing methods for larger motions across various benchmarks with only a fraction of the network complexity,” pg. 2, Introduction).

Regarding claim 13, the rejection of claim 1 is incorporated herein. Nottebaum teaches the system of claim 1, wherein the machine-learned image interpolation model comprises a single machine-learned model trained end-to-end (“We propose a method that finetunes an initial block-based PCA basis end-to-end for video frame interpolation,” pg. 2, Introduction).

Claim 15 is a non-transitory computer-readable media claim that corresponds to system claim 1. Therefore, the rejection of claim 1 is applied to claim 15.

Claim(s) 2-4, 6, 8, 16-18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nottebaum et al., "Efficient Feature Extraction for High-resolution Video Frame Interpolation", arXiv:2211.14005 as applied to claims 1 and 15 above, and further in view of Comino Trinidad et al., "Multi-view Image Fusion", 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019, pp. 4100-4109.

Regarding claim 2, the rejection of claim 1 is incorporated herein. Nottebaum teaches the system of claim 1, but is not relied upon to teach the following limitations.
Comino Trinidad, however, further teaches: wherein to generate, for each of the multiple different scales, the respective flow estimate for each of the pair of input images the machine-learned image interpolation model is configured to, for each of the multiple different scales except a coarsest scale:

predict a residual based on (1) the set of feature values for the other input image and the scale and (2) a warped version of the set of feature values for the input image and the scale (pg. 4103, Section 3.1, “scale-agnostic” flow prediction is residual flow), wherein the warped version of the set of feature values for the input image and the scale has been warped according to an upsampled flow estimate for the input image from a coarser scale (“The image warping module that is repeatedly applied starting from the coarsest level k = N towards the finest level k = 0 to estimate optical flow,” pg. 4104, Fig. 3 description); and

generate the flow estimate (from “Flow Module”, pg. 4103, Fig. 2) for the input image and the scale based on the residual and the upsampled flow estimate for the input image from the coarser scale (Fig. 2, each flow is upscaled to the shallowest scale).

Comino Trinidad is considered to be analogous to the claimed invention because both are in the field of aligning misaligned photographs using a learned system. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Comino Trinidad into Nottebaum for the benefit of reduced artifacts and improved detail retention in the final image.

Regarding claim 3, the rejection of claim 2 is incorporated herein. Nottebaum in view of Comino Trinidad teaches the system of claim 2, and Comino Trinidad further teaches: wherein to predict the residual the machine-learned image interpolation model is configured to apply one or more learned convolutional filters to (1) the set of feature values for the other input image and the scale (“Each block An for n = 0,1,2 represents two 3×3 convolutions,” pg. 4103, Section 3.1, also see Fig. 2 for “An” blocks) and (2) the warped version of the set of feature values for the input image and the scale (“Our residual flow prediction network Pk is a serial application of five 2d-convolutions: 3×3×32, 3×3×64, 1×1×64, 1×1×16 and 1×1×2,” pg. 4104, Section 3.2, also see Fig. 3 for “Pk” blocks). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Comino Trinidad into Nottebaum for the benefit of reduced artifacts and improved detail retention in the final image.

Regarding claim 4, the rejection of claim 3 is incorporated herein. Nottebaum in view of Comino Trinidad teaches the system of claim 3, and Comino Trinidad further teaches: wherein, for two or more of the multiple different scales, the learned convolutional filters comprise shared weight values (“we design our flow prediction module (Section 3.2) to share weights among all except two finest levels of the pyramid, which allows synergetic learning on multiple pyramid levels,” pg. 4103, Section 3.1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Comino Trinidad into Nottebaum for the benefit of reduced artifacts and improved detail retention in the final image.

Regarding claim 6, the rejection of claim 1 is incorporated herein. Nottebaum teaches the system of claim 1, but is not relied on for the following limitations. Comino Trinidad, however, further teaches: wherein to extract, for each of the multiple different scales, the respective set of feature values from each of the pair of input images the machine-learned image interpolation model applies a plurality of learned convolutional filters (Fig. 2, “An” blocks) associated with a plurality of different scales (finest to coarsest), and wherein at least two of the convolutional filters for at least two of the different scales comprise shared weight values (“[to] share the flow prediction weights on multiple pyramid levels we use a novel cascaded feature extraction architecture that ensures that the meaning of filters at each shared level is the same,” pg. 4103, Section 3.1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Comino Trinidad into Nottebaum for the benefit of reduced artifacts and improved detail retention in the final image.

Regarding claim 8, the rejection of claim 1 is incorporated herein. Nottebaum teaches the system of claim 1, but is not relied on for the following limitations. Comino Trinidad, however, further teaches: wherein to generate the interpolated image based on the respective warped sets of features for the pair of input images and the multiple different scales the machine-learned image interpolation model is configured to apply one or more learned convolutional filters to the respective warped sets of features for the pair of input images and the multiple different scales (“Our residual flow prediction network Pk is a serial application of five 2d-convolutions: 3×3×32, 3×3×64, 1×1×64, 1×1×16 and 1×1×2,” pg. 4104, Section 3.2, where Pk is applied for every scale, see Fig. 2’s “Flow Module”).
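The coarse-to-fine residual scheme in the claim 2-4 rejections can be sketched as follows. This is a schematic of the recited structure, not Comino Trinidad's network: a single shared linear map on feature differences stands in for the shared convolutional predictor Pk, and a cheap nearest-neighbour warp stands in for the image warping module.

```python
import numpy as np

def warp_nearest(img, flow):
    """Nearest-neighbour backward warp (stand-in for bilinear warping)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[sy, sx]

def upsample2(flow):
    """2x nearest upsampling; vectors double because pixel units halve."""
    return 2.0 * np.kron(flow, np.ones((2, 2, 1)))

def predict_residual(feat_other, feat_warped, weights):
    """Stand-in for the shared residual-flow predictor: one linear map on
    the per-pixel feature difference, reused at every pyramid level."""
    diff = feat_other - feat_warped
    return np.stack([weights[0] * diff, weights[1] * diff], axis=-1)

def coarse_to_fine_flow(pyr0, pyr1, weights=(0.1, 0.1)):
    """pyr0/pyr1: feature pyramids, finest level first. Start from zero
    flow at the coarsest scale; at each finer scale, upsample the flow,
    warp the features, predict a residual, and add it (claim 2's
    structure)."""
    flow = np.zeros(pyr0[-1].shape + (2,))
    for f0, f1 in zip(reversed(pyr0[:-1]), reversed(pyr1[:-1])):
        flow = upsample2(flow)                  # carry the coarse estimate up
        warped = warp_nearest(f0, flow)         # align one image's features
        flow = flow + predict_residual(f1, warped, weights)
    return flow
```

For identical pyramids, the predicted residual is zero at every level, so the final flow is zero; the point of the sketch is the single `weights` tuple reused across levels, mirroring the weight sharing cited for claims 4 and 6.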
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Comino Trinidad into Nottebaum for the benefit of reduced artifacts and improved detail retention in the final image.

Claims 16-18 and 20 are non-transitory computer-readable media claims that correspond to system claims 2-4 and 6. Therefore, the rejection of claims 2-4 and 6 is applied to these claims.

Claim(s) 5 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nottebaum et al., "Efficient Feature Extraction for High-resolution Video Frame Interpolation", arXiv:2211.14005 as applied to claims 1 and 15 above, and further in view of Sevastopolskiy et al. (US-20220157014-A1).

Regarding claim 5, the rejection of claim 1 is incorporated herein. Nottebaum teaches the system of claim 1, but is not relied upon to teach the following limitations. Sevastopolskiy, however, further teaches: wherein to warp, for each of the multiple different scales, the respective set of feature values for each of the pair of input images according to the respective flow estimate, the machine-learned image interpolation model is configured to perform a backward bilinear resample operation (“The bilinear sampling (backward warping) of an image .sup.I onto Posmap is denoted by the operation I⊙Posmap, which results into mapping the visible part of .sup.I into the UV texture space,” Para [0068]).

Sevastopolskiy is considered to be analogous to the claimed invention because both are in the same field of using deep learning models to predict new images from previous ones. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Sevastopolskiy into Nottebaum for the benefit of smooth and anti-aliased final images.

Claim 19 is a non-transitory computer-readable media claim that corresponds to system claim 5. Therefore, the rejection of claim 5 is applied to claim 19.

Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nottebaum et al., "Efficient Feature Extraction for High-resolution Video Frame Interpolation", arXiv:2211.14005 as applied to claim 1 above, and further in view of Yang et al. (US-20220284552-A1).

Regarding claim 11, the rejection of claim 1 is incorporated herein. Nottebaum teaches the system of claim 1, but is not relied upon for the following limitations. Yang, however, further teaches: wherein the machine-learned image interpolation model has been trained using a loss function that comprises: an L1 loss term, a perceptual loss term, and a style loss term (“Reconstruction loss includes two parts: an L1 loss to constrain the overall reconstruction of the image and another L1 loss to focus on the pixel accuracy of the corrupted region. Perceptual loss is widely used in video inpainting or image inpainting tasks to improve visual quality of generated images… Style loss is also widely applied in image/video inpainting tasks and is accumulated over all frames between the output video sequence and the ground truth video,” Para [0055]).

Yang is considered to be analogous to the claimed invention because both are in the same field of deep learning networks that utilize optical flow for creating missing images or parts of an image from initial pairs of images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Yang into Nottebaum for the benefit of sharper output images.

Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nottebaum et al., "Efficient Feature Extraction for High-resolution Video Frame Interpolation", arXiv:2211.14005 as applied to claim 1 above, and further in view of Liu et al. (US-20210326691-A1).

Regarding claim 12, the rejection of claim 1 is incorporated herein.
Nottebaum teaches the system of claim 1, but is not relied upon for the following limitations. Liu, however, further teaches: wherein the machine-learned image interpolation model has been trained using a loss function that comprises a style loss term, wherein the style loss term evaluates a Gram matrix that measures a correlation difference between features (“Given that subsequent to extracting a style feature from the first image, the convolutional layer l outputs a Gram matrix A.sup.l, and subsequent to extracting a style feature from the second image, the convolutional layer l outputs a Gram matrix G.sup.l. A style loss between the first image and the second image obtained from the convolutional layer l is defined,” Para [0072]).

Liu is considered to be analogous to the claimed invention because both are in the same field of deep learning neural networks that create an output image based on the features of past input images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Liu into Nottebaum for the benefit of a more accurate output image.

Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nottebaum et al., "Efficient Feature Extraction for High-resolution Video Frame Interpolation", arXiv:2211.14005 as applied to claims 1, 7, 9, and 13 above, and further in view of Jiang et al. (US-20190138889-A1).

Regarding claim 14, the rejections of claims 1, 7, 9, and 13 are incorporated herein. Nottebaum teaches the system of claims 1, 7, 9, and 13, but is not relied upon to teach the following limitations. Jiang, however, further teaches: A computer-implemented method to perform machine learning, the method comprising: obtaining, by a computing system comprising one or more computing devices, a training tuple comprising a pair of input images (Fig. 2B, “input images”) and a ground truth image (Fig. 2B, “ground truth interpolated frame”); processing, by the computing system, the pair of input images with the machine-learned image interpolation model described in any of claims 1-13 to generate a predicted interpolated image (see rejection of claim 1 above); evaluating, by the computing system, a loss function that generates a loss value based on the ground truth image and the predicted interpolated image (“the predicted intermediate frame is compared with the ground-truth frame. In an embodiment, the training loss function 210 compares the intermediate frame with a ground truth frame,” Para [0068]); and modifying, by the computing system, one or more values of one or more parameters of the machine-learned image interpolation model based on the loss function (“The intermediate optical flow neural network model 142 and the flow interpolation neural network model 102 or 122 are deemed to be sufficiently trained when the predicted frames generated for the input frames from the training dataset match the ground truth frames or a threshold accuracy is achieved for the training dataset,” Para [0068]).

Jiang is considered to be analogous to the claimed invention because both are in the same field of image interpolation using optical flow. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Jiang into Nottebaum for the benefit of a well-trained deep learned interpolation network.

Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nottebaum et al., "Efficient Feature Extraction for High-resolution Video Frame Interpolation", arXiv:2211.14005 in view of Comino Trinidad et al., "Multi-view Image Fusion", 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019, pp. 4100-4109, as applied to claims 2-4, 6, and 8 above, and further in view of Jiang et al. (US-20190138889-A1).
Regarding claim 14, the rejections of claims 2-4, 6, and 8 are incorporated herein. Nottebaum in view of Comino Trinidad teaches the system of claims 2-4, 6, and 8, but is not relied upon to teach the following limitations. Jiang, however, further teaches: A computer-implemented method to perform machine learning, the method comprising: obtaining, by a computing system comprising one or more computing devices, a training tuple comprising a pair of input images (Fig. 2B, “input images”) and a ground truth image (Fig. 2B, “ground truth interpolated frame”); processing, by the computing system, the pair of input images with the machine-learned image interpolation model described in any of claims 1-13 to generate a predicted interpolated image (see rejection of claim 1 above); evaluating, by the computing system, a loss function that generates a loss value based on the ground truth image and the predicted interpolated image (“the predicted intermediate frame is compared with the ground-truth frame. In an embodiment, the training loss function 210 compares the intermediate frame with a ground truth frame,” Para [0068]); and modifying, by the computing system, one or more values of one or more parameters of the machine-learned image interpolation model based on the loss function (“The intermediate optical flow neural network model 142 and the flow interpolation neural network model 102 or 122 are deemed to be sufficiently trained when the predicted frames generated for the input frames from the training dataset match the ground truth frames or a threshold accuracy is achieved for the training dataset,” Para [0068]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Jiang into Nottebaum and Comino Trinidad for the benefit of a well-trained deep learned interpolation network.

Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nottebaum et al., "Efficient Feature Extraction for High-resolution Video Frame Interpolation", arXiv:2211.14005 in view of Sevastopolskiy et al. (US-20220157014-A1), as applied to claim 5 above, and further in view of Jiang et al. (US-20190138889-A1).

Regarding claim 14, the rejection of claim 5 is incorporated herein. Nottebaum in view of Sevastopolskiy teaches the system of claim 5, but is not relied upon for the following limitations. Jiang, however, further teaches: A computer-implemented method to perform machine learning, the method comprising: obtaining, by a computing system comprising one or more computing devices, a training tuple comprising a pair of input images (Fig. 2B, “input images”) and a ground truth image (Fig. 2B, “ground truth interpolated frame”); processing, by the computing system, the pair of input images with the machine-learned image interpolation model described in any of claims 1-13 to generate a predicted interpolated image (see rejection of claim 1 above); evaluating, by the computing system, a loss function that generates a loss value based on the ground truth image and the predicted interpolated image (“the predicted intermediate frame is compared with the ground-truth frame. In an embodiment, the training loss function 210 compares the intermediate frame with a ground truth frame,” Para [0068]); and modifying, by the computing system, one or more values of one or more parameters of the machine-learned image interpolation model based on the loss function (“The intermediate optical flow neural network model 142 and the flow interpolation neural network model 102 or 122 are deemed to be sufficiently trained when the predicted frames generated for the input frames from the training dataset match the ground truth frames or a threshold accuracy is achieved for the training dataset,” Para [0068]).
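The loss terms in the claim 11-12 rejections and the training loop recited in claim 14 can be combined in one toy sketch. The Gram-matrix style term follows the textbook definition the Liu quotation describes; the single blend-weight "model", the equal loss weights, and the finite-difference update are illustrative stand-ins for a real network and optimizer.

```python
import numpy as np

def gram(feat):
    """Gram matrix of a (channels, h, w) feature map: channel-channel
    correlations, as in the style loss quoted from Liu."""
    c = feat.shape[0]
    f = feat.reshape(c, -1)
    return f @ f.T / f.shape[1]

def composite_loss(pred, target):
    """L1 reconstruction term plus a Gram-matrix style term; a perceptual
    term would compare deep features the same way (weights illustrative)."""
    l1 = np.abs(pred - target).mean()
    style = np.abs(gram(pred[None]) - gram(target[None])).mean()
    return l1 + style

# Claim 14's loop on a toy "model" with a single parameter theta:
# obtain a training tuple, predict, evaluate the loss, update the parameter.
rng = np.random.default_rng(0)
i0, i1 = rng.random((4, 4)), rng.random((4, 4))
gt = 0.5 * (i0 + i1)                      # toy ground-truth middle frame

def loss(theta):
    pred = theta * i0 + (1 - theta) * i1  # stand-in interpolation model
    return composite_loss(pred, gt)

theta, lr, eps = 0.1, 0.5, 1e-5
grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)  # numerical gradient
theta -= lr * grad        # gradient step (the optimum is theta = 0.5)
```

At theta = 0.5 the predicted frame equals the ground truth and both loss terms vanish, which is the global minimum the update steps toward.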
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Jiang into Nottebaum and Sevastopolskiy for the benefit of a well-trained deep learned interpolation network.

Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nottebaum et al., "Efficient Feature Extraction for High-resolution Video Frame Interpolation", arXiv:2211.14005 in view of Yang et al. (US-20220284552-A1), as applied to claim 11 above, and further in view of Jiang et al. (US-20190138889-A1).

Regarding claim 14, the rejection of claim 11 is incorporated herein. Nottebaum in view of Yang teaches the system of claim 11, but is not relied upon for the following limitations. Jiang, however, further teaches: A computer-implemented method to perform machine learning, the method comprising: obtaining, by a computing system comprising one or more computing devices, a training tuple comprising a pair of input images (Fig. 2B, “input images”) and a ground truth image (Fig. 2B, “ground truth interpolated frame”); processing, by the computing system, the pair of input images with the machine-learned image interpolation model described in any of claims 1-13 to generate a predicted interpolated image (see rejection of claim 1 above); evaluating, by the computing system, a loss function that generates a loss value based on the ground truth image and the predicted interpolated image (“the predicted intermediate frame is compared with the ground-truth frame. In an embodiment, the training loss function 210 compares the intermediate frame with a ground truth frame,” Para [0068]); and modifying, by the computing system, one or more values of one or more parameters of the machine-learned image interpolation model based on the loss function (“The intermediate optical flow neural network model 142 and the flow interpolation neural network model 102 or 122 are deemed to be sufficiently trained when the predicted frames generated for the input frames from the training dataset match the ground truth frames or a threshold accuracy is achieved for the training dataset,” Para [0068]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Jiang into Nottebaum and Yang for the benefit of a well-trained deep learned interpolation network.

Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nottebaum et al., "Efficient Feature Extraction for High-resolution Video Frame Interpolation", arXiv:2211.14005 in view of Liu et al. (US-20210326691-A1), as applied to claim 12 above, and further in view of Jiang et al. (US-20190138889-A1).

Regarding claim 14, the rejection of claim 12 is incorporated herein. Nottebaum in view of Liu teaches the system of claim 12, but is not relied upon for the following limitations. Jiang, however, further teaches: A computer-implemented method to perform machine learning, the method comprising: obtaining, by a computing system comprising one or more computing devices, a training tuple comprising a pair of input images (Fig. 2B, “input images”) and a ground truth image (Fig. 2B, “ground truth interpolated frame”); processing, by the computing system, the pair of input images with the machine-learned image interpolation model described in any of claims 1-13 to generate a predicted interpolated image (see rejection of claim 1 above); evaluating, by the computing system, a loss function that generates a loss value based on the ground truth image and the predicted interpolated image (“the predicted intermediate frame is compared with the ground-truth frame. In an embodiment, the training loss function 210 compares the intermediate frame with a ground truth frame,” Para [0068]); and modifying, by the computing system, one or more values of one or more parameters of the machine-learned image interpolation model based on the loss function (“The intermediate optical flow neural network model 142 and the flow interpolation neural network model 102 or 122 are deemed to be sufficiently trained when the predicted frames generated for the input frames from the training dataset match the ground truth frames or a threshold accuracy is achieved for the training dataset,” Para [0068]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Jiang into Nottebaum and Liu for the benefit of a well-trained deep learned interpolation network.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. van Amersfoort et al. (US-11122238-B1) teaches a method for frame interpolation that estimates optical flow and uses pyramidal scaling. Liu et al. (US-20220092795-A1) teaches a method for motion estimation in video frame interpolation, where the model architecture is very similar to applicant’s.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL A OMETZ whose telephone number is (571)272-2535.
The examiner can normally be reached 6:45am-4:00pm ET Monday-Thursday, 6:45am-1:00pm ET every other Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le, can be reached at 571-272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Rachel Anne Ometz/
Examiner, Art Unit 2668
3/12/26

/VU LE/
Supervisory Patent Examiner, Art Unit 2668

Prosecution Timeline

Feb 08, 2024
Application Filed
Mar 12, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602925: HYPERSPECTRAL IMAGE ANALYSIS USING MACHINE LEARNING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12555255: ABSOLUTE DEPTH ESTIMATION FROM A SINGLE IMAGE USING ONLINE DEPTH SCALE TRANSFER (granted Feb 17, 2026; 2y 5m to grant)
Patent 12548354: METHOD FOR PROCESSING CELL IMAGE, ELECTRONIC DEVICE, AND STORAGE MEDIUM (granted Feb 10, 2026; 2y 5m to grant)
Patent 12541970: SYSTEM AND METHOD FOR ESTIMATING THE POSE OF A LOCALIZING APPARATUS USING REFLECTIVE LANDMARKS AND OTHER FEATURES (granted Feb 03, 2026; 2y 5m to grant)
Patent 12530735: IMAGE PROCESSING APPARATUS THAT IMPROVES COMPRESSION EFFICIENCY OF IMAGE DATA, METHOD OF CONTROLLING SAME, AND STORAGE MEDIUM (granted Jan 20, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 99% (+30.1%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
