Prosecution Insights
Last updated: April 19, 2026
Application No. 18/531,940

HUMAN MOTION GENERATION METHOD AND SYSTEM

Status: Final Rejection (§103)
Filed: Dec 07, 2023
Examiner: KOETH, MICHELLE M
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Korea Electronics Technology Institute
OA Round: 2 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 77% (331 granted / 429 resolved), +15.2% vs TC avg (above average)
Interview Lift: +16.7% across resolved cases with an interview (strong)
Typical Timeline: 2y 4m average prosecution; 34 applications currently pending
Career History: 463 total applications across all art units
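The headline figures above are internally consistent; a small Python snippet reproduces them from the raw counts (the +16.7% interview lift is taken from the page rather than recomputed):

```python
# Reproduce the examiner statistics above from the raw counts on this page.
granted, resolved, total = 331, 429, 463

allow_rate = granted / resolved          # career allow rate
pending = total - resolved               # applications still pending
with_interview = allow_rate + 0.167      # page-reported +16.7% interview lift

print(f"career allow rate: {allow_rate:.1%}")      # 77.2%, shown as 77%
print(f"currently pending: {pending}")             # 34
print(f"with interview:    {with_interview:.0%}")  # 94%
```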

Statute-Specific Performance

§101:  7.4% (-32.6% vs TC avg)
§102:  8.5% (-31.5% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 429 resolved cases.
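Each delta implies a Tech Center baseline (rate minus delta); a short Python sketch recovers the estimated baselines from the figures above:

```python
# Recover the implied Tech Center baseline behind each "vs TC avg" delta.
# Rates and deltas are the figures shown above.
perf = {"§101": (7.4, -32.6), "§102": (8.5, -31.5),
        "§103": (62.2, +22.2), "§112": (14.7, -25.3)}

for statute, (rate, delta) in perf.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:4.1f}% vs TC avg {tc_avg:.1f}%")
```

Every implied baseline comes out to 40.0%, suggesting the page uses a single Tech Center estimate across all four statutes.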

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments and amendments in the Amendment filed January 20, 2026 (herein "Amendment"), with respect to the rejection of claims 1, 10 and 11 under 35 U.S.C. 101, have been fully considered and are persuasive. The rejection of claims 1, 10 and 11 under 35 U.S.C. 101 has been withdrawn.

Applicant's arguments and amendments in the Amendment, with respect to the invocation of interpretation under 35 U.S.C. 112(f) for claims 1 and 10, have been fully considered and are persuasive, so that interpretation under 35 U.S.C. 112(f) is no longer applied to claims 1, 10 and 11, and claims depending therefrom.

Applicant's arguments and amendments in the Amendment with respect to the rejections of independent claims 1 and 10, and dependent claims 6 and 8 which depend from claim 1, under 35 U.S.C. 102, and claim 11, and various dependent claims from claims 1 and 10 under 35 U.S.C. 103, have been fully considered and are persuasive in part. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Yoon et al., "Frame-rate Up-conversion Detection Based on Convolutional Neural Network for Learning Spatiotemporal Features," arXiv:2103.13674v1 [cs.MM], March 25, 2021, https://doi.org/10.48550/arXiv.2103.13674 (herein "Yoon"), and Sun et al., CN 115240007-A (herein "Sun").

Specifically, Applicant's remarks regarding the newly amended limitation of "an empty frame corresponding to a frame between two observed frames" are persuasive, as the primary reference Li is silent regarding its disclosed method of predicting human motion being for a frame between two observed frames. However, it is noted that newly cited Yoon is relied upon for these new limitations, made in an obvious combination, as the image processing details that would necessarily preclude Li's "extrapolation" approach from equally applying to Applicant's disclosed "interpolation" approach are not presently claimed. Also, while Li does teach that its weights are learned weights, Li does not explicitly teach that the weights are "deep" learning trained, for which newly cited Sun is relied upon. Nonetheless, Li does at least teach generating motion features of an empty frame in the transformed domain.

In the remarks on pages 6–7 of the Amendment, Applicant argues that Li is directed towards a graph spectrum attention, which is a different section (the Graph Spectrum Attention section) of the Li reference than relied upon in the rejection. However, the cited portion of Li, not addressed by Applicant in the remarks, is the Graph Scattering Decomposition section, which teaches transforming a domain of pose information using a transform model … applying a matrix multiplication using a domain transform matrix having elements determined in a deep learning training process, as already applied and explained in the Non-Final Action on pages 9–10. In particular, the rejection in applying Li cites trainable weights W, which are elements of a matrix AW defining a filter bank that transforms the pose information X into features H in the frequency domain.
Applicant contends that Li's method uses a "pre-defined, fixed mathematical filter"; however, Li teaches on page 857, left column, that the filter weights W are trainable, and thus are at least "determined in a … learning training process" as claimed. However, Li does not explicitly teach the learning process to be "deep" (given a plain meaning of learning based on a multi-layered ML model), and accordingly, for this limitation as well, the newly cited reference Sun is applied. Therefore, while all of Applicant's arguments and amendments have been considered, they are only partially persuasive, and for the newly amended portions now distinguishing over Li, a new ground of rejection is made with the newly cited references above.

Claim Objections

Claims 15, 16, and 17, and therefore claim 18 which depends from claim 15, are objected to because of the following informalities: claim 15 recites "toderive" in line 2, and claims 16 and 17 recite "touse" in line 2; the words recited in these limitations should be separated by a space. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 6, 8, 10, 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al., "Skeleton Graph Scattering Networks for 3D Skeleton-based Human Motion Prediction," 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, 2021, pp. 854-864, doi: 10.1109/ICCVW54120.2021.00101 (herein "Li"), in view of Sun et al., Chinese Patent Publication No. CN-115240007-A, published October 25, 2022, with reference to the provided machine English language translation (herein "Sun"), further in view of Yoon et al., "Frame-rate Up-conversion Detection Based on Convolutional Neural Network for Learning Spatiotemporal Features," arXiv:2103.13674v1 [cs.MM], March 25, 2021, https://doi.org/10.48550/arXiv.2103.13674 (herein "Yoon").
Regarding claim 1, with deficiencies of Li noted in square brackets [], Li teaches a human motion generation method performed by a system comprising a processor and a memory, the method comprising (Li Abstract and title, and page 859, an NVIDIA Tesla V100 GPU executing processing of a skeleton graph scattering network that outputs human motion prediction):

transforming, by the processor, a domain of pose information of a frame (Li pages 855–857, sections 3.1–3.3, figs. 1 and 2, input motion from a pose matrix of body joints at time t is converted to the frequency domain, then into spectral information via graph scattering decomposition, where section 4.1, in describing the input data sets, characterizes the 3D poses as "frames"), using a transform model, the transform model configured to transform the domain by applying a matrix multiplication using a domain transform matrix to the pose information of the frame (Li page 857, section 3.3, in the graph scattering decomposition, DCT-formed pose features are matrix multiplied with graph adjacency matrix A and trainable weights W(k), where A and W together form a spectral transform matrix, to obtain a bank of spectrum features H(0), H(1), …, H(K) per equations 1 and 3 together), wherein the domain transform matrix has elements determined in a [deep learning] training process (Li page 857, the W(k) matrices are trainable weights, thus determined in a training process);

generating, by the processor, motion features of an empty frame [corresponding to a frame between two observed frames] in the transformed domain (Li pages 857–858, fig. 3, section 3.3, graph spectrum attention further processes the spectrum features from the graph scattering decomposition to generate a final representation of predicted motion for a next frame (a predicted future frame, thus an empty frame)); and

inversely transforming, by the processor, the generated motion features into a time domain (Li page 855, fig. 1, output features from the last step of the adaptive graph scattering block are transformed to the temporal domain by way of an inverse DCT).

While Li does teach that its domain transform matrix weights W(k) are trainable weights, Li does not explicitly teach they are "deep learning" trained. Further, while Li does at least teach predicting motion features in an empty frame, Li does not explicitly teach that the empty frame is "corresponding to a frame between two observed frames."

Sun teaches a domain transform matrix having elements determined in a deep learning training process (Sun pages 5–6, step S3, a convolutional neural network model using a discrete cosine transform (DCT), using a series of frequency-domain weight matrices to represent convolution layers, where the DCT converts the CNN (a deep machine learning system) into the frequency domain, and where the CNN model is iteratively retrained (learning) to sparsify the weight matrices, the matrix weights therefore being determined in a deep learning process of training the CNN).

Yoon teaches corresponding to a frame between two observed frames (Yoon pages 3–4, fig. 1, frame-rate up-conversion generating interpolated frames of a video between two frames of the original video (observed frames)).
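To make the mapped pipeline concrete, the following is a minimal NumPy sketch of the claimed three steps as the rejection reads them onto Li, Sun, and Yoon: a matrix-multiplication domain transform with trained elements, generation of features for an empty frame between two observed frames, and an inverse transform back to the time domain. The joint count, the random stand-in weights, and the midpoint interpolation are illustrative assumptions, not the actual models of Li, Sun, or Yoon.

```python
import numpy as np

rng = np.random.default_rng(0)
J = 22                                  # body joints (illustrative count)

# Pose information of two observed frames (joints x 2).
X = rng.standard_normal((J, 2))

# Transform the domain by matrix multiplication with a domain transform
# matrix W; here W is random, standing in for elements determined in a
# (deep learning) training process.
W = rng.standard_normal((J, J))
H = W @ X                               # features in the transformed domain

# Generate motion features of the empty frame corresponding to a frame
# between the two observed frames; midpoint interpolation is an
# illustrative stand-in for Li's graph spectrum attention.
h_empty = 0.5 * (H[:, 0] + H[:, 1])

# Inversely transform the generated features into the time domain. A
# generic learned W needs its matrix inverse (cf. claims 8/17); an
# orthogonal transform such as the DCT can use its transpose (cf. claims 9/18).
x_empty = np.linalg.inv(W) @ h_empty
print(x_empty.shape)                    # (22,): pose of the generated frame
```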
Therefore, taking the teachings of Li and Sun together as a whole, it would have been obvious to a person having ordinary skill in the art (herein "PHOSITA") before the effective filing date of the claimed invention to have modified the trainable weights of Li to be determined in a deep learning process as disclosed by Sun, at least because doing so would reduce calculation cost and storage cost of image processing. See Sun Abstract. Further, taking the teachings of Li as modified by Sun and Yoon together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the predicted frame of Li to be one that corresponds to a frame between two frames of an input video as disclosed by Yoon, at least because doing so would allow for increasing motion continuity/up-conversion of videos with a lower frame rate without being distinguishable to the human eye, and can, when used ethically, benefit modern people's lives by allowing nonprofessionals to edit video. See Yoon Abstract and Introduction.

Regarding claims 6 and 15, where claim 6 is exemplary, Li teaches wherein the inversely transforming comprises deriving pose information of each frame by inversely transforming the generated motion features into the time domain (Li page 856, section 3.2, and page 855, figure 1, at the output layer (with the motion features) an inverse DCT is applied to recover the temporal information for prediction (time domain)).

Regarding claims 8 and 17, Li teaches wherein the inversely transforming uses an inverse matrix of the domain transform matrix to inversely transform the generated motion features into the time domain (Li page 855, left column, figure 1, an inverse DCT is applied to recover the output features to the temporal domain, where fig. 1 illustrates that the IDCT would be a matrix transformation as it transforms the averaged attention scores into two-dimensional predicted motions).

Regarding claim 10, with deficiencies of Li noted in square brackets [], Li teaches a human motion generation system comprising: one or more processors comprising (Li Abstract and title, and page 859, an NVIDIA Tesla V100 GPU executing processing of a skeleton graph scattering network that outputs human motion prediction): a communication unit configured to (Li page 859, an NVIDIA Tesla V100 GPU includes memory which acquires the data the GPU processes) acquire pose information of a frame (Li pages 855–857, sections 3.1–3.3, figs. 1 and 2, input motion from a pose matrix of body joints, where section 4.1, in describing the input data sets, characterizes the 3D poses as "frames"); and a transforming processor configured to (Li page 859, an NVIDIA Tesla V100 GPU includes a processing core): transform a domain of the acquired pose information of the frame (Li pages 855–857, sections 3.1–3.3, figs. 1 and 2, input motion from a pose matrix of body joints at time t is converted to the frequency domain, then into spectral information via graph scattering decomposition, where section 4.1, in describing the input data sets, characterizes the 3D poses as "frames") using a transform model, the transform model configured to transform the domain by applying a matrix multiplication using a domain transform matrix to the pose information of the frame (Li page 857, section 3.3, in the graph scattering decomposition, DCT-formed pose features are matrix multiplied with graph adjacency matrix A and trainable weights W(k), where A and W together form a spectral transform matrix, to obtain a bank of spectrum features H(0), H(1), …, H(K) per equations 1 and 3 together), wherein the domain transform matrix has elements determined in a [deep learning] training process (Li page 857, the W(k) matrices are trainable weights, thus determined in a training process); generate motion features of an empty frame [corresponding to a frame between two observed frames] in the transformed domain (Li pages 857–858, fig. 3, section 3.3, graph spectrum attention further processes the spectrum features from the graph scattering decomposition to generate a final representation of predicted motion for a next frame); and inversely transform the generated motion features into a time domain (Li page 855, fig. 1, output features from the last step of the adaptive graph scattering block are transformed to the temporal domain by way of an inverse DCT).

While Li does teach that its domain transform matrix weights W(k) are trainable weights, Li does not explicitly teach they are "deep learning" trained. Further, while Li does at least teach predicting motion features in an empty frame, Li does not explicitly teach that the empty frame is "corresponding to a frame between two observed frames." Sun teaches a domain transform matrix having elements determined in a deep learning training process (Sun pages 5–6, step S3, as discussed above regarding claim 1). Yoon teaches corresponding to a frame between two observed frames (Yoon pages 3–4, fig. 1, frame-rate up-conversion generating interpolated frames of a video between two frames of the original video (observed frames)). Therefore, taking the teachings of Li and Sun together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the trainable weights of Li to be determined in a deep learning process as disclosed by Sun, at least because doing so would reduce calculation cost and storage cost of image processing. See Sun Abstract.
Further, taking the teachings of Li as modified by Sun and Yoon together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the predicted frame of Li to be one that corresponds to a frame between two frames of an input video as disclosed by Yoon, at least because doing so would allow for increasing motion continuity/up-conversion of videos with a lower frame rate without being distinguishable to the human eye, and can, when used ethically, benefit modern people's lives by allowing nonprofessionals to edit video. See Yoon Abstract and Introduction.

Claims 3–5 and 12–14 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Sun in view of Yoon, as set forth above regarding claims 1 and 10, further in view of Mao et al., "Learning Trajectory Dependencies for Human Motion Prediction," 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019, pp. 9488-9496, doi: 10.1109/ICCV.2019.00958 (herein "Mao").

Regarding claims 3 and 12, with claim 3 as exemplary and with deficiencies of Li noted in square brackets [], Li teaches wherein the generating generates the motion features of the empty frame by using [trajectory information of] body joints included in the pose information in the transformed domain (Li page 857, graph adjacency matrix A is built to connect related body joints). Li does not explicitly teach, but Mao teaches, trajectory information of (Mao page 9490, section 3.1, figure 2, pose information is encoded in trajectory space). Therefore, taking the teachings of Li as modified above and Mao together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the data used to generate motion features disclosed in Li to include trajectory information as disclosed in Mao, at least because doing so would allow for capturing long-range dependencies and achieving state-of-the-art performance. See Mao Abstract.

Regarding claims 4 and 13, with claim 4 as exemplary, Li does not explicitly teach, but Mao teaches, wherein the generated motion features are implemented by a linear combination of a basis vector (Mao page 9490, left column, each human joint is represented as a linear combination of DCT bases (basis vectors), with motion features derived therefrom). Therefore, taking the teachings of Li as modified above and Mao together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the motion features disclosed in Li to be implemented as a linear combination of DCT bases as disclosed in Mao, at least because doing so would allow for capturing long-range dependencies and achieving state-of-the-art performance. See Mao Abstract.

Regarding claims 5 and 14, with claim 5 as exemplary, Li teaches wherein the generating uses a graph neural network (GNN) model, a transformer model, a convolutional neural network (CNN) model, a multi-layer perceptron (MLP) model, or a recurrent neural network (RNN) model when generating the motion features of the empty frame (Li page 856, sections 2.2 and 3.2, the disclosed skeleton graph scattering network is a type of graph neural network (GNN model)).

Claims 7, 11 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Sun in view of Yoon, as set forth above regarding claims 6 and 15, from which claims 7 and 16 respectively depend, further in view of Wang et al., "A Machine Learning Approach to Optimal Inverse Discrete Cosine Transform (IDCT) Design," arXiv:2102.00502v1 [cs.MM], January 31, 2021 (herein "Wang").

Regarding claims 7 and 16, with claim 7 as exemplary, Li teaches inversely transforming the generated motion features into the time domain (Li page 856, section 3.2, and page 855, figure 1, at the output layer (with the motion features) an inverse DCT is applied to recover the temporal information for prediction (time domain)), but does not explicitly teach, where Wang teaches, wherein the inversely transforming uses a deep learning-based inverse transform model, and wherein an inverse transform matrix used by the inverse transform model has elements determined in a training process (Wang section III, an IDCT transformation using a machine learning training process involving optimizing the IDCT matrix (an inverse transform model with (matrix) elements)). Therefore, taking the teachings of Li as modified above and Wang together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the IDCT disclosed in Li to include machine learning of the coefficients as disclosed in Wang, at least because doing so would improve the quality of the reconstructed data output without any additional cost in the decoding stage. See Wang, end of Introduction section.
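Wang's idea of determining the inverse-transform matrix in a training process can be pictured with a few lines of gradient descent. The sketch below is an illustration under stated assumptions (random training signals, a plain squared reconstruction loss, an arbitrary learning rate), not Wang's actual design; the learned inverse converges to the transpose of the orthonormal DCT matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
n, k = np.meshgrid(np.arange(N), np.arange(N))
D = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
D[0, :] /= np.sqrt(2)                   # orthonormal DCT-II (forward transform)

M = rng.standard_normal((N, N)) * 0.1   # trainable inverse-transform matrix
lr = 0.1
for _ in range(2000):
    X = rng.standard_normal((N, 32))    # random training signals
    C = D @ X                           # forward-transform coefficients
    err = M @ C - X                     # reconstruction error
    M -= lr * (err @ C.T) / X.shape[1]  # gradient step on mean ||M C - X||^2

print(np.allclose(M, D.T, atol=1e-3))   # True: learned inverse ~ the transpose
```

Because the orthonormal DCT is invertible, the zero-loss solution is unique and equals the transpose, which is what the final check confirms.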
Regarding claim 11, with deficiencies of Li noted in square brackets [], Li teaches a human motion generation method performed by a system comprising a processor and a memory, the method comprising (Li Abstract, title, and page 859, an NVIDIA Tesla V100 GPU executing processing of a skeleton graph scattering network that outputs human motion prediction): training, by the processor, a transform model which transforms a domain of pose information of a frame (Li page 857, section 3.3, in the graph scattering decomposition, DCT-formed pose features are matrix multiplied with graph adjacency matrix A and trainable weights W(k), where A and W together form a spectral transform matrix, to obtain a bank of spectrum features H(0), H(1), …, H(K) per equations 1 and 3 together, where the W(k) matrices are trainable weights, thus determined in a step of training) [wherein the training determines elements of a domain transform matrix used by the transform model]; a step of training, by the processor, a motion generation model which generates motion features of an empty frame [corresponding to a frame between two observed frames] in the transformed domain (Li pages 857–858, fig. 3, graph spectrum attention which generates the final representation of motion prediction features (prediction meaning a future/empty frame), including a weight matrix Wsp having trainable parameters and attention weights watt, which are a trainable vector, where the attention score is determined from the trained Wsp and watt, and thus includes a step of training); and [a step of training,] by the processor, an inverse transform [model] which inversely transforms the generated motion features into a time domain (Li page 855, fig. 1, output features from the last step of the adaptive graph scattering block are transformed to the temporal domain by way of an inverse DCT).

While Li does teach that its domain transform matrix weights W(k) are trainable weights, Li does not explicitly teach "wherein the training determines elements of a domain transform matrix used by the transform model." However, Sun teaches wherein the training determines elements of a domain transform matrix used by the transform model (Sun pages 5–6, step S3, a convolutional neural network model using a discrete cosine transform (DCT), using a series of frequency-domain weight matrices to represent convolution layers, where the DCT converts the CNN (a deep machine learning system) into the frequency domain, and where the CNN model is iteratively retrained (learning) to sparsify the weight matrices, the matrix weights therefore being determined in a deep learning process of training the CNN). Further, Li does not explicitly teach, but Yoon teaches, corresponding to a frame between two observed frames (Yoon pages 3–4, fig. 1, frame-rate up-conversion generating interpolated frames of a video between two frames of the original video (observed frames)). Still further, Li does not explicitly teach, but Wang teaches, a step of training an inverse transform model (Wang section III, an IDCT transformation using a machine learning training process involving optimizing the IDCT matrix (an inverse transform model with (matrix) elements)).

Therefore, taking the teachings of Li and Sun together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the trainable weights of Li to be determined in a deep learning process as disclosed by Sun, at least because doing so would reduce calculation cost and storage cost of image processing. See Sun Abstract. Further, taking the teachings of Li as modified by Sun and Yoon together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the predicted frame of Li to be one that corresponds to a frame between two frames of an input video as disclosed by Yoon, at least because doing so would allow for increasing motion continuity/up-conversion of videos with a lower frame rate without being distinguishable to the human eye, and can, when used ethically, benefit modern people's lives by allowing nonprofessionals to edit video. See Yoon Abstract and Introduction. Still further, taking the teachings of Li as modified by Sun, Yoon, and Wang together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the IDCT disclosed in Li to include machine learning of the coefficients as disclosed in Wang, at least because doing so would improve the quality of the reconstructed data output without any additional cost in the decoding stage. See Wang, end of Introduction section.

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Sun in view of Yoon, as set forth above regarding claims 6 and 15, from which claims 9 and 18 respectively depend, further in view of Lukman et al., "Discrete Cosine Transform Method for Watermarking in Digital Image Processing," 2021 IEEE 7th International Conference on Computing, Engineering and Design (ICCED), Sukabumi, Indonesia, 2021, pp. 1-6, doi: 10.1109/ICCED53389.2021.9664853 (herein "Lukman").
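The transpose-matrix IDCT that Lukman is cited for rests on the orthogonality of the DCT matrix: for an orthonormal transform, the inverse equals the transpose. A quick NumPy check (the DCT-II construction below is the standard orthonormal form, shown for illustration, not Lukman's exact formulation):

```python
import numpy as np

# Orthonormal DCT-II matrix (standard construction, for illustration).
N = 8
n, k = np.meshgrid(np.arange(N), np.arange(N))
D = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
D[0, :] /= np.sqrt(2)

x = np.random.default_rng(1).standard_normal(N)
coeffs = D @ x                  # forward transform (domain transform matrix D)
x_rec = D.T @ coeffs            # inverse via the transpose matrix, as in Lukman

print(np.allclose(D.T @ D, np.eye(N)))  # True: D is orthogonal
print(np.allclose(x, x_rec))            # True: transpose inverts the transform
```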
Regarding claims 9 and 18, with claim 9 as exemplary, while Li as modified above teaches wherein the inversely transforming uses a matrix of the domain transform matrix to inversely transform the generated motion features into the time domain (Li page 855, left column, figure 1, an inverse DCT is applied to recover the output features to the temporal domain, where fig. 1 illustrates that the IDCT would be a matrix transformation as it transforms the averaged attention scores into two-dimensional predicted motions), Li does not explicitly teach, but Lukman teaches, a transpose matrix of the domain transform matrix (Lukman sections III(A)(b) and III(A)(d), IDCT coefficients obtained by multiplying a transpose matrix and the DCT coefficient matrix). Therefore, taking the teachings of Li as modified by Sun and Yoon and Lukman together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the IDCT disclosed in Li to use a transpose matrix as disclosed in Lukman, at least because doing so would allow for clearing unnecessary DCT coefficients, thus reducing computational complexity. See Lukman section II(B).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M KOETH, whose telephone number is (571) 272-5908. The examiner can normally be reached Monday to Thursday, 09:00-17:00, and Friday, 09:00-13:00, EDT/EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MICHELLE M KOETH/
Primary Examiner, Art Unit 2671

Prosecution Timeline

Dec 07, 2023
Application Filed
Oct 17, 2025
Non-Final Rejection — §103
Jan 20, 2026
Response Filed
Feb 28, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586221: METHOD AND APPARATUS FOR ESTIMATING DEPTH INFORMATION OF IMAGES
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579651: IMPEDED DIFFUSION FRACTION FOR QUANTITATIVE IMAGING DIAGNOSTIC ASSAY
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12567241: Method For Generating Training Data Used To Learn Machine Learning Model, System, And Non-Transitory Computer-Readable Storage Medium Storing Computer Program
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12567177: METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR IMAGE PROCESSING
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12566493: METHODS AND SYSTEMS FOR EYE-GAZE LOCATION DETECTION AND ACCURATE COLLECTION OF EYE-GAZE DATA
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 94% (+16.7%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 429 resolved cases by this examiner. Grant probability derived from career allow rate.
