Prosecution Insights
Last updated: April 19, 2026
Application No. 18/250,498

ONLINE LEARNING METHOD AND SYSTEM FOR ACTION RECOGNITION

Status: Final Rejection (§103)
Filed: Apr 25, 2023
Examiner: GOEBEL, EMMA ROSE
Art Unit: 2662
Tech Center: 2600 (Communications)
Assignee: Intel Corporation
OA Round: 2 (Final)

Grant Probability: 53% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Grants 53% of resolved cases.

Career Allow Rate: 53% (24 granted / 45 resolved; -8.7% vs TC avg)
Interview Lift: +47.0% (strong), comparing resolved cases with vs. without interview
Typical Timeline: 3y 0m avg prosecution; 40 applications currently pending
Career History: 85 total applications across all art units

Statute-Specific Performance

§101: 18.2% (-21.8% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)
Note: Tech Center averages are estimates. Based on career data from 45 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgement is made of Applicant’s claim of priority from PCT Application No. PCT/CN2020/132814, filed November 30, 2020.

Response to Arguments

Applicant’s arguments, see p. 6, filed October 9, 2025, with respect to the 35 USC 101 rejections have been fully considered and are persuasive. The amendment of the claims to include “non-transitory” has overcome the previous rejection, which has therefore been withdrawn.

Applicant’s arguments, see pp. 6-8, filed October 9, 2025, with respect to the 35 USC 103 rejections have been fully considered but are not persuasive. Applicant argues that the combination of the Deutsch, Bernal, and Biswas references does not teach "determine if the visual features indicate an unseen action in the video stream." Examiner respectfully disagrees. In Col. 13 line 43 – Col. 14 line 18 of the Deutsch reference, a process is described wherein labels are determined for unseen instances based on features (i.e., visual features) included in the unseen instance that the system was trained on (e.g., cones, orange coloring, construction workers for an unseen instance of a construction zone). Examiner asserts that this limitation is taught because Deutsch teaches that a label can be created for visual features that are determined to be an unseen action.

Applicant further argues that the combination does not at least describe "if no unseen action is determined, apply an offline classification model to the visual features to identify seen actions, assign identifiers to the identified seen actions, transform the visual features from the visual domain into mixed features in the mixed domain, and store the mixed features and seen action identifiers in the feature database." Examiner respectfully disagrees.
As described in the Non-Final Office Action mailed June 9, 2025, Deutsch teaches to “transform the visual features from the visual domain into mixed features in the mixed domain” (see Deutsch, Col. 13, lines 43-52). Deutsch is not relied upon to teach “if no unseen action is determined, apply an offline classification model to the visual features to identify seen actions” and “assign identifiers to the identified seen actions”. However, in an analogous field of endeavor, Bernal is relied upon to teach these limitations because Bernal teaches an offline action classification module to assign video segments to classes of previously seen actions (see Bernal, Paras. [0051]-[0052]). Examiner asserts that this is sufficient to teach “if no unseen action is determined, apply an offline classification model to the visual features to identify seen actions” and “assign identifiers to the identified seen actions” because it is inherent that if the identifiers are classified as seen actions, then no unseen action is determined. One having ordinary skill in the art would be motivated to combine Bernal with Deutsch to apply Deutsch’s transformation of visual features into mixed features in the mixed domain to Bernal’s identified seen actions.

Deutsch and Bernal are not relied upon to teach the final part of the limitation; however, in an analogous field of endeavor, Biswas teaches to “store the mixed features and seen action identifiers in the feature database” because Biswas teaches labels and extracted low-level features may be stored in a database (see Biswas, Col. 10 line 54 – Col. 11 line 11 and Col. 12, lines 1-7). Applicant argues that the Biswas reference cannot be used to store the mixed features of Deutsch; however, Examiner asserts that the database of Biswas can store object-based features and motion boundary histograms (MBHs) and could therefore store the mixed features of Deutsch and seen action identifiers of Bernal.
Finally, Applicant argues that the combination does not at least describe "if an unseen action is determined, transform the visual features from the visual domain into mixed features in the mixed domain, apply a continual learner model to mixed features from the feature database to identify unseen actions in the video stream, assign identifiers to the identified unseen actions, and store the unseen action identifiers in the feature database." Applicant argues that this limitation is not taught for the reasons given in the argument above, and because Deutsch does not describe a continual learner model. Examiner respectfully disagrees.

Deutsch teaches that the system incorporates semantic attributes on top of low level features in order to learn and classify new classes which are disjoint from the training data (see Deutsch, Col. 1, lines 43-57) and that labels of unseen instances are learned by learning the relationship between the visual and semantic representation and then using the semantic representation to estimate the unseen data (see Deutsch, Col. 9, lines 45-53). Examiner asserts that Deutsch does teach continual learning because Deutsch’s system aims to continually learn unseen instances by determining labels of unseen data using seen data. Therefore, Deutsch in view of Bernal further in view of Biswas teaches every limitation of this claim, and the 35 USC 103 rejections are upheld. Consequently, THIS ACTION IS FINAL.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 4-5, 7-8, 10-11, 13-14, 17-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Deutsch et al. (US 10,592,788 B2) in view of Bernal et al. (US 2017/0255831 A1) further in view of Biswas et al. (US 9,830,516 B1).
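Before the element-by-element mapping, it may help to see the claim 1 control flow the rejection addresses laid out as code. This is a hedged sketch only: every function, threshold, and label below is an illustrative placeholder, not something taught by the claims or the cited references.

```python
import numpy as np

feature_db = []  # list of (mixed_feature, action_identifier) pairs

def to_mixed_domain(visual_features):
    """Placeholder visual-to-mixed transform (here: simple normalization)."""
    v = np.asarray(visual_features, dtype=float)
    return v / (np.linalg.norm(v) + 1e-9)

def looks_unseen(visual_features, known, threshold=0.5):
    """Gate: flag an unseen action if the features sit far from every known one."""
    if not known:
        return True
    dists = [np.linalg.norm(np.asarray(k) - visual_features) for k in known]
    return min(dists) > threshold

def process(visual_features, known, offline_classify, continual_learner):
    """Route a feature vector per the claim 1 branching."""
    mixed = to_mixed_domain(visual_features)
    if looks_unseen(visual_features, known):
        ident = continual_learner(mixed, feature_db)  # identify unseen action
        feature_db.append((mixed, ident))             # store unseen identifier
    else:
        ident = offline_classify(visual_features)     # identify seen action
        feature_db.append((mixed, ident))             # store mixed + identifier
    return ident

# Demo with stub classifiers: a feature near the one known "wave" exemplar
# takes the offline (seen) branch.
ident = process(np.array([0.1, 0.0]), [np.array([0.0, 0.0])],
                offline_classify=lambda f: "seen:wave",
                continual_learner=lambda m, db: "unseen:new")
```

A distant input (e.g. `[5.0, 5.0]`) would instead take the continual-learner branch; the threshold-based gate is an arbitrary stand-in for the determination step.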
Regarding claim 1, Deutsch teaches at least one computer-readable medium having stored thereon instructions (Deutsch, Col. 6, lines 1-9, the computer program product generally represents computer-readable instructions stored on a non-transitory computer-readable medium) which, when executed, cause a computing device to perform operations comprising:

extract semantic features in a semantic domain from semantic action labels (Deutsch, Col. 11, lines 29-55, for the semantic representation, the Word2Vec public dataset was used, where each instance is represented by a 100 dimensional semantic vector. Col. 8, lines 11-26, zero shot learning enables recognition of unseen or untrained patterns (e.g., objects in an image or video) with no training by utilizing semantic attribute descriptions of the patterns);

transform the semantic features from the semantic domain into mixed features in a mixed domain (Deutsch, Col. 11, lines 29-55, the semantic information for the training data is propagated to the joint embedding space to share information between disjoint classes);

extract visual features in a visual domain from a video stream (Deutsch, Col. 11, lines 29-55, to represent the visual features, the deep learning pre-trained GoogleNet features were used);

determine if the visual features indicate an unseen action in the video stream (Deutsch, Col. 13, lines 53-66, the unseen instances are specific objects, items, or features that the system was not literally trained on that include features that the system was trained on); (Deutsch, Col. 13, lines 43-52, a graph is generated based on visual features in the input data. The semantic representations are aligned with visual representations of the input data using a regularization method. Col.
9, lines 1-9, it allows one to align the visual-semantic spaces locally, while taking into account the fine-grain regularity properties of the joint visual-semantic attribute spaces (i.e., mixed domain)), if an unseen action is determined, transform the visual features from the visual domain into mixed features in the mixed domain (Deutsch, Col. 13, lines 43-52, a graph is generated based on visual features in the input data. The semantic representations are aligned with visual representations of the input data using a regularization method. Col. 9, lines 1-9, it allows one to align the visual-semantic spaces locally, while taking into account the fine-grain regularity properties of the joint visual-semantic attribute spaces (i.e., mixed domain)), apply a continual learner model to mixed features from the feature database to identify unseen actions in the video stream, assign identifiers to the identified unseen actions (Deutsch, Col. 13, lines 43-52, the semantic representations are aligned with visual representations of the input data using a regularization method. The semantic representations are used to estimate labels for unseen instances. Col. 12, lines 46-58, FIG. 3 is a plot showing the average percentage of the correct same-class k nearest neighbors from the same unseen class in the noisy Word2Vec semantic space (represented by unfilled bars), evaluated for k∈{1, 3, . . . , 37}, and after using the regularization process (represented by filled bars) disclosed herein for a wide range of k nearest neighbor parameter. As can be seen, after performing alignment using the approach of the invention, the average percentage of k nearest neighbors from the same unseen class has improved significantly compared to the noisy semantic space, which indicates the effectiveness and robustness of the alignment process.
Moreover, due to the multi-resolution properties of Spectral Graph Wavelets, the regularization method performed well for a wide range of k nearest neighbor selections).

Although Deutsch teaches estimating labels for unseen action instances (Deutsch, Col. 13, lines 43-52), Deutsch does not explicitly teach “if no unseen action is determined, apply an offline classification model to the visual features to identify seen actions, assign identifiers to the identified seen actions”. However, in an analogous field of endeavor, Bernal teaches an action classification module assigns incoming frames, or video segments, to at least one of, and potentially multiple, previously seen action classes according to their feature representations (Bernal, Para. [0051]). The action classification module may comprise a classifier that is trained in an offline stage (Bernal, Para. [0052]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the computer readable medium of Deutsch with the teachings of Bernal by including applying an offline action classification module to video frames to assign them to seen action classes according to the visual features. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for assigning an action class to video data based on previously seen action classes, as recognized by Bernal.

Although Deutsch in view of Bernal teaches unseen action identifiers and mixed features (Deutsch, Col. 13, lines 43-52), they do not explicitly teach to “store the mixed features and seen action identifiers in the feature database” and “store the unseen action identifiers in the feature database”. However, in an analogous field of endeavor, Biswas teaches a table with exemplary activity labels may be stored in the database for use by various modules of the activity analysis device (Biswas, Col. 10 line 54 - Col.
11 line 11) and extracted low-level features may be stored in the database for use by the training module (Biswas, Col. 12, lines 1-7). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-readable medium of Deutsch in view of Bernal with the teachings of Biswas by including a feature database for storing action identifiers and mixed features. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for accurate video classification by communicating with a database, as recognized by Biswas. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date. Regarding claim 4, Deutsch in view of Bernal further in view of Biswas teaches the computer-readable medium of claim 1, and further teaches wherein the continual learner model applies a K nearest neighbors process to the mixed features to identify unseen actions (Deutsch, FIG. 3 is a plot showing the average percentage of the correct same-class k nearest neighbors from the same unseen class in the noisy Word2Vec semantic space (represented by unfilled bars), evaluated for k∈{1, 3, . . . , 37}, and after using the regularization process (represented by filled bars) disclosed herein for a wide range of k nearest neighbor parameter. As can be seen, after performing alignment using the approach of the invention, the average percentage of k nearest neighbors from the same unseen class has improved significantly compared to the noisy semantic space, which indicates the effectiveness and robustness of the alignment process. Moreover, due to the multi-resolution properties of Spectral Graph Wavelets, the regularization method performed well for a wide range of k nearest neighbor selections). 
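The claim 4 mapping reads Deutsch's regularized nearest-neighbor evaluation as a K nearest neighbors process over mixed features. As a concrete illustration (the feature database and labels below are invented for this sketch, not taken from Deutsch), a kNN lookup against a stored feature database might look like:

```python
import numpy as np

def knn_identify(query, feature_db, labels, k=3):
    """Assign an action identifier to a mixed-domain feature vector by
    majority vote among its k nearest neighbors in the feature database."""
    dists = np.linalg.norm(feature_db - query, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                     # indices of k closest
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)             # majority vote

# Toy feature database: two clusters standing in for two action classes.
db = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
               [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])
ids = ["wave", "wave", "wave", "jump", "jump", "jump"]

label = knn_identify(np.array([0.05, 0.05]), db, ids)  # → "wave"
```

The continual-learning aspect would come from appending each newly labeled feature back into `db`, so later queries can match against it; that step is omitted here for brevity.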
Regarding claim 5, Deutsch in view of Bernal further in view of Biswas teaches the computer-readable medium of claim 1, wherein the offline classification model recognizes human actions using a video action transformer network (Biswas, Col. 19, lines 27-44, one activity segment is identified that corresponds to a user activity from the plurality of segments using a trained activity model. Once the live dataset is optimally segmented into multiple segments based on dynamic programming, the JSC module may identify a segment that corresponds to a user activity being related to a predefined activity class. Such identification of the activity segment may be performed based on a predefined activity model stored in the database). The proposed combination as well as the motivation for combining the Deutsch, Bernal, and Biswas references presented in the rejection of Claim 1, apply to Claim 5 and are incorporated herein by reference. Thus, the computer-readable medium recited in Claim 5 is met by Deutsch in view of Bernal further in view of Biswas. Regarding claim 7, Deutsch in view of Bernal further in view of Biswas teaches the computer-readable medium of claim 1, and further teaches wherein action identifiers are associated with action categories (Biswas, Col. 10 line 54 – Col. 11 line 4, each video frame may be pre-segmented based on a predefined label or class such as those shown in FIG. 4, which is a table 400 that illustrates exemplary labels associated with a mean duration of the corresponding activities in the training dataset 220. These labels or classes, namely, “Combing hair,” “Make-up,” “Brushing Teeth,” “Washing hands/face,” “Laundry,” “Washing dishes,” “Moving dishes,” “Making tea/coffee,” “Vacuuming,” “Watching TV,” “Using computer,” “Using cell,” etc., are mentioned under a column “label category”). 
The proposed combination as well as the motivation for combining the Deutsch, Bernal, and Biswas references presented in the rejection of Claim 1 apply to Claim 7 and are incorporated herein by reference. Thus, the computer-readable medium recited in Claim 7 is met by Deutsch in view of Bernal further in view of Biswas.

Claims 8, 10-11, and 13 recite systems with elements corresponding to the elements recited in Claims 1, 4-5, and 7, respectively. Therefore, the recited elements of these claims are mapped to the proposed combination in the same manner as the corresponding steps in their corresponding computer readable storage medium claims. Additionally, the rationale and motivation to combine the Deutsch, Bernal, and Biswas references, presented in the rejection of Claim 1, apply to these claims. Finally, the combination of the Deutsch, Bernal, and Biswas references discloses a processing device and memory device (Deutsch, Col. 6, lines 22-58, one or more data processing units, such as a processor. The computer system may include a volatile memory unit).

Claims 14, 17-18 and 20 recite methods with steps corresponding to the elements of the computer readable storage mediums recited in Claims 1, 4-5, and 7. Therefore, the recited steps of these claims are mapped to the proposed combination in the same manner as the corresponding elements in their corresponding CRM claims. Additionally, the rationale and motivation to combine the Deutsch, Bernal, and Biswas references, presented in the rejection of Claim 1, apply to these claims.

Claims 2, 9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Deutsch et al. (US 10,592,788 B2) in view of Bernal et al. (US 2017/0255831 A1) further in view of Biswas et al. (US 9,830,516 B1), as applied to claims 1, 4-5, 7-8, 10-11, 13-14, 17-18, and 20 above, and further in view of Cheng et al. (US 2014/0161322 A1).
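Cheng is cited for a binary seen/unseen classifier trained in the feature space (its Para. [0035] teaching is discussed in the claim 2 analysis that follows). A minimal stand-in for such a gate, here plain logistic regression on an invented 2-D feature space rather than Cheng's actual model, could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: label 1 = features from seen activity classes,
# label 0 = features standing in for unseen classes (all data illustrative).
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # "seen" cluster near origin
               rng.normal(3.0, 0.3, (50, 2))])  # "unseen" cluster
y = np.array([1] * 50 + [0] * 50)

# Plain batch-gradient-descent logistic regression.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def is_seen(features):
    """Binary output value: True if the classifier scores 'seen' above 0.5."""
    return bool(1.0 / (1.0 + np.exp(-(features @ w + b))) > 0.5)
```

In the claimed pipeline, a `False` result from such a gate would route the features to the continual learner branch rather than the offline classifier.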
Regarding claim 2, Deutsch in view of Bernal further in view of Biswas teaches the computer-readable medium of claim 1, as described above. Although Deutsch in view of Bernal further in view of Biswas teaches estimating labels for unseen action instances (Deutsch, Col. 13, lines 43-52), they do not explicitly teach “wherein determining if the visual features indicate an unseen action in the video stream comprises applying a machine learning (ML) classifier with a binary output value”. However, in an analogous field of endeavor, Cheng teaches a binary classifier can be trained in the feature space, where a first set of activity classes includes samples or training data from all seen activity classes (i.e., having associated training data) and a second set of activity classes represents any activity class that is unseen and does not have corresponding training data (Cheng, Para. [0035]).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-readable medium of Deutsch in view of Bernal further in view of Biswas with the teachings of Cheng by including a binary classifier that determines a first set of activity classes including seen activity classes and a second set of activity classes representing unseen activity classes. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for detecting and selecting activities in an unseen class for performing an attribute-based activity classification, as recognized by Cheng. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claim 9 recites a system with elements corresponding to the elements recited in Claim 2.
Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding computer readable storage medium claim. Additionally, the rationale and motivation to combine the Deutsch, Bernal, Biswas, and Cheng references, presented in the rejection of Claim 2, apply to this claim. Finally, the combination of the Deutsch, Bernal, Biswas, and Cheng references discloses a processing device and memory device (Deutsch, Col. 6, lines 22-58, one or more data processing units, such as a processor. The computer system may include a volatile memory unit).

Claim 15 recites a method with steps corresponding to the elements of the computer readable storage medium recited in Claim 2. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements in its corresponding CRM claim. Additionally, the rationale and motivation to combine the Deutsch, Bernal, Biswas, and Cheng references, presented in the rejection of Claim 2, apply to this claim.

Claims 3 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Deutsch et al. (US 10,592,788 B2) in view of Bernal et al. (US 2017/0255831 A1) further in view of Biswas et al. (US 9,830,516 B1) and Cheng et al. (US 2014/0161322 A1), as applied to claims 2, 9, and 15 above, and further in view of Mancini et al. (“Towards Recognizing Unseen Categories in Unseen Domains”).

Regarding claim 3, Deutsch in view of Bernal further in view of Biswas and Cheng teaches the computer-readable medium of claim 2, as described above. Although Deutsch in view of Bernal further in view of Biswas and Cheng teaches a classifier for determining unseen categories (Cheng, Para. [0035]), they do not explicitly teach “wherein the operations comprise training the ML classifier using a generative adversarial network to generate unseen visualization features from semantic features”.
However, in an analogous field of endeavor, Mancini teaches generating samples of unseen classes by learning a generative function conditioned on the semantic embeddings in order to access visual data associated to categories and data of the unseen domains (Mancini, p. 471, section 3.2). Mancini further teaches tackling zero-shot learning from a generative point of view considering Generative Adversarial Networks (Mancini, p. 470).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-readable medium of Deutsch in view of Bernal further in view of Biswas and Cheng with the teachings of Mancini by including a generative adversarial network to generate samples of unseen classes by learning a generative function conditioned on the semantic embeddings. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for accessing visual data of the unseen categories, as recognized by Mancini. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claim 16 recites a method with steps corresponding to the elements of the computer readable storage medium recited in Claim 3. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements in its corresponding CRM claim. Additionally, the rationale and motivation to combine the Deutsch, Bernal, Biswas, Cheng and Mancini references, presented in the rejection of Claim 3, apply to this claim.

Claims 6, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Deutsch et al. (US 10,592,788 B2) in view of Bernal et al. (US 2017/0255831 A1) further in view of Biswas et al. (US 9,830,516 B1), as applied to claims 1, 4-5, 7-8, 10-11, 13-14, 17-18, and 20 above, and further in view of Chen et al.
(US 2021/0027066 A1, filed February 28, 2020).

Regarding Claim 6, Deutsch in view of Bernal further in view of Biswas teaches the computer-readable medium of claim 1, wherein semantic features are extracted, the semantic features are transformed into mixed features, and the mixed features are stored in the feature database, in a training phase (Deutsch, Col. 11, lines 29-55, the semantic information for the training data is propagated to the joint embedding space to share information between disjoint classes. Biswas, Col. 12, lines 1-7, extracted low-level features may be stored in the database for use by the training module). The proposed combination as well as the motivation for combining the Deutsch, Bernal, and Biswas references presented in the rejection of Claim 1 apply to Claim 6 and are incorporated herein by reference.

Although Deutsch in view of Bernal further in view of Biswas teaches extracting visual features (Deutsch, Col. 11, lines 29-55), they do not explicitly teach “wherein extracting visual features comprises applying an offline I3D classification model to the video stream”. However, in an analogous field of endeavor, Chen teaches an I3D to capture appearance and temporal dynamics of input video frames provided by the vehicle camera system (Chen, Para. [0044]). The I3D may be configured to receive a plurality of anchor boxes upon one or more objects that may be located within the surrounding environment of the vehicle, as included within the target frame(s) of length T frames, and generate a corresponding temporal feature representation using the TF feature extractor (Chen, Para. [0045]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-readable medium of Deutsch in view of Bernal further in view of Biswas with the teachings of Chen by including an I3D classification model to extract visual features from the video stream.
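The role an I3D backbone plays here is reducing a (frames, height, width) video clip to a fixed feature representation via spatio-temporal convolution and pooling. The toy function below mimics only that role; the kernel, shapes, and values are placeholders with no relation to the real I3D architecture or Chen's configuration.

```python
import numpy as np

def toy_3d_features(video, kernel):
    """Toy spatio-temporal feature extractor: valid 3D cross-correlation
    over (time, height, width), then global average pooling to one scalar.
    Mimics the role, not the architecture, of an I3D backbone."""
    t, h, w = video.shape
    kt, kh, kw = kernel.shape
    out = np.empty((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(video[i:i+kt, j:j+kh, k:k+kw] * kernel)
    return out.mean()  # global average pool -> one "feature"

clip = np.ones((4, 8, 8))            # constant 4-frame toy clip
kern = np.full((2, 3, 3), 1 / 18.0)  # averaging kernel (sums to 1)
feat = toy_3d_features(clip, kern)   # → 1.0 for a constant clip
```

A real extractor would emit a feature vector (and use learned kernels); collapsing to a scalar keeps the sketch readable.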
One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for completing spatio-temporal action localization of individuals and actions that occur within an environment, as recognized by Chen. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Claim 12 recites a system with elements corresponding to the elements recited in Claim 6. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding computer readable storage medium claim. Additionally, the rationale and motivation to combine the Deutsch, Bernal, Biswas, and Chen references, presented in the rejection of Claim 6, apply to this claim. Finally, the combination of the Deutsch, Bernal, Biswas, and Chen references discloses a processing device and memory device (Deutsch, Col. 6, lines 22-58, one or more data processing units, such as a processor. The computer system may include a volatile memory unit).

Claim 19 recites a method with steps corresponding to the elements of the computer readable storage medium recited in Claim 6. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements in its corresponding CRM claim. Additionally, the rationale and motivation to combine the Deutsch, Bernal, Biswas, and Chen references, presented in the rejection of Claim 6, apply to this claim.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emma Rose Goebel whose telephone number is (703) 756-5582. The examiner can normally be reached Monday - Friday, 7:30-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Emma Rose Goebel/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662

Prosecution Timeline

Apr 25, 2023: Application Filed
Jun 05, 2025: Non-Final Rejection (§103)
Oct 09, 2025: Response Filed
Nov 18, 2025: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597236: FINE-TUNING JOINT TEXT-IMAGE ENCODERS USING REPROGRAMMING (2y 5m to grant; granted Apr 07, 2026)
Patent 12597129: METHOD FOR ANALYZING IMMUNOHISTOCHEMISTRY IMAGES (2y 5m to grant; granted Apr 07, 2026)
Patent 12597093: UNDERWATER IMAGE ENHANCEMENT METHOD AND IMAGE PROCESSING SYSTEM USING THE SAME (2y 5m to grant; granted Apr 07, 2026)
Patent 12597124: DEBRIS DETERMINATION METHOD (2y 5m to grant; granted Apr 07, 2026)
Patent 12588885: FAT MASS DERIVATION DEVICE, FAT MASS DERIVATION METHOD, AND FAT MASS DERIVATION PROGRAM (2y 5m to grant; granted Mar 31, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 53%
With Interview: 99% (+47.0%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate

Based on 45 resolved cases by this examiner. Grant probability derived from career allow rate.
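The displayed figures are consistent with treating the +47.0-point interview lift as an additive percentage-point boost on the 53% base rate, capped just below certainty. Note this combination rule is an assumption about the dashboard's arithmetic, not anything documented on the page.

```python
def with_interview(base_pct, lift_pct, cap_pct=99.0):
    """Assumed rule: additive percentage-point interview lift, capped
    just below certainty (an inference from the displayed numbers)."""
    return min(base_pct + lift_pct, cap_pct)

print(with_interview(53.0, 47.0))  # → 99.0, matching the displayed figure
```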
