Prosecution Insights
Last updated: April 19, 2026
Application No. 18/664,892

DEEP LEARNING FOR MODELING DISEASE PROGRESSION

Non-Final OA — §101, §103
Filed: May 15, 2024
Examiner: JAMES, DOMINIQUE NICOLE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Genentech Inc.
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Grants 76% — above average
Career Allow Rate: 76% (16 granted / 21 resolved; +14.2% vs TC avg)
Interview Lift: +38.5% among resolved cases with interview (strong)
Typical Timeline: 3y 4m avg prosecution; 27 currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 14.6% (-25.4% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 21 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

This action is in response to the application filed on May 15, 2024. Claims 1-16, 18, 23, 29, and 31 are pending and have been examined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim(s) 1, 3-16, 18, 23, 29, and 31 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claims 1, 16, and 31, these claims recite the following limitations, which are found to be abstract ideas not reciting a practical application or significantly more, with claim 1 being exemplary: generating, by a machine learning model, a first feature representation based on clinical data associated with a baseline cognitive state of a patient (abstract idea as a mental process, as a doctor or clinician is capable of collecting and storing data associated with a baseline cognitive state of a patient); generating, by the machine learning model, a second feature representation based on an image of a brain of the patient (abstract idea as a mental process, as a doctor or clinician is capable of collecting and storing data associated with a brain image of a patient); generating, by the machine learning model, a set representation by at least fusing the first feature representation and the second feature representation (abstract idea as a mental process, as a doctor or clinician is capable of comparing the cognitive state of a patient and a brain image of a patient). This judicial exception is not integrated into a practical application for the following reasons.
Claims 1, 16, and 31 all recite the additional element of “and predicting, by the machine learning model, a change in the baseline cognitive state over a time period based at least on the set representation”; however, this limitation also recites an abstract idea as a mental process, as a doctor or clinician is capable of observing and comparing the cognitive state and brain image of a patient over a time period. Claim 31 further recites the additional element of “a non-transitory computer-readable storage medium.” While this limitation includes an additional element, it is not sufficient to recite a practical application of the abstract ideas recited in claim 31, as it amounts to mere generic computer elements and thus amounts to no more than a recitation of the words “apply it” (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer. See MPEP 2106.05(f). Further, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and in combination, the above-recited additional element from claim 31 does not add significantly more (also known as an “inventive concept”) to the exception. Rather, the additional elements disclosed above perform well-understood, routine, conventional computer functions. Therefore, independent claims 1, 16, and 31 are directed towards an abstract idea without a practical application or significantly more.

Regarding claims 3-15, 18, 23, and 29, the limitations are merely directed towards insignificant pre/post-solution extra activity that nonetheless does not integrate the abstract idea recited in claim 1 into a practical application.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-3, 5-6, 12-13, 16, and 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kollada et al., US 2024/0221950, in view of Karow et al., US 2020/0027557.

Regarding claim 1, Kollada teaches a system, comprising: a processor; and a memory storing instructions which, when executed by the processor, result in operations comprising (see Kollada, Paragraph [0048], “Non-transitory memory 106 may further store training module 110, which includes instructions for training the multi-modal product fusion model stored in the machine learning module 108.
Training module 110 may include instructions that, when executed by processor 104, cause mental health processing system 102 train one or more subnetworks in the product fusion model”): generating, by a machine learning model (see Kollada, Paragraph [0048], “a multi-modal machine learning module 108”), a first feature representation based on clinical data associated with a baseline cognitive state of a patient (see Kollada, Paragraph [0034], “‘mental health’ refers to an individual's psychological, emotional, cognitive, or behavioral state or a combination thereof,” Paragraph [0015], “a first data modality from a first type of sensor to output a first data representation comprising a first set of mental health features,” and Paragraph [0054], “for a mental health evaluation based on a first modality data, a second modality data, and a third modality data, a combined data representation is obtained using a first tensor, a second tensor, and a third tensor, wherein the first tensor comprises a first data representation of all of the first modality data,” the first modality of data is considered to be a baseline cognitive state of a patient and a first data representation of the first modality data is considered to be a first feature representation); generating, by the machine learning model (see Kollada, Paragraph [0048], “a multi-modal machine learning module 108”), a second feature representation based on an image of a brain of the patient (see Kollada, Paragraph [0054], “the second tensor comprises a second data representation of all of the second modality data,” second modality data is considered to be an image of a brain of the patient and second data representation is considered a second feature representation and Paragraph [0068], “The plurality of modalities 201 may further include one or more medical imaging devices 208.
Medical image data from one or more medical imaging devices may be utilized to obtain brain structure and functional information for mental health diagnosis”); generating, by the machine learning model (see Kollada, “a multi-modal machine learning module 108”), a set representation by at least fusing the first feature representation and the second feature representation (see Kollada, Paragraph [0054], “The product fusion model 138 further includes a modality combination logic 143 to process the data representations to output a combined data representation comprising products of each set of features … a combined data representation is obtained using a first tensor, a second tensor, and a third tensor”). Kollada does not expressly teach and predicting, by the machine learning model, a change in the baseline cognitive state over a time period based at least on the set representation. However, Karow, in a similar invention in the same field of endeavor, teaches and predicting, by the machine learning model, a change in the baseline cognitive state over a time period based at least on the set representation (see Karow, Paragraph [0114], “two or more modalities of data (e.g. medical imaging, genotyping, laboratory screening for biomarkers, blood tests, demographics, cognitive testing, etc.) are combined to predict an individual's risk for developing dementia in his/her lifetime and identify actionable risk factors (e.g., blood pressure, cortisol levels, medications, BMI, cholesterol, diet, etc.) to mitigate that risk,” and Paragraph [0121], “These genomic and imaging features are used to train the multimodal models that predict the likelihood of an individual's progression to dementia,” progression to dementia is considered a change in the baseline cognitive state over a time period; the two or more modalities of data are combined, which is considered to be the set representation).
Kollada and Karow are analogous art because they are both in the same field of endeavor of multi-modal processing for disease monitoring. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to predict an individual’s likelihood of progression to dementia (where the risk of the onset of Alzheimer's evolves with time, and models trained with MRI and GWAS are observed to outperform other models after 12 months), as taught in the system of Karow, in the device of Kollada, to make precise dementia risk predictions for individuals and identify actionable risk factors for the same (Karow, Paragraph [0002]).

Regarding claim 2, Kollada in view of Karow teaches the system of claim 1, wherein the fusing is performed using one or more fusion techniques including at least one of: concatenation, summation, simple attention, scaled dot product attention, applying a tensor fusion network, low rank fusion, and unidirectional contextual attention (see Kollada, Paragraph [0054], “the modality combination logic 143 includes a tensor fusion model”). The rationale of claim 1 has been applied herein.
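For orientation, three of the fusion techniques recited in claim 2 (concatenation, summation, and tensor fusion) can be sketched in a few lines of NumPy. This is a minimal illustration only: the feature values and dimensions are hypothetical and are not drawn from the claims or the cited references.

```python
import numpy as np

# Hypothetical feature representations for two modalities; values and
# dimensions are illustrative, not taken from the cited references.
clinical_feat = np.array([0.2, 0.7, 0.1])   # first feature representation
imaging_feat = np.array([0.5, 0.3, 0.9])    # second feature representation

# Concatenation: stack the two vectors end to end.
fused_concat = np.concatenate([clinical_feat, imaging_feat])  # shape (6,)

# Summation: element-wise sum (requires matching dimensions).
fused_sum = clinical_feat + imaging_feat                      # shape (3,)

# Tensor fusion (outer product), as in a tensor fusion network: the
# fused representation keeps every pairwise cross-modal interaction.
fused_tensor = np.outer(clinical_feat, imaging_feat)          # shape (3, 3)

print(fused_concat.shape, fused_sum.shape, fused_tensor.shape)
```

The practical distinction is dimensionality: concatenation and summation grow at most linearly with the number of modalities, while the outer-product tensor fusion grows multiplicatively, which is why low-rank fusion variants exist.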
Regarding claim 3, Kollada in view of Karow teaches the system of claim 1, wherein the first feature representation is an encoded vector including a concatenation of at least one of a current cognitive score representing the baseline cognitive state of the patient, demographic information associated with the patient, and genomic information associated with the patient (see Kollada, Paragraph [0069], “Indications of one or more mental health conditions may be obtained by analyzing one or more of gene expression data, protein expression data, and genetic make-up of a patient … gene and/or protein expression data may be used to generate unimodal representations related to each genetic modality,” and Paragraph [0085], “After generating the multi-modal representation 370, all the dimensions of the multi-modal representation are concatenated into a single multi-modal vector and fed into a mental health inference module 375”). The rationale of claim 1 has been applied herein. Regarding claim 5, Kollada in view of Karow teaches the system of claim 1, wherein the machine learning model includes a first machine learning model trained to generate the first feature representation (see Kollada, Paragraph [0079], “first encoder subnetwork 322”); a second machine learning model trained to generate the second feature representation (see Kollada, Paragraph [0079], “second encoder subnetwork 324”); a third machine learning model trained to generate the set representation (see Kollada, Paragraph [0079], “Nth encoder subnetwork 326”); and a fourth machine learning model trained to predict the change in the baseline cognitive state over the time period (see Kollada, Paragraph [0127], “process the set of combination features using a fourth model to output a combined data representation”). The rationale of claim 1 has been applied herein. 
Regarding claim 6, Kollada in view of Karow teaches the system of claim 1, wherein the machine learning model is trained, based at least on a plurality of modalities including the clinical data associated with the baseline cognitive state of the patient and the image of the brain of the patient (see Kollada, Paragraph [0067], “data from physiological sensors, medical imaging devices, and genetic/proteomic/genomic systems may be included in generating a multi-modal representation that is subsequently used to classify mental health condition”). The rationale of claim 1 has been applied herein. Regarding claim 12, Kollada in view of Karow teaches the system of claim 1, wherein the change in the baseline cognitive state over time indicates a progression of Alzheimer's disease in the patient (see, Karow, Paragraph [0139], “The model aims to compute for each individual a hazard function, which describes how the risk of the onset of Alzheimer's evolves with time. The proportional hazards model assumes that the hazard function consists of two parts: a baseline hazard function, which is common to all the population, and a multiplicative factor, which is unique for each individual.”). The rationale of claim 1 has been applied herein. Regarding claim 13, Kollada in view of Karow teaches the system of claim 1, wherein the time period is 12 months (see Karow, Paragraph [0152], “We observe that after 12 months models trained with MRI and GWAS always outperforms the models trained on MRI features, cognitive tests, or genetics markers only”). The rationale of claim 1 has been applied herein. As per claim 16, Claim 16 claims a computer-implemented method comprising: the same limitations as Claim 1. Therefore, the rejection and rationale are analogous to that made in Claim 1. 
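The proportional-hazards structure quoted from Karow for claim 12 (a baseline hazard common to the population times a multiplicative factor unique to each individual) can be sketched as follows. All numeric values here are hypothetical, chosen only to illustrate the two-part structure described in the quoted passage.

```python
import numpy as np

# Cox proportional-hazards sketch: h_i(t) = h0(t) * exp(beta . x_i).
# Baseline hazard h0(t), coefficients beta, and covariates x_patient
# are illustrative placeholders, not values from the cited reference.
t = np.array([0.0, 6.0, 12.0, 18.0, 24.0])   # months from baseline
h0 = 0.01 * np.exp(0.02 * t)                 # baseline hazard, shared by all

beta = np.array([0.8, -0.3])                 # e.g., imaging and cognitive coefficients
x_patient = np.array([1.2, 0.5])             # one individual's covariates

risk_factor = np.exp(beta @ x_patient)       # individual multiplicative factor
hazard = h0 * risk_factor                    # individual hazard over time

print(risk_factor, hazard.shape)
```

Note that the individual factor rescales the shared baseline curve uniformly over time, which is the "proportional" assumption the Karow quote describes.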
As per claim 31, Claim 31 claims a non-transitory computer readable medium storing instructions, which when executed by at least one data processor, result in operations comprising: the same limitations as Claim 1. Therefore, the rejection and rationale are analogous to that made in Claim 1.

Claim(s) 4 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kollada et al., US 2024/0221950, in view of Karow et al., US 2020/0027557, and further in view of Casale et al., US 2023/0360758.

Regarding claim 4, Kollada in view of Karow does not expressly teach the system of claim 1, wherein the second feature representation includes at least one domain invariant embedded feature. However, Casale, in a similar invention in the same field of endeavor, teaches wherein the second feature representation includes at least one domain invariant embedded feature (see Casale, Paragraph [0193], “the model can extract embeddings from images that are invariant to rotation, flipping, cropping, and color jittering”). Kollada, Karow, and Casale are analogous art because they are all in the same field of endeavor of using machine-learning techniques for processing medical imaging data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the model to extract embeddings from images that are invariant to rotation, flipping, cropping, and color jittering, as taught in the method of Casale, in the device of Kollada in view of Karow, to maximize the similarity between embeddings from different augmentations of the same sample image and minimize the similarity between embeddings of different sample images (Casale, Paragraph [0193]).
Regarding claim 18, Kollada in view of Karow further teaches the method of claim 16, wherein the first feature representation is an encoded vector including a concatenation of at least one of a current cognitive score representing the baseline cognitive state of the patient, demographic information associated with the patient, and genomic information associated with the patient (see Kollada, Paragraph [0069], “Indications of one or more mental health conditions may be obtained by analyzing one or more of gene expression data, protein expression data, and genetic make-up of a patient … gene and/or protein expression data may be used to generate unimodal representations related to each genetic modality,” and Paragraph [0085], “After generating the multi-modal representation 370, all the dimensions of the multi-modal representation are concatenated into a single multi-modal vector and fed into a mental health inference module 375”). Kollada in view of Karow does not expressly teach, and wherein the second feature representation includes at least one domain invariant embedded feature. However, Casale, in a similar invention in the same field of endeavor, teaches and wherein the second feature representation includes at least one domain invariant embedded feature (see Casale, Paragraph [0193], “the model can extract embeddings from images that are invariant to rotation, flipping, cropping, and color jittering”). Kollada, Karow, and Casale are analogous art because they are all in the same field of endeavor of using machine-learning techniques for processing medical imaging data.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the model to extract embeddings from images that are invariant to rotation, flipping, cropping, and color jittering, as taught in the method of Casale, in the device of Kollada in view of Karow, to maximize the similarity between embeddings from different augmentations of the same sample image and minimize the similarity between embeddings of different sample images (Casale, Paragraph [0193]).

Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kollada et al., US 2024/0221950, in view of Karow et al., US 2020/0027557, and further in view of Lee et al. (Predicting Alzheimer’s disease progression using multi-modal deep learning approach, 2019).

Regarding claim 7, Kollada in view of Karow does not expressly teach the system of claim 1, wherein the machine learning model is pre-trained to predict the baseline cognitive state of the patient based at least on a plurality of brain images acquired at a plurality of time points and across a plurality of domains. However, Lee, in a similar invention in the same field of endeavor, teaches wherein the machine learning model is pre-trained (see Lee, pg. 2, Experimental Setting, “CN and AD are used as auxiliary dataset to pre-train the classifier,” where CN is cognitively normal older adults and AD is Alzheimer’s disease) to predict the baseline cognitive state of the patient based at least on a plurality of brain images acquired at a plurality of time points and across a plurality of domains (see Lee, pg. 2, Experimental Setting, “experiment named ‘baseline’, 4 modalities data at baseline visit (cognitive performance, CSF, demographic information, and MRI) were incorporated … We tested the classifier on MCI patients to predict the conversion after Δt from baseline (6, 12, 18, and 24 months) as shown in Fig. 1,” where MCI is mild cognitive impairment).
Kollada, Karow, and Lee are analogous art because they are all in the same field of endeavor of using multi-modal processing for disease monitoring. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to pre-train the classifier to predict conversion from baseline to a plurality of time points across multiple domains, as taught in the method of Lee, in the device of Kollada in view of Karow, to identify persons at risk of developing AD who might benefit most from a clinical trial or as a stratification approach within clinical trials (Lee, Abstract).

Claim(s) 8-11 and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kollada et al., US 2024/0221950, in view of Karow et al., US 2020/0027557, and further in view of Niu et al., CN 112353381.

Regarding claim 8, Kollada in view of Karow does not expressly teach the system of claim 1, wherein the machine learning model is trained by at least adversarially training a domain detector of the machine learning model to reduce an inter-study domain shift associated with the image of the brain of the patient. However, Niu, in a similar invention in the same field of endeavor, teaches wherein the machine learning model is trained by at least adversarially training a domain detector of the machine learning model to reduce an inter-study domain shift associated with the image of the brain of the patient (Niu, Paragraph [0085], “a domain adversarial neural network, wherein the domain adversarial neural network fuses brain structural information provided by MRI and cognitive function information provided by PET to diagnose the degree and stage of cognitive decline”). Kollada, Karow, and Niu are analogous art because they are all in the same field of endeavor of using multi-modal processing for disease monitoring.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for a domain adversarial neural network to fuse brain structural information provided by MRI and cognitive function information provided by PET, to map the input of the MRI/PET modality to a common feature domain, to optimize two subnetworks by inverting the error gradient, and to align the image across the domain until a scaling factor is reached, as taught in the system of Niu, in the device of Kollada in view of Karow, to diagnose the degree and stage of cognitive decline (Niu, Paragraph [0085]).

Regarding claim 9, Kollada in view of Karow in view of Niu further teaches the system of claim 8, wherein the adversarially training includes adversarially training a feature extraction network of the machine learning model to learn domain invariant features for generating the second feature representation based at least on the image of the brain of the patient (see Niu, Paragraph [0049], “the multimodal adversarial and domain fusion module maps the input of the MRI/PET modality to a common feature domain that is difficult to distinguish. The multimodal adversarial and domain fusion module is based on a passive adapter architecture and encodes images from two domains so that the feature representations cannot be traced back to a specific domain source”). The rationale of claim 8 has been applied herein.

Regarding claim 10, Kollada in view of Karow in view of Niu further teaches the system of claim 8, wherein the adversarially training includes applying a reverse gradient to the second feature representation to generate a domain detector input; and the adversarially training the domain detector is based at least on the domain detector input (see Niu, Paragraph [0098], “The adversarial approach is achieved by alternately optimizing two subnetworks (G_e and D_d) and inverting the error gradient.
First, the adversarial loss is maximized for the domain discriminator perceptron D_d, resulting in the MRI and PET inputs being predicted as 1 and 0, respectively. Then, with the parameters of the domain discriminator perceptron fixed, the adversarial loss is minimized for the encoder G_e, resulting in the MRI and PET inputs being predicted as 0 and 1, respectively”). The rationale of claim 8 has been applied herein.

Regarding claim 11, Kollada in view of Karow in view of Niu further teaches the system of claim 8, wherein the domain detector indicates a drift in the inter-study domain shift at inference (see Niu, Paragraph [0020], “By cropping the image by removing black slices from all images of the modality, the image is aligned across the domain until the scaling factor is reached,” and Paragraph [0096], “The purpose of domain adaptation is to minimize the feature distribution shift between the domains”). The rationale of claim 8 has been applied herein.

Regarding claim 23, Kollada in view of Karow does not expressly teach the method of claim 16, wherein the machine learning model is trained by at least adversarially training a domain detector of the machine learning model to reduce an inter-study domain shift associated with the image of the brain of the patient, the adversarial training includes adversarially training a feature extraction network of the machine learning model to learn domain invariant features for generating the second feature representation based at least on the image of the brain of the patient, applying a reverse gradient to the second feature representation to generate a domain detector input, and adversarially training, based at least on the domain detector input, a domain detector to indicate a drift in the inter-study domain shift at inference.
However, Niu, in a similar invention in the same field of endeavor, teaches wherein the machine learning model is trained by at least adversarially training a domain detector of the machine learning model to reduce an inter-study domain shift associated with the image of the brain of the patient, the adversarial training includes adversarially training a feature extraction network of the machine learning model to learn domain invariant features for generating the second feature representation based at least on the image of the brain of the patient (Niu, Paragraph [0085], “a domain adversarial neural network, wherein the domain adversarial neural network fuses brain structural information provided by MRI and cognitive function information provided by PET to diagnose the degree and stage of cognitive decline”), applying a reverse gradient to the second feature representation to generate a domain detector input, and adversarially training, based at least on the domain detector input (see Niu, Paragraph [0098], “The adversarial approach is achieved by alternately optimizing two subnetworks (G_e and D_d) and inverting the error gradient. First, the adversarial loss is maximized for the domain discriminator perceptron D_d, resulting in the MRI and PET inputs being predicted as 1 and 0, respectively. Then, with the parameters of the domain discriminator perceptron fixed, the adversarial loss is minimized for the encoder G_e, resulting in the MRI and PET inputs being predicted as 0 and 1, respectively”), a domain detector to indicate a drift in the inter-study domain shift at inference (see Niu, Paragraph [0020], “By cropping the image by removing black slices from all images of the modality, the image is aligned across the domain until the scaling factor is reached,” and Paragraph [0096], “The purpose of domain adaptation is to minimize the feature distribution shift between the domains”).
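The reverse-gradient training cited from Niu (alternately updating a domain discriminator and an encoder whose gradient is inverted so the domains become indistinguishable) can be sketched with a toy one-dimensional example. The data, learning rate, and linear encoder/discriminator below are hypothetical simplifications of the cited scheme, not an implementation of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "features" from two imaging domains (stand-ins for MRI and PET);
# entirely hypothetical data, used only to illustrate the alternation.
x_a = rng.normal(0.0, 1.0, 200)   # domain A, label 1
x_b = rng.normal(2.0, 1.0, 200)   # domain B, label 0
x = np.concatenate([x_a, x_b])
y = np.concatenate([np.ones(200), np.zeros(200)])

w_enc, w_disc = 1.0, 0.1          # linear encoder scale, discriminator weight
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    z = w_enc * x                       # shared encoder output
    p = sigmoid(w_disc * z)             # discriminator's domain prediction
    # Discriminator step: descend the domain-classification loss
    # (d/dw_disc of binary cross-entropy is (p - y) * z).
    w_disc -= lr * np.mean((p - y) * z)
    # Encoder step: the same loss gradient, but *reversed* (gradient
    # reversal), pushing the encoder toward domain-indistinguishable features.
    g_enc = np.mean((p - y) * w_disc * x)
    w_enc -= lr * (-g_enc)

print(w_enc, w_disc)
```

The single sign flip on the encoder update is the entire mechanism: forward computation is shared, but the encoder ascends the loss the discriminator descends.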
Kollada, Karow, and Niu are analogous art because they are all in the same field of endeavor of using multi-modal processing for disease monitoring. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for a domain adversarial neural network to fuse brain structural information provided by MRI and cognitive function information provided by PET, to map the input of the MRI/PET modality to a common feature domain, to optimize two subnetworks by inverting the error gradient, and to align the image across the domain until a scaling factor is reached, as taught in the system of Niu, in the device of Kollada in view of Karow, to diagnose the degree and stage of cognitive decline (Niu, Paragraph [0085]).

Claim(s) 14 and 29 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kollada et al., US 2024/0221950, in view of Karow et al., US 2020/0027557, and further in view of Shirai et al., US 2022/0179027.

Regarding claim 14, Kollada in view of Karow does not expressly teach the system of claim 1, wherein the image is a three-dimensional magnetic resonance imaging image including an inferred mask. However, Shirai, in a similar invention in the same field of endeavor, teaches wherein the image is a three-dimensional magnetic resonance imaging image including an inferred mask (see Shirai, Paragraph [0044], “an MRI image in an imaging part (a head herein), Step S32 of causing the specific tissue extraction mask image creating unit 213 to calculates an image (a specific tissue extraction mask image which is referred to as a brain extraction mask image in this embodiment) which is obtained by extracting a brain part (a specific tissue) from the three-dimensional image”). Kollada, Karow, and Shirai are analogous art because they are all in the same field of endeavor of medical image processing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to create a brain extraction mask of a three-dimensional MRI image, as taught in the method of Shirai, in the device of Kollada in view of Karow, so that removing unnecessary tissues and clipping only blood vessels can be performed before an MIP image is displayed (Shirai, Paragraph [0002]).

As per claim 29, Claim 29 claims the same limitations as Claim 14 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 14.

Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kollada et al., US 2024/0221950, in view of Karow et al., US 2020/0027557, and further in view of Nudd et al., US 11,633,103.

Regarding claim 15, Kollada in view of Karow does not expressly teach the system of claim 1, wherein the baseline cognitive state is represented by at least one cognitive score including at least one of a Clinical Dementia Rating Scale Sum of Boxes (CDRSB) score, an Alzheimer's disease Assessment Scale-Cognitive Subscale (ADAS-COG12) score, and a Mini-Mental State Examination (MMSE) score. However, Nudd, in a similar invention in the same field of endeavor, teaches wherein the baseline cognitive state is represented by at least one cognitive score including at least one of a Clinical Dementia Rating Scale Sum of Boxes (CDRSB) score, an Alzheimer's disease Assessment Scale-Cognitive Subscale (ADAS-COG12) score, and a Mini-Mental State Examination (MMSE) score (see Nudd, Col 26, Lines 38-46, “an assessment is made of the mental state and cognitive functioning of seniors periodically by administration of various screening tools such as the Mini Mental State Exam (MMSE), the Standardized Mini Mental State Exam, the Abbreviated Mental Test, etc.
The results of these tests are stored in the system database together with the IOT data and events for those seniors and their caregivers”). Kollada, Karow, and Nudd are analogous art because they are all in the same field of endeavor of monitoring disease progression. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to assess the mental state and cognitive functioning of seniors periodically by various screening tools such as the Mini Mental State Exam (MMSE), as taught in the system of Nudd, in the device of Kollada in view of Karow, to detect a medical or psychological condition of the senior (Nudd, Col 2, Lines 2-5).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOMINIQUE JAMES, whose telephone number is (703) 756-1655. The examiner can normally be reached 9:00 am - 6:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DOMINIQUE JAMES/
Examiner, Art Unit 2666

/MING Y HON/
Primary Examiner, Art Unit 2666

Prosecution Timeline

May 15, 2024
Application Filed
Mar 30, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591976
CELL SEGMENTATION IMAGE PROCESSING METHODS
2y 5m to grant Granted Mar 31, 2026
Patent 12567138
REGISTRATION METROLOGY TOOL USING DARKFIELD AND PHASE CONTRAST IMAGING
2y 5m to grant Granted Mar 03, 2026
Patent 12548159
SCENE PERCEPTION SYSTEMS AND METHODS
2y 5m to grant Granted Feb 10, 2026
Patent 12462681
Detection of Malfunctions of the Switching State Detection of Light Signal Systems
2y 5m to grant Granted Nov 04, 2025
Patent 12462346
MACHINE LEARNING BASED NOISE REDUCTION CIRCUIT
2y 5m to grant Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+38.5%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
