Prosecution Insights
Last updated: April 19, 2026
Application No. 18/273,342

MACHINE LEARNING-BASED INVARIANT DATA REPRESENTATION

Non-Final OA — §101, §103
Filed: Jul 20, 2023
Examiner: LEE, CLAY C
Art Unit: 3699
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Ramot at Tel-Aviv University Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 54% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 54% (grants 54% of resolved cases; 117 granted / 216 resolved; +2.2% vs TC avg)
Interview Lift: +57.1% (strong), comparing grant rates of resolved cases with vs. without an interview
Typical Timeline: 4y 1m average prosecution; 60 applications currently pending
Career History: 276 total applications across all art units
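The headline figures above can be cross-checked from the raw counts. A minimal Python sketch; note that the without-interview grant rate is an inference (the page shows only the 99% with-interview figure and the +57.1-point lift, not the baseline itself):

```python
# Cross-check of the examiner dashboard figures shown above.

granted, resolved = 117, 216            # career counts from the dashboard
allow_rate = 100 * granted / resolved   # career allow rate, percent

with_interview = 99.0                   # grant probability with interview (%)
lift = 57.1                             # displayed interview lift, percentage points
# Assumption: the lift is the percentage-point gap between with- and
# without-interview grant rates, so the baseline is implied, not stated.
without_interview = with_interview - lift

print(f"Career allow rate: {allow_rate:.1f}%")                      # 54.2%, shown as 54%
print(f"Implied without-interview grant rate: {without_interview:.1f}%")  # 41.9%
```

This reading makes the three displayed numbers (54%, 99%, +57.1%) mutually consistent, which is why it is the interpretation sketched here.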

Statute-Specific Performance

§101: 32.7% (-7.3% vs TC avg)
§103: 45.9% (+5.9% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 216 resolved cases.
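Each statute-specific rate is reported alongside its delta from the Tech Center average, so subtracting the delta recovers the implied TC baseline. A quick Python check (assuming the deltas are percentage points) shows the implied baseline is the same 40.0% for all four statutes:

```python
# Recover the implied Tech Center average from each (rate, delta-vs-TC) pair.
stats = {
    "101": (32.7, -7.3),
    "103": (45.9, +5.9),
    "102": (8.2, -31.8),
    "112": (10.5, -29.5),
}
for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)   # implied TC average, percent
    print(f"\u00a7{statute}: implied TC average = {tc_avg}%")  # 40.0% for each
```

The consistent 40.0% baseline suggests the dashboard compares every statute against a single estimated TC-wide allow rate rather than per-statute averages.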

Office Action

§101 §103
DETAILED ACTION

Claim Status

This is the first Office action on the merits in response to the application filed on 7/20/2023. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are currently pending and have been examined.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 6/28/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claims 12-15 are objected to because of the following informalities: In claim 12, line 5, “a feature vector” should read --the feature vector--. Claims 13-15 are further objected to due to their dependency. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Under Step 1 of the Section 101 analysis, claims 1-6 and 20 are drawn to a system, which is within the four statutory categories (i.e., a machine), and claims 7-19 are drawn to a method, which is within the four statutory categories (i.e., a process). Since the claims are directed toward statutory categories, it must be determined whether the claims are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea). Based on consideration of all of the relevant factors with respect to the claim as a whole, claims 1-20 are determined to be directed to an abstract idea.
The rationale for this determination is explained below.

Regarding Claims 1, 7, and 20: Claims 1, 7, and 20 are drawn to an abstract idea without significantly more. The claims recite “receive at least one content data element pertaining to the subject from one or more data sources of a plurality of data sources; and generate a feature vector in a latent space of the one or more autoencoders, said feature vector comprising a source-invariant representation of said at least one content data element, and one or more machine-learning (ML) based classification models, trained to: receive the source-invariant representation of the at least one content data element; and produce a prediction data element, representing a predicted condition of the subject, based on the source-invariant representation of said at least one content data element.”

Under Step 2A, Prong One, the limitations, as underlined above, are processes that, under their broadest reasonable interpretation, cover mental processes such as concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). For example, but for the “content”, “latent space”, “autoencoders”, and “machine-learning (ML) based” language, the underlined limitations in the context of this claim encompass mental processes. The series of steps belongs to a typical observation, evaluation, judgment, or opinion, because data or information is processed in order to produce a prediction or predicted condition for a subject.

Under Step 2A, Prong Two, this judicial exception is not integrated into a practical application.
In particular, the claim only recites additional elements – “A system for predicting a condition of a subject, the system comprising: one or more autoencoder modules, trained to:”, “A method of predicting a condition of a subject by at least one processor, the method comprising:”, “A system for predicting a condition of a subject, the system comprising: a non-transitory memory device, wherein modules of instruction code are stored, and at least one processor associated with the memory device, and configured to execute the modules of instruction code, whereupon execution of said modules of instruction code, the at least one processor is configured to:”, “content”, “latent space”, “autoencoders”, and “machine-learning (ML) based”. The additional elements are recited at a high level of generality (i.e., performing generic functions of an interaction) such that they amount to no more than mere instructions to apply the exception using a generic computer component, merely implementing an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea.
Additionally, regarding the specification and claims: there is no improvement in the functioning of a computer or to any other technology or technical field; there is no application or use of the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; there is no implementation of the judicial exception with, or use of the judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim; there is no transformation or reduction of a particular article to a different state or thing; and there is no application or use of the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. Accordingly, these additional elements, individually or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements in the process amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.

Regarding Claims 2-3 and 8-19: Dependent claim 14 only further elaborates the abstract idea and does not recite additional elements.
Dependent claims 2-3, 8-13, and 15-19 include additional limitations, for example, “neural network (NN)” and “content” (Claim 2); “module”, “autoencoder”, and “NN” (Claim 3); “module” and “autoencoder” (Claims 4 and 9); “module” and “autoencoder” (Claims 5 and 10); “module” (Claims 6 and 11); “NN”, “content”, “module”, and “autoencoder” (Claim 8); “content” and “autoencoder” (Claim 12); “autoencoder” and “content” (Claim 13); “content” (Claim 15); “content”, “Internet”, “social network”, “email”, and “text message” (Claim 16); “content” and “online” (Claim 17); “content”, “image”, “video”, “Magnetic Resonance Imaging (MRI) scan”, “Computed Tomography (CT) scan”, and “Ultrasound (US) scan” (Claim 18); and “content”, “proteomic data”, and “genomic data” (Claim 19). None of these limitations is deemed significantly more than the abstract idea because, as stated above, they require no more than generic computer structures or signals to be executed, and do not recite any improvements to the functioning of a computer or to any other technology or technical field. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology, and their collective functions merely provide conventional computer implementation or implement the judicial exception on a generic computer. Therefore, whether taken individually or as an ordered combination, claims 2-3 and 8-19 are nonetheless rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Saltz (US 11164312 B2) in view of Molero Leon (US 20230377747 A1).

Regarding Claims 1, 7, and 20, Saltz teaches a system for predicting a condition of a subject, the system comprising: one or more autoencoder modules, trained to (Saltz: Abstract; Col.
2, lines 27-39): A method of predicting a condition of a subject by at least one processor, the method comprising (Saltz: Abstract; 2/27-39, 32/8-26): A system for predicting a condition of a subject, the system comprising: a non-transitory memory device, wherein modules of instruction code are stored, and at least one processor associated with the memory device, and configured to execute the modules of instruction code, whereupon execution of said modules of instruction code, the at least one processor is configured to (Saltz: Abstract; 2/27-39, 6/39~7/10, 32/8-26): receive at least one content data element pertaining to the subject from one or more data sources of a plurality of data sources (Saltz: 10/60~11/8 teaches training a classification model in order to predict the respective labeling of TILs associated with computationally stained and digitized whole slide images of Hematoxylin and Eosin (H&E) stained pathology specimens obtained from biopsied tissue, and spatially characterizing TIL Maps); and generate a feature vector in a latent space of the one or more autoencoders, said feature vector comprising a source-invariant representation of said at least one content data element (Saltz: 23/23-31, 33/62~33/5 teaches that the fully unsupervised autoencoder in FIG. 2A first decomposes or segments an input histopathology image patch into foreground (e.g. nuclei) and background (e.g. cytoplasm) during the sparse autoencoding step. The CAE then detects nuclei in the foreground by representing the locations of nuclei as a sparse feature map.
Finally, the autoencoder interactively encodes each nucleus to a feature vector; refining a model to account for latent features that are unique to a local population also occurs when deploying the model at a new site (hospital, geographic location)), and one or more machine-learning (ML) based classification models (Saltz: 3/46-64 teaches that the useful analysis thereof and deduction of information, such as quantification of useful values for classification and/or diagnosis of tumor cells, has proven challenging. As a result, there is a desire to apply novel machine learning and deep learning techniques in a related system and method that creates a Computational Stain, which permits efficient identification of image features, more accurate quantification of image features, and formulation of higher-order relationships that go beyond mere simple densities), trained to: receive the source-invariant representation of the at least one content data element (Saltz: 5/23-60 teaches receiving digitized diagnostic and stained whole-slide image data related to tissue of a particular type of tumoral data; generating tumor-infiltrating lymphocyte representations based on prediction of TIL information associated with classified segmented data portions. Yet further included is generating a refined TIL representation based on prediction of the TIL representations using the adjusted threshold probability value associated with the classified segmented data portion). However, Saltz does not explicitly teach produce a prediction data element, representing a predicted condition of the subject, based on the source-invariant representation of said at least one content data element.
Molero Leon, from the same or similar field of endeavor, teaches produce a prediction data element, representing a predicted condition of the subject, based on the source-invariant representation of said at least one content data element (Molero Leon: Paragraphs 0144, 0146-0147 teach that the neural network model may have been trained to perform intelligent functions, such as predicting a subject's responsiveness to a treatment regimen, identifying similar patients, generating a recommendation of a treatment regimen for a patient, and other intelligent functionality. The neural network model may be trained using a training data set that includes subject records of subjects who have previously been treated for a condition and experienced an outcome (e.g., overcoming a condition, increasing a severity of a condition, reducing a severity of a condition, and so on); for data elements that include images (e.g., MRI data) or image frames of a video (e.g., video data of an ultrasound), each image or image frame may be transformed into a numerical representation (e.g., vector) using a trained auto-encoder neural network, which is trained to generate a latent-space representation of an input image. The condensed representation of the input image (e.g., the latent-space representation) may serve as the numerical representation of the input image). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Saltz to incorporate the teachings of Molero Leon to produce a prediction data element, representing a predicted condition of the subject, based on the source-invariant representation of said at least one content data element. There is motivation to combine Molero Leon into Saltz because Molero Leon's teachings of predicting a subject's responsiveness would facilitate predicting a condition for a subject (Molero Leon: Paragraphs 0144, 0146).
Regarding Claim 2, the combination of Saltz and Molero Leon teaches all the limitations of claim 1 above; and the combination further teaches further comprising at least one … neural network (NN) configured to predict, based on the source-invariant representation of the at least one content data element, an identification of an origin data source from which the at least one content data element originated (Saltz: Abstract; 6/58~7/10, 2/59~3/7, 4/19-36 teach generating tumor-infiltrating lymphocyte representations based on prediction of TIL information associated with classified segmented data portions. Yet further disclosed operations include generating a refined TIL representation based on prediction of the TIL representations using the adjusted threshold probability value associated with the classified segmented data portions). However, the combination of Saltz and Molero Leon does not explicitly teach an adversarial neural network (NN). Molero Leon further teaches an adversarial neural network (NN) (Molero Leon: Paragraphs 0216, 0259). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Saltz and Molero Leon to incorporate the teachings of Molero Leon for an adversarial neural network (NN). There is motivation to combine Molero Leon into the combination of Saltz and Molero Leon because Molero Leon's teachings of an adversarial neural network (NN) would facilitate predicting a condition for a subject (Molero Leon: Paragraphs 0144, 0146).
Regarding Claim 3, the combination of Saltz and Molero Leon teaches all the limitations of claim 2 and an adversarial neural network (NN) above; and Saltz further teaches further comprising at least one first training module configured to, during an autoencoder training stage: receive a plurality of training content data elements from a plurality of data sources; and train the one or more autoencoder modules, based on the plurality of training content data elements, to generate the source-invariant representation such that the … NN would fail in predicting the identification of origin data sources of one or more content data elements of the plurality of training content data elements (Saltz: 15/5-23, 16/64~17/12, 17/20-49 teach that the system uses convolutional neural networks (CNNs) to identify lymphocyte-infiltrated regions in digitized H&E stained tissue specimens; the initial training steps are followed by an iterative cycle of review and refinement steps in step 36 in order to improve the prediction accuracy of the lymphocyte CNN).
Regarding Claims 4 and 9, the combination of Saltz and Molero Leon teaches all the limitations of claims 3 and 8 and an adversarial neural network (NN) above; and Saltz further teaches wherein the at least one first training module is further configured to, during the autoencoder training stage: receive a plurality of annotation data elements, corresponding to the plurality of training content data elements; receive, from the one or more classification models, a plurality of prediction data elements, corresponding to the plurality of training content data elements; and train the one or more autoencoder modules further based on the prediction data elements and annotation data elements (Saltz: 33/13-39, 11/42-57, 15/5-23, 16/64~17/12, 17/20-49 teach the TIL Training Tool (also referred to as a viewer or caMicroscope): a web-based or desktop-based tool that can interface with PathDB to display images, capture markups, annotations and labels and load the resulting data into FeatureDB).

Regarding Claims 5 and 10, the combination of Saltz and Molero Leon teaches all the limitations of claims 4 and 9 and an adversarial neural network (NN) above; and Saltz further teaches wherein the plurality of annotation data elements represent ground-truth information pertaining to a condition of corresponding subjects, and wherein the at least one first training module is configured to train the one or more autoencoder modules to generate the source-invariant representation, such that the classification models correctly predict the conditions of relevant subjects, as represented by the annotation data elements (Saltz: 22/6-22 teaches that training a fully supervised CNN requires a large number of training instances with ground truth labels. For example, Masci et al. (Masci et al., 2011) have shown that utilizing unlabeled instances can boost the performance of a CNN.
Hence, in an example embodiment the system trains an unsupervised Convolutional Auto-Encoder (CAE) to learn the representation of nuclei and lymphocytes in histopathology images and initialize the lymphocyte CNN).

Regarding Claims 6 and 11, the combination of Saltz and Molero Leon teaches all the limitations of claims 1 and 7 above; and Saltz further teaches further comprising one or more second training modules, corresponding to the respective one or more classification models, wherein the one or more second training modules are configured to, during a classifier training stage: receive a plurality of source-invariant representations of a respective plurality of training content data elements; receive a plurality of annotation data elements, corresponding to the plurality of training content data elements; and train the one or more classification models to produce the prediction data elements, based on the plurality of source-invariant representations, using the annotation data elements as supervisory data (Saltz: 33/13-39, 11/42-57, 15/5-23, 16/64~17/12, 17/20-49, as stated above with respect to claims 4 and 9).
Regarding Claim 8, the combination of Saltz and Molero Leon teaches all the limitations of claim 7 above; and the combination further teaches, as stated above with respect to claims 2-3, further comprising, during an autoencoder training stage: receiving a plurality of training content data elements from a plurality of data sources; applying at least one adversarial NN on the source-invariant representation of the at least one content data element, to produce an identification of an origin data source, from which the at least one content data element was received; and training the one or more autoencoder modules, based on the plurality of training content data elements, to generate the source-invariant representation such that the at least one adversarial NN would fail in predicting the identification of origin data sources of one or more content data elements of the plurality of training content data elements.

Regarding Claim 12, the combination of Saltz and Molero Leon teaches all the limitations of claim 7 above; and Saltz further teaches further comprising receiving a definition of a hierarchical categorization data structure, representing a plurality of hierarchical levels of the received content data elements, wherein applying the one or more autoencoder models on at least one content data element comprises generating a feature vector that comprises a plurality of source-invariant representations of said at least one content data element, and wherein each source-invariant representation of the feature vector corresponds to a respective hierarchical level (Saltz: Abstract; 5/23-60, 30/55~31/7 teach that the system and method further include defining regions of interest that represent a portion of, or a full image of, the whole-slide image data. The system and method further include encoding the image data into segmented data portions based on convolutional autoencoding of objects associated with the collection of image data).
Regarding Claim 13, the combination of Saltz and Molero Leon teaches all the limitations of claim 12 above; and Saltz further teaches further comprising, during an autoencoder training stage: receiving a plurality of training content data elements from a plurality of data sources; and training the one or more autoencoder modules, based on the plurality of training content data elements, to generate said feature vector, while applying a predetermined weight to each source-invariant representation of the feature vector, wherein said weight is determined according to the hierarchical level of the respective source-invariant representation (Saltz: 22/6-22, 41/61~42/9 teach that the initial lymphocyte CNN model captures the appearance of histopathology images without supervised training. Then the system initializes the weights of the necrosis segmentation CNN randomly following the DeconvNet approach. Next, the system trains the CNNs with labeled images. The training phases of the CNNs involve a cross-validation step to assess prediction performance and avoid overfitting).

Regarding Claim 14, the combination of Saltz and Molero Leon teaches all the limitations of claim 13 above; and Saltz further teaches wherein for each pair of source-invariant representations, said pair comprising a first source-invariant representation corresponding to a first hierarchical level, and a second source-invariant representation corresponding to a second, higher hierarchical level, the weight of the second source-invariant representation is higher than the weight of the first source-invariant representation (Saltz: 42/10-23, 44/62~45/8 teach that there are two training stages. Stage (1) corrects the weights of encoding layers and only trains the added classification layers, which are initialized randomly. Stage (2) trains all layers of the network jointly.
The purpose of this two-stage training scheme is that, after the lym-CNN is constructed, only part of it (the encoding layers) has been trained and part of it is randomly initialized).

Regarding Claim 15, the combination of Saltz and Molero Leon teaches all the limitations of claim 12 above; and Saltz further teaches further comprising, during a classifier training stage: receiving a plurality of feature vectors, corresponding to a respective plurality of training content data elements; receiving a plurality of annotation data elements, corresponding to the plurality of training content data elements; and training the one or more classification models to produce the prediction data elements, based on the plurality of source-invariant representations, while (a) using the annotation data elements as supervisory data, and (b) applying a predetermined weight to each source-invariant representation of the feature vector, wherein said weight is determined according to the hierarchical level of the respective source-invariant representation (Saltz: 19/8-23, 23/16-30, 33/13-39, 30/55~31/7).

Regarding Claim 16, the combination of Saltz and Molero Leon teaches all the limitations of claim 7 above; and Saltz further teaches wherein the at least one content data element is selected from a list of textual or audible data sources, consisting of: an Internet search query, a posting to a social network by the subject, an email pertaining to the subject, a text message pertaining to the subject, a transcription of a voice command pertaining to the subject, and text included in a medical record pertaining to the subject (Saltz: 72/20-34, 74/34-40, 32/62~33/5 teach that either of such a wired and/or wireless connection may be a proprietary connection as well. The remote device may be accessible via the Internet and may include a computing cluster associated with a particular web service (e.g., social-networking, photo sharing, address book, etc.)).
Regarding Claim 17, the combination of Saltz and Molero Leon teaches all the limitations of claim 7 above; and Saltz further teaches wherein the at least one content data element is selected from a list of online data sources, consisting of: online user-selections performed by the subject, online images pertaining to the subject, online videos pertaining to the subject, and online audio or vocal data elements pertaining to the subject (Saltz: 37/26-49 teaches that QuIP is a software system which consists of a suite of integrated data services and web-based user applications designed for the management and analysis of whole-slide tissue images and indexing and exploration of image features).

Regarding Claim 18, the combination of Saltz and Molero Leon teaches all the limitations of claim 7 above; however, the combination does not explicitly teach wherein the at least one content data element is selected from a list of image data sources, consisting of: an image of the subject, a video of the subject, a Magnetic Resonance Imaging (MRI) scan of the subject, a Computed Tomography (CT) scan of the subject, and images obtained from an Ultrasound (US) scan of the subject.
Molero Leon further teaches wherein the at least one content data element is selected from a list of image data sources, consisting of: an image of the subject, a video of the subject, a Magnetic Resonance Imaging (MRI) scan of the subject, a Computed Tomography (CT) scan of the subject, and images obtained from an Ultrasound (US) scan of the subject (Molero Leon: Paragraphs 0243, 0147 teach imaging data (e.g., including scans from or summaries of one or more: CTs, MRIs, PET, or radiography) and treatment information; for data elements that include images (e.g., MRI data) or image frames of a video (e.g., video data of an ultrasound), each image or image frame may be transformed into a numerical representation (e.g., vector) using a trained auto-encoder neural network, which is trained to generate a latent-space representation of an input image). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Saltz and Molero Leon to incorporate the teachings of Molero Leon for wherein the at least one content data element is selected from a list of image data sources, consisting of: an image of the subject, a video of the subject, a Magnetic Resonance Imaging (MRI) scan of the subject, a Computed Tomography (CT) scan of the subject, and images obtained from an Ultrasound (US) scan of the subject. There is motivation to combine Molero Leon into the combination of Saltz and Molero Leon because Molero Leon's teachings of CT, MRI, PET, radiography, and ultrasound would facilitate predicting a condition for a subject (Molero Leon: Paragraphs 0243, 0147).
Regarding Claim 19, the combination of Saltz and Molero Leon teaches all the limitations of claim 7 above; however, the combination does not explicitly teach wherein the at least one content data element is selected from a list consisting of a proteomic data element and a genomic data element. Molero Leon further teaches wherein the at least one content data element is selected from a list consisting of a proteomic data element and a genomic data element (Molero Leon: Paragraphs 0221, 0091, 0242 teach that subject data can include genetic data (e.g., identifying one or more mutations, such as one or more X-chromosome mutations and/or one or more chromosome-4 mutations); von Willebrand factor is a carrier protein for factor VIII; laboratory and/or medical test information (e.g., detected DNA variant, total protein count, albumin count, etc.)). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Saltz and Molero Leon to incorporate the teachings of Molero Leon for wherein the at least one content data element is selected from a list consisting of a proteomic data element and a genomic data element. There is motivation to combine Molero Leon into the combination of Saltz and Molero Leon because Molero Leon's teachings of proteomic and genomic data elements would facilitate predicting a condition for a subject (Molero Leon: Paragraphs 0221, 0091, 0242).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Cella (US 20190171187 A1) teaches Methods and Systems for the Industrial Internet of Things. Liu (US 20220108417 A1) teaches Image Generation Using One or More Neural Networks, including MRI, CT, ultrasound, weight, diagnoses, and scan data.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLAY LEE, whose telephone number is (571) 272-3309. The examiner can normally be reached Monday-Friday, 8-5pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Neha Patel, can be reached at (571) 270-1492. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CLAY C LEE/
Primary Examiner, Art Unit 3699

Prosecution Timeline

Jul 20, 2023
Application Filed
Feb 26, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597019: Post-Provisioning Authentication Protocols (2y 5m to grant; granted Apr 07, 2026)
Patent 12591639: RESOURCE BASED LICENSING (2y 5m to grant; granted Mar 31, 2026)
Patent 12572907: UNIVERSAL PAYMENT CHANNEL (2y 5m to grant; granted Mar 10, 2026)
Patent 12561654: SYSTEMS AND METHODS FOR EXECUTING REAL-TIME ELECTRONIC TRANSACTIONS USING A ROUTING DECISION MODEL (2y 5m to grant; granted Feb 24, 2026)
Patent 12561712: LOYALTY POINT DISTRIBUTIONS USING A DECENTRALIZED LOYALTY ID (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 54%
With Interview: 99% (+57.1%)
Median Time to Grant: 4y 1m
PTA Risk: Low
Based on 216 resolved cases by this examiner. Grant probability derived from career allow rate.
