Prosecution Insights
Last updated: April 19, 2026
Application No. 17/833,719

METHODS, SYSTEMS, AND TOOLS FOR LONGEVITY-RELATED APPLICATIONS

Status: Non-Final OA (§103)
Filed: Jun 06, 2022
Examiner: BEVERIDGE, CONNOR HAMMOND
Art Unit: 1687
Tech Center: 1600 — Biotechnology & Organic Chemistry
Assignee: Genentech Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Time to Grant: 3y 2m

Examiner Intelligence

Career allow rate: 0% (0 granted / 0 resolved; -60.0% vs TC avg)
Interview lift: +0.0% (minimal lift among resolved cases with interview)
Typical timeline: 3y 2m average prosecution
Career history: 15 total applications across all art units, 15 currently pending

Statute-Specific Performance

§101: 35.7% (-4.3% vs TC avg)
§103: 47.6% (+7.6% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Deltas are relative to the Tech Center average estimate; based on career data from 0 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-236 are canceled. Claims 237-256 are pending. Claims 237-256 are rejected. Claims 237-256 do not have claim status; they appear to be new claims but are not denoted as such.

Priority

The application is a continuation of application PCT/US20/67648, which has an effective filing date of 1/2/2020.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 2 December 2022 and 15 July 2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.

Drawings

The drawings filed on 6/06/2022 were considered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 237, 245, 246 and 255 are rejected under 35 U.S.C. 103 as being unpatentable over Kobayashi et al. (Kobayashi, H., Lei, C., Wu, Y., et al. Label-free detection of cellular drug responses by high-throughput bright-field imaging and machine learning. Sci Rep 7, 12454 (2017)) in view of Scheeder et al. (Scheeder, C.; Heigwer, F.; Boutros, M. Machine Learning and Image-Based Profiling in Drug Discovery. Current Opinion in Systems Biology 2018, 10, 43–52) in view of Jackson et al. (Jackson, P. T.; Wang, Y.; Knight, S.; Chen, H.; Dorval, T.; Brown, M.; Bendtsen, C.; Obara, B. Phenotypic Profiling of High Throughput Imaging Screens with Generic Deep Convolutional Features. arXiv (2019)).
Regarding the limitations of independent claims 237, 246, and 255: a computer-implemented method, the computer-implemented method comprising (claim 237); a system comprising: a processor; and a non-transitory computer-readable storage medium storing executing instructions that, when executed, cause the processor to perform steps comprising (claim 246); a non-transitory computer-readable storage medium storing executable computer instructions that, when executed by a processor, causes the processor to perform steps comprising (claim 255); generating a training set comprising a first subset of images, the first subset of images including images of cells modified with one of a plurality of agents, and a second subset of images, the second subset of images including images of cells that have not been modified with an agent; training a machine learned model using the training set:

Kobayashi et al. teaches a method of performing high-throughput bright-field imaging of numerous drug-treated and -untreated cells and applying machine learning to the cell images to identify their morphological variations which are too subtle for human eyes to detect (abstract). Kobayashi et al. developed a computer program that evaluated the impact of an anti-cancer drug concentration on the morphological change of cancer cells. A support vector machine (SVM) aimed to find a hyperplane that separates with a large margin between two classes of data and to classify the negative control and each drug-treated population (Classification of drug-treated and -untreated cells, 1st paragraph).

Kobayashi et al. does not explicitly teach the limitation of the machine-learned model configured to predict an effect of an agent on a state of a cell; accessing a set of images, each image in the set of images including a cell, each cell associated with a first state; applying the machine learned model to each image in the set of images to identify a set of candidate compounds predicted to modify a corresponding state of a corresponding cell from the first state of the cell to a second state; and providing the set of candidate compounds to an entity associated with the set of images (claim 237, claim 246, claim 255); or wherein training the machine learned model comprises: accessing an initial set of weights; initializing the machine learned model with the initial set of weights; applying the machine learned model to the training set to generate a prediction of an effect of an agent on a state of a cell; and updating the initial set of weights based on the predictions and a label associated with each image in the training set, the label indicating a known effect of the agent on a corresponding cell (claim 245).

Regarding the other limitations of independent claims 237, 246, and 255 (the machine-learned model configured to predict an effect of an agent on a state of a cell), Scheeder et al. teaches a method of supervised machine learning used within image-based genetic screening experiments to classify single cells into pre-defined, biologically meaningful classes based on their phenotypic profiles.
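As background for the SVM-based classification Kobayashi et al. are cited for, the maximum-margin idea can be sketched with a minimal linear SVM trained by sub-gradient descent on the hinge loss. This is an illustrative reconstruction, not code from the reference: the feature vectors below are synthetic stand-ins for image-derived morphology features, and all names are hypothetical.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Train a linear SVM (hinge loss + L2 penalty) by sub-gradient descent.

    X: (n_samples, n_features) morphology feature vectors.
    y: labels in {-1, +1} (-1 = untreated control, +1 = drug-treated).
    Returns the weight vector w and bias b defining the separating hyperplane.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in np.random.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:          # inside the margin: hinge-loss sub-gradient
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                   # correctly classified: only the regularizer
                w -= lr * lam * w
    return w, b

def predict(X, w, b):
    return np.where(X @ w + b >= 0, 1, -1)

# Synthetic stand-in data: two Gaussian clusters of "feature vectors",
# playing the roles of the control and drug-treated populations.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w, b = train_linear_svm(X, y)
accuracy = (predict(X, w, b) == y).mean()
```

In the setting Kobayashi et al. describe, the two classes would be the negative-control and each drug-treated population, with features extracted from bright-field images.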
Regarding the limitations of dependent claim 245 (wherein training the machine learned model comprises: accessing an initial set of weights; initializing the machine learned model with the initial set of weights; applying the machine learned model to the training set to generate a prediction of an effect of an agent on a state of a cell; and updating the initial set of weights based on the predictions and a label associated with each image in the training set, the label indicating a known effect of the agent on a corresponding cell), Scheeder et al. teaches that the deep neural network used could also be adapted to new, divergent data sets using transfer learning (Machine learning strategies for image-based profiling, paragraph 7).

Regarding the other limitations of independent claims 237, 246, and 255 (accessing a set of images, each image in the set of images including a cell, each cell associated with a first state; applying the machine learned model to each image in the set of images to identify a set of candidate compounds predicted to modify a corresponding state of a corresponding cell from the first state of the cell to a second state; and providing the set of candidate compounds to an entity associated with the set of images), Jackson et al. teaches a method of screening candidate drugs applied to cell cultures and imaged with high throughput fluorescence microscopy; depending on the bioactivity of the drugs, this can cause a variety of morphological changes to occur (Introduction, 2nd paragraph). The method reduces the dimensionality of raw fluorescent stained images from a high throughput imaging (HTI) screen. This produces an embedding space that groups together images with similar cellular phenotypes. Running standard unsupervised clustering on this embedding space yields a set of distinct phenotypic clusters (abstract).
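The pipeline Jackson et al. are cited for (embed each image into a low-dimensional space, then run standard unsupervised clustering over that space) can be sketched as follows. The sketch is hypothetical: the "embeddings" are synthetic vectors standing in for deep convolutional features, and the k-means routine is a plain NumPy implementation rather than any particular library's.

```python
import numpy as np

def kmeans(embeddings, k, iters=50, seed=0):
    """Plain k-means: assign each embedding to its nearest centroid, then
    recompute centroids from the assignments, until convergence or `iters`."""
    rng = np.random.default_rng(seed)
    centroids = embeddings[rng.choice(len(embeddings), k, replace=False)]
    for _ in range(iters):
        # Distance of every embedding to every centroid: shape (n, k).
        d = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid; keep the old one if its cluster went empty.
        new = np.array([embeddings[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Synthetic "embedding space": three well-separated phenotype groups of
# 40 vectors each, standing in for embedded cell images.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(c, 0.3, (40, 8)) for c in (-3.0, 0.0, 3.0)])
labels, centroids = kmeans(emb, k=3)
```

Each resulting cluster plays the role of one of Jackson et al.'s distinct phenotypic clusters, from which interesting ones would be selected for downstream screening.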
This facilitates scientists to select interesting clusters for downstream screening in an attempt to find hit compounds (Conclusion, 1st paragraph).

A person having ordinary skill in the art would be motivated to combine the method of using machine learning to identify treated vs. non-treated samples based on morphological changes of Kobayashi et al. with the machine learning model that relates morphology to phenotype of Scheeder et al. and with the method of embedding images to determine distinct phenotypic clusters of Jackson et al. in order to build a machine learning model that can identify a drug that induces a morphological change. There is a reasonable expectation of success because each author performed analysis on images in order to detect morphological changes. A person having ordinary skill in the art would also be motivated to apply the transfer learning on cell images taught by Scheeder et al. to the machine learning model in order to improve model performance. There is a reasonable expectation of success because transfer learning was previously used to increase performance of cell imaging models.

Claims 238 and 247 are rejected under 35 U.S.C. 103 as being unpatentable over Kobayashi et al. in view of Scheeder et al. in view of Jackson et al. as applied to claims 237, 246 and 255 under 35 U.S.C. 103 above, and further in view of Oja et al. (Oja, S., Komulainen, P., Penttilä, A., et al. Automated image analysis detects aging in clinical-grade mesenchymal stromal cell cultures. Stem Cell Res Ther 9, 6 (2018)).

Kobayashi et al. in view of Scheeder et al. in view of Jackson et al. teach a system, non-transitory computer-readable storage medium, and method for analyzing images and providing a set of candidate compounds associated with the images as applied to claims 237, 245, 246 and 255 above. Kobayashi et al. in view of Scheeder et al. in view of Jackson et al. does not explicitly teach the limitation of claims 238 and 247 wherein the state of the cell is a predicted age of the cell, and wherein the machine-learned model is configured to predict an effect of an agent on the predicted age of a cell.

Regarding the limitation of dependent claims 238 and 247 (wherein the state of the cell is a predicted age of the cell, and wherein the machine-learned model is configured to predict an effect of an agent on the predicted age of a cell), Oja et al. teaches that imaging analysis of cell morphology is a useful tool for evaluating aging in cell cultures throughout the lifespan of MSCs (abstract). A person having ordinary skill in the art would be motivated to combine the method of Kobayashi et al. in view of Scheeder et al. in view of Jackson et al. with the use of cellular morphology as an aging indicator taught by Oja et al. in order to develop a model to predict an effect of a drug on the age of a cell. One would have had a reasonable expectation of success because of the relationship between cellular morphology and aging; image analysis would be able to detect the changes in morphology which directly relate to aging.

Claims 239-242 and 248-251 are rejected under 35 U.S.C. 103 as being unpatentable over Kobayashi et al. in view of Scheeder et al. in view of Jackson et al. as applied to claims 237, 245, 246 and 255 under 35 U.S.C. 103 above, in view of Oja et al. as applied to claims 238 and 247 under 35 U.S.C. 103 above, and further in view of Kyriazis et al. (Kyriazis, A.; Noroozizadeh, S.; Refaee, A.; Choi, W.; Chu, L.-T.; Bashir, A.; Cheng, W. K.; Zhao, R.; Namjoshi, D.; Salcudean, S. E.; Wellington, C. L.; Nir, G. An End-To-End System for Automatic Characterization of Iba1 Immunopositive Microglia in Whole Slide Imaging. 2019, 17 (3), 373–389).

Kobayashi et al. in view of Scheeder et al. in view of Jackson et al. in view of Oja et al.
teach a system, non-transitory computer-readable storage medium, and method for analyzing images and providing a set of candidate compounds associated with the images as applied to claims 237, 245, 246 and 255 above, as well as the use of cellular morphology as an indicator for age as applied to claims 238 and 247 above. Kobayashi et al. in view of Scheeder et al. in view of Jackson et al. in view of Oja et al. does not explicitly teach the limitations of: wherein the set of candidate compounds predicted to modify the corresponding state of the corresponding cell from the first state to the second state are predicted to reduce a predicted age of a cell by restoring one or more aspects of an immune response of the cell (claim 239, claim 248); wherein the state of the cell is a function of the cell, and wherein the machine learned model is configured to predict an effect of an agent on the function of a cell (claim 240, claim 249); wherein the function of the cell is the immune cell function of the cell, and wherein the agent modifies the immune cell function from a first function to a second function (claim 241, claim 250); or wherein the identified set of compounds is predicted to modify the corresponding state of the corresponding cell based on at least one of: a functional signature of the cell, a morphological signature of the cell, or a marker of the cell (claim 242, claim 251).
Regarding the limitations of dependent claims 239-242 and 248-251 (wherein the set of candidate compounds predicted to modify the corresponding state of the corresponding cell from the first state to the second state are predicted to reduce a predicted age of a cell by restoring one or more aspects of an immune response of the cell (claim 239, claim 248); wherein the state of the cell is a function of the cell, and wherein the machine learned model is configured to predict an effect of an agent on the function of a cell (claim 240, claim 249); wherein the function of the cell is the immune cell function of the cell, and wherein the agent modifies the immune cell function from a first function to a second function (claim 241, claim 250); wherein the identified set of compounds is predicted to modify the corresponding state of the corresponding cell based on at least one of: a functional signature of the cell, a morphological signature of the cell, or a marker of the cell (claim 242, claim 251)), Kyriazis et al. teaches training and deploying a machine learning model to classify microglia based on images of microglia processes. Microglia processes play an important role in morphology, immune response and function of microglia (abstract).

A person having ordinary skill in the art would be motivated to combine the machine learning model of Kobayashi et al. in view of Scheeder et al. in view of Jackson et al. in view of Oja et al. with the knowledge of immune cell response, morphology, and function from Kyriazis et al. in order to make a machine learning model to predict function, immune response or morphology of individual cells. One would have had a reasonable expectation of success because machine learning models have successfully detected morphology changes in cells, and function, immune response or morphology changes of individual cells can be detected via images.

Claims 243-244, 252-253, and 256 are rejected under 35 U.S.C.
103 as being unpatentable over Kobayashi et al. in view of Scheeder et al. in view of Jackson et al. as applied to claims 237, 245, 246 and 255 under 35 U.S.C. 103 above, and further in view of Montserrat et al. (Montserrat, D. M.; Lin, Q.; Allebach, J.; Delp, Edward J. Training Object Detection and Recognition CNN Models Using Data Augmentation. Electronic Imaging 2017, 2017 (10), 27–36).

Kobayashi et al. in view of Scheeder et al. in view of Jackson et al. teach a system and method for analyzing images and providing a set of candidate compounds associated with the images as applied to claims 237, 245, 246 and 255 above. Kobayashi et al. in view of Scheeder et al. in view of Jackson et al. does not explicitly teach wherein generating the training set further comprises: preprocessing one or more images in the first subset of images and one or more images in the second subset of images; and wherein preprocessing an image includes at least one of: rotating the image, inverting the image left, inverting the image right, inverting the image up, or inverting the image down (claim 243, claim 252); wherein applying the machine learned model to each image in the set of images further comprises: preprocessing each image in the set of images; and wherein preprocessing includes at least one of: normalization, image enhancement, image correction, contrast enhancement, brightness enhancement, filtering, transformation, adjusting image resolution, adjusting bit resolution, adjusting image size, adjusting field-of-view, background subtraction, image subtraction, or compression (claim 244, claim 253, claim 256).
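The geometric preprocessing recited in claims 243 and 252 (rotation and left/right/up/down inversion) is the same family of transforms Montserrat et al. describe for data augmentation. A minimal NumPy sketch, with hypothetical names and a toy array standing in for a real cell image:

```python
import numpy as np

def augment(image):
    """Return simple geometric variants of a 2-D image array:
    the four 90-degree rotations plus left-right and up-down flips."""
    variants = [np.rot90(image, k) for k in range(4)]   # 0, 90, 180, 270 degrees
    variants.append(np.fliplr(image))                   # invert left/right
    variants.append(np.flipud(image))                   # invert up/down
    return variants

# Toy 4x4 "image" standing in for a cell micrograph.
img = np.arange(16, dtype=float).reshape(4, 4)
augmented = augment(img)
n_variants = len(augmented)   # every variant reuses the same pixel values,
                              # so per-image intensity statistics are preserved
```

Augmented copies like these enlarge a training set without any new imaging, which is the benefit the rejection attributes to Montserrat et al.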
Regarding the limitations of dependent claims 243-244, 252-253, and 256 (wherein generating the training set further comprises: preprocessing one or more images in the first subset of images and one or more images in the second subset of images; and wherein preprocessing an image includes at least one of: rotating the image, inverting the image left, inverting the image right, inverting the image up, or inverting the image down (claim 243, claim 252); wherein applying the machine learned model to each image in the set of images further comprises: preprocessing each image in the set of images; and wherein preprocessing includes at least one of: normalization, image enhancement, image correction, contrast enhancement, brightness enhancement, filtering, transformation, adjusting image resolution, adjusting bit resolution, adjusting image size, adjusting field-of-view, background subtraction, image subtraction, or compression (claim 244, claim 253, claim 256)), Montserrat et al. teaches a method of augmenting training data through the use of data augmentation methods where linear and nonlinear transforms are done on the training data to create “new” training images (abstract).

A person of ordinary skill in the art would be motivated to combine the machine learning model taught by Kobayashi et al. in view of Scheeder et al. in view of Jackson et al. with the training data augmentation taught by Montserrat et al. in order to increase the available training data for model training and thereby increase performance. There is a reasonable expectation of success because linear and nonlinear transformations have previously been performed on images.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Connor Beveridge, whose telephone number is 571-272-2099. The examiner can normally be reached Monday - Thursday, 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Karlheinz Skowronek, can be reached at 571-272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.H.B./
Examiner, Art Unit 1687
/Karlheinz R. Skowronek/
Supervisory Patent Examiner, Art Unit 1687

Prosecution Timeline

Jun 06, 2022
Application Filed
Dec 23, 2025
Non-Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
