Prosecution Insights
Last updated: April 19, 2026
Application No. 18/113,753

MODULE FOR IDENTIFICATION AND CLASSIFICATION TO SORT CELLS BASED ON THE NUCLEAR TRANSLOCATION OF FLUORESCENCE SIGNALS

Final Rejection — §101, §103, §DP
Filed: Feb 24, 2023
Examiner: HICKS, AUSTIN JAMES
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Sony Corporation of America
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% — above average (308 granted / 403 resolved; +21.4% vs TC avg)
Interview Lift: +25.1% — strong (resolved cases with interview)
Typical Timeline: 3y 4m avg prosecution; 54 currently pending
Career History: 457 total applications across all art units

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)
TC avg = Tech Center average estimate • Based on career data from 403 resolved cases

Office Action

§101, §103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Arguments related to the §101 rejection are not persuasive. Applicant argues: "The following limitations are clearly not directed to a mental concept: fine-tuning a classification network based on the cluster; and performing real-time live sorting of a set of cells using the classification network. A classification network is fine-tuned and then used to perform real-time live sorting of cells, which clearly cannot be implemented in the mind. Therefore, the rejection should be withdrawn." Remarks 5.

Fine-tuning was identified as an additional element, not part of the abstract idea. Non-final 2. Scientists used to do the sorting manually, so sorting is a mental concept. The additional element of fine-tuning is merely linked to the field of microscopy. Because Applicant claims a mental concept, and the additional elements do not integrate the abstract idea into a practical application, nor do they amount to significantly more than the abstract idea, the claimed subject matter is not patent eligible.

The Double Patenting rejection is withdrawn. Arguments related to the §103 rejection are moot in light of new art necessitated by the amendment filed 2/7/2026.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of a mental concept without significantly more. The claims recite the mental concept of extracting information, clustering, identifying, and sorting a set of cells.
This judicial exception is not integrated into a practical application because the additional element of training/fine-tuning is claimed broadly and amounts to insignificant extra-solution activity. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the mentions of hardware elements are all directed to generic computer parts.

Double Patenting

The Double Patenting rejection of claims 1, 8 and 15 is withdrawn.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-4, 6-11, 13-18, 20 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over "Improving Image Clustering With Multiple Pretrained CNN Feature Extractors" by Guerin, "High-speed fluorescence image-enabled cell sorting" by Schraivogel et al., US 20180189602 A1 to Hellier, and US 11636161 B1 to Chang et al. Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over the same four references in further view of US 20080118160 A1 to Fan et al.

Guerin teaches claims 1, 8 and 15. A method comprising: extracting one or more features from (Guerin Fig. 2, see below; features extracted from images.
The Multiview generator/encoder is a "CNN" according to the Guerin Fig. 2 description.)

[image: Guerin Fig. 2, greyscale]

clustering one or more (Guerin Fig. 2 above; the clustering network is MV net.)

identifying a cluster of the one or more clusters to (Guerin p. 6 sec. 4.1 and Fig. 2 above. The clusters are classes. A cluster is identified when a datapoint similar to the identified cluster is input to the model in Fig. 2. Guerin p. 6 sec. 4.1: "VOC2007 [8] is an image classification dataset presenting visual objects from various classes in a realistic scene. This is a very challenging dataset for clustering…")

performing real-time live sorting of a set of (This sorting is different from the sorting that happens within the cluster in the preceding claim element. This sorting is clustering/classifying, shown in Guerin Fig. 2 above. The output clusters are the output classes.)

Guerin doesn't use cell images and different dyes for the nucleus and target protein. However, Schraivogel teaches cell images… [and] fine-tuning a classification network (Schraivogel Fig. 2 description, p. 3: "HeLa cells expressing RelA-mNG were treated with TNFa or left untreated and stained with the cell-permeable nuclear dye DRAQ5. Cells were then gated for singlets and live cells, and the correlation between RelA-mNG and DRAQ5 was used to differentiate between the treated (nuclear RelA) and untreated (cytoplasmic RelA) conditions." Differentiating is detecting. DRAQ5 is the nuclear dye. The target protein dye is RelA-mNeonGreen (RelA-mNG). Schraivogel p. 2 makes it clear that this data was part of the training data set: "We used this training dataset to identify the most differing image-, scatter- and intensity-based parameters…")

Schraivogel, Guerin and the claims are all image processors.
It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use cell images and fine-tune on dyed cells because this fine-tuned classification sorts images based on cell measurements "from image data at speeds up to 15,000 events per second" and because these machine learning discriminators can solve the technical challenge of "isolation of single cells with unique spatial and morphological traits…" Schraivogel abs.

Guerin doesn't sort within the cluster. However, Hellier teaches how to sort within a cluster. (Hellier para 5: "a processor for sorting the plurality of images into a first group of images and a second group of images in response to metadata associated with each of said plurality of images, sorting said first group of images into a third group of images and a fourth group of images in response to a media attribute of each of said plurality of images within said first group of images…") The claims, Guerin and Hellier all cluster images. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to sort/cluster within a cluster because it allows a user or model to choose "the best images in terms of quality." Hellier para 3.

Guerin doesn't fine-tune based on the clusters. However, Chang teaches fine-tuning a classification network based on the cluster. (Chang abs: "The initial clusters thus generated are fine-tuned by undergoing an iterative self-tuning process, which continues when new data is streamed from data source(s)." Clustering is classifying.) Chang, the claims and Guerin all teach clustering. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to fine-tune a network because this "hybrid approach combines strengths of user domain knowledge and machine learning power." Chang abs.

Schraivogel teaches claims 2, 9 and 16. The method of claim 1 wherein the one or more features comprise a target protein based on a fluorescent dye.
(Schraivogel Fig. 2 description, p. 3: "HeLa cells expressing RelA-mNG were treated with TNFa or left untreated and stained with the cell-permeable nuclear dye DRAQ5.")

Guerin teaches claims 3, 10 and 17. The method of claim 2 wherein clustering the one or more (Guerin Fig. 2 shows clustering of the images.) Guerin doesn't teach a cell based on location of target protein. However, Schraivogel teaches that the features of cells are based on a location of the target protein. (Schraivogel Fig. 2 description, p. 3: "the correlation between RelA-mNG and DRAQ5 was used to differentiate between the treated (nuclear RelA) and untreated (cytoplasmic RelA) conditions.")

Schraivogel teaches claims 4, 11 and 18. The method of claim 3 wherein when the target protein is in the cytosol, the one or more cells are clustered as dormant cells, and when the target protein is in the nucleus, the one or more cells are clustered as activated cells. (Schraivogel p. 5: "nuclear translocation of RelA upon NF-kB pathway activation… Cells were then treated with TNFa, and the 5% lower (cytoplasmic RelA) and upper (nuclear RelA) bins of the RelA-mNG/DRAQ5 correlation parameter were isolated…" The nuclear RelA is activated, and the cytosolic RelA is the non-activated dormant part. The clustering is taught by Guerin and Chang above.)

Guerin teaches claims 5, 12 and 19. The method of claim 1 wherein identifying the cluster to sort is based on a (Guerin p. 6 sec. 4.1 and Fig. 2 above. The clusters are classes. A cluster is identified when a datapoint similar to the identified cluster is input to the model in Fig. 2. Guerin p. 6 sec. 4.1: "VOC2007 [8] is an image classification dataset presenting visual objects from various classes in a realistic scene. This is a very challenging dataset for clustering…") Guerin doesn't teach a user manually identifying a cluster. However, Fan teaches a user manually identifying a cluster.
(Fan abs: "A user may browse an image database by identifying and accessing clusters of images that are progressively more refined.") The claims, Fan and Guerin all cluster images. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to allow a user to identify clusters because it makes the algorithm more adaptable to the user's needs.

Guerin teaches claims 6, 13 and 20. The method of claim 1 wherein identifying the cluster to (Guerin p. 6 sec. 4.1 and Fig. 2 above. The clusters are classes. A cluster is identified when a datapoint similar to the identified cluster is input to the model in Fig. 2. Guerin p. 6 sec. 4.1: "VOC2007 [8] is an image classification dataset presenting visual objects from various classes in a realistic scene. This is a very challenging dataset for clustering…") Guerin doesn't sort within the cluster. However, Hellier teaches how to sort within a cluster. (Hellier para 5: "a processor for sorting the plurality of images into a first group of images and a second group of images in response to metadata associated with each of said plurality of images, sorting said first group of images into a third group of images and a fourth group of images in response to a media attribute of each of said plurality of images within said first group of images…")

Chang teaches claims 7, 14 and 21. The method of claim 1 wherein fine-tuning the classification network includes performing training with an additional dataset based on the cluster. (Chang abs: "The initial clusters thus generated are fine-tuned by undergoing an iterative self-tuning process, which continues when new data is streamed from data source(s)." Clustering is classifying.)

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks, whose telephone number is (571) 270-3377. The examiner can normally be reached Monday - Thursday, 8-4 PST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUSTIN HICKS/
Primary Examiner, Art Unit 2142
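For readers less familiar with the machine-learning vocabulary in the rejection, the claim elements it maps (feature extraction, clustering, cluster selection, fine-tuning, live sorting) can be illustrated with a toy pipeline. This is a hypothetical sketch for orientation only: the function names, the one-dimensional "feature", the threshold classifier, and all the numbers are invented here and do not represent the applicant's claimed method or any cited reference's implementation.

```python
# Hypothetical toy pipeline mirroring the claim elements discussed above:
# extract a per-cell feature, cluster cells, select a cluster, fine-tune a
# simple threshold "classifier" on it, then sort incoming cells.
# All names and numbers are illustrative inventions, not the claimed method.

def extract_feature(nuclear_dye, target_dye):
    """Toy feature: fraction of target-protein signal co-located with the nucleus."""
    overlap = sum(min(n, t) for n, t in zip(nuclear_dye, target_dye))
    return overlap / (sum(target_dye) or 1)

def cluster(features, boundary=0.5):
    """Stand-in for the CNN-feature clustering step: a 1-D two-way split."""
    return {
        "activated": [f for f in features if f >= boundary],  # nuclear target protein
        "dormant": [f for f in features if f < boundary],     # cytosolic target protein
    }

def fine_tune(threshold, selected_cluster):
    """Nudge the decision boundary toward the selected cluster's edge."""
    return (threshold + min(selected_cluster)) / 2 if selected_cluster else threshold

def live_sort(stream, threshold):
    """Real-time sorting stand-in: keep cells classified as activated."""
    return [f for f in stream if f >= threshold]

# One feature from toy pixel vectors (nuclear dye vs. target-protein dye).
f = extract_feature([1.0, 0.0], [0.8, 0.2])  # mostly nuclear signal

# Cluster a small batch, select the "activated" cluster, tune, then sort.
clusters = cluster([0.9, 0.8, 0.2, 0.1, 0.7])
tuned = fine_tune(0.5, clusters["activated"])
kept = live_sort([0.65, 0.3, 0.95], tuned)
print(f, tuned, kept)
```

The real claims involve a classification network operating on fluorescence images; this sketch reduces each stage to a one-line stand-in purely so the terms traded back and forth in the rejection have a concrete referent.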

Prosecution Timeline

Feb 24, 2023 — Application Filed
Dec 19, 2025 — Non-Final Rejection — §101, §103, §DP
Feb 07, 2026 — Response Filed
Mar 20, 2026 — Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591767 — NEURAL NETWORK ACCELERATION CIRCUIT AND METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12554795 — REDUCING CLASS IMBALANCE IN MACHINE-LEARNING TRAINING DATASET (granted Feb 17, 2026; 2y 5m to grant)
Patent 12530630 — Hierarchical Gradient Averaging For Enforcing Subject Level Privacy (granted Jan 20, 2026; 2y 5m to grant)
Patent 12524694 — OPTIMIZING ROUTE MODIFICATION USING QUANTUM GENERATED ROUTE REPOSITORY (granted Jan 13, 2026; 2y 5m to grant)
Patent 12524646 — VARIABLE CURVATURE BENDING ARC CONTROL METHOD FOR ROLL BENDING MACHINE (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 99% (+25.1%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 403 resolved cases by this examiner. Grant probability derived from career allow rate.
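The headline figures are simple arithmetic on the examiner's career data shown above. A minimal sketch of one plausible derivation, assuming the tool divides grants by resolved cases and adds the interview lift with a 99% cap (the additive lift and the cap are assumptions for illustration, not the tool's documented formula):

```python
# Recompute the dashboard's headline numbers from the career data shown above.
# The additive interview lift and the 99% cap are assumptions for illustration.
granted, resolved = 308, 403           # from the Examiner Intelligence panel
allow_rate = granted / resolved * 100  # career allow rate, in percent

interview_lift = 25.1                  # percentage-point lift with interview
with_interview = min(allow_rate + interview_lift, 99.0)

print(round(allow_rate), round(with_interview))
```

Under these assumptions the sketch reproduces the displayed 76% grant probability and the 99% with-interview figure; the actual model behind the dashboard may weight recency, art unit, or statute mix differently.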
