Prosecution Insights
Last updated: April 19, 2026
Application No. 18/031,928

METHOD FOR CLASSIFYING AN INPUT IMAGE REPRESENTING A PARTICLE IN A SAMPLE

Final Rejection — §103, §DP
Filed: Apr 14, 2023
Examiner: VARNDELL, ROSS E
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: BIOASTER
OA Round: 2 (Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 85%, above average (520 granted / 615 resolved; +22.6% vs TC avg)
Interview Lift: +13.0%, a moderate lift, comparing resolved cases with vs. without an interview
Typical Timeline: 2y 4m average prosecution; 28 applications currently pending
Career History: 643 total applications across all art units

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 66.9% (+26.9% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Compared against a Tech Center average estimate. Based on career data from 615 resolved cases.

Office Action

§103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The IDS(s) has/have been considered and placed in the application file.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Arguments

This final office action is in response to the amendment filed 8/1/2025. Claims 1-14 and 16 are pending in this application and have been considered below. Claim 15 is canceled by the applicant. Applicant’s arguments with respect to claims 1-14 and 16 have been considered but are moot in view of new ground(s) of rejection necessitated by the amendments.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-14 and 16 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-14 of copending Application No. 18/031,972 (reference application ’972). Although the claims at issue are not identical, they are not patentably distinct from each other because the two sets of claims are obvious variants of methods and machines to lower the dimensionality of the feature space using the t-SNE algorithm so the classified particles from the CNN can be visualized in 2D or 3D space. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claims 1-14 and 16 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-13 of copending Application No. 18/032,399 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the two sets of claims address the same problem of classifying a target particle on a client device over time (18/031,928 claims 1 and 12; 18/032,399 claim 1) using a CNN on a client. At the time of the filing, processing sequences of images (like frames or slices) by stacking them into a 3D volume and applying a 3D CNN was well known. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 5-7, 10-14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Douet et al. (US 20190086866 A1 – hereinafter “Douet”) in view of Ching et al.
(Opportunities and obstacles for deep learning in biology and medicine – hereinafter “Ching”) in view of Gur et al. (US 11,823,046 B2 – hereinafter “Gur”).

Claims 1, 13 and 16. (Currently Amended)

Douet discloses a CRM (¶109) and a method for classifying at least one input image representing a target particle in a sample (Fig. 1, 11a-11f, 12; ¶96), the method being characterized in that it comprises implementation, by a data processor (Fig. 1, 20; ¶109).

Douet discloses all of the subject matter as described above except for specifically teaching “of a client, of steps of: (b) extraction of a feature map of said target particle by means of a convolutional neural network pre-trained on a public image database; (c) classification of said input image depending on said extracted feature map.”

However, Ching in the same field of endeavor teaches of a client (p. 31, left column, discloses “Cloud computing affords researchers flexibility, and enables the use of specialized hardware (e.g. FPGAs, ASICs and GPUs) without major investment.”), of steps of: (b) extraction of a feature map by means of a convolutional neural network pre-trained on a public image database (p. 3, Fig. 1 discloses a “CNN”; p. 2, right column, discloses “A layer consists of a set of nodes, sometimes called ‘features’”; p. 21 discloses “latest neural network architectures (ResNet, Inception, Xception and others) are already optimized for and pre-trained on generic, large-scale image datasets [344]”; p. 32, left column, discloses “a model pre-trained on available public data.”); (c) classification of said input image depending on said extracted feature map (p. 13, left column, discloses “Two-dimensional CNNs are ideal for segmentation, feature extraction and classification in fluorescence microscopy images”).

Therefore, it would have been obvious to one of ordinary skill in the art to combine Douet and Ching before the effective filing date of the claimed invention. The motivation for this combination of references would have been to use deep learning, particularly CNNs, for image-based classification and feature extraction of useful information in complex biological and medical domains. Successful applications of these techniques are seen in Ching, including healthcare (Section 2.1) and morphological phenotypes (Section 3.11), including classifying lesions, segmenting organs, and analyzing histology slides (p. 7, right column). This motivation for the combination of Douet and Ching is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141(III).

Douet discloses all of the subject matter as described above except for specifically teaching a “feature map” and “wherein step (c) is implemented by means of a classifier, different to the convolutional neural network at step (b), said classifier being trained, using a training database of already classified feature maps of particles in said sample, wherein the public image is different than the training database.”

However, Gur in the same field of endeavor teaches a feature map (¶65 discloses “latent feature … represented as an ordered collection of numerical values, e.g., a vector or matrix of numerical values.”) and wherein step (c) is implemented by means of a classifier (C2:37-39: “feature vectors are submitted to a pretrained support-vector machine, which returns a candidate image label”; an SVM, i.e. a classifier), different to the convolutional neural network at step (b) (C11:L8-11: “after using the method of FIG. 2 to pretrain the CNN, the present invention derives feature vectors from the output of three deep layers of the trained network”; where Ching pp. 29-30 teaches public datasets “ImageNet [46] and CIFAR [489] Datasets” that use “transfer learning”), said classifier being trained, using a training database of already classified feature maps of particles in said sample (C11:L27-34: “pretraining the SVM may comprise … and then refining the SVM as a function of an accuracy of the SVM's resulting training output.” Thus, the classifier (SVM) is different from the CNN at step (b), trained using a training database comprised of labeled feature vectors (feature maps).), wherein the public image is different than the training database (As shown in the citations above, the CNN is pretrained on a public dataset (ImageNet per common practice) while the downstream SVM is trained on domain feature vectors from the target images. Maintaining these datasets as distinct is conventional in transfer learning (see Ching).).

Therefore, it would have been obvious to one of ordinary skill in the art to combine Douet and Gur before the effective filing date of the claimed invention. The motivation for this combination of references would have been to decouple the pipeline, using a pretrained CNN as feature extractor and a separate SVM trained on feature vectors, to leverage robust SVM decision boundaries on fixed deep embeddings. This combination of prior art yields predictable improvements in generalization and performance on limited clinical datasets and reflects routine transfer-learning practices.

Claim 2.

The combination of Douet, Ching, and Gur discloses the method as claimed in claim 1, wherein the particles (Douet ¶¶ 62, 68).

Claim 3.
The combination of Douet, Ching, and Gur discloses the method as claimed in claim 2, comprising a step (a) of extracting said input image from an overall image of the sample, so as to represent said target particle in said uniform manner (Douet ¶155 discloses “obtain a thumbnail image representing a bacterium with a number of pixels in the range from 100 to 400 pixels.”; Figs. 8-9).

Claims 5 and 14.

The combination of Douet, Ching, and Gur discloses the method as claimed in claim 3, wherein step (a) comprises obtaining said overall image from an intensity image of the sample, said image being acquired by a digital sensor (Douet ¶131 discloses “acquisition 50 of intensity image Ih by image sensor 16.”).

Claim 6.

The combination of Douet, Ching, and Gur discloses the method as claimed in claim 1 (Ching p. 2, right column, discloses “one or more hidden layers”).

Claim 7.

The combination of Douet, Ching, and Gur discloses the method as claimed in claim 6, wherein said pre-trained convolutional neural network is an image-classifying network, in particular of the VGG, AlexNet, Inception, ResNet type (Ching p. 21 discloses “latest neural network architectures (ResNet, Inception, Xception and others) are already optimized for and pre-trained on generic, large-scale image datasets [344]”).

Claim 10.

The combination of Douet, Ching, and Gur discloses the method as claimed in claim 9, wherein said classifier is chosen from a support vector machine, a k-nearest neighbors algorithm, or a convolutional neural network (Ching p. 14, right column, discloses “support vector machines (SVMs)”).

Claim 11.

The combination of Douet, Ching, and Gur discloses the method as claimed in claim 1, wherein step (c) comprises reducing the number of variables of the feature map by means of the t-SNE algorithm (Ching p. 20, left column, discloses “t-Distributed Stochastic Neighbour Embedding [303].”).

Claim 12.
The combination of Douet, Ching, and Gur discloses the method as claimed in claim 1, for classifying a sequence of input images representing said target particle in a sample over time (Douet ¶30 discloses “This embodiment enables to track a particle on a plurality of successive images to form a film showing the behavior of a particle over time”), wherein step (b) comprises concatenation of the extracted feature maps of each input image of said sequence (Ching p. 32, right column, discloses “These individual representations are further concatenated before or within fully connected layers.”).

Claims 4, 8, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Douet in view of Ching in view of Gur, further in view of Hong et al. (US 20230013209 A1 – hereinafter “Hong”).

Claim 4.

The combination of Douet, Ching, and Gur discloses the method as claimed in claim 3, wherein step (a) comprises segmentation of said overall image so as to detect said target particle in the sample (Douet ¶131 discloses “conventional image processing (segmentation … detection of particles based on their morphology, etc.)”), then (Douet Fig. 8). Douet, Ching, and Gur disclose all of the subject matter as described above except for specifically teaching “cropping.” However, Hong in the same field of endeavor teaches cropping (Hong ¶60 discloses “phase-contrast microscope 108 may generate a 3D QPI 106 depicting thousands of microorganisms (e.g., bacteria) mounted on a microscope slide. The system 110 may detect individual microorganisms in the 3D QPI 106 (e.g., using an object detection neural network trained to detect microorganisms) and crop multiple regions from the 3D QPI 106 which each depict one or more respective microorganisms (emphasis added).”; Hong ¶¶ 61-63). Therefore, it would have been obvious to one of ordinary skill in the art to combine Douet, Ching, Gur, and Hong before the effective filing date of the claimed invention.
The motivation for this combination of references would have been to use Hong’s microorganism CNN workflow with Douet’s particle acquisition to supply “the input image representing a target particle in the sample” and to adopt Gur’s decoupled transfer-learning pipeline as a routine practice in view of Ching’s teachings on transfer learning for biomedical imaging, yielding predictable improvements with limited clinical data.

Claim 8.

The combination of Douet, Ching, Gur, and Hong discloses the method as claimed in claim 6, wherein (Hong ¶68 discloses pooling layers; ¶110 discloses a “global average pooling layer 728”), the extracted feature map having a spatial size of 1x1 as a result (Hong ¶110 discloses “a 3D convolutional layer 722 with 1x1x1 convolutional filters”; where a 2D convolution would use a 1x1 filter).

Claim 9.

The combination of Douet, Ching, Gur, and Hong discloses the method as claimed in claim 1, wherein step (c) is implemented by means of a classifier, the method comprising a step (a0) of training, by data-processing means of a server (Ching p. 6, Section 2.1 discloses various training types; Ching p. 31, left column, discloses “Cloud computing affords researchers flexibility, and enables the use of specialized hardware (e.g. FPGAs, ASICs and GPUs) without major investment.”), parameters of said classifier using a training database of already classified feature maps of particles in said sample (Hong ¶109 and Fig. 6G discloses using the trained classification neural network to identify properties of bacteria; Fig. 6B and ¶104 discloses the latent features corresponding to correct or incorrect classification).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ross Varndell whose telephone number is (571) 270-1922. The examiner can normally be reached M-F, 9-5 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, O’Neal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Ross Varndell/
Primary Examiner, Art Unit 2674
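The pipeline the §103 rejection maps out is a standard transfer-learning pattern: a CNN pre-trained on a public image database serves as a fixed feature extractor, global average pooling collapses each channel to a 1x1 spatial map (cf. claim 8), and a separate classifier is trained on already-classified feature vectors (cf. claims 1 and 9). The sketch below is purely illustrative and is not the applicant's or any cited reference's implementation: the random 3x3 filter bank stands in for a pre-trained CNN, and a nearest-centroid rule stands in for the SVM of Gur.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a CNN pre-trained on a public image database (e.g. ImageNet):
# a fixed, non-trainable bank of random 3x3 filters (an assumption for
# illustration only; a real pipeline would use ResNet/VGG-style features).
FILTERS = rng.standard_normal((8, 3, 3))

def extract_feature_map(image):
    """Step (b): valid 2-D convolution of the input image with each filter."""
    n, k, _ = FILTERS.shape
    H, W = image.shape
    fmap = np.empty((n, H - k + 1, W - k + 1))
    for i in range(n):
        for y in range(H - k + 1):
            for x in range(W - k + 1):
                fmap[i, y, x] = np.sum(image[y:y + k, x:x + k] * FILTERS[i])
    return fmap

def global_average_pool(fmap):
    """Collapse each channel to one value: spatial size 1x1 (cf. claim 8)."""
    return fmap.mean(axis=(1, 2))

def features(image):
    return global_average_pool(extract_feature_map(image))

# Step (c): a classifier distinct from the CNN, trained on a database of
# already-classified feature vectors. A nearest-centroid rule stands in
# here for the SVM described in the rejection.
def train_classifier(images, labels):
    X = np.stack([features(im) for im in images])
    y = np.asarray(labels)
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(centroids, image):
    f = features(image)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Toy "particle thumbnails": class 0 is dark, class 1 is bright.
train = [np.zeros((16, 16)), np.ones((16, 16))]
model = train_classifier(train, [0, 1])
print(classify(model, np.full((16, 16), 0.9)))  # -> 1 (the bright class)

# For a sequence of images over time (cf. claim 12), per-image feature
# vectors can simply be concatenated:
#   np.concatenate([features(im) for im in sequence])
```

The key property, and the point of the rejection's mapping, is that the feature extractor and the classifier are trained on different data: the extractor's parameters are fixed (here, never trained at all), while only the downstream classifier sees the labeled domain examples.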

Prosecution Timeline

Apr 14, 2023
Application Filed
May 15, 2025
Non-Final Rejection — §103, §DP
Aug 01, 2025
Response Filed
Oct 31, 2025
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603810: System and Method for Communications Beam Recovery (2y 5m to grant; granted Apr 14, 2026)
Patent 12597238: Automatic Image Variety Simulation for Improved Deep Learning Performance (2y 5m to grant; granted Apr 07, 2026)
Patent 12582348: Device and Method for Inspecting a Hair Sample (2y 5m to grant; granted Mar 24, 2026)
Patent 12579441: Systems and Methods for Image Reconstruction (2y 5m to grant; granted Mar 17, 2026)
Patent 12579786: System and Method for Property Typicality Determination (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 85%
With Interview: 98% (+13.0%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 615 resolved cases by this examiner. Grant probability derived from career allow rate.
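The headline figures are simple derived quantities: the 85% grant probability is the career allow rate (520 granted of 615 resolved) rounded up, and the with-interview figure appears to be that base rate plus the +13.0-point interview lift (an assumption about how the report composes the two numbers). As arithmetic:

```python
granted, resolved = 520, 615           # examiner career totals reported above
allow_rate = 100 * granted / resolved  # career allow rate, in percent
print(round(allow_rate, 1))            # 84.6 -> displayed as 85%

interview_lift = 13.0                  # percentage-point lift reported above
print(round(allow_rate + interview_lift))  # 98, the "with interview" figure
```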
