Prosecution Insights
Last updated: April 19, 2026
Application No. 18/031,972

METHOD FOR CLASSIFYING AN INPUT IMAGE REPRESENTING A PARTICLE IN A SAMPLE

Non-Final OA §103
Filed
Apr 14, 2023
Examiner
ZHANG, WAYNE
Art Unit
2672
Tech Center
2600 — Communications
Assignee
BIOASTER
OA Round
3 (Non-Final)
Grant Probability: 50% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 8 granted / 16 resolved; -12.0% vs TC avg)
Interview Lift: +43.6% (strong), measured across resolved cases with interview
Typical Timeline: 3y 3m average prosecution
Career History: 38 total applications across all art units, 22 currently pending
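The tiles above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic — the with-interview rate is transcribed from the panel (the per-bucket grant counts behind it are not shown on this page, so it is entered directly rather than computed):

```python
def allow_rate(granted, resolved):
    """Allowance rate over resolved (granted + abandoned) cases."""
    return granted / resolved

# Career figures from the panel: 8 grants out of 16 resolved cases.
career = allow_rate(8, 16)               # 0.50, i.e. the 50% career allow rate

# The panel reports a ~94% allow rate when an interview was held;
# "lift" is the difference expressed in percentage points.
with_interview = 0.936                   # as displayed (rounds to 94%)
lift_points = (with_interview - career) * 100   # ~ +43.6 points
```

This is why the "94% With Interview" and "+43.6% Interview Lift" tiles are two views of the same number: 50% + 43.6 points ≈ 94%.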

Statute-Specific Performance

§101: 19.2% (-20.8% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 25.1% (-14.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 16 resolved cases.
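Each delta in this panel is the examiner's per-statute rate minus the Tech Center average, so the TC baseline can be recovered from the displayed numbers alone. A small sketch using only values transcribed from the panel:

```python
# Examiner per-statute rates and their deltas vs. the Tech Center average,
# as shown in the panel above (percentages).
examiner = {"101": 19.2, "102": 11.0, "103": 42.4, "112": 25.1}
delta =    {"101": -20.8, "102": -29.0, "103": 2.4, "112": -14.9}

# Back-compute the TC baseline each delta was measured against.
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
```

Every statute back-computes to the same 40.0% baseline, so the "TC avg" reference here appears to be a single Tech Center-wide estimate rather than a per-statute average.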

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/17/2026 has been entered.

Response to Arguments

The claim objections have been withdrawn in light of the amended claims. The rejections under 35 U.S.C. 112(b) and 112(d) have been withdrawn in light of the amended claims.

The Applicant, in Remarks dated 1/22/2026 on page 6, states: “Soans discloses the use of a t-SNE algorithm for visualization purposes, and not for defining an operational embedding space used for classification. In particular, Soans explicitly states that t-SNE is used to visualize feature vectors learned by a convolutional neural network, while the classification decision remains entirely made by a CNN (see paragraphs [0056]-[0057]). Thus, Soans neither discloses nor suggests defining a t-SNE embedding space shared by reference feature maps and a target feature map, nor performing any classification or decision-making within such a t-SNE embedding”. As additionally supported by the Wikipedia excerpt below, the t-SNE algorithm is a dimensionality reduction technique that converts higher-dimensional data into lower-dimensional data. Soans not only visualizes the output of the t-SNE algorithm, as shown in Fig. 4, but also implements the algorithm in its own invention. The t-SNE algorithm is used for visualization, but that visualization occurs only through application of the technique.
[Image: media_image1.png — Wikipedia excerpt on t-SNE]

The Applicant, in Remarks dated 1/22/2026 on page 6, states: “Knight does not contain any teaching that would encourage the person skilled in the art to combine its disclosure with those of Tandon, Salman, or Soans. Knight relates to latent spaces derived from single-cell RNA-sequencing (scRNA-seq) expression data, which is a fundamentally different data modality from the image-based particle feature maps of the present application. Moreover, Knight explicitly indicates in paragraphs [0041] and [0119] that UMAP is better than t-SNE and presents t-SNE as being less stable and less suitable. Knight explains that UMAP achieves superior separability and better preservation of structure, whereas t-SNE is inferior. Thus, Knight provides a clear technical teaching that teaches away from the use of t-SNE as an operational embedding for analysis or decision-making. Knight performs its analyses exclusively in UMAP-derived latent spaces, and not in t-SNE spaces as described in paragraphs [0011] and [0024]. Accordingly, the one skilled in the art reading Knight would be not encouraged from using t-SNE as the operational decision space, which directly contradicts the Examiner's assertion”. The examiner (as modified below) is simply using the k-nearest-neighbor algorithm as taught by Knight. Rather than importing Knight's entire process (i.e., applying the algorithm to latent spaces), the rejection takes the concept of the k-nearest-neighbor algorithm and implements it in Tandon's space. Knight's comparison of dimensionality reduction techniques does not pertain to the modification and motivation at hand: while Knight states UMAP may be superior for constructing latent spaces, it does not say t-SNE is suboptimal. Knight's comparison likewise does not bear on the modification, which is simply using the k-nearest-neighbor algorithm to perform the classification of Tandon's invention.
Applicant’s arguments with respect to claim(s) 1-6, 8, 10-16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Priority

Receipt is acknowledged that the application claims priority to foreign application FR2010743 dated 10/20/2020. Copies of certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The IDS dated 12/10/2025 has been considered and placed in the application file.

Drawing Objections

Fig. 4, Fig. 5b, and Fig. 6 are objected to for having untranslated text. Drawings should be labeled with English text.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 6-8, 11-14 are rejected under 35 U.S.C.
103 as being unpatentable over Tandon (US 20180211380 A1) in view of Salman (US 20210406644 A1) and Soans (US 20210090250 A1).

Regarding claim 1, Tandon discloses a method for classifying at least one input image representing a target particle in a sample (“In the process 500, the one or more processors are configured to receive one or more images of a biological sample captured by the camera,” Tandon, paragraph [0200]). While Tandon discloses extraction of a feature map of said target particle from the input image by a convolutional neural network ("In some implementations, applying the machine-learning classification model to the plurality of images of cellular artifacts to classify the cellular artifacts includes: applying, by the one or more processors, a principal component analysis to the plurality of images of cellular artifacts to obtain a plurality of feature vectors for the plurality of cellular artifacts," Tandon, paragraph [0029]), Tandon does not teach a network trained beforehand on a public image database.

However, Salman teaches a convolutional neural network trained beforehand on a public image database (“Then, for each label representation image, a convolutional neural network with weights pre-trained on a large visual database (such as Imagenet) is used to extract a feature vector,” Salman, paragraph [0147]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to pre-train Tandon’s model on a public database, as taught by Salman. The suggestion/motivation for doing so would have been that using a pre-trained model skips training and speeds up feature extraction. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
While Tandon in view of Salman discloses dimensionality reduction (“In the following description, two primary implementations of machine learning model will be presented: a convolutional neural network and a randomized Principal Component Analysis (PCA) random forests model,” Tandon, paragraph [0294]), the combination does not teach “reduction of a number of variables of the extracted feature map, by a t-SNE algorithm, wherein the t-SNE algorithm defines an embedding space common to a plurality of feature maps of reference particle and to the extracted feature map”.

However, Soans teaches reduction of a number of variables of the extracted feature map by a t-SNE algorithm, wherein the t-SNE algorithm defines an embedding space common to a plurality of feature maps of reference particles and to the extracted feature map (“FIG. 4 depicts image feature vectors in two-dimensional t-distributed stochastic neighbor (t-SNE) embedded plots 402, 404, 406, 408, 410, 412 for the reduction to practice,” Soans, paragraph [0056], Fig. 4; the embedding space is the reduced-dimensional space resulting from the t-SNE application).

[Image: media_image2.png — Soans Fig. 4 t-SNE plots]

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to apply a t-SNE algorithm to Tandon’s (in view of Salman) feature vectors, as taught by Soans. The suggestion/motivation for doing so would have been that t-SNE keeps similar data points close together, allowing easier identification of clusters. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
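The disputed t-SNE step is easier to evaluate with its mechanics in view: t-SNE first converts pairwise distances between high-dimensional feature vectors into neighbor probabilities, then optimizes low-dimensional coordinates that preserve them. Below is a minimal pure-Python sketch of the affinity step only, with a fixed bandwidth sigma; a production implementation such as scikit-learn's TSNE tunes the bandwidth per point to a target perplexity and also performs the map optimization. The feature vectors are hypothetical stand-ins:

```python
import math

def tsne_affinities(points, sigma=1.0):
    """High-dimensional affinity step of t-SNE: convert pairwise squared
    Euclidean distances into conditional probabilities p(j|i) via a
    Gaussian kernel. Only this first stage of t-SNE is sketched here."""
    n = len(points)
    p = []
    for i in range(n):
        # Squared distances from point i to every point j.
        d2 = [sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
              for j in range(n)]
        # Gaussian weights; a point is never its own neighbor.
        w = [0.0 if j == i else math.exp(-d2[j] / (2 * sigma ** 2))
             for j in range(n)]
        total = sum(w)
        p.append([wj / total for wj in w])  # each row sums to 1
    return p

# Hypothetical reference feature maps plus a target map, embedded jointly.
feature_maps = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
P = tsne_affinities(feature_maps)
```

Note that standard t-SNE has no out-of-sample transform: a target feature map can only receive coordinates by being embedded jointly with the reference maps in a single run, which is the practical sense in which one t-SNE run defines a space "common" to reference and target maps.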
Tandon in view of Salman and Soans discloses unsupervised classification (“Training methods in which the identities of sample features and/or conditions are not used in training are termed unsupervised learning processes. While both supervised and unsupervised learning may be employed with the disclosed processes and systems, most examples herein are provided in the context of supervised learning,” Tandon, paragraph [0214]) of said input image depending on said feature map having a reduced number of variables, by implementation of an algorithm within said embedding space (“The reduced dimensional output describing the individual images and the labels associated with those images are provided to a random forests model generator 707 that produces a random forests model 709, which is ready to classify biological samples,” Tandon, paragraph [0258]). Therefore, it would have been obvious to combine Tandon in view of Salman and Soans to obtain the invention as specified in claim 1.

Regarding claim 6, Tandon in view of Salman and Soans discloses the method as claimed in claim 1, wherein said feature map is a vector of numerical coefficients each associated with one elementary image of a set of elementary images each representing a reference particle ("In some implementations, randomized PCA generates a ten dimensional feature vector from each image in the training set. Every element in the training set is represented by this multi-dimensional vector, and fed into the random forests module to correlate between the label and the features," Tandon, paragraph [0342]), and wherein the step of extracting said input image from an overall image of the sample comprises determination of numerical coefficients such that a linear combination of said elementary images weighted by said coefficients approximates the representation of said target particle in the input image (“FIG. 28A shows hypothetical dataset having only two dimensions on the left and the hypothetical decision tree on the right that is trained from the hypothetical dataset. In this simplified illustrative example, each feature vector includes only two components; curvature and eccentricity. Each data point (or sample feature) is labeled as either 1 (feature of interest) or 0 (not feature of interest). Plotted on the x-axis on the left of the figure is curvature expressed in an arbitrary unit,” Tandon, Col. 19, paragraph [0346], Fig. 28A; the location of the feature vector is a representation of the target particles).

[Image: media_image3.png — Tandon Fig. 28A]

Regarding claim 8, Tandon in view of Salman and Soans discloses the method as claimed in claim 1, wherein the feature map has a reduced number of variables being the result of embedding the extracted feature map into said embedding space (“FIG. 4 depicts image feature vectors in two-dimensional t-distributed stochastic neighbor (t-SNE) embedded plots 402, 404, 406, 408, 410, 412 for the reduction to practice,” Soans, paragraph [0056], Fig. 4; the embedding space is a low-dimensional space containing vector points, such as the two-dimensional space shown in Fig. 4).

[Image: media_image2.png — Soans Fig. 4 t-SNE plots]

Claim 11 corresponds to claim 1, additionally reciting “A system for classifying at least one input image representing a target particle in a sample” (“In various embodiments, the system includes at least one hardware component and/or at least one software component,” Tandon, paragraph [0382]), comprising at least one client comprising a data processor (“The system includes: a camera configured to capture one or more images of the biological sample; and one or more processors communicatively connected to the camera”, Tandon, paragraph [0004]). Thus, claim 11 is rejected for the same reasons of obviousness as claim 1.
Regarding claim 12, Tandon in view of Salman and Soans discloses the system as claimed in claim 11, further comprising an observing device configured to determine said target particle in the sample ("In some implementations, the one or more actuators can move the camera 412 and/or the stage 414 in one, two, or three dimensions,” Tandon, paragraph [0195]).

Regarding claim 13, Tandon in view of Salman and Soans discloses a non-transitory computer storage medium comprising code instructions for executing a method as claimed in claim 1 (“An additional aspect of the disclosure relates to a non-transitory computer-readable medium storing computer-readable program code to be executed by one or more processors, the program code including instructions to cause a system including a camera and one or more processors communicatively connected to the camera to,” Tandon, paragraph [0038]), for classifying at least one input image representing a target particle in a sample, when said program is executed on a computer (“In the process 500, the one or more processors are configured to receive one or more images of a biological sample captured by the camera,” Tandon, paragraph [0200]).

Regarding claim 14, Tandon in view of Salman and Soans discloses a non-transitory storage medium readable by a piece of computer equipment, on which a computer program product comprises code instructions for executing a method as claimed in claim 1 for classifying at least one input image representing a target particle in a sample (“An additional aspect of the disclosure relates to a non-transitory computer-readable medium storing computer-readable program code to be executed by one or more processors, the program code including instructions to cause a system including a camera and one or more processors communicatively connected to the camera to,” Tandon, paragraph [0038]).

Claim(s) 2-3, 5 are rejected under 35 U.S.C.
103 as being unpatentable over Tandon (US 20180211380 A1) in view of Salman (US 20210406644 A1), Soans (US 20210090250 A1), and further in view of Douet (US 10831156 B2).

Regarding claim 2, Tandon in view of Salman and Soans discloses the method as claimed in claim 1, but does not teach “wherein the particles are represented in a uniform manner in the input image and in each elementary image, and in particular centered on and aligned in a predetermined direction.”

However, Douet teaches wherein the particles are represented in a uniform manner in the input image and in each elementary image, and in particular centered on and aligned in a predetermined direction ("Without being bound by theory, the inventors were able to observe the occurrence of two poles in a bacterium before its division, which might correspond to the meiosis of a bacterium, as illustrated in FIG. 8 which shows the time variation of a bacterium," Douet, Col. 15, Lines 9-14, Fig. 8).

[Image: media_image4.png — Douet Fig. 8]

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to capture thumbnail images of Tandon’s (in view of Salman and Soans) samples in a uniform and centered manner, as taught by Douet. The suggestion/motivation for doing so would have been to acquire sample data with less position variance, resulting in faster machine learning and less room for classification errors. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Tandon in view of Salman and Soans, and further in view of Douet, to obtain the invention as specified in claim 2.
Regarding claim 3, Tandon in view of Salman, Soans, and Douet discloses the method as claimed in claim 2, comprising a step of extracting said input image from an overall image of the sample, so as to represent said target particle in said uniform manner ("Typically, the disclosed embodiments have enabled to obtain a thumbnail image representing a bacterium with a number of pixels in the range from 100 to 400 pixels," Douet, Col. 15, Lines 3-5, Fig. 8; the thumbnail images are extracted from an overall image of the samples).

Regarding claim 5, Tandon in view of Salman, Soans, and Douet discloses the method as claimed in claim 3, wherein the step of extracting said input image from an overall image of the sample comprises obtaining said overall image from an intensity image of the sample, said image being acquired ("The system automatically moves the camera 428 and/or the stage 430 so that the camera 428 can capture one or more images of the biological sample without requiring a human operator to adjust the camera or the stage to change the relative positions of the camera 428 and the biological samples on the stage 430," Tandon, paragraph [0196]).

Claim(s) 4 is rejected under 35 U.S.C. 103 as being unpatentable over Tandon (US 20180211380 A1) in view of Salman (US 20210406644 A1), Soans (US 20210090250 A1), Douet (US 10831156 B2), and further in view of Abu Qura (US 20210334971 A1).

Regarding claim 4, Tandon in view of Salman, Soans, and Douet discloses the method as claimed in claim 3, wherein the step of extracting said input image from an overall image of the sample comprises segmentation of said overall image so as to detect said target particle in the sample (“The one or more processors are configured to: receive the one or more images of the biological sample captured by the camera; segment the one or more images of the biological sample to obtain a plurality of cellular artifacts,” Tandon, paragraph [0004]).
Tandon in view of Salman, Soans, and Douet does not teach “then cropping of the input image to said detected target particle”. However, Abu Qura teaches cropping of the input image to said detected target particle (“The processor may perform various operations on the image to make the image cleaner for analysis. For example, the processor may automatically crop the image 240 or perform edge detection of the image 240 of the test kit 210 prior to transmitting the image for analysis,” Abu Qura, paragraph [0068]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to crop the segmented image of Tandon (in view of Salman, Soans, and Douet), as taught by Abu Qura. The suggestion/motivation for doing so would have been to acquire a smaller image, requiring less data processing and fewer resources. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Tandon in view of Salman, Soans, and Douet, and further in view of Abu Qura, to obtain the invention as specified in claim 4.

Claim(s) 10 is rejected under 35 U.S.C. 103 as being unpatentable over Tandon (US 20180211380 A1) in view of Salman (US 20210406644 A1), Soans (US 20210090250 A1), and further in view of Lee (US 20210256322 A1).

Regarding claim 10, Tandon in view of Salman and Soans discloses the method as claimed in claim 1, for classifying a sequence of input images representing said target particle in a sample over time ("In the process 500, the one or more processors are configured to receive one or more images of a biological sample captured by the camera,” Tandon, paragraph [0200]; taking more than one image of a sample is taking images over time).
Tandon in view of Salman and Soans does not teach “wherein step of extraction of a feature map comprises concatenation of the extracted feature maps of each input image of said sequence”. However, Lee teaches wherein the step of extraction of a feature map comprises concatenation of the extracted feature maps of each input image of said sequence ("The first to fourth fully-connected layers 831, 832, 833, and 834 may perform parallel processing (e.g., simultaneous processing) on the first to fourth partial images. The feature vectors of the first to fourth partial images may be concatenated to form one vector (hereinafter, a connected feature vector),” Lee, paragraph [0098]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to use Lee’s method of concatenation on Tandon’s (in view of Salman and Soans) feature vectors to form a connected vector. The suggestion/motivation for doing so would have been to merge the information of different feature vectors, allowing for more comprehensive results. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Tandon in view of Salman and Soans, and further in view of Lee, to obtain the invention as specified in claim 10.

Claim(s) 15 is rejected under 35 U.S.C. 103 as being unpatentable over Tandon (US 20180211380 A1) in view of Salman (US 20210406644 A1), Soans (US 20210090250 A1), and further in view of Knight (US 20220254440 A1). Regarding claim 15, Tandon in view of Salman and Soans discloses the method as claimed in claim 1.
Tandon in view of Salman and Soans does not teach “wherein the unsupervised classification comprises implementation of a k-nearest neighbor algorithm as the algorithm in the embedding space.” However, Knight teaches wherein the unsupervised classification comprises implementation of a k-nearest neighbor algorithm as the algorithm in the embedding space ("For example, the anomaly detection algorithm may comprise one or more of: a density-based technique (k-nearest neighbor, local outlier factor, isolation forest), a subspace-based outlier detection, a correlation-based outlier detection, a tensor-based outlier detection, a support vector machine (SVM), a single-class vector machine, support vector data description, a neural network (e.g., replicator neural network, autoencoder, long short-term memory (LSTM) neural network), a Bayesian network, a hidden Markov model (HMM), a cluster analysis-based outlier detection, deviation from association rules and frequent itemsets, fuzzy logic-based outlier detection, and an ensemble technique (e.g., using feature bagging, score normalization, and different sources of diversity),” Knight, paragraph [0086]). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the k-nearest neighbor algorithm of Knight in Tandon’s (in view of Salman and Soans) embedding space. The suggestion/motivation for doing so would have been that the algorithm requires no training, thus reducing processing and saving time. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Tandon in view of Salman and Soans, and further in view of Knight, to obtain the invention as specified in claim 15.
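The decision step at issue in claim 15, a k-nearest-neighbor vote carried out inside the low-dimensional embedding, can be sketched in a few lines of pure Python. The 2-D coordinates and class labels below are hypothetical stand-ins for embedded reference feature maps and a target map:

```python
from collections import Counter

def knn_classify(target, references, labels, k=3):
    """Classify a target point by majority vote among its k nearest
    reference points (Euclidean distance) in the embedding space."""
    dists = sorted(
        (sum((t - r) ** 2 for t, r in zip(target, ref)) ** 0.5, lab)
        for ref, lab in zip(references, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D embedding coordinates for labeled reference particles.
refs = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (4.0, 4.1), (4.2, 3.9), (3.9, 4.0)]
labs = ["dividing", "dividing", "dividing", "resting", "resting", "resting"]

knn_classify((0.1, 0.1), refs, labs)   # -> "dividing"
```

With well-separated clusters the vote is unanimous; in practice k and the distance metric would need to be validated against the labeled references.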
Claim 16 corresponds to claim 15, additionally reciting the system (Tandon, paragraph [0004], “One aspect of the disclosure relates to a system for identifying a sample feature of interest in a biological sample of a host organism”). Thus, it is rejected for the same reasons of obviousness as claim 15.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WAYNE ZHANG whose telephone number is (571) 272-0245. The examiner can normally be reached Monday-Friday 10:00-6:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WAYNE ZHANG/
Examiner, Art Unit 2672

/SUMATI LEFKOWITZ/
Supervisory Patent Examiner, Art Unit 2672

Prosecution Timeline

Apr 14, 2023: Application Filed
Jun 02, 2025: Non-Final Rejection — §103
Aug 29, 2025: Response Filed
Sep 22, 2025: Final Rejection — §103
Jan 22, 2026: Request for Continued Examination
Feb 04, 2026: Response after Non-Final Action
Mar 13, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591990: METHOD AND APPARATUS FOR GENERATING SPATIAL GEOMETRIC INFORMATION ESTIMATION MODEL (2y 5m to grant; granted Mar 31, 2026)
Patent 12591958: INFRA-RED CONTRAST ENHANCEMENT FILTER (2y 5m to grant; granted Mar 31, 2026)
Patent 12561843: METHOD FOR MANAGING IMAGE DATA, AND VEHICLE LIGHTING SYSTEM (2y 5m to grant; granted Feb 24, 2026)
Patent 12536629: Image Processing Method and Electronic Device (2y 5m to grant; granted Jan 27, 2026)
Patent 12536667: METHOD AND FACILITY FOR SEGMENTATION OF HIGH-CONTRAST OBJECTS IN X-RAY IMAGES (2y 5m to grant; granted Jan 27, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50%
With Interview: 94% (+43.6%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
