Prosecution Insights
Last updated: April 19, 2026
Application No. 18/200,474

SINGLE EXTRACELLULAR VESICLE SORTING BASED ON SURFACE BIOMARKERS

Non-Final OA: §101, §102, §112
Filed
May 22, 2023
Examiner
PARK, SOO JIN
Art Unit
2675
Tech Center
2600 — Communications
Assignee
Sony Corporation of America
OA Round
1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average; +19.8% vs TC avg), 589 granted / 720 resolved
Interview Lift: +17.3% on resolved cases with interview (strong)
Typical timeline: 2y 8m avg prosecution; 15 currently pending
Career history: 735 total applications across all art units
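The headline figures above are simple arithmetic on the career counts. A minimal sketch reproducing the 82% Career Allow Rate from the raw numbers (the function name is illustrative, not any real API):

```python
def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

rate = allow_rate_pct(589, 720)  # this examiner's counts, shown above
print(round(rate))               # → 82
```

589 grants out of 720 resolved cases works out to 81.8%, which the dashboard rounds to 82%.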

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 37.3% (-2.7% vs TC avg)
§102: 26.3% (-13.7% vs TC avg)
§112: 19.3% (-20.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 720 resolved cases

Office Action

DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception in the form of an abstract idea without significantly more. Following is an analysis of claim 1 under the subject matter eligibility test for products and processes in MPEP 2106:

[Claim 1] A method programmed in a non-transitory memory of a device comprising: receiving input at one or more neural networks; and classifying the input into one or more classifications using image analysis [using] machine learning based on fluorescence related to biomarkers in the input with the one or more neural networks.

The claim is to a process. (Step 1: YES)

Each of elements (c) and (d) falls within the mental processes grouping of abstract ideas because they cover concepts performed in the human mind, including an observation, evaluation, judgment, or opinion. See MPEP 2106.04(a)(2), subsection III.C. (Step 2A, Prong One: YES)

Additional element (a) recites insignificant extra-solution activity, because it is deemed a pre-solution activity that amounts to necessary data gathering. See MPEP 2106.05(g). Each of additional elements (b), (e), and (g) recites mere instructions to apply the judicial exception, because the claim recites only the idea of a solution or outcome (“classifying the input into one or more classifications”) without reciting details of how such classification is made via the machine learning and neural networks. See MPEP 2106.05(f). Each of these elements also indicates use of a general purpose computer. See MPEP 2106.05(a).
Additional element (f) merely indicates a field of use or technological environment in which the judicial exception is performed. See MPEP 2106.05(h). (Step 2A, Prong Two: NO)

Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claim is directed to the judicial exception. (Step 2A: YES)

Even when considered in combination, these additional elements represent a process to implement an abstract idea or other judicial exception and insignificant extra-solution activity by a generic computer, and do not provide an inventive concept. (Step 2B: NO)

Claims 8 and 15 recite subject matter similar to claim 1, and are rejected for similar reasons (i.e., a process or product to implement an abstract idea or other exception and insignificant extra-solution activity by a generic computer in an indicated field of technology). Additional elements in each of claims 2 and 9 merely indicate a field of use or technological environment in which the judicial exception is performed. Additional elements in each of claims 3-7, 10-14, and 16-20 further recite mental processes.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 3-8, and 10-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 1, the limitation “using image analysis and machine learning” renders the claim indefinite for the following reasons:

i) It is unclear and confusing whether “using image analysis” and “using machine learning” are two separate elements. Please amend the claim for clarification. Similar reasons apply to claims 8 and 15.

ii) It is unclear and confusing what the limitation “image analysis” refers to. The applicant’s specification merely states that “any type of image analysis is able to be utilized” (6th page of the specification, last paragraph) and lists a variety of known image analysis techniques. Since the applicant does not clearly specify which image analysis techniques are used and how, the metes and bounds of the subject matter in the claim are not particularly pointed out and distinctly defined for one of ordinary skill in the art. Please amend the claim for clarification. Similar reasons apply to claims 8 and 15.

Regarding claim 3, the limitation “classifying the input […] is based on detecting a single paint of intensity above a threshold” renders the claim indefinite for the following reasons:

iii) The phrase “a single paint” appears to be a typographical error that should be amended to read “a single point”.

iv) Claim 1, upon which claim 3 depends, clearly recites that the classification is based on “using image analysis” and “using machine learning”. Is the detection of a single point, recited in claim 3, a part of “using image analysis” or “using machine learning”, or is it a separate third step in addition to using image analysis and machine learning? Please amend the claim for clarification. Similar reasons apply to claims 10 and 16, each in view of claims 8 and 15.
Regarding claim 4, the limitation “classifying the input […] is based an detecting a plurality of points of intensity above a threshold” renders the claim indefinite for the following reasons:

v) The phrase “based an” appears to be a typographical error that should be amended to read “based on”.

vi) Claim 1, upon which claim 4 depends, clearly recites that the classification is based on “using image analysis” and “using machine learning”. Is the detection of a plurality of points, recited in claim 4, a part of “using image analysis” or “using machine learning”, or is it a separate third step in addition to using image analysis and machine learning? Please amend the claim for clarification. Similar reasons apply to claims 11 and 17 in view of claims 8 and 15.

Regarding claim 5, the limitation “classifying the input […] is based on determining a spot count of intensity greater than a threshold” renders the claim indefinite for the following reason:

vii) Claim 1, upon which claim 5 depends, clearly recites that the classification is based on “using image analysis” and “using machine learning”. Is the determination of a spot count, recited in claim 5, a part of “using image analysis” or “using machine learning”, or is it a separate third step in addition to using image analysis and machine learning? Please amend the claim for clarification. Similar reasons apply to claims 12 and 18 in view of claims 8 and 15.

Regarding claim 6, the limitation “classifying the input into one or more classifications includes detecting noise” renders the claim indefinite for the following reason:

viii) It is unclear and confusing when the noise is detected relative to the other steps. For example, is the noise detected after using the image analysis but prior to using machine learning? Please amend the claim for clarification. Similar reasons apply to claims 13 and 19 in view of claims 8 and 15.
Regarding claim 7, the limitation “separating the input based on detection of the biomarkers” renders the claim indefinite for the following reasons:

ix) It is unclear and confusing what the “detection of the biomarkers” refers to. Claim 1, upon which claim 7 depends, at most classifies the “input” based on fluorescence related to biomarkers, and does not recite classifying/detecting the biomarkers themselves. Please amend the claim for clarification. Similar reasons apply to claims 14 and 20 in view of claims 8 and 15.

x) It is unclear and confusing what steps are included in “separating the input”. For example, is the input merely categorized into different classes? Please amend the claim for clarification. Similar reasons apply to claims 14 and 20 in view of claims 8 and 15.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kimmerling et al. (USPN 11,530,974).

Regarding claim 1, Kimmerling discloses:

receiving input at one or more neural networks (see 6:58-62 and fig 8, a convolutional neural network (CNN) classifier receiving an input image from a suspended microchannel resonator (SMR) platform; and see 7:20-25, wherein the input image is a fluorescent image); and

classifying the input into one or more classifications using image analysis and machine learning based on fluorescence related to biomarkers in the input with the one or more neural networks (see 6:58-62, 7:20-25, and fig 8, the CNN classifier classifies the input image according to fluorescent markers in the image).
Regarding claim 2, Kimmerling further discloses: wherein the input comprises fluorescence images (see rejection of claim 1, the input image is a fluorescent image).

Regarding claim 3, Kimmerling further discloses: wherein classifying the input into one or more classifications is based on detecting a single paint of intensity above a threshold (see 6:58-62 and fig 8, the CNN classifier is trained with training data such that a single point of bright intensity above an inherent threshold is classified as a single live cell, while multiple points of bright intensity above an inherent threshold are classified as cell aggregates).

Regarding claim 4, Kimmerling further discloses: wherein classifying the input into one or more classifications is based an detecting a plurality of points of intensity above a threshold (see 6:58-62 and fig 8, the CNN classifier is trained with training data such that a single point of bright intensity above an inherent threshold is classified as a single live cell, while multiple points of bright intensity above an inherent threshold are classified as cell aggregates).

Regarding claim 5, Kimmerling further discloses wherein classifying the input into one or more classifications is based on determining a spot count of intensity greater than a threshold (see 6:58-62 and fig 8, the CNN classifier is trained with training data such that a single point of bright intensity above an inherent threshold is classified as a single live cell, while multiple points of bright intensity above an inherent threshold are classified as cell aggregates).

Regarding claim 6, Kimmerling further discloses wherein classifying the input into one or more classifications includes detecting noise (see fig 8, the CNN classifier is trained to classify debris).

Regarding claim 7, Kimmerling further discloses comprising separating the input based on detection of the biomarkers (see fig 8, binning the input image according to its classification result).
Regarding claims 8-14, Kimmerling discloses everything claimed as applied above (see rejection of claims 1-7; and see Kimmerling fig 7, a computer).

Regarding claim 15, Kimmerling discloses:

a first computing device configured for sending one or more fluorescent images of extracellular vesicles to a second computing device (see 30:54-31:38 and fig 7, server 719); and

the second computing device (see 30:54-31:38 and fig 7, computer 725) configured for:

receiving the one or more fluorescent images of extracellular vesicles at one or more neural networks (see 6:58-62 and fig 8, a CNN classifier receiving an input image from a SMR platform; and see 7:20-25, wherein the input image is a fluorescent image; and see 26:41-42, the input image is of extracellular vesicles); and

classifying the one or more fluorescent images of extracellular vesicles into one or more classifications using image analysis and machine learning based on fluorescence related to biomarkers in the one or more fluorescent images of extracellular vesicles with the one or more neural networks (see 6:58-62, 7:20-25, and fig 8, the CNN classifier classifies the input image according to fluorescent markers in the image; and see 26:41-43, wherein one of the known classes is of extracellular vesicles).

Regarding claims 16-20, Kimmerling discloses everything claimed as applied above (see rejection of claims 3-7 and 15).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Tang et al. (USPAPN 2024/0177504) discloses cell classification via implementing a fluorescence model.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SJ PARK whose telephone number is (571)270-3569. The examiner can normally be reached M-F 8:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANDREW MOYER, can be reached at 571-272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SJ Park/
Primary Examiner, Art Unit 2675

Prosecution Timeline

May 22, 2023
Application Filed
Jan 13, 2026
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602779
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND STORAGE MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597481
SYSTEM, MOBILE TERMINAL DEVICE, PROGRAM, AND METHOD
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12585700
VIDEO RETRIEVAL METHOD AND APPARATUS BASED ON KEY FRAME DETECTION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586402
MACHINE-LEARNING MODELS FOR IMAGE PROCESSING
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579829
APPLICATION DEVELOPMENT ENVIRONMENT FOR BIOLOGICAL SAMPLE ASSESSMENT PROCESSING
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview (+17.3%): 99%
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 720 resolved cases by this examiner. Grant probability derived from career allow rate.
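The footnote says the grant probability is derived from the career allow rate; a minimal sketch of one plausible reading of the "With Interview" figure, combining the baseline with the interview lift (the function name and the 99% cap are assumptions, not any documented formula):

```python
def with_interview_pct(base: float, lift: float, cap: float = 99.0) -> float:
    # Baseline grant probability plus interview lift, capped; the 99.0
    # cap is an assumption chosen to match the displayed ceiling.
    return min(base + lift, cap)

print(with_interview_pct(82.0, 17.3))  # → 99.0
```

82% baseline plus the +17.3% lift gives 99.3%, consistent with the 99% shown once capped or rounded down.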
