Prosecution Insights
Last updated: April 19, 2026
Application No. 18/271,233

METHOD FOR TRAINING ARTIFICIAL NEURAL NETWORK PROVIDING DETERMINATION RESULT OF PATHOLOGICAL SPECIMEN, AND COMPUTING SYSTEM FOR PERFORMING SAME

Non-Final OA: §101, §102, §103
Filed: Jul 06, 2023
Examiner: SHAW, PETER C
Art Unit: 2493
Tech Center: 2400 — Computer Networks
Assignee: Deep Bio Inc.
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average): 422 granted / 553 resolved; +18.3% vs TC avg
Interview Lift: +35.7% (strong), across resolved cases with interview
Typical Timeline: 3y 5m average prosecution; 46 applications currently pending
Career History: 599 total applications across all art units
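As a sanity check, the headline figures above reduce to simple ratios. Only the 422/553 totals come from the report; the with/without-interview split below is a hypothetical chosen so the relative lift roughly reproduces the reported +35.7%.

```python
granted, resolved = 422, 553          # career totals from the report
allow_rate = granted / resolved       # ≈ 0.763, the "76%" headline figure

# Hypothetical interview split (assumed, not in the report). The lift reads
# as a relative improvement: rate_with / rate_without - 1.
rate_without, rate_with = 0.73, 0.99
interview_lift = rate_with / rate_without - 1   # ≈ +0.356

print(f"allow rate {allow_rate:.1%}, interview lift {interview_lift:+.1%}")
```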

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Tech Center averages are estimates; based on career data from 553 resolved cases.
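The per-statute deltas are internally consistent with a single Tech Center baseline: adding each figure back to its delta recovers the same value, 40.0%, in every row, suggesting the Tech Center average "estimate" is a flat line at 40%. The figures below are from the table above; reading the baseline as flat is an inference, not a stated fact.

```python
# Examiner's statute-specific figures (%) and the reported deltas vs TC avg.
examiner = {"101": 11.2, "103": 55.7, "102": 13.9, "112": 12.7}
delta    = {"101": -28.8, "103": 15.7, "102": -26.1, "112": -27.3}

# Recover the implied TC average per statute: examiner value minus delta.
tc_avg = {k: examiner[k] - delta[k] for k in examiner}
print(tc_avg)  # each statute recovers the same ~40.0% baseline
```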

Office Action

Rejections under §101, §102, and §103
DETAILED ACTION

Claims 1-14 are pending in this action.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 7 and 14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because it is unclear from the specification whether the computer-readable medium can comprise signals, which are per se non-statutory; see MPEP 2106.03. Examiner suggests amending to "non-transitory computer-readable recording medium".

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 5-8 and 12-14 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Klaiman et al. (WO-2020182710-A1) [hereinafter "Klaiman"].
As per claim 1, Klaiman teaches an artificial neural network training method comprising the steps of: generating a training data set including M pieces of individual training data (here, M is a natural number equal to or greater than 2) (Page 5, lines 3-6, generating a gallery of image tiles after a training phase), by a neural network training system (Page 17, lines 19-20, feature training performed by a neural network); and training an artificial neural network on the basis of the training data set, by the neural network training system (Abstract, training using feature vectors of image tiles by a neural network; see id.), wherein the step of generating a training data set including M pieces of individual training data includes the step of generating an m-th training data to be included in the training data set for all natural numbers m where 1<=m<=M (Page 14, lines 1-10, generating one or more image tiles based on some or all of previously obtained image tiles from the patient), wherein the step of generating an m-th training data includes the steps of: acquiring first to N-th pathology slide images (here, N is a natural number equal to or greater than 2) (Page 14, lines 8-10, obtaining one or more tiles from the patient), wherein the first to N-th pathology slide images are pathology slide images obtained by staining serial sections of a single pathological specimen with different staining reagents (Page 7, lines 1-10, staining tiles with a particular biomarker stain; image tiles taken from tissue sample slices, i.e., serial sections); and generating the m-th training data on the basis of the first to N-th pathology slide images (Page 46, lines 1-10, generating new training data based on one or more of previously obtained images from the patient); see also (Page 47, lines 10-20, feature vector of an image tile is based on predictions from previous feature vectors).
As per claim 5, Klaiman teaches a method of providing a result of determination on a predetermined determination target pathological specimen through an artificial neural network trained by the artificial neural network training method described in claim 1, the method comprising the steps of: acquiring first to N-th determination target pathology slide images (here, N is a natural number equal to or greater than 2), by a computing system (Page 14, lines 8-10, obtaining one or more tiles from the patient), wherein the first to N-th determination target pathology slide images are pathology slide images in which serial sections of the determination target pathological specimen are stained with different staining reagents (Page 7, lines 1-10, staining tiles with a particular biomarker stain; image tiles taken from tissue sample slices, i.e., serial sections); and outputting a result of determination on the determination target pathological specimen determined by the artificial neural network on the basis of the first to N-th determination target pathology slide images, by the computing system (Page 46, lines 1-10, generating new training data based on one or more of previously obtained images from the patient, which can be used as input or outputted by the system; see Abstract).

As per claims 6-8, the substance of the claimed invention is identical or substantially similar to that of claim 1. Accordingly, these claims are rejected under the same rationale.

As per claims 12-14, the substance of the claimed invention is identical or substantially similar to that of claim 5. Accordingly, these claims are rejected under the same rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-3 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Klaiman in view of Chen et al. (WO-2015177268-A1) [hereinafter "Chen"].

As per claim 2, Klaiman teaches the method according to claim 1, as well as the original (n training data) and generated training data (m training data) being a single or multi-channel image (Page 36, lines 20-21, and Page 46, lines 4-5). Klaiman does not explicitly teach converting images into one multi-channel image through channel stacking.
Chen teaches converting images into one multi-channel image through channel stacking ([0013], combining a number of images as a stack into a multi-channel image). At the time of filing, it would have been obvious to one of ordinary skill in the art to combine Klaiman with the teachings of Chen, converting pathology slide images into one multi-channel image through channel stacking, to provide a greater range of training of related images using well-known techniques.

As per claim 3, Klaiman teaches the method according to claim 1. Klaiman does not explicitly teach wherein the step of generating the m-th training data on the basis of the first to N-th pathology slide images includes the steps of: specifying a biological tissue area existing in each of the first to N-th pathology slide images; matching the first to N-th pathology slide images so that positions and shapes of the biological tissue areas existing in the first to N-th pathology slide images may match; and converting the matched first to N-th pathology slide images into one multi-channel image through channel stacking, wherein the m-th training data includes the multi-channel image.

Chen teaches wherein the step of generating the m-th training data on the basis of the first to N-th pathology slide images includes the steps of: specifying a biological tissue area existing in each of the first to N-th pathology slide images ([0066], patches generated from images selected from the same candidate location); matching the first to N-th pathology slide images so that positions and shapes of the biological tissue areas existing in the first to N-th pathology slide images may match ([0066], images line up based on candidate location); and converting the matched first to N-th pathology slide images into one multi-channel image through channel stacking, wherein the m-th training data includes the multi-channel image ([0032], generated patches formed from stacking during training; see [0066], can be used as input in the learning module). At the time of filing, it would have been obvious to one of ordinary skill in the art to combine Klaiman with the teachings of Chen, performing the specifying, matching, and channel-stacking steps recited above, to provide a greater range of training of related images using well-known techniques.

As per claim 9, the substance of the claimed invention is identical or substantially similar to that of claim 2. Accordingly, this claim is rejected under the same rationale. As per claim 10, the substance of the claimed invention is identical or substantially similar to that of claim 3.
Accordingly, this claim is rejected under the same rationale.

Claims 4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Klaiman in view of Faust et al. (WO-2019084697-A1) [hereinafter "Faust"].

As per claim 4, the combination of Klaiman and Chen teaches the method according to claim 3, wherein the step of matching the first to N-th pathology slide images so that the positions and shapes of the biological tissue areas existing in the first to N-th pathology slide images may match (Page 7, lines 8-24, matching image tiles using position and signature of a particular biomarker) includes the step of calculating a conversion relation corresponding to an i-th pathology slide image for all natural numbers i where 1<=i<=N (here, the conversion relation corresponding to the i-th pathology slide image is a conversion relation between the i-th pathology slide image and a matched i-th pathology slide image corresponding thereto) (Page 8, lines 7-28, calculating similarity of two images using included features). The combination of Klaiman and Chen does not explicitly teach modifying a lesion annotation area assigned to a j-th pathology slide image using a conversion relation corresponding to the j-th pathology slide image; and converting the modified lesion annotation areas of the first to N-th pathology slide images into one multi-channel lesion annotation area through channel stacking, wherein the m-th training data further includes the multi-channel lesion annotation area.

Faust teaches modifying a lesion annotation area assigned to a j-th pathology slide image using a conversion relation corresponding to the j-th pathology slide image ([0263], automated extraction and annotation of lesion tiles from previous pathology reports containing lesion annotations); and converting the modified lesion annotation areas of the first to N-th pathology slide images into one multi-channel lesion annotation area through channel stacking, wherein the m-th training data further includes the multi-channel lesion annotation area ([0131], annotations can be modified and aggregated alongside aggregating visual slides; see [0089] and [0282]). At the time of filing, it would have been obvious to one of ordinary skill in the art to combine Klaiman and Chen with the teachings of Faust, modifying a lesion annotation area assigned to a j-th pathology slide image using a conversion relation corresponding to the j-th pathology slide image, and converting the modified lesion annotation areas of the first to N-th pathology slide images into one multi-channel lesion annotation area through channel stacking, wherein the m-th training data further includes the multi-channel lesion annotation area, to provide a greater range of training of related images using well-known techniques.

As per claim 11, the substance of the claimed invention is identical or substantially similar to that of claim 4. Accordingly, this claim is rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bingham (US PGPUB No. 2016/0027347), Mimura et al. (US PGPUB No. 2016/0163043), Sashida (US PGPUB No. 2018/0286040), Phillips et al. ("CellRep: Multichannel Image Representation Learning Model," 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 2025, pp. 4303-4309, doi: 10.1109/CVPRW67362.2025.00415), Siyuan et al. ("2D CNN-Based Slices-to-Volume Superresolution Reconstruction," IEEE Access, vol. 8, pp. 86357-86366, 2020, doi: 10.1109/ACCESS.2020.2992481), Fujitani et al. ("Re-staining Pathology Images by FCNN," 2019 16th International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 2019, pp. 1-6, doi: 10.23919/MVA.2019.8757875), de Haan et al. ("Deep learning-based transformation of the H&E stain into special stains," arXiv:2008.08871, August 20, 2020), and Bhattacharyya et al. ("Online Phase Detection and Characterization of Cloud Applications," 2017 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), Hong Kong, China, 2017, pp. 98-105, doi: 10.1109/CloudCom.2017.21) all disclose various aspects of the claimed invention, including using stained specimen images to generate training data for a neural network.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER C SHAW, whose telephone number is (571) 270-7179. The examiner can normally be reached on a Max Flex schedule. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carl Colin, can be reached at 571-272-3862. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/PETER C SHAW/
Primary Examiner, Art Unit 2493
February 9, 2026
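The technique running through the rejected claims (register N stain images of serial sections of one specimen, channel-stack them into a single multi-channel training image, and carry each slide's lesion annotation through the same "conversion relation" before stacking) can be sketched in NumPy. This is an illustrative reconstruction from the claim language only, not the applicant's or any cited reference's implementation; in particular, the conversion relation is reduced to a translation, where real slide registration would use a full affine or deformable transform.

```python
import numpy as np

def translate(img, dy, dx):
    """Translation-only stand-in for the per-slide 'conversion relation':
    shift a 2-D array by (dy, dx), zero-filling exposed borders."""
    out = np.zeros_like(img)
    h, w = img.shape
    y0, y1 = max(dy, 0), min(h + dy, h)
    x0, x1 = max(dx, 0), min(w + dx, w)
    out[y0:y1, x0:x1] = img[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
    return out

def stack_channels(images):
    """Channel-stack N matched H x W images into one H x W x N image
    (the claims' multi-channel image)."""
    return np.stack([np.asarray(im) for im in images], axis=-1)

def make_training_example(slides, annotations, transforms):
    """Build one m-th training datum per the claim language: register each
    slide and its lesion annotation with the slide's conversion relation,
    then channel-stack both the slides and the annotations."""
    reg_slides = [translate(s, dy, dx) for s, (dy, dx) in zip(slides, transforms)]
    reg_masks = [translate(a, dy, dx) for a, (dy, dx) in zip(annotations, transforms)]
    return stack_channels(reg_slides), stack_channels(reg_masks)

# Toy demo: two 4x4 "stains" of the same tissue, the second offset by one row.
s1 = np.zeros((4, 4), dtype=np.uint8); s1[1, 1] = 200
s2 = np.zeros((4, 4), dtype=np.uint8); s2[2, 1] = 120
a1, a2 = (s1 > 0).astype(np.uint8), (s2 > 0).astype(np.uint8)
img, ann = make_training_example([s1, s2], [a1, a2], [(0, 0), (-1, 0)])
print(img.shape, ann.shape)        # (4, 4, 2) (4, 4, 2)
assert (ann[1, 1] == [1, 1]).all() # annotations now align across channels
```

After registration the same tissue feature occupies the same pixel in every channel, which is what makes the stacked image usable as a single training input.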

Prosecution Timeline

Jul 06, 2023: Application Filed
Feb 09, 2026: Non-Final Rejection under §101, §102, and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566852: NEFARIOUS CODE DETECTION USING SEMANTIC UNDERSTANDING (granted Mar 03, 2026; 2y 5m to grant)
Patent 12547696: WIRELESS BATTERY MANAGEMENT SYSTEM SAFETY CHANNEL COMMUNICATION LAYER PROTOCOL (granted Feb 10, 2026; 2y 5m to grant)
Patent 12536342: SOC ARCHITECTURE WITH SECURE, SELECTIVE PERIPHERAL ENABLING/DISABLING (granted Jan 27, 2026; 2y 5m to grant)
Patent 12511438: DYNAMIC PROVISION OF SOFTWARE APPLICATION FEATURES (granted Dec 30, 2025; 2y 5m to grant)
Patent 12513190: SNAPSHOT FOR ACTIVITY DETECTION AND THREAT ANALYSIS (granted Dec 30, 2025; 2y 5m to grant)
Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+35.7%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 553 resolved cases by this examiner; grant probability is derived from the career allow rate.
