Prosecution Insights
Last updated: April 19, 2026
Application No. 18/528,923

METHOD AND SYSTEM FOR PARALLEL PROCESSING FOR MEDICAL IMAGE

Status: Non-Final OA (§103)
Filed: Dec 05, 2023
Examiner: TRANDAI, CINDY HUYEN
Art Unit: 2648
Tech Center: 2600 — Communications
Assignee: LUNIT INC.
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 78%, above average (394 granted / 508 resolved; +15.6% vs TC avg)
Interview Lift: +18.3% for resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 25 applications currently pending
Career History: 533 total applications across all art units

Statute-Specific Performance

§101: 4.1% (-35.9% vs TC avg)
§103: 72.1% (+32.1% vs TC avg)
§102: 7.2% (-32.8% vs TC avg)
§112: 12.4% (-27.6% vs TC avg)
Tech Center averages are estimates; based on career data from 508 resolved cases.
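Taken at face value, the per-statute deltas let one back out the Tech Center baseline the panel compares against. A quick sketch, assuming the simplest model (delta = examiner rate minus TC average; values copied from the figures above):

```python
# Statute-specific rates for this examiner and the reported deltas
# vs the Tech Center average, as shown in the panel above.
rates = {"101": (4.1, -35.9), "103": (72.1, +32.1),
         "102": (7.2, -32.8), "112": (12.4, -27.6)}

# Assuming delta = examiner_rate - tc_average, the implied baseline is:
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(tc_avg)  # every statute backs out to the same 40.0% baseline
```

That every statute implies an identical 40.0% baseline suggests the deltas were computed against a single pooled Tech Center figure rather than per-statute averages.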

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Allowable Subject Matter

Claim 7 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the prior art made of record does not teach or fairly suggest the combination of claimed elements "calculating a throughput required for the first operation, the second operation, and the third operation; acquiring, from a given plurality of processors, a status of operations to be processed by each of the plurality of processors at a specific point in time; and allocating one or more processors of the plurality of processors to process each of the first operation, the second operation, and the third operation, based on the calculated throughput and the acquired status of the operations, wherein the one or more processors allocated to process the first operation are the one or more first processors, the one or more processors allocated to process the second operation are the one or more third processors, and the one or more processors allocated to process the third operation are the one or more second processors" as recited in independent claim 7. Claims 8-10 depend from claim 7 and are therefore allowable for the same reason.

Claim 15 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the prior art made of record does not teach or fairly suggest the combination of claimed elements "wherein the at least one program further includes instructions for: calculating a throughput required for the first operation, the second operation, and the third operation; acquiring, from a given plurality of processors, a status of operations to be processed by each of the plurality of processors at a specific point in time; and allocating one or more processors of the plurality of processors to process each of the first operation, the second operation, and the third operation, based on the calculated throughput and the acquired status of the operations, and the one or more processors allocated to process the first operation are the one or more first processors, the one or more processors allocated to process the second operation are the one or more third processors, and the one or more processors allocated to process the third operation are the one or more second processors" as recited in independent claim 15. Claims 16-18 depend from claim 15 and are therefore allowable for the same reason.

Claim Objections

Claim 8 is objected to because of the following informalities: claim 8 recites "can be" at line 4. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Cho et al. (US 20200003886 A1) in view of Fuchs (US 20190286936 A1).

Regarding claim 1, Cho teaches a method for parallel processing "image data" (Fig. 14 and Par. 127, image data/training data 1491), the method being performed by a plurality of processors (Fig. 15 and Pars. 138-139: a single operation or two or more operations may be performed by a single processor, and one or more operations may be performed by one or more parallel processors) and comprising: performing, by a first processor, a first operation of providing a second processor with a first patch included in the digitally scanned pathology image (Fig. 14, at "t-2", the first Conv layer (i.e., the first processor) calculates feature data for a mini-batch and provides it to the first RNN layer (i.e., the second processor); Pars. 129, 139); performing, by the first processor, a second operation of providing the second processor with a second patch included in the digitally scanned pathology image (Fig. 14, at "t-1", the second Conv layer (i.e., the first processor) calculates feature data for a mini-batch and provides it to the second RNN layer (i.e., the second processor); Pars. 129, 139); and performing, by the second processor, a third operation of outputting a first analysis result from the first patch (Fig. 14 and Pars. 129, 138, at "t-1", the output at the second RNN layer) using a machine learning model (Fig. 14, RNN model 1422; note that a Recurrent Neural Network (RNN) is a known machine learning model), wherein at least a part of a time frame for the second operation performed by the first processor overlaps with at least a part of a time frame for the third operation performed by the second processor (Fig. 14 and Par. 92, the frame at the current time frame t-1 and the frame at the previous time frame t-2 overlap).

Cho does not disclose that the image data taught above is a digitally scanned pathology image. However, processing image patches from the scanning of pathology slides is well known and cannot be considered new or novel in the presence of Fuchs. Fuchs teaches that each image is an image of a pathology slide representing a tissue sample taken (scanned) from a patient (Figs. 3-4 and Par. 71), where the plurality of images is divided into a plurality of patches and a subset of the patches, or a predetermined number of patches, is processed in each iteration (Figs. 3-4 and Par. 82). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the above teaching of Fuchs into Cho, such that the image data in Cho would be pathology slides (as taught by Fuchs), to provide output for diagnosis of disease in an automated fashion for treatment or a prognosis for a subject.

Regarding claim 2, the modified Cho teaches the previous claim, and further teaches the method according to claim 1, further comprising performing, by the second processor, a fourth operation of outputting a second analysis result from the second patch using the machine learning model, wherein medical information associated with the digitally scanned pathology image is generated based on the first analysis result and the second analysis result (Fig. 14).

Regarding claim 3, the modified Cho teaches the previous claim, and further teaches the method according to claim 1, wherein the first and second patches are included in one batch (a design choice, as indicated above by Par. 82 of Fuchs).

Regarding claim 6, Cho teaches a method for parallel processing a digitally scanned pathology image, the method being performed by a plurality of processors and comprising: performing, by one or more first processors, a first operation of providing one or more second processors (Fig. 14 and Par. 129, at "t-2", the first Conv layer (i.e., first processor) calculates feature data and provides it to the first RNN layer (i.e., second processor); Fig. 15 and Par. 139, a single operation or two or more operations may be performed by a single processor, and one or more operations may be performed by one or more parallel processors) with a first batch (Fig. 14 and Par. 127, the mini-batch at "t-2") associated with "image data" (Fig. 14 and Par. 127, image data/training data 1491); performing, by one or more third processors, a second operation of providing the one or more second processors with a second batch associated with the digitally scanned pathology image (Fig. 14 and Pars. 129, 139, at "t-1", the second Conv layer (i.e., third processor) calculates feature data for a mini-batch and provides it to the second RNN layer (i.e., second processor)); and performing, by the one or more second processors, a third operation of outputting an analysis result (Fig. 14 and Pars. 129, 138, at "t-1", the output at the second RNN layer) using a machine learning model (Fig. 14, RNN model 1422; note that a Recurrent Neural Network (RNN) is a known machine learning model), wherein at least a part of a time frame for the second operation performed by the one or more third processors overlaps with at least a part of a time frame for the third operation performed by the one or more second processors (Fig. 14 and Par. 92, the frame at the current time frame t-1 and the frame at the previous time frame t-2 overlap).

Cho does not disclose that the training image data taught above is a digitally scanned pathology image. Cho also does not disclose that the first batch includes a first set of patches and the second batch includes a second set of patches. However, processing image patches from the scanning of pathology slides is well known and cannot be considered new or novel in the presence of Fuchs. Fuchs teaches that each image is an image of a pathology slide representing a tissue sample taken (scanned) from a patient (Figs. 3-4 and Par. 71), where the plurality of images is divided into a plurality of patches and a subset of the patches, or a predetermined number of patches, is processed in each iteration (Figs. 3-4 and Par. 82). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the above teaching of Fuchs into Cho, such that the image data in Cho would be pathology slides (as taught by Fuchs), to provide output for diagnosis of disease in an automated fashion for treatment or a prognosis for a subject.

Regarding claim 14, the apparatus of claim 14 performs the method of claim 6, and the two claims recite the same scope of limitations. Applicant is kindly advised to refer to the rejection of claim 6.

Claims 4, 11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Cho et al. (US 20200003886 A1) in view of Fuchs (US 20190286936 A1), and further in view of Kossyk et al. (US 20200151500 A1).

Regarding claim 4, the modified Cho teaches the previous claim, and further teaches the method according to claim 1, wherein the performing of the first operation includes extracting the first patch from the digitally scanned pathology image, the performing of the second operation includes extracting the second patch from the digitally scanned pathology image, and the first and second patches are different from each other (Fig. 14 and Par. 129, calculating feature data (e.g., temporary feature data) for each of a frame t−2, a frame t−1, and a frame t). Calculating feature data (e.g., temporary feature data) amounts to extracting feature data, as evidenced by Kossyk (Figs. 1-2 and Par. 34). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the above teaching of Kossyk into the modified Cho to provide data for learning.

Regarding claim 11, the apparatus of claim 11 performs the method of claim 4, and the two claims recite the same scope of limitations. Applicant is kindly advised to refer to the rejection of claim 4.

Regarding claim 19, the apparatus of claim 19 performs the method of claim 4, and the two claims recite the same scope of limitations. Applicant is kindly advised to refer to the rejection of claim 4.

Claims 5, 12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cho et al. (US 20200003886 A1) in view of Fuchs (US 20190286936 A1), and further in view of Poh et al. (US 20180173212 A1).

Regarding claim 5, the modified Cho teaches the previous claim, and further teaches the method according to claim 1, wherein the performing of the first operation includes acquiring, as the first patch, one of a plurality of patches previously extracted from the digitally scanned pathology image and stored in a storage medium; the performing of the second operation includes acquiring, as the second patch, one of a plurality of patches previously extracted from the digitally scanned pathology image and stored in the storage medium; and the first and second patches are different from each other (Fig. 14 and Par. 129, calculating feature data (e.g., temporary feature data) for each of a frame t−2, a frame t−1, and a frame t). Calculating feature data (e.g., temporary feature data) amounts to extracting feature data, and it is well known that such data is stored, as evidenced by Poh (Fig. 1 and Par. 32). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the above teaching of Poh into the modified Cho to provide data for learning.

Regarding claim 12, the apparatus of claim 12 performs the method of claim 5, and the two claims recite the same scope of limitations. Applicant is kindly advised to refer to the rejection of claim 5.

Regarding claim 20, the apparatus of claim 20 performs the method of claim 5, and the two claims recite the same scope of limitations. Applicant is kindly advised to refer to the rejection of claim 5.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Cho et al. (US 20200003886 A1) in view of Fuchs (US 20190286936 A1), and further in view of Zhu et al. (US 20210374518 A1).

Regarding claim 13, the modified Cho teaches the previous claim, and further teaches the method according to claim 6, wherein at least a part of the one or more first processors is the same as at least a part of the one or more third processors (Fig. 15 and Pars. 129, 138-139). The operations are performed by a single processor or by two parallel processors (the first and third processors), as indicated above and in claim 6, and are obviously the same, as evidenced by Zhu (Pars. 341, 350-351). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the above teaching of Zhu into the modified Cho to perform parallel operations.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Zhu et al. (US 20210374518 A1) teaches that different layers (operations) of a neural network are performed using different processors/GPUs/computing resources, such that one layer (operation) is performed by one or more processors/GPUs or two layers (operations) are performed by one processor/GPU (Pars. 64-66, 341), where a scheduler unit 3312 configures various GPCs 3318 to process tasks defined by one or more command streams and tracks state information related to the tasks it manages, the state information indicating which of the GPCs 3318 a task is assigned to, whether the task is active or inactive, and a priority level associated with the task (Pars. 460-461).
Aghaei et al. (US 20220351860 A1) (Fig. 5 and Pars. 3, 99)
Aghdam et al. (US 20230245480 A1)
Hassan-Shafique et al. (US 20210118136 A1)
Agus et al. (US 20220180518 A1)
Sargent et al. (US 20220058794 A1)
Biermann et al. (US 7969444 B1)
Yu (US 20250068724 A1)
Yu (US 20230306739 A1)
Dwivedi et al. (US 20230097169 A1)
Nie et al. (US 20240079138 A1)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CINDY HUYEN TRANDAI, whose telephone number is (571) 270-1914. The examiner can normally be reached 8am-4:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wesley L. Kim, can be reached at 571-272-7867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Cindy Trandai/
Primary Examiner, Art Unit 2648
3/16/2026
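The limitation the examiner indicated as allowable in claims 7 and 15 describes a concrete scheduling scheme: compute the throughput each of the three operations requires, poll each processor's current workload, and assign processors to operations based on both. A minimal sketch of that idea, with all names, data structures, and numbers invented for illustration (this is not the applicant's actual implementation):

```python
# Hypothetical sketch of the claim 7 allocation scheme: assign each of the
# three operations to the currently least-loaded processor whose capacity
# covers the operation's required throughput.

def allocate(operations, processors):
    """operations: {name: required_throughput}
    processors: {proc_id: {"capacity": ..., "pending_ops": ...}}
    Returns {operation_name: proc_id}."""
    allocation = {}
    load = {p: info["pending_ops"] for p, info in processors.items()}
    # Handle the most demanding operation first.
    for op, throughput in sorted(operations.items(), key=lambda kv: -kv[1]):
        # Processors whose rated capacity covers the required throughput.
        eligible = [p for p, info in processors.items()
                    if info["capacity"] >= throughput]
        # Fall back to all processors if none has enough headroom.
        candidates = eligible or list(processors)
        chosen = min(candidates, key=lambda p: load[p])
        allocation[op] = chosen
        load[chosen] += 1  # the new assignment adds to that processor's load
    return allocation

ops = {"first_op": 300, "second_op": 120, "third_op": 500}
procs = {"gpu0": {"capacity": 600, "pending_ops": 2},
         "gpu1": {"capacity": 600, "pending_ops": 0},
         "gpu2": {"capacity": 200, "pending_ops": 1}}
print(allocate(ops, procs))
```

A production scheduler would also weigh live utilization, memory, and transfer costs; the sketch only illustrates the shape of the claimed "throughput plus processor status" allocation that the cited art did not teach.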

Prosecution Timeline

Dec 05, 2023: Application Filed
Mar 16, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581554: COMMUNICATION METHOD FOR NEAR-FIELD COMMUNICATION DEVICES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581604: SIGNAL PROCESSING DEVICE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12568534: OBJECT TRACKING SYSTEM AND METHOD (granted Mar 03, 2026; 2y 5m to grant)
Patent 12555244: PERFORMING SEMANTIC SEGMENTATION OF 3D DATA USING DEEP LEARNING (granted Feb 17, 2026; 2y 5m to grant)
Patent 12556896: CACHING A DATA PAYLOAD ON A PERIPHERAL DEVICE FOR DELIVERY TO A TARGET DEVICE (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 96% (+18.3%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 508 resolved cases by this examiner. Grant probability derived from career allow rate.
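As the footnote says, the headline figures are simple derivations from the examiner's career record. A sketch of the arithmetic, assuming the interview figure is just the base rate plus the reported lift (capped at 100%):

```python
granted, resolved = 394, 508                 # examiner's career record
base = round(granted / resolved * 100)       # career allow rate, in percent
interview_lift = 18.3                        # reported lift, percentage points
with_interview = min(100, round(base + interview_lift))
print(base, with_interview)  # 78 96
```

Both rounded values match the dashboard, which supports reading "grant probability" here as the raw career allow rate rather than a case-specific model output.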
