Prosecution Insights
Last updated: April 19, 2026
Application No. 18/384,421

METHOD FOR TRAINING NEURAL NETWORK AND DEVICE THEREOF

Non-Final OA — §102, §103

Filed: Oct 27, 2023
Examiner: SUN, JIANGENG
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: LUNIT INC.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 9m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 82% (330 granted / 403 resolved; +19.9% vs TC avg — above average)
Interview Lift: +14.0% among resolved cases with interview (moderate)
Avg Prosecution: 2y 9m (typical timeline)
Currently Pending: 22
Total Applications: 425 across all art units (career history)
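The headline figures above are simple ratios over the examiner's case counts. A minimal sketch of that arithmetic (variable names are illustrative, not the tool's internals):

```python
# Reproduce the dashboard's headline examiner statistics from raw counts.
granted = 330    # applications allowed by this examiner
resolved = 403   # allowed + abandoned (excludes cases still pending)
total = 425      # total applications across all art units

allow_rate = granted / resolved
pending = total - resolved

print(f"Career allow rate: {allow_rate:.0%}")  # 82%
print(f"Currently pending: {pending}")         # 22
```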

Statute-Specific Performance

§101: 6.4%  (-33.6% vs TC avg)
§103: 45.3% (+5.3% vs TC avg)
§102: 25.7% (-14.3% vs TC avg)
§112: 20.4% (-19.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 403 resolved cases.
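The per-statute deltas appear to be measured against a single Tech Center estimate. Backing it out from the displayed figures (illustrative arithmetic only, assuming delta = examiner rate minus TC average):

```python
# Recover the Tech Center average implied by each statute's delta.
examiner_rate = {"101": 6.4, "103": 45.3, "102": 25.7, "112": 20.4}
delta_vs_tc = {"101": -33.6, "103": 5.3, "102": -14.3, "112": -19.6}

tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)  # each statute resolves to 40.0, consistent with one TC estimate
```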

Office Action

DETAILED ACTION

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-9, 11-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li (CN-109285200).

Regarding claim 1, Li teaches a method for training a neural network with three-dimensional (3D) image data by a processor (page 8, computer), the method comprising: selecting at least one key frame image from 3D image data (page 2, (1a) … medical image of the second mode corresponding to the first mode; (1b) pre-processing the training data); training a first neural network with two-dimensional (2D) images, wherein the first neural network comprises a plurality of 2D convolutional layers (page 3, (2a) training the training neural network model with the prepared image block); training a second neural network with the at least one key frame image (page 3, (3) … inputting the pre-processed first mode medical image into the trained deep convolutional neural network model for forward calculating and outputting the medical image of the second mode), wherein the second neural network comprises the plurality of 2D convolutional layers, and an aggregator combining outputs of the 2D convolutional layers (page 3, each layer of training image are divided into a plurality of 2D image blocks), and wherein the 2D images used for training the first neural network comprise the at least one key frame image selected from the 3D image data (page 3, if each layer of image of different modes in each group of data image are matched with each other, then the corresponding each layer of training image are divided into a plurality of 2D image blocks (patch); if each layer of image of different modes in each group of data images is not completely matched with each other, then …), and/or additional 2D images obtained from a different domain from the 3D image data.

Regarding claim 2, Li teaches the method of claim 1, wherein the 3D image data comprises at least one of a digital breast tomosynthesis (DBT) image or a computed tomography (CT) image (page 3, the medical image is: nuclear magnetic image, CT image, PET image or ultrasonic image), and the additional 2D image comprises at least one of a full-field digital mammography (FFDM) image or an X-ray image.

Regarding claim 3, Li teaches the method of claim 1, wherein parameters of one or more 2D convolutional layers among the 2D convolutional layers are fixed during the training of the second neural network, and parameters of one or more remaining 2D convolution layers among the 2D convolutional layers are trained with the at least one key frame image during the training of the second neural network (page 3, in the training process, dividing the corresponding position of the region of interest in the medical image of different modes in each image block … (2b) when the image similarity LOSS value of the verification data set is less than or equal to the set threshold value, the model stops the iteration, storing the model).

Regarding claim 4, Li teaches the method of claim 1, wherein the at least one key frame image comprise at least one randomly selected image from the 3D image data comprising a plurality of 2D images, at least one center frame image selected from the 3D image data, at least one suspicious frame image selected from the 3D image data (page 5, step 213 … matching refers to each group of medical image, the organs in each layer slice of different modes), or at least one annotated frame image selected from the 3D image data.

Regarding claim 5, Li teaches the method of claim 1, further comprising: obtaining a prediction for 3D inference image data using the trained second neural network (page 3, (3) … outputting the medical image of the second mode).

Regarding claim 6, Li teaches a method for training a neural network with three-dimensional (3D) image data by a processor, the method comprising: selecting at least one key frame image from 3D training image data comprising a plurality of two-dimensional (2D) images (page 2, (1a) … medical image of the second mode corresponding to the first mode; (1b) pre-processing the training data); training a neural network using the at least one key frame image and at least one additional 2D image to output a prediction (page 3, (2a) training the training neural network model with the prepared image block), wherein the additional 2D image is obtained from a different domain from the 3D training image data (page 3, each layer of training image are divided into a plurality of 2D image blocks).

Regarding claim 7, Li teaches the method of claim 6, wherein the selecting at least one key frame image comprises to select the at least one key frame image randomly from the plurality of 2D images (page 3, if each layer of image of different modes in each group of data image are matched with each other, then the corresponding each layer of training image are divided into a plurality of 2D image blocks (patch); if each layer of image of different modes in each group of data images is not completely matched with each other, then …).

Regarding claim 8, Li teaches the method of claim 6, wherein the selecting at least one key frame image comprises to select at least one center frame image from the plurality of 2D images, as the at least one key frame image (page 3, each layer of image of different modes in each group of data image).

Regarding claim 9, Li teaches the method of claim 6, wherein the selecting at least one key frame images comprises to select at least one suspicious frame image from the plurality of 2D images, as the at least one key frame image (page 5, step 213 … matching refers to each group of medical image, the organs in each layer slice of different modes).

Regarding claim 11, Li teaches the method of claim 6, wherein the 3D image data comprises at least one of a digital breast tomosynthesis (DBT) image or a computed tomography (CT) image, and the additional 2D image comprises at least one of a full-field digital mammography (FFDM) image or an X-ray image (page 3, the medical image is: nuclear magnetic image, CT image, PET image or ultrasonic image).

Regarding claim 12, Li teaches the method of claim 6, further comprising: obtaining a prediction for 3D inference image data using the trained neural network (page 5, training picture into a plurality of 3D image blocks).

Regarding claim 13, Li teaches the method of claim 6, wherein the neural network comprises the plurality of 2D convolutional layers, and an aggregator combining outputs of the 2D convolutional layers (page 3, each layer of training image are divided into a plurality of 2D image blocks).

Regarding claim 14, Li teaches the method of claim 13, wherein the plurality of 2D convolutional layers are pre-trained using a plurality of 2D training images (page 5, step 212, pre-processing the training data).

Regarding claim 15, Li teaches the method of claim 14, wherein, during the training the neural network, parameters of one or more 2D convolutional layers among the 2D convolutional layers are fixed and parameters of one or more remaining 2D convolution layers among the 2D convolutional layers are trained with the at least one key frame image (page 3, in the training process, dividing the corresponding position of the region of interest in the medical image of different modes in each image block … (2b) when the image similarity LOSS value of the verification data set is less than or equal to the set threshold value, the model stops the iteration, storing the model).

Regarding claim 16, Li teaches the method of claim 14, wherein the plurality of 2D training images comprise the at least one key frame image selected from the 3D image data, and/or 2D images obtained from a different domain from the 3D image data (page 5, step 213 … matching refers to each group of medical image, the organs in each layer slice of different modes).

Claims 17-20 recite the device for the method in claims 1-9, 11-16. Since Li also teaches a device (page 8, computer), claims 17-20 are also rejected.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Li.

Regarding claim 10, Li teaches the method of claim 6. Li does not expressly teach wherein the selecting at least one key frame image comprises to select at least one annotated frame image from the plurality of 2D images, as the at least one key frame image. However, official notice is taken that it is routine and conventional to use annotated images for training neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to choose frames in Li with an annotated data set, with motivation to carry out supervised neural network training.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANGENG SUN whose telephone number is (571) 272-3712. The examiner can normally be reached 8am to 5pm, EST, M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Randolph Vincent, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JIANGENG SUN
Examiner
Art Unit 2661

/Jiangeng Sun/
Examiner, Art Unit 2671

Prosecution Timeline

Oct 27, 2023
Application Filed
Jan 17, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591973: Histological Image Analysis (granted Mar 31, 2026; 2y 5m to grant)
Patent 12561872: METHOD OF TRAINING IMAGE DECOMPOSITION MODEL, METHOD OF DECOMPOSING IMAGE, ELECTRONIC DEVICE, AND STORAGE MEDIUM (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561757: IMAGE SUPER-RESOLUTION NEURAL NETWORKS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12548122: METHOD FOR FILTERING PERIODIC NOISE AND FILTER USING THE METHOD (granted Feb 10, 2026; 2y 5m to grant)
Patent 12524859: System And Method For The Visualization And Characterization Of Objects In Images (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview (+14.0%): 96%
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 403 resolved cases by this examiner. Grant probability derived from career allow rate.
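A plausible reading of how the "With Interview" figure is composed (an assumption about the tool's arithmetic, not a documented formula): the base grant probability plus the examiner's interview lift, capped at 100%.

```python
# Hypothetical reconstruction of the "With Interview" projection.
base_grant_prob = 0.82  # career allow rate
interview_lift = 0.14   # lift observed in resolved cases with interview

with_interview = min(base_grant_prob + interview_lift, 1.0)
print(f"{with_interview:.0%}")  # 96%
```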
