Prosecution Insights
Last updated: April 19, 2026
Application No. 18/410,361

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Status: Non-Final OA (§103)
Filed: Jan 11, 2024
Examiner: GARCIA, SANTIAGO
Art Unit: 2673
Tech Center: 2600 (Communications)
Assignee: NEC Corporation
OA Round: 1 (Non-Final)

Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (895 granted / 1015 resolved; +26.2% vs TC avg), above average
Interview Lift: +12.8% in resolved cases with an interview (moderate, roughly +13%)
Avg Prosecution: 2y 5m (typical timeline)
Total Applications: 1036 across all art units, 21 currently pending
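The headline figures above are simple ratios over the examiner's resolved cases. As a quick sanity check of the page's numbers (values taken from the card above, rounding assumed to be to the nearest percent):

```python
# Career totals reported for this examiner.
granted, resolved = 895, 1015

# Allow rate = granted / resolved; the page displays this rounded to 88%.
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")
```

This prints 88.2%, consistent with the displayed 88% career allow rate.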

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 18.7% (-21.3% vs TC avg)
§112: 2.3% (-37.7% vs TC avg)

TC averages are Tech Center estimates. Based on career data from 1015 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/11/2024 is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Saito (US 2022/0020496) in view of Kamon (US 2021/0343011).
As per claims 1 and 8-9, Saito teaches an image processing device, method, and non-transitory readable medium comprising:

at least one memory configured to store instructions (Saito, ¶[0186]: "a storage unit (a hard disk or a semiconductor memory)" represents at least one memory); and

at least one processor configured to execute the instructions to:

acquire an endoscopic image obtained by photographing an examination target (Saito, ¶[0173]: "In the CNN system according to the first embodiment, INTEL's Core i7-7700K was used as the CPU, and NVIDEA's GeForce GTX 1070 was used as the graphics processing unit (GPU)." represents the processing unit; ¶[0055]: "and the probability corresponding to the site where the image is captured, for test data including the plurality of the endoscopic images of the digestive organ of a large number of subjects, not only an endoscopy specialist is enabled to perform check and make corrections easily, but also it becomes possible to simplify the tasks of creating a collection of images that are associated with a disease." covers acquiring or photographing multiple images of an examination target, such as the endoscopic image in Fig. 5; see also ¶[0045]: "a diagnostic assistance system, a diagnostic assistance program, and a computer-readable recording medium storing therein the diagnostic assistance program for a disease based on an endoscopic image of a digestive organ, being capable of correctly identifying, for example, not only the cases of H. pylori positives and negatives but also cases after H. pylori eradication, using an endoscopic image of the digestive organ with use of a CNN system." for a single image);

detect, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image (Saito, ¶[0086]: "With the diagnostic assistance method for a disease based on an endoscopic image of a digestive organ with use of a CNN system according to the seventeenth aspect of the present invention, it becomes possible to detect the presence of a pharyngeal cancer at a high sensitivity and a high accuracy during a normal esophagogastroduodenoscopy endoscopic examination." The detected pharyngeal cancer is the lesion, making it a candidate region; see also ¶[0053-54]: "the trained CNN system outputs at least one of a probability of the positivity and/or the negativity for the disease in the digestive organ, a probability of the past disease, the severity level of the disease" in which the severity level is also a candidate, as it could be a severity of 0, and the negativity is a candidate before a negative result is determined);

determine whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion (Saito, ¶[0054]: "the trained CNN system outputs at least one of a probability of the positivity and/or the negativity for the disease in the digestive organ, a probability of the past disease, the severity level of the disease, the invasion depth of the disease, and a probability corresponding to the site where the image is captured, based on a second endoscopic image of the digestive organ." in which "invasion depth of the disease" represents the invasion depth being detected);

determine the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth (Saito, ¶[0053]: "at least one final diagnosis result of the positivity or the negativity for the disease in the digestive organ, a severity level, or an invasion depth of the disease, the final diagnosis result being corresponding to the first endoscopic image" represents determining the degree of invasion depth).
Saito does not clearly teach outputting, to a display device or audio output device, a suggestion regarding photography of the endoscopic image, upon determining that the endoscopic image is not the image adequate to determine the degree of progression or the invasion depth. Kamon teaches this (¶[0096]: "Furthermore, the display mode may be changed (the figure may be changed, the color or brightness may be changed, or the like) in accordance with a recognition result and/or the certainty thereof." In this case the image is not adequate, as the brightness changes in accordance with the result and certainty, so the display changes according to the particular results of the endoscopic image).

Kamon also teaches outputting, to the display device or the audio, information that the degree of progression cannot be determined, upon determining that inadequacy determination results are consecutively generated for a predetermined number of times or for a predetermined period without generation of an adequacy determination result even after the output of the suggestion (Kamon, ¶[0096]: "The recognition result may be displayed or hidden by the above-described condition setting according to an area. In a case where a setting is made to hide the recognition result, the mode in which 'recognition is performed but the result is not displayed' is possible. Alternatively, the recognition result may be displayed or hidden in accordance with a determined condition (an elapsed time or the like) other than an area. Furthermore, the display mode may be changed (the figure may be changed, the color or brightness may be changed, or the like) in accordance with a recognition result and/or the certainty thereof." The certainty result represents that the degree of progression cannot be determined, and the "determined condition (an elapsed time or the like)" represents a predetermined period without generation of an adequacy determination result even after the output of the suggestion).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Saito with Kamon's ability to output a suggestion regarding photography of the endoscopic image, upon determining that the endoscopic image is not adequate to determine the degree of progression, taking into account the passage of time for these images. The motivation would have been to display the results suitable for an area, as taught by Kamon ¶[0031]: "display of a recognition result in a manner suitable for an area."

As per claim 2, Saito in view of Kamon teaches the image processing device according to claim 1, wherein the at least one processor is configured to execute the instructions to output, by the display, a frame indicating a preferable display range of the lesion region on the endoscopic image as the suggestion (Saito, Fig. 2: the predetermined area would be that of the stomach or of different parts, stored in memory when training the CNN and the rest of the system; ¶[0307]: "In order to examine the performance of the trained CNN system according to the sixth embodiment in distinguishing the M cancers from the SM cancers, the same validity examination test data, that is, 405 non-magnification endoscopic images and 509 magnification endoscopic images from the 155 patients were selected.").
As per claim 3, Saito in view of Kamon teaches the image processing device, wherein the at least one processor is configured to execute the instructions to output, by the display, a message prompting an examiner to adjust the frame to include the lesion region (Saito, ¶[0166]: "The number of images can be thus increased by at least one of rotation, increasing or decreasing the scale, changing the number of pixels, extracting bright or dark portions, and extracting the sites with a color tone change, and can be increased automatically using some tool." The rotation represents adjustment of the frame).

As per claim 4, Saito in view of Kamon teaches the image processing device according to claim 1, wherein the at least one processor is configured to execute the instructions to output the suggestion to assist the examiner's decision making (Saito, ¶[0181]: "Furthermore, with diagnoses of H. pylori using the CNN according to this first embodiment, because a result can be immediately obtained if an endoscopic image in an endoscopic examination is inputted, it is possible to provide completely 'online' assistance to the diagnoses of H. pylori, and therefore, it becomes possible to solve the problem of heterogeneity of the distribution of medical doctors across the regions, by providing what is called 'remote medical cares'." This assists the examiner's decision making; see also ¶[0182]: "by using this CNN with an enormous number of images in storage, screening of a H. pylori infection can be assisted greatly without evaluations of endoscopic examiners," in that they would look at the results and validate them).
As per claim 5, Saito in view of Kamon teaches the image processing device according to claim 1, wherein the at least one processor is configured to execute the instructions to acquire an inference result regarding the lesion region outputted from a lesion detection model by inputting the endoscopic image to the lesion detection model, and wherein the lesion detection model is a model obtained by machine learning of a relation between an input image to the lesion detection model and the lesion region included in the input image (Saito, ¶[0035]: "In establishing associations between medical images, deep learning can train a neural network using medical images accumulated in the past, and has a possibility of being a strong machine-learning technology that allows the clinical features of a patient to be acquired directly from the medical images. A neural network is a mathematical model representing features of a neural circuit of a brain with computational simulations, and the algorithm supporting deep learning takes an approach using a neural network. A convolutional neural network (CNN) is developed by Szegedy and others, and is a network architecture that is most typically used for a purpose of deep learning of images.").
As per claim 6, Saito in view of Kamon teaches the image processing device, wherein the at least one processor is configured to execute the instructions to acquire an inference result regarding the degree of progression from a progression determination model by inputting the endoscopic image to the progression determination model, and wherein the progression determination model is a model obtained by machine learning of a relation between an input image to the progression determination model and the degree of progression of a lesion in the input image (Saito, ¶[0035]: "In establishing associations between medical images, deep learning can train a neural network using medical images accumulated in the past, and has a possibility of being a strong machine-learning technology that allows the clinical features of a patient to be acquired directly from the medical images. A neural network is a mathematical model representing features of a neural circuit of a brain with computational simulations, and the algorithm supporting deep learning takes an approach using a neural network. A convolutional neural network (CNN) is developed by Szegedy and others, and is a network architecture that is most typically used for a purpose of deep learning of images.").
As per claim 7, Saito in view of Kamon teaches the image processing device according to claim 1, wherein the input image to the progression determination is one of a whole image of the endoscopic image, an image cut out from the endoscopic image so as to include at least the lesion region detected by the lesion detection model (Saito, Figs. 13A-D: cut-out image representation), and a feature of the endoscopic image calculated by the lesion detection model or feature extraction model (Saito, ¶[0035]: "A neural network is a mathematical model representing features of a neural circuit of a brain with computational simulations, and the algorithm supporting deep learning takes an approach using a neural network. A convolutional neural network (CNN) is developed by Szegedy and others, and is a network architecture that is most typically used for a purpose of deep learning of images." This represents a feature of the endoscopic image calculated by the lesion detection model or feature extraction model; see also ¶[0169]: "The demographic features of these patients, and the features of the images are indicated in Table 1.").

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANTIAGO GARCIA, whose telephone number is (571) 270-5182. The examiner can normally be reached Monday-Friday, 9:30am-5:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SANTIAGO GARCIA/
Primary Examiner, Art Unit 2673
/SG/
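For readers mapping the rejection back to the claim language: the per-frame decision flow that claim 1 recites (adequacy check, photography suggestion, then a "cannot determine" fallback after repeated inadequate results) can be sketched as below. This is an illustrative reading only; the function names, thresholds, and the specific size/confidence test are hypothetical and appear in neither the claims, Saito, nor Kamon.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Detection:
    size: float        # lesion-region size, e.g. as a fraction of the frame
    confidence: float  # reliability that the region is actually a lesion

def is_adequate(d: Detection, min_size: float = 0.02, min_conf: float = 0.6) -> bool:
    # Claim 1 judges adequacy on lesion-region size and/or reliability;
    # the thresholds here are made up for illustration.
    return d.size >= min_size and d.confidence >= min_conf

def outputs(stream: Iterable[Detection], max_inadequate: int = 3) -> Iterator[str]:
    # Claim 1's three per-frame outcomes: determine progression/invasion depth,
    # suggest re-shooting, or report that progression cannot be determined
    # after too many consecutive inadequate frames.
    run = 0  # consecutive inadequate frames since the last adequate one
    for d in stream:
        if is_adequate(d):
            run = 0
            yield "determine progression / invasion depth"
        else:
            run += 1
            if run > max_inadequate:
                yield "cannot determine degree of progression"
            else:
                yield "suggest adjusting photography"
```

For example, with `max_inadequate=1`, an adequate frame followed by two inadequate ones yields a depth determination, then a photography suggestion, then the "cannot determine" message, mirroring the consecutive-inadequacy limitation the rejection maps to Kamon's elapsed-time condition.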

Prosecution Timeline

Jan 11, 2024: Application Filed
Jan 15, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599912: Method for controlling and/or regulating the feed of material to be processed to a crushing and/or screening plant of a material processing device. Granted Apr 14, 2026 (2y 5m to grant).
Patent 12598596: Channel selection based on multi-hop neighboring-access-point feedback. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12587818: Device and role based user authentication. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12574708: Communication for user equipment groups. Granted Mar 10, 2026 (2y 5m to grant).
Patent 12574764: Client cooperative troubleshooting. Granted Mar 10, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88% (99% with interview, +12.8%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 1015 resolved cases by this examiner. Grant probability derived from career allow rate.
