Prosecution Insights
Last updated: April 19, 2026
Application No. 18/039,770

PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING DEVICE

Status: Final Rejection — §103
Filed: Jun 01, 2023
Examiner: DICKERSON, CHAD S
Art Unit: 2683
Tech Center: 2600 — Communications
Assignee: Hoya Corporation
OA Round: 2 (Final)

Grant Probability: 63% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 63% — grants 63% of resolved cases (376 granted / 600 resolved; +0.7% vs TC avg)
Interview Lift: +23.0% among resolved cases with interview (strong)
Typical Timeline: 2y 9m avg prosecution; 35 applications currently pending
Career History: 635 total applications across all art units
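
The arithmetic behind the headline figures above is straightforward; the sketch below (plain Python, using the dashboard's stated counts) reproduces them. The assumption that displayed percentages are simply rounded to whole percent is ours, not the vendor's documented method.

```python
# Reproducing the dashboard's headline figures from its stated counts.
# Assumption: displayed percentages are rounded to the nearest whole percent.
granted, resolved = 376, 600

allow_rate = granted / resolved               # 0.6267 -> shown as 63%
interview_lift = 0.230                        # stated lift for interviewed cases
with_interview = allow_rate + interview_lift  # 0.8567 -> shown as 86%

print(f"Career allow rate: {allow_rate:.0%}")      # Career allow rate: 63%
print(f"With interview:    {with_interview:.0%}")  # With interview: 86%
```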

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 55.5% (+15.5% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 600 resolved cases.
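
As a quick consistency check (illustrative arithmetic only, not the vendor's code), subtracting each "vs TC avg" delta from the examiner's rate recovers the same Tech Center average estimate, 40.0%, for all four statutes:

```python
# Each examiner rate minus its "vs TC avg" delta should recover the same
# Tech Center average estimate.
examiner_rate = {"101": 8.8, "103": 55.5, "102": 14.9, "112": 18.1}
delta_vs_tc   = {"101": -31.2, "103": 15.5, "102": -25.1, "112": -21.9}

tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)   # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```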

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see page 10, filed 8/29/2025, with respect to the specification objection have been fully considered and are persuasive. The objection to the specification has been withdrawn. Applicant's arguments, see page 10, filed 8/29/2025, with respect to the claim objections have been fully considered and are persuasive. The objection to the claims has been withdrawn. The 112(f) interpretation is withdrawn based on the amendment to the claims. Applicant's arguments, see page 11, filed 8/29/2025, with respect to the 101 rejection have been fully considered and are persuasive. The 101 rejection of the claims has been withdrawn.

Applicant's arguments with respect to claim(s) 1-6, 8 and 9 have been considered but are moot because the new ground of rejection does not rely on all references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The arguments state that the applied references do not perform the features of the following: "in a case where an enlarged image including the region of interest is input, inputting the enlarged image to a second learned model learned to output diagnosis support information related to the region of interest, wherein the diagnosis support information includes: a type of the region of interest including a lesion, a lesion candidate, a drug, a treatment tool, or a marker, or a classification and stage of the lesion, acquiring the diagnosis support information related to the region of interest from the second learned model, and associating the acquired diagnosis support information and the enlarged image with each other and outputting the associated the acquired diagnosis support information and the enlarged image".

The reference of Matsuzaki is applied to cure the deficiencies of the previously applied references, as explained below. Regarding the Matsuzaki reference, the invention discloses a device with multiple machine learning models that determine lesion information within input images. The invention discloses a second machine learning model that receives a magnified image that is a "zoomed-in photo" of a region of interest that contains a polyp or lesion. The second machine learning model evaluates the image and determines the type of lesion within the magnified image. The determined type is displayed with the lesion in a display image. These details are disclosed in ¶ [34], [41]-[43] and [50]. These aspects perform the features of the contended claim limitations above. Therefore, based on the above, the combination of Matsuzaki with the prior applied references performs the features of the independent claims. Thus, the features of the claims are addressed below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-3, 6, 8 and 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hirasawa (US Pub 2020/0337537) in view of Sasada (US Pub 2022/0020147) and Matsuzaki (US Pub 2023/0260117 (filing date: 11/9/2020)). This listing of claims will replace all prior versions, and listings, of claims in the application:

Re claim 1: (Original) Hirasawa discloses a non-transitory computer readable medium storing an executable computer program, which when executed by a processor causes a computer to perform processing of: acquiring an image captured by an endoscope (e.g. the image acquisition section acquires image data from the endoscope, which is taught in ¶ [71], [72] and [78].);

[0071] Diagnostic imaging support apparatus 100 is used for endoscopy of the digestive organs (for example, the esophagus, stomach, duodenum, large intestine, etc.) and supports a doctor (for example, an endoscopist) in diagnosis based on endoscopic images by using the diagnostic endoscopic imaging capability of a convolutional neural network (CNN). Diagnostic imaging support apparatus 100 is connected to endoscopic image capturing apparatus 200 (corresponding to a "digestive-organ endoscopic image capturing apparatus" of the present invention) and display apparatus 300.

[0072] Examples of endoscopic image capturing apparatus 200 include an electronic endoscope containing a built-in imaging section (also called a videoscope), and a camera-equipped endoscope that is an optical endoscope with a camera head having a built-in imaging section. Endoscopic image capturing apparatus 200 is inserted from, for example, the mouth or nose of a subject into a digestive organ to capture an image of a diagnostic target site in the digestive organ. Then, endoscopic image capturing apparatus 200 outputs endoscopic image data D1 (still image) indicating a captured endoscopic image of the diagnostic target site in the digestive organ (corresponding to a "digestive-organ endoscopic image" of the present invention) to diagnostic imaging support apparatus 100. In place of endoscopic image data D1, an endoscopic moving image may be used.

[0078] [Image Acquisition Section]

[0079] Image acquisition section 10 acquires endoscopic image data D1 output from endoscopic image capturing apparatus 200. Then, image acquisition section 10 outputs acquired endoscopic image data D1 to lesion estimation section 20. Image acquisition section 10 may acquire endoscopic image data D1 directly from endoscopic image capturing apparatus 200 or may acquire endoscopic image data D1 stored in external storage apparatus 104 or endoscopic image data D1 provided via an Internet line or the like.

in a case where the image captured by the endoscope is input, inputting the acquired image to a first learned model learned to output a position of a region of interest included in the image (e.g. the invention discloses inputting an image from an endoscope into a lesion estimation section that includes a CNN containing model data used to output the position of the lesion within an area of the endoscope image, which is taught in ¶ [81], [84]-[86], [94] and [95].);

[0080] [Lesion Estimation Section]

[0081] Lesion estimation section 20 estimates a lesion name (name) and lesion location (location) of a lesion present in an endoscopic image represented by endoscopic image data D1 output from the endoscopic image acquisition section 10, and also estimates the certainty of the lesion name and the lesion location by using the convolutional neural network. Then, lesion estimation section 20 outputs to display control section 30 endoscopic image data D1 output from the endoscopic image acquisition section 10 and estimation result data D2 indicating the estimation results of the lesion name, the lesion location, and the certainty.

[0084] A convolutional neural network is a type of feedforward neural network and is based on findings in the structure of the visual cortex of the brain. The convolutional neural network basically has a structure in which a convolution layer responsible for local feature extraction from an image and a pooling layer (subsampling layer) for summarizing features for each local region are repeated. Each layer of the convolutional neural network has a plurality of neurons and is arranged such that each neuron corresponds to that of the visual cortex. The fundamental function of each neuron has signal input and output. Note that when transmitting signals to each other, neurons in each layer do not directly output signals that are input, but each input is assigned a coupling load such that when the sum of the weighted inputs exceeds a threshold that is set for each neuron, the neuron outputs signals to the neurons in the subsequent layer. The respective coupling loads between the neurons are calculated from learning data. This enables estimation of an output value in response to input of real-time data. Any convolutional neural network that achieves the object described above may be used, regardless of which algorithm it has.

[0085] FIG. 3 is a diagram illustrating a configuration of a convolutional neural network according to this embodiment. The model data (such as structured data and learned weight parameters) of the convolutional neural network is stored in external storage apparatus 104 together with the diagnostic imaging support program.

[0086] As illustrated in FIG. 3, the convolutional neural network has, for example, feature extraction section Na and identification section Nb. Feature extraction section Na performs a process of extracting image features from an input image (endoscopic image data D1). Identification section Nb outputs image-related estimation results from the image features extracted by feature extraction section Na.

[0094] The convolutional neural network can have an estimation function such that the convolutional neural network is subjected to a learning process using reference data (hereinafter referred to as "teacher data") obtained in advance by an experienced endoscopist through marking processing so that desired estimation results (here, a lesion name, a lesion location, and a probability score) can be output from an input endoscopic image.

[0095] The convolutional neural network according to this embodiment is configured to receive input of endoscopic image data D1 ("input" in FIG. 3) and output a lesion name, a lesion location, and a probability score for an image feature of an endoscopic image represented by endoscopic image data D1 as estimation result data D2 ("output" in FIG. 3).

acquiring a position of a region of interest included in the acquired image from the learned model (e.g. a lesion location within an endoscope image is identified and is output from the CNN, which is taught in ¶ [81], [84]-[86], [94] and [95] above.).
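
As a reading aid: the "first learned model" step the examiner maps onto Hirasawa's lesion estimation section takes an endoscopic frame and returns a lesion name, a lesion location, and a probability score. A minimal sketch of that interface is below; the function and field names are illustrative, not taken from Hirasawa.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class RoiEstimate:
    # Mirrors Hirasawa's estimation result data D2: a lesion name,
    # a lesion location, and a probability score in (0, 1].
    name: str
    box: Tuple[int, int, int, int]   # (x, y, width, height) in pixels
    score: float

def first_model(image, cnn: Callable) -> List[RoiEstimate]:
    """Run the detection network on one endoscopic frame and decode its
    raw output into ROI estimates. `cnn` stands in for the trained model."""
    return [RoiEstimate(d["name"], tuple(d["box"]), d["score"])
            for d in cnn(image)]

# Stub in place of the trained CNN, so the example runs as-is:
stub_cnn = lambda img: [{"name": "lesion", "box": [120, 80, 64, 64], "score": 0.91}]
print(first_model(None, stub_cnn))
# [RoiEstimate(name='lesion', box=(120, 80, 64, 64), score=0.91)]
```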
However, Hirasawa fails to specifically teach the features of outputting an enlarged image in which a portion of the image including the region of interest is enlarged on a basis of the acquired position of the region of interest. However, this is well known in the art as evidenced by Sasada. Similar to the primary reference, Sasada discloses determining a location of a lesion in order to show the location (same field of endeavor or reasonably pertinent to the problem).

Sasada discloses outputting an enlarged image in which a portion of the image including the region of interest is enlarged on a basis of the acquired position of the region of interest (e.g. the invention discloses enlarging an area of the affected site of the image that contains a high tile assessment value. The tile represents an area within an image that is affected by a particular condition, which is taught in ¶ [42]-[45], [74] and [85].).

[0042] The assessment derivation unit 124 has a function of deriving a tile assessment value using a determiner obtained by machine learning based on the tile image cut out by the cut-out unit 122. The tile assessment value is an assessment value of the affected site derived for each of the tile images. The assessment value is either an assessment value for the pathological conditions of the affected site judged by a physician (endoscopist) by visual inspection of the surface of the affected site, or a pathological examination assessment value obtained by the diagnosis of a physician (pathologist) by pathologically examining the affected site.

[0043] Specifically, the assessment value for the pathological conditions of the affected site is an assessment value scored by the physician for bleeding, tumor, or visible vascular pattern in the affected site. The assessment value for the pathological condition of the affected site may be an assessment value obtained by any known scoring method.

[0044] For example, UCEIS score can be used as an assessment value for the pathological conditions of the affected site. The UCEIS score is an index that has recently come to be used as an assessment value indicating the severity of ulcerative colitis. The UCEIS score can perform precise classification, making it possible to formulate precise diagnostic policies and to reduce the assessment variation among endoscopists.

[0045] Specifically, the UCEIS score is defined for the assessment of at least vascular pattern, bleeding, and erosions and ulcers in ulcerative colitis, as illustrated in Table 1 below. The assessment derivation unit 124 may derive assessment values for each of the vascular pattern, bleeding, and erosions and ulcers in ulcerative colitis.

[0074] Next, it is judged whether learning of N or more affected site images for learning has been performed (S108). When the learning of N or more affected site images for learning has not been performed (S108/No), the learning of the affected site image is repeated until the operation reaches the learning of N or more images. The affected site image for learning that is used for machine learning may be subjected to data augmentation by rotation, enlargement, reduction, deformation, or the like. By using at least 10,000 affected site images for learning, a highly reliable determiner can be generated.

[0085] Alternatively, the display device 130 may use enlarged display for a portion having a high tile assessment value in the affected site image. With these display methods, the medical support system 1000 can present to the user the portion judged to have particularly poor pathological conditions with emphasized manner.

Therefore, in view of Sasada, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of outputting an enlarged image in which a portion of the image including the region of interest is enlarged on a basis of the acquired position of the region of interest, incorporated in the device of Hirasawa, in order to enlarge an area associated with a certain accuracy, which can improve the diagnosis accuracy when taking into consideration the assessment result of the pathological condition (as stated in Sasada ¶ [29]).

However, the combination fails to specifically teach the features of in a case where an enlarged image including the region of interest is input, inputting the enlarged image to a second learned model learned to output diagnosis support information related to the region of interest, wherein the diagnosis support information includes: a type of the region of interest including a lesion, a lesion candidate, a drug, a treatment tool, or a marker, or a classification and stage of the lesion, acquiring the diagnosis support information related to the region of interest from the second learned model, and associating the acquired diagnosis support information and the enlarged image with each other and outputting the associated the acquired diagnosis support information and the enlarged image. However, this is well known in the art as evidenced by Matsuzaki. Similar to the primary reference, Matsuzaki discloses multiple models that output diagnosis information (same field of endeavor or reasonably pertinent to the problem).

Matsuzaki discloses in a case where an enlarged image including the region of interest is input, inputting the enlarged image to a second learned model learned to output diagnosis support information related to the region of interest (e.g. the invention discloses a magnified image of an area of interest where a polyp may exist for examination, which is taught in ¶ [50]. A switch determines the magnified image in order to send this magnified image to a second trained model. The second trained model is used to identify a lesion and lesion type within an area of the enlarged image, which is taught in ¶ [34], [41]-[43].), wherein the diagnosis support information includes:

[0034] As shown in FIG. 3, a switch 113 performs processing of selecting either a normal-magnification detection process 111 or a greater-magnification diagnosis process 112 according to the magnification change information SMAG. The switch 113 outputs the target image IMIN to the normal-magnification detection process 111 when the magnification change information SMAG indicates the normal magnification, and outputs the target image IMIN to the greater-magnification diagnosis process 112 when the magnification change information SMAG indicates the greater magnification.

[0041] The greater-magnification diagnosis process 112 in FIG. 3 is processing that uses the second trained model 122 and processing of diagnosing the type of the lesion from the target image IMIN. When the second trained model 122 is a neural network, the target image IMIN is input to an input layer of the neural network, and the neural network classifies types of the lesion from the target image IMIN and generates a score indicating probability of being the type for the respective types. An output layer of the neural network outputs the type having the highest score as a diagnosis result DET2.

[0042] The greater-magnification diagnosis process 112 is a classifier that classifies images into classes or categories. As shown in FIG. 4, the greater-magnification diagnosis process 112 is applied to the image IM3 in the frame F3 captured at the greater magnification. The image IM3 is an image in which the lesion LSA of the image IM2 is captured at the greater magnification. The greater-magnification diagnosis process 112 classifies the type of the lesion LSA from the image IM3. For example, when the lesion LSA is determined to be a "type 1," letters CLS of the "type 1" is superimposed on the image IM3 and displayed as the display image DS2.

[0043] The second trained model 122 has been trained in advance by the learning device 600 in FIG. 5. The storage device 620 stores a second training model 622 and a second training data 632 and generates the second trained model 122 by being subjected to training of the second training model 622 by the processing device 610 using the second training data 632. The generated second trained model 122 is transferred to the storage device 120 of the information processing system 100.

[0050] The physician sets the magnification of the image to the greater magnification, enlarges and displays the region where a polyp is suspected to exist, and diagnoses the type of the polyp from a travelling pattern of microvessels in the mucosa, the structure of a ductal opening, the shape of the mucosal cell, or the like. The illumination light remains to be the narrow band imaging (NBI) light. At this time, the processing device 110 performs the greater-magnification diagnosis process 112 and displays the type of the polyp. The type of the polyp is a typological type classified according to tissue structure such as the travelling pattern of microvessels in the mucosa, the structure of the ductal opening, or the shape of the mucosal cell, or the depth to which the abnormal tissue reaches. The second trained model 122 is trained to enable recognition of the typological type of this polyp from the image.

a type of the region of interest including a lesion, a lesion candidate, a drug, a treatment tool, or a marker, or a classification and stage of the lesion (e.g. the lesion and lesion type are identified. A type of lesion can be considered as a classification, which is taught in ¶ [41]-[43] above.), acquiring the diagnosis support information related to the region of interest from the second learned model (e.g. the second machine learning model generates the lesion type related to the region where a polyp is located, which is taught in ¶ [41]-[43] above.), and associating the acquired diagnosis support information and the enlarged image with each other and outputting the associated the acquired diagnosis support information and the enlarged image (e.g. the invention can display the lesion with the lesion type identified on a screen for display, which is taught in ¶ [41]-[43] above.).
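
The claim 1 pipeline the examiner assembles from the three references (detect an ROI position, enlarge that portion, classify the enlarged image with a second model, then output the diagnosis information associated with the enlarged image) can be sketched as below. This is a reading aid under stated assumptions (Pillow for image handling, a stub classifier), not an implementation from any of the references.

```python
from PIL import Image

def enlarge_roi(frame: Image.Image, box, scale: int = 2) -> Image.Image:
    # Enlarge the portion of the frame at the acquired ROI position
    # (the step the examiner reads onto Sasada's enlarged display).
    x, y, w, h = box
    return frame.crop((x, y, x + w, y + h)).resize((w * scale, h * scale))

def second_model_step(enlarged: Image.Image, classifier) -> dict:
    # Input the enlarged image to the second learned model, then associate
    # the returned diagnosis support information with the enlarged image
    # and output the associated pair (the Matsuzaki-mapped limitation).
    info = classifier(enlarged)
    return {"enlarged_image": enlarged, "diagnosis_support": info}

# Stub standing in for Matsuzaki's greater-magnification classifier:
stub_classifier = lambda img: {"type": "lesion", "classification": "type 1"}

frame = Image.new("RGB", (640, 480))
result = second_model_step(enlarge_roi(frame, (120, 80, 64, 64)), stub_classifier)
print(result["diagnosis_support"], result["enlarged_image"].size)
# {'type': 'lesion', 'classification': 'type 1'} (128, 128)
```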
Therefore, in view of Matsuzaki, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of in a case where an enlarged image including the region of interest is input, inputting the enlarged image to a second learned model learned to output diagnosis support information related to the region of interest, wherein the diagnosis support information includes: a type of the region of interest including a lesion, a lesion candidate, a drug, a treatment tool, or a marker, or a classification and stage of the lesion, acquiring the diagnosis support information related to the region of interest from the second learned model, and associating the acquired diagnosis support information and the enlarged image with each other and outputting the associated the acquired diagnosis support information and the enlarged image, incorporated in the device of Hirasawa, as modified by Sasada, in order to perform diagnostic support for images of areas that contain a greater magnification, which increases the capability to handle various tasks at various magnifications (as stated in Matsuzaki ¶ [53]).

Re claim 2: (Original) Hirasawa discloses the computer readable medium according to claim 1, wherein in a case where an image captured by the endoscope is input, the first learned model outputs a position and an accuracy probability of the region of interest included in the image (e.g. the model within the CNN is used to output a location of the lesion as well as the accuracy probability of the area identified as the lesion location, which is taught in ¶ [82], [83], [94] and [95].),

[0082] In this embodiment, lesion estimation section 20 estimates a probability score as an index indicating the certainty of a lesion name and a lesion location. The probability score is represented by a value greater than 0 and less than or equal to 1. A higher probability score indicates a higher certainty of a lesion name and a lesion location.

[0083] The probability score is an example index indicating the certainty of a lesion name and a lesion location. Any other suitable index may instead be used. For example, the probability score may be represented by a value of 0% to 100% or may be represented by any one of several level values.

[0094] The convolutional neural network can have an estimation function such that the convolutional neural network is subjected to a learning process using reference data (hereinafter referred to as "teacher data") obtained in advance by an experienced endoscopist through marking processing so that desired estimation results (here, a lesion name, a lesion location, and a probability score) can be output from an input endoscopic image.

[0095] The convolutional neural network according to this embodiment is configured to receive input of endoscopic image data D1 ("input" in FIG. 3) and output a lesion name, a lesion location, and a probability score for an image feature of an endoscopic image represented by endoscopic image data D1 as estimation result data D2 ("output" in FIG. 3).

However, Hirasawa fails to specifically teach the features of in a case where the accuracy probability is less than a predetermined value, the first learned model outputs the enlarged image so that the enlarged image is displayed on a screen different from a screen on which the acquired image is displayed, and in a case where the accuracy probability is equal to or larger than the predetermined value, the first learned model outputs the enlarged image so that the acquired image is switched to the enlarged image on a screen on which the acquired image is displayed, or the enlarged image is displayed on a screen different from the screen on which the acquired image is displayed. However, this is well known in the art as evidenced by Sasada. Similar to the primary reference, Sasada discloses determining a location of a lesion in order to show the location (same field of endeavor or reasonably pertinent to the problem).

Sasada discloses in a case where the accuracy probability is less than a predetermined value, the first learned model outputs the enlarged image so that the enlarged image is displayed on a screen different from a screen on which the acquired image is displayed (e.g. the system discloses that a part of the image associated with a high tile assessment value is shown on the enlarged display of the display device, which is taught in ¶ [85] above. If the portion is not considered to have a high tile assessment value, the image is shown to the side of the enlarged image shown in figure 8.), and in a case where the accuracy probability is equal to or larger than the predetermined value, the first learned model outputs the enlarged image so that the acquired image is switched to the enlarged image on a screen on which the acquired image is displayed, or the enlarged image is displayed on a screen different from the screen on which the acquired image is displayed (e.g. the enlarged area of the display device is used to show a part of the image considered to have a high tile assessment value. This differs from showing an image that does not have a high tile assessment value.).

Therefore, in view of Sasada, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of in a case where the accuracy probability is less than a predetermined value, the first learned model outputs the enlarged image so that the enlarged image is displayed on a screen different from a screen on which the acquired image is displayed, and in a case where the accuracy probability is equal to or larger than the predetermined value, the first learned model outputs the enlarged image so that the acquired image is switched to the enlarged image on a screen on which the acquired image is displayed, or the enlarged image is displayed on a screen different from the screen on which the acquired image is displayed, incorporated in the device of Hirasawa, in order to enlarge an area associated with a certain accuracy, which can improve the diagnosis accuracy when taking into consideration the assessment result of the pathological condition (as stated in Sasada ¶ [29]).
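
Claim 2's display logic reduces to a threshold test on the first model's accuracy probability. A sketch follows; the threshold value and mode names are placeholders, not values from the claims or references.

```python
def choose_display(score: float, threshold: float = 0.8,
                   preferred_mode: str = "switch-main") -> str:
    # Below the predetermined value: the enlarged image goes to a separate
    # screen. At or above it: either switch the main screen to the enlarged
    # image or use a separate screen, per a stored setting (claim 3's
    # "set value").
    if score < threshold:
        return "separate-screen"
    return preferred_mode   # "switch-main" or "separate-screen"

assert choose_display(0.55) == "separate-screen"
assert choose_display(0.93) == "switch-main"
```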
Re claim 3: (Original) However, Hirasawa fails to specifically teach the features of the computer readable medium according to claim 2, wherein a set value for determining an output mode of an enlarged image in a case where the accuracy probability is equal to or larger than the predetermined value is stored in advance, and the enlarged image in a case where the accuracy probability is equal to or larger than the predetermined value is output in an output mode based on the set value. However, this is well known in the art as evidenced by Sasada. Similar to the primary reference, Sasada discloses determining a location of a lesion in order to show the location (same field of endeavor or reasonably pertinent to the problem).

Sasada discloses wherein a set value for determining an output mode of an enlarged image in a case where the accuracy probability is equal to or larger than the predetermined value is stored in advance (e.g. if an assessment value is considered as high, the image portion associated with the high value is shown in an enlarged manner, which is taught in ¶ [85] above. The assessment value is compared to a threshold to determine whether the value is higher or lower than the threshold, which is taught in ¶ [55], [56], [79] and [80].), and

[0055] The assessment derivation unit 124 derives assessment item-based tile assessment values, as the tile assessment value. For example, the assessment items may be items of bleeding, tumor, or visible vascular pattern of the affected site, and may be items of pathological examination. Furthermore, the assessment derivation unit 124 may calculate a tile assessment value including a mixture of individual assessment items. For example, tile assessment values may be derived by using at least two or more assessment values of bleeding, ulcer, or visible vascular pattern of the affected site. For example, in this case, tile assessment values may be determined by totaling at least two or more assessment values of bleeding, ulcer, or visible vascular pattern of the affected site.

[0056] The assessment derivation unit 124 calculates the reliability of each of tile assessment values, in addition to the tile assessment value. The reliability is a value indicating the certainty of the tile assessment value, and can be expressed by a probability or a numerical value of 0 to 1 (for example, 1 has a highest certainty). The assessment derivation unit 124 may judge that the tile assessment value is reliable when the reliability is a threshold or more and may output the reliability to the display device 130 described below. In contrast, when the reliability is less than the threshold, the assessment derivation unit 124 may judge that the tile assessment value is unreliable and does not have to output the tile assessment value to the display device 130 described below, or may output a result indicating non-analyzable to the display device 130.

[0079] Next, the derivation device 120 derives the tile assessment value of the tile image and the reliability of the tile image by using the generated determiner (S206). When deriving the tile assessment value, the derivation device 120 may further derive the overall assessment value for the whole of the affected site image. For example, the derivation device 120 may derive the overall assessment value from the mean value of the tile assessment values in the affected site image, or may derive the overall assessment value from a combination of the maximum value and the mean value of the tile assessment values. Furthermore, the derivation device 120 may further combine the probability distributions of the tile assessment values to derive the overall assessment value. For example, the derivation device 120 may derive the overall assessment value in consideration of the degree of dispersion, standard deviation, or the like of the tile assessment value.

[0080] Thereafter, the display device 130 presents the tile assessment value of the tile image and the reliability of the tile assessment value to the user (S208). The display device 130 may further display the overall assessment value.

the enlarged image in a case where the accuracy probability is equal to or larger than the predetermined value is output in an output mode based on the set value (e.g. an assessment value of the tile is determined to be a high value. Based on the high value, the area considered to be in the high value area is enlarged, which is taught in ¶ [85] above.).

Therefore, in view of Sasada, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of wherein a set value for determining an output mode of an enlarged image in a case where the accuracy probability is equal to or larger than the predetermined value is stored in advance, and the enlarged image in a case where the accuracy probability is equal to or larger than the predetermined value is output in an output mode based on the set value, incorporated in the device of Hirasawa, in order to enlarge an area associated with a certain accuracy, which can improve the diagnosis accuracy when taking into consideration the assessment result of the pathological condition (as stated in Sasada ¶ [29]).
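
Sasada's tile mechanism that the examiner cites for this stored threshold works roughly as: cut the frame into tiles, score each tile with the learned determiner, discard scores whose reliability falls below a threshold, then aggregate. A sketch under those assumptions; the tile size, reliability floor, and mean aggregation are illustrative choices among the options ¶ [79]-[80] mention.

```python
from statistics import mean

def tiles(width: int, height: int, size: int = 64):
    # Cut the frame into fixed-size tiles (Sasada's cut-out step).
    for y in range(0, height - size + 1, size):
        for x in range(0, width - size + 1, size):
            yield (x, y, size, size)

def assess(frame_size, determiner, reliability_floor: float = 0.5):
    # Score each tile, keep only reliable scores (cf. ¶ [56]), derive an
    # overall value from the mean of tile values (one option in ¶ [79]),
    # and pick the highest-scoring tile for enlarged display (¶ [85]).
    w, h = frame_size
    scored = [(box, *determiner(box)) for box in tiles(w, h)]
    reliable = [(box, value) for box, value, rel in scored
                if rel >= reliability_floor]
    overall = mean(v for _, v in reliable) if reliable else None
    to_enlarge = max(reliable, key=lambda bv: bv[1], default=None)
    return overall, to_enlarge

# Stub determiner returning (tile_assessment_value, reliability):
stub = lambda box: (0.9 if box[0] == 0 else 0.2, 0.8)
print(assess((128, 128), stub))
# (0.55, ((0, 0, 64, 64), 0.9))
```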
Re claim 6: (Currently Amended) However, Hirasawa fails to specifically teach the features of the computer readable medium according to claim 1, wherein in a case where positions of a plurality of the regions of interest included in the acquired image are acquired from the first learned model, enlarged images, in each of which a portion of an image including each of the plurality of regions of interest is enlarged, are output on the basis of the acquired positions of the plurality of regions of interest. However, this is well known in the art as evidenced by Sasada. Similar to the primary reference, Sasada discloses determining a location of a lesion in order to show the location (same field of endeavor or reasonably pertinent to the problem).

Sasada discloses wherein in a case where positions of a plurality of the regions of interest included in the acquired image are acquired from the first learned model, enlarged images, in each of which a portion of an image including each of the plurality of regions of interest is enlarged, are output on the basis of the acquired positions of the plurality of regions of interest (e.g. the invention discloses capturing multiple areas within the endoscope image. In figure 8, the high value tile assessment image is shown in an enlarged form while other areas containing other values are also shown on a different portion of the display device. These areas represent similar enlarged areas that have received a certain score that may contain the region of interest, which is taught in ¶ [82], [83], [85], [94] and [95] above.).

Therefore, in view of Sasada, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of wherein in a case where positions of a plurality of the regions of interest included in the acquired image are acquired from the first learned model, enlarged images, in each of which a portion of an image including each of the plurality of regions of interest is enlarged, are output on the basis of the acquired positions of the plurality of regions of interest, incorporated in the device of Hirasawa, in order to enlarge an area associated with a certain accuracy, which can improve the diagnosis accuracy when taking into consideration the assessment result of the pathological condition (as stated in Sasada ¶ [29]).

Re claim 8: (Original) Hirasawa discloses an information processing method causing a computer to perform processing of: acquiring an image captured by an endoscope (e.g. the image acquisition section acquires image data from the endoscope, which is taught in ¶ [71], [72] and [78] above.); in a case where the image captured by the endoscope is input, inputting the acquired image to a first learned model learned to output a position of a region of interest included in the image (e.g. the invention discloses inputting an image from an endoscope into a lesion estimation section that includes a CNN containing model data used to output the position of the lesion within an area of the endoscope image, which is taught in ¶ [81], [84]-[86], [94] and [95] above.); acquiring a position of a region of interest included in the acquired image from the first learned model (e.g. a lesion location within an endoscope image is identified and is output from the CNN, which is taught in ¶ [81], [84]-[86], [94] and [95] above.).

However, Hirasawa fails to specifically teach the features of outputting an enlarged image in which a portion of the image including the region of interest is enlarged on a basis of the acquired position of the region of interest. However, this is well known in the art as evidenced by Sasada. Similar to the primary reference, Sasada discloses determining a location of a lesion in order to show the location (same field of endeavor or reasonably pertinent to the problem). Sasada discloses outputting an enlarged image in which a portion of the image including the region of interest is enlarged on a basis of the acquired position of the region of interest (e.g. the invention discloses enlarging an area of the affected site of the image that contains a high tile assessment value. The tile represents an area within an image that is affected by a particular condition, which is taught in ¶ [42]-[45], [74] and [85] above.).

Therefore, in view of Sasada, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of outputting an enlarged image in which a portion of the image including the region of interest is enlarged on a basis of the acquired position of the region of interest, incorporated in the device of Hirasawa, in order to enlarge an area associated with a certain accuracy, which can improve the diagnosis accuracy when taking into consideration the assessment result of the pathological condition (as stated in Sasada ¶ [29]).
However, the combination fails to specifically teach the features of in a case where an enlarged image including the region of interest is input, inputting the enlarged image to a second learned model learned to output diagnosis support information related to the region of interest, wherein the diagnosis support information includes: a type of the region of interest including a lesion, a lesion candidate, a drug, a treatment tool, or a marker, or a classification and stage of the lesion, acquiring the diagnosis support information related to the region of interest from the second learned model, and associating the acquired diagnosis support information and the enlarged image with each other and outputting the associated the acquired diagnosis support information and the enlarged image. However, this is well known in the art as evidenced by Matsuzaki. Similar to the primary reference, Matsuzaki discloses multiple models that output diagnosis information (same field of endeavor or reasonably pertinent to the problem).

Matsuzaki discloses in a case where an enlarged image including the region of interest is input, inputting the enlarged image to a second learned model learned to output diagnosis support information related to the region of interest (e.g. the invention discloses a magnified image of an area of interest where a polyp may exist for examination, which is taught in ¶ [50] above. A switch determines the magnified image in order to send this magnified image to a second trained model. The second trained model is used to identify a lesion and lesion type within an area of the enlarged image, which is taught in ¶ [34], [41]-[43] above.), wherein the diagnosis support information includes: a type of the region of interest including a lesion, a lesion candidate, a drug, a treatment tool, or a marker, or a classification and stage of the lesion (e.g. the lesion and lesion type are identified. A type of lesion can be considered as a classification, which is taught in ¶ [41]-[43] above.), acquiring the diagnosis support information related to the region of interest from the second learned model (e.g. the second machine learning model generates the lesion type related to the region where a polyp is located, which is taught in ¶ [41]-[43] above.), and associating the acquired diagnosis support information and the enlarged image with each other and outputting the associated the acquired diagnosis support information and the enlarged image (e.g. the invention can display the lesion with the lesion type identified on a screen for display, which is taught in ¶ [41]-[43] above.).
Therefore, in view of Matsuzaki, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of in a case where an enlarged image including the region of interest is input, inputting the enlarged image to a second learned model learned to output diagnosis support information related to the region of interest, wherein the diagnosis support information includes: a type of the region of interest including a lesion, a lesion candidate, a drug, a treatment tool, or a marker, or a classification and stage of the lesion, acquiring the diagnosis support information related to the region of interest from the second learned model, and associating the acquired diagnosis support information and the enlarged image with each other and outputting the associated the acquired diagnosis support information and the enlarged image, incorporated in the device of Hirasawa, as modified by Sasada, in order to perform diagnostic support for images of areas that contain a greater magnification, which increases the capability to handle various tasks at various magnifications (as stated in Matsuzaki ¶ [53]).

Re claim 9: (Original) Hirasawa discloses an information processing device comprising: an image acquisition controller that acquires an image captured by an endoscope (e.g. the image acquisition section acquires image data from the endoscope, which is taught in ¶ [71], [72] and [78].); an input that, in a case where the image captured by the endoscope is input, inputs the acquired image to a learned model learned to output a position of a region of interest included in the image (e.g. the invention discloses inputting an image from an endoscope into a lesion estimation section that includes a CNN containing model data used to output the position of the lesion within an area of the endoscope image, which is taught in ¶ [81], [84]-[86], [94] and [95] above.); a position acquisition processor that acquires a position of a region of interest included in the acquired image from the learned model (e.g. a lesion location within an endoscope image is identified and is output from the CNN, which is taught in ¶ [81], [84]-[86], [94] and [95] above.).

However, Hirasawa fails to specifically teach the features of an output unit that outputs an enlarged image in which a portion of the image including the region of interest is enlarged on a basis of the acquired position of the region of interest. However, this is well known in the art as evidenced by Sasada. Similar to the primary reference, Sasada discloses determining a location of a lesion in order to show the location (same field of endeavor or reasonably pertinent to the problem). Sasada discloses an output that outputs an enlarged image in which a portion of the image including the region of interest is enlarged on a basis of the acquired position of the region of interest (e.g. the invention discloses enlarging an area of the affected site of the image that contains a high tile assessment value. The tile represents an area within an image that is affected by a particular condition, which is taught in ¶ [42]-[45], [74] and [85] above.).

Therefore, in view of Sasada, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of an output that outputs an enlarged image in which a portion of the image including the region of interest is enlarged on a basis of the acquired position of the region of interest, incorporated in the device of Hirasawa, in order to enlarge an area associated with a certain accuracy, which can improve the diagnosis accuracy when taking into consideration the assessment result of the pathological condition (as stated in Sasada ¶ [29]).

However, the combination fails to specifically teach the features of in a case where an enlarged image including the region of interest is input, the input inputs the enlarged image to a second learned model learned to output diagnosis support information related to the region of interest, wherein the diagnosis support information includes: a type of the region of interest including a lesion, a lesion candidate, a drug, a treatment tool, or a marker, or a classification and stage of the lesion, an acquisition processor that acquires the diagnosis support information related to the region of interest from the second learned model, and a controller that associates the acquired diagnosis support information and the enlarged image with each other and outputs the associated the acquired diagnosis support information and the enlarged image. However, this is well known in the art as evidenced by Matsuzaki. Similar to the primary reference, Matsuzaki discloses multiple models that output diagnosis information (same field of endeavor or reasonably pertinent to the problem).

Matsuzaki discloses in a case where an enlarged image including the region of interest is input, the input inputs the enlarged image to a second learned model learned to output diagnosis support information related to the region of interest (e.g. the invention discloses a magnified image of an area of interest where a polyp may exist for examination, which is taught in ¶ [50] above. A switch determines the magnified image in order to send this magnified image to a second trained model. The second trained model is used to identify a lesion and lesion type within an area of the enlarged image, which is taught in ¶ [34], [41]-[43] above.), wherein the diagnosis support information includes: a type of the region of interest including a lesion, a lesion candidate, a drug, a treatment tool, or a marker, or a classification and stage of the lesion (e.g. the lesion and lesion type are identified. A type of lesion can be considered as a classification, which is taught in ¶ [41]-[43] above.), an acquisition processor that acquires the diagnosis support information related to the region of interest from the second learned model (e.g. the second machine learning model generates the lesion type related to the region where a polyp is located, which is taught in ¶ [41]-[43] above.), and a controller that associates the acquired diagnosis support information and the enlarged image with each other and outputs the associated the acquired diagnosis support information and the enlarged image (e.g. the invention can display the lesion with the lesion type identified on a screen for display, which is taught in ¶ [41]-[43] above.).
Therefore, in view of Matsuzaki, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of in a case where an enlarged image including the region of interest is input, the input inputs the enlarged image to a second learned model learned to output diagnosis support information related to the region of interest, wherein the diagnosis support information includes: a type of the region of interest including a lesion, a lesion candidate, a drug, a treatment tool, or a marker, or a classification and stage of the lesion, an acquisition processor that acquires the diagnosis support information related to the region of interest from the second learned model, and a controller that associates the acquired diagnosis support information and the enlarged image with each other and outputs the associated the acquired diagnosis support information and the enlarged image, incorporated in the device of Hirasawa, as modified by Sasada, in order to perform diagnostic support for images of areas that contain a greater magnification, which increases the capability to handle various tasks at various magnifications (as stated in Matsuzaki ¶ [53]).

Claim(s) 4 and 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hirasawa, as modified by Sasada and Matsuzaki, as applied to claim 1 above, and further in view of Takeda (US Pub 2019/0075230).

Re claim 4: (Currently Amended) However, Hirasawa fails to specifically teach the features of the computer readable medium according to claim 1, wherein in a case where the enlarged image is output, information for changing an observation mode of the endoscope to a special light observation mode is output. However, this is well known in the art as evidenced by Takeda. Similar to the primary reference, Takeda discloses utilizing an endoscope to observe medical conditions within a patient (same field of endeavor or reasonably pertinent to the problem).

Takeda discloses wherein in a case where the enlarged image is output, information for changing an observation mode of the endoscope to a special light observation mode is output (e.g. an area within a patient can be enlarged by the zoom function, which will enlarge the captured area on a display. When a zoom operation occurs, the exposure function can be changed to a special mode once the zoom is activated to record an area of interest, which is taught in ¶ [160]-[165] and [176]-[181].).

[0160] The imaging control unit 156 controls, for example, a light source such as the light source unit 104 and the exposure function by controlling illumination light radiated to an observation target such as a lesion. In addition, the imaging control unit 156 may control the exposure function by, for example, controlling a gain with respect to an image signal indicating a medical captured image. As an example of a process relating to control of a gain with respect to an image signal, for example, signal processing of averaging luminance of a medical captured image is exemplified.

[0161] More specifically, the imaging control unit 156 controls the exposure function of the imaging device on the basis of a detection result of a line of sight of a recognition target so that luminance of a predetermined region of a medical captured image is changed. The imaging control unit 156 controls the exposure function of the imaging device on the basis of, for example, a position of a line of sight of recognition target (an example of a detection result of a line of sight …
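
The claim 4 limitation mapped to Takeda, switching the endoscope to a special light observation mode whenever the enlarged image is output, amounts to a small side effect hook. A sketch with a hypothetical controller class follows; nothing here is Takeda's API.

```python
class ScopeController:
    # Hypothetical stand-in for the endoscope's light-source control.
    def __init__(self) -> None:
        self.observation_mode = "white-light"

    def set_observation_mode(self, mode: str) -> None:
        self.observation_mode = mode

def output_enlarged(enlarged_image, scope: ScopeController):
    # Claim 4, sketched: outputting the enlarged image also outputs
    # information that changes the observation mode to a special light
    # mode (e.g. narrow band imaging, cf. Matsuzaki ¶ [50]).
    scope.set_observation_mode("NBI")
    return enlarged_image

scope = ScopeController()
output_enlarged(object(), scope)
print(scope.observation_mode)   # NBI
```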

Prosecution Timeline

Jun 01, 2023 — Application Filed
Jun 14, 2025 — Non-Final Rejection (§103)
Jul 23, 2025 — Interview Requested
Aug 01, 2025 — Applicant Interview (Telephonic)
Aug 01, 2025 — Examiner Interview Summary
Aug 12, 2025 — Applicant Interview (Telephonic)
Aug 18, 2025 — Examiner Interview Summary
Aug 29, 2025 — Response Filed
Dec 12, 2025 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602908 — INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12603960 — IMAGE ANALYSIS APPARATUS, IMAGE ANALYSIS SYSTEM, IMAGE ANALYSIS METHOD, PROGRAM, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM COMPRISING READING A PRINTED MATTER, ANALYZING CONTENT RELATED TO READING OF THE PRINTED MATTER AND ACQUIRING SUPPORT INFORMATION BASED ON AN ANALYSIS RESULT OF THE CONTENT FOR DISPLAY TO ASSIST A USER IN FURTHER READING OPERATIONS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12579817 — Vehicle Control Device and Control Method Thereof for Camera View Control Based on Surrounding Environment Information
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12522110 — APPARATUS AND METHOD OF CONTROLLING THE SAME COMPRISING A CAMERA AND RADAR DETECTION OF A VEHICLE INTERIOR TO REDUCE A MISSED OR FALSE DETECTION REGARDING REAR SEAT OCCUPATION
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12519896 — IMAGE READING DEVICE COMPRISING A LENS ARRAY INCLUDING FIRST LENS BODIES AND SECOND LENS BODIES, A LIGHT RECEIVER AND LIGHT BLOCKING PLATES THAT ARE BETWEEN THE LIGHT RECEIVER AND SECOND LENS BODIES, THE THICKNESS OF THE LIGHT BLOCKING PLATES EQUAL TO OR GREATER THAN THE SECOND LENS BODIES THICKNESS
Granted Jan 06, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 63%
With Interview: 86% (+23.0%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 600 resolved cases by this examiner. Grant probability derived from career allow rate.
