Prosecution Insights
Last updated: April 19, 2026
Application No. 18/395,710

DISEASE LABEL CREATION DEVICE, DISEASE LABEL CREATION METHOD, DISEASE LABEL CREATION PROGRAM, LEARNING DEVICE, AND DISEASE DETECTION MODEL

Non-Final OA: §§102, 103, 112
Filed: Dec 25, 2023
Examiner: MENDEZ MUNIZ, DYLAN JOHN
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Fujifilm Corporation
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (15 granted / 18 resolved), +21.3% vs TC avg. Above average.
Interview Lift: +25.0% among resolved cases with interview. Strong.
Typical Timeline: 2y 11m avg prosecution; 15 applications currently pending.
Career History: 33 total applications across all art units.

Statute-Specific Performance

§101: 16.3% (-23.7% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 21.3% (-18.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 18 resolved cases
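For readers who want to sanity-check the headline figures, the arithmetic behind an allow rate and an interview lift can be sketched as below. The per-case records are invented for illustration; only the 15-granted / 18-resolved split mirrors the panel above, and the resulting lift does not reproduce the dashboard's +25.0% figure.

```python
# Sketch of how the examiner metrics above could be derived from case data.
# The individual records are hypothetical; only the 15/18 split is real.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

cases = ([ResolvedCase(True, True)] * 9 + [ResolvedCase(True, False)] * 6
         + [ResolvedCase(False, True)] + [ResolvedCase(False, False)] * 2)

def allow_rate(subset) -> float:
    """Percentage of resolved cases that were granted."""
    return 100.0 * sum(c.granted for c in subset) / len(subset) if subset else 0.0

career = allow_rate(cases)                                        # 15/18, about 83.3%
lift = (allow_rate([c for c in cases if c.had_interview])
        - allow_rate([c for c in cases if not c.had_interview]))  # percentage points
```

The lift is simply the difference in allow rate between the with-interview and without-interview subsets, which is why small examiner caseloads make it a noisy statistic.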

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) were submitted on 03/21/2024, 01/09/2025, 02/19/2025, 02/04/2026 and 02/04/2026. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claims 1-5, 7-9, 11-16, 18-21 and 24, the claims recite “a simple X-ray image.” It is unclear from the context of the claims what “simple” means. One of ordinary skill in the art would ask: “How can one discern an image to be simple? Is it the content? Is it the resolution? Is it the modality?” and “Wouldn’t one individual perceive an image as simple and another individual as not simple, since that is a matter of opinion?” Therefore, one of ordinary skill in the art would not be able to ascertain the scope of the claims for reasons regarding clarity.
Regarding claim 5, the claim recites “a process of three-dimensionally restoring the simple X-ray image; and a process of performing registration between the CT image and the three-dimensionally restored simple X-ray image.” It is unclear from the context of the claim what “three-dimensionally restoring” and “three-dimensionally restored” mean. One of ordinary skill in the art would ask: “Is it a conversion of 2D data to 3D data?” “Is it a recreation of 2D data into an interpretation of 3D data?” “Is it simply performing the process of storing the X-ray image again?” “Is it just a process of upsampling (or otherwise adjusting the quality of) the 2D image?” “How can one three-dimensionally restore?” “Is it meant to be a storage process?” Therefore, one of ordinary skill in the art would not be able to ascertain the scope of the claim for reasons regarding clarity.

Regarding claims 7-8, the claims recite “a normal region.” It is unclear from the context of the claims what “normal” means. One of ordinary skill in the art would ask: “How can one discern a region to be normal? Is it the content? Is it the resolution? Is it the modality? Is it a value?” and “Wouldn’t one individual perceive a region as normal and another individual as not normal, since that is a matter of opinion?” Therefore, one of ordinary skill in the art would not be able to ascertain the scope of the claims for reasons regarding clarity.
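As context for the “three-dimensionally restoring” dispute above: the first interpretation the Office Action floats (a conversion of 2-D data into 3-D data) is sometimes realized as an unfiltered back-projection, in which the radiograph is smeared along the assumed ray direction. The sketch below illustrates only that one possible reading; it is not drawn from the application and makes no claim about what the applicant intends.

```python
import numpy as np

def backproject(xray_2d: np.ndarray, depth: int) -> np.ndarray:
    """One naive reading of '3-D restoration': replicate the 2-D image
    along a depth axis, so that re-projecting (summing) the resulting
    volume reproduces the original radiograph."""
    return np.repeat(xray_2d[np.newaxis, :, :], depth, axis=0) / depth

xray = np.arange(16.0).reshape(4, 4)   # toy 4x4 radiograph
volume = backproject(xray, depth=8)    # crude (8, 4, 4) volume
# Re-projection recovers the input, the self-consistency a registration
# step against the CT volume could then exploit.
assert np.allclose(volume.sum(axis=0), xray)
```

A real system would use a tomographic or learned reconstruction rather than uniform smearing; the point here is only that "restoration" plausibly means producing some volume whose projection matches the 2-D image.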
Regarding claim 22, the claim recites “adjust the first error of the disease region, of which the second reliability output from the disease detection model is low and which is false positive, to a large value and adjust the first error of the disease region, of which the second reliability is low and which is false negative, to a small value.” It is unclear from the context of the claim what “false positive,” “false negative,” “large value,” “small value,” and “low” mean. One of ordinary skill in the art would ask: “How can one discern a value to be ‘false positive,’ ‘false negative,’ ‘large,’ ‘small,’ or ‘low’?” “What range of values is considered ‘false positive,’ ‘false negative,’ ‘large,’ ‘small,’ or ‘low’?” “Wouldn’t one individual perceive a value as ‘large,’ ‘small,’ or ‘low’ and another individual as not, since that is a matter of opinion?” “What exactly does ‘low’ mean: the height? The value?” Therefore, one of ordinary skill in the art would not be able to ascertain the scope of the claim for reasons regarding clarity.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 12, 16 and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Makoto et al., hereafter Makoto (JP Publication No. 2007-159643 A).

As per claim 1: “A disease label creation device comprising: a first processor, wherein the first processor is configured to execute:” (See page 5, paragraphs 4-12: “First, the computer 1000 will be described. A CPU 1010 controls the entire computer 1000 using programs and data stored in the RAM 1020 and the ROM 1030, and executes each process performed by the image processing apparatus.” Makoto)

“an information acquisition process of acquiring a simple X-ray image, a three-dimensional CT image paired with the simple X-ray image, and a three-dimensional first disease label extracted from the CT image;” (See page 4, paragraphs 2-4 and 11: “By the above processing, the projection processing imitating the photographing means of the two-dimensional image data IM2 can be added to the three-dimensional image data IM1 and converted into the projection image data IM3 that is two-dimensional image data. In the present embodiment, the three-dimensional image IM1 is chest CT image data, and the two-dimensional image data IM2 is chest image data of the same subject, and the chest CT image data is converted into chest image data by projection processing.” IM2 is the simple X-ray image paired with IM1, which is the 3D CT image. IM2 is an X-ray image, as seen in page 3, paragraphs 1, 5, 6, 9, 10 and in the abstract.
The 3D first disease label (representing an area) is the structure data O seen in paragraph 11 (the penultimate paragraph): “The composite image IM4 is obtained by converting the structure data O having position information into a position on the two-dimensional image data IM3, and simply superimposing it on IM2′ aligned with IM3 to output the three-dimensional image. The notable structure O extracted in advance by IM1 is mapped and displayed on the two-dimensional image IM2.” See also page 3, paragraph 7. See also page 3, paragraph 3: “Next, the information extraction unit 7 extracts a specific structure O displayed in the 3D image data output from the 3D image input unit 1 (step S300).” Makoto)

“a registration process of performing registration between the simple X-ray image and the CT image; and” (Examiner interprets the following as all being part of the registration process. See all of page 3, and page 3, paragraph 7: “The projection processing unit 3 receives the three-dimensional image data IM1 output from the three-dimensional image input unit 1 and the two-dimensional image data IM2 output from the two-dimensional image input unit 2, and uses IM1 as two-dimensional image data IM3 equivalent to IM2. (Step S400).” Therefore IM3 represents the CT image. See all of page 4, and page 4, paragraphs 3-4; the registration is performed: “The registration unit 4 performs registration between the two-dimensional image data IM2 output from the two-dimensional image input unit 2 and the two-dimensional image data IM3 output from the projection processing unit 3, and results in an image after registration. IM2′ is output to the composition processing unit 5 and the accuracy evaluation unit 8 (step S500).” See also page 4, paragraph 11. See also page 5, paragraph 1. Makoto)

“a conversion process of converting the first disease label into a two-dimensional second disease label corresponding to the simple X-ray image on the basis of a result of the registration.”
(See all of pages 3 and 4; see also page 4, paragraph 11: “The composite image IM4 is obtained by converting the structure data O having position information into a position on the two-dimensional image data IM3, and simply superimposing it on IM2′ aligned with IM3 to output the three-dimensional image. The notable structure O extracted in advance by IM1 is mapped and displayed on the two-dimensional image IM2. Here, the structure data O may be colored and displayed for easy viewing, or the transmittance may be set high so that an object behind the structure can be visually recognized.” Examiner interprets the 2D second disease label as the mapped structure O. Makoto)

Claim 16 is rejected under the same analysis as claim 1.

As per claim 2, Makoto teaches “The disease label creation device according to claim 1, wherein the registration process includes: a process of projecting the CT image to create a pseudo X-ray image; and a process of performing registration between the simple X-ray image and the pseudo X-ray image.” (Examiner interprets IM3 as the pseudo X-ray image. See page 3, paragraph 7: “The projection processing unit 3 receives the three-dimensional image data IM1 output from the three-dimensional image input unit 1 and the two-dimensional image data IM2 output from the two-dimensional image input unit 2, and uses IM1 as two-dimensional image data IM3 equivalent to IM2. (Step S400)” See also page 4, paragraphs 3-4 and 11.
Paragraph 4 shows: “The registration unit 4 performs registration between the two-dimensional image data IM2 output from the two-dimensional image input unit 2 and the two-dimensional image data IM3 output from the projection processing unit 3, and results in an image after registration.” See also page 5, paragraph 1: “Two-dimensional image data IM1, IM3 obtained by converting IM1 into two-dimensional image data, IM2 aligned with IM3, and composite image IM4 output from the composition processing unit 5 can be selectively displayed and interpreted, or the above It is desirable that a plurality of images can be simultaneously displayed and interpreted, and any output device may be used as long as this purpose can be realized.” Makoto)

As per claim 3, Makoto teaches “The disease label creation device according to claim 1, wherein the registration process includes: a process of extracting a two-dimensional anatomical landmark from the simple X-ray image;” (Examiner interprets “anatomical landmark” as a region or part of the anatomy. See page 3, paragraphs 1-2: “The two-dimensional image input unit 2 inputs the two-dimensional image data of the subject from an imaging device (not shown) or a file server (step S200). In the present embodiment, the two-dimensional image data is chest X-ray image data obtained by photographing the same subject as the subject displayed in the chest CT image data input from the three-dimensional image input unit 1. In the present embodiment, the subject is the chest of the human body, but is not limited to this, and may be another subject, for example, a blood vessel that has undergone contrast imaging.” See also page 3, paragraph 9: “The imaging conditions of the input 2D image data IM2 are acquired (step S401). Here, the imaging conditions are positional relationship information of the subject, the X-ray tube, and the sensor for obtaining the orientation and imaging region of the subject in IM2.” Makoto)
“a process of extracting a three-dimensional anatomical landmark corresponding to the two-dimensional anatomical landmark from the CT image;” (See page 3, paragraph 3: “Next, the information extraction unit 7 extracts a specific structure O displayed in the 3D image data output from the 3D image input unit 1 (step S300). In the present embodiment, the blood vessel structure is extracted from the chest CT image data and output.” The landmark corresponds since the images are of the same subject. Makoto)

“a process of projecting the three-dimensional anatomical landmark; and” (See page 3, paragraphs 7-12; paragraph 7 shows: “The projection processing unit 3 receives the three-dimensional image data IM1 output from the three-dimensional image input unit 1 and the two-dimensional image data IM2 output from the two-dimensional image input unit 2, and uses IM1 as two-dimensional image data IM3 equivalent to IM2. (Step S400)” See also page 4, paragraphs 1-3; paragraph 3 on page 4 shows: “By the above processing, the projection processing imitating the photographing means of the two-dimensional image data IM2 can be added to the three-dimensional image data IM1 and converted into the projection image data IM3 that is two-dimensional image data. In the present embodiment, the three-dimensional image IM1 is chest CT image data, and the two-dimensional image data IM2 is chest image data of the same subject, and the chest CT image data is converted into chest image data by projection processing.” Makoto)

“a process of performing registration between the two-dimensional anatomical landmark and an anatomical landmark after the projection process.”
(See page 4, paragraph 4: “The registration unit 4 performs registration between the two-dimensional image data IM2 output from the two-dimensional image input unit 2 and the two-dimensional image data IM3 output from the projection processing unit 3, and results in an image after registration.” IM2 contains the 2D anatomical landmark and IM3 contains an anatomical landmark after the projection (it is output). See also page 4, paragraph 8: “The accuracy evaluation unit 8 evaluates the alignment result. In the present embodiment, the image similarity is calculated using the density of the chest rib region, and if it is equal to or less than a certain threshold T, it is determined that sufficient accuracy is not obtained. The position of the projection plane P is finely adjusted, and the projection process is performed again.” Makoto)

As per claim 4, Makoto teaches “The disease label creation device according to claim 1, wherein the registration process includes: a process of extracting a two-dimensional anatomical region of interest from the simple X-ray image;” (Examiner interprets “anatomical region of interest” as a region or part of the anatomy. See page 3, paragraphs 1-2: “The two-dimensional image input unit 2 inputs the two-dimensional image data of the subject from an imaging device (not shown) or a file server (step S200). In the present embodiment, the two-dimensional image data is chest X-ray image data obtained by photographing the same subject as the subject displayed in the chest CT image data input from the three-dimensional image input unit 1. In the present embodiment, the subject is the chest of the human body, but is not limited to this, and may be another subject, for example, a blood vessel that has undergone contrast imaging.” See also page 3, paragraph 9: “The imaging conditions of the input 2D image data IM2 are acquired (step S401).
Here, the imaging conditions are positional relationship information of the subject, the X-ray tube, and the sensor for obtaining the orientation and imaging region of the subject in IM2.” Makoto)

“a process of extracting a three-dimensional anatomical region of interest corresponding to the two-dimensional anatomical region of interest from the CT image;” (See page 3, paragraph 3: “Next, the information extraction unit 7 extracts a specific structure O displayed in the 3D image data output from the 3D image input unit 1 (step S300). In the present embodiment, the blood vessel structure is extracted from the chest CT image data and output.” The region corresponds since the images are of the same subject. Makoto)

“a process of projecting the three-dimensional anatomical region of interest; and” (See page 3, paragraphs 7-12; paragraph 7 shows: “The projection processing unit 3 receives the three-dimensional image data IM1 output from the three-dimensional image input unit 1 and the two-dimensional image data IM2 output from the two-dimensional image input unit 2, and uses IM1 as two-dimensional image data IM3 equivalent to IM2. (Step S400)” See also page 4, paragraphs 1-3; paragraph 3 on page 4 shows: “By the above processing, the projection processing imitating the photographing means of the two-dimensional image data IM2 can be added to the three-dimensional image data IM1 and converted into the projection image data IM3 that is two-dimensional image data. In the present embodiment, the three-dimensional image IM1 is chest CT image data, and the two-dimensional image data IM2 is chest image data of the same subject, and the chest CT image data is converted into chest image data by projection processing.” Makoto)

“a process of performing registration between a contour of the two-dimensional anatomical region of interest and a contour of an anatomical region of interest after the projection process.” (Examiner interprets “contour” as the region of interest.
See page 4, paragraph 4: “The registration unit 4 performs registration between the two-dimensional image data IM2 output from the two-dimensional image input unit 2 and the two-dimensional image data IM3 output from the projection processing unit 3, and results in an image after registration.” IM2 contains the 2D anatomical region of interest and IM3 contains an anatomical region of interest after the projection (it is output). See also page 4, paragraph 8: “The accuracy evaluation unit 8 evaluates the alignment result. In the present embodiment, the image similarity is calculated using the density of the chest rib region, and if it is equal to or less than a certain threshold T, it is determined that sufficient accuracy is not obtained. The position of the projection plane P is finely adjusted, and the projection process is performed again.” See also page 4, paragraphs 11-12 (continuing onto the following page): “The composite image IM4 is obtained by converting the structure data O having position information into a position on the two-dimensional image data IM3, and simply superimposing it on IM2′ aligned with IM3 to output the three-dimensional image. The notable structure O extracted in advance by IM1 is mapped and displayed on the two-dimensional image IM2. Here, the structure data O may be colored and displayed for easy viewing, or the transmittance may be set high so that an object behind the structure can be visually recognized. Priorities may be given to the target of interest and displayed in different colors.” A superimposed contour is displayed.
Makoto)

As per claim 12, Makoto teaches “The disease label creation device according to claim 1, wherein, in the registration process, the registration is performed by adjusting a solution space in the registration between the simple X-ray image and the CT image forming the pair associated with a patient, depending on the patient.” (See page 4, paragraph 8; it forms part of the registration: “The accuracy evaluation unit 8 evaluates the alignment result. In the present embodiment, the image similarity is calculated using the density of the chest rib region, and if it is equal to or less than a certain threshold T, it is determined that sufficient accuracy is not obtained. The position of the projection plane P is finely adjusted, and the projection process is performed again. The threshold T used for evaluation is obtained experimentally. In addition, a threshold value T may be determined by a doctor who is a user while visually confirming the alignment result.” Examiner interprets “solution space” as any space (pixel/region/image) that is part of the working solution used for the result. The patient is the same, as seen in page 4, paragraphs 2-3, and page 9, paragraph 2: “the three-dimensional image data and the first two-dimensional image data are image data of the same patient taken at different time points.” The images therefore form a pair. Makoto)

As per claim 17, Makoto teaches “a non-transitory, computer-readable tangible recording medium on which a program for causing, when read by a computer, the computer to execute the disease label creation method according to claim 16 is recorded.” (See page 7, paragraph 10: “Recording media for supplying the program include the following media. For example, flexible disk, hard disk, optical disk, magneto-optical disk, MO, CD-ROM, CD-R, CD-RW, magnetic tape, nonvolatile memory card, ROM, DVD (DVD-ROM, DVD-R), etc.” Makoto)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 6, 7, 8, 10, 18, 19, 21 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Makoto in view of Masubuchi et al. (US Pub. No. 2020/0160981 A1).

As per claim 6, Makoto already teaches “the disease label creation device according to claim 1, wherein the first processor is configured to:…” and “second disease label”; however, Makoto does not teach “execute a first reliability calculation process of calculating a first reliability for the second disease label.” Masubuchi teaches “execute a first reliability calculation process of calculating a first reliability for the… disease label.” (Paragraphs 49-55 show the process of setting a reliability for each label: “[0048]… The reliability setting function 144 can acquire awareness information of a training data creator by receiving a reply to the inquiry of the image S2.
When a plurality of labels has been set to medical images, the above-described inquiry may be performed for each label or collectively performed for each medical image.” “[0050] In the training data 156 with reliability, information of items such as a “medical image,” “label information,” “operation situations” and “creator information” is associated with a “training data ID” that is identification information of training data.” See also paragraph 87: “In the example of FIG. 10, the acquisition function 242 acquires training data 252 with reliability from each medical institution terminal 100 (step S150). Next, the weighting function 244 sets a weight score to a label set to a medical image on the basis of reliability information of the acquired training data with reliability to generate weighted training data (step S152).” See also Figs. 1-4 and 10-11. Masubuchi)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Makoto with the teachings of Masubuchi to calculate a reliability of the second disease label. The modification would have been motivated by the desire to confidently choose correct/reliable information for training with weights; it is therefore an improvement, as suggested by Masubuchi. (See paragraphs 51-52: “[0051] The ‘operation situations’ include, for example, information associated with items such as ‘time required for creation,’ ‘confidence degree,’ ‘freshness degree’ and ‘quality.’ The ‘time required for creation’ is, for example, an index indicating a degree of awareness with respect to deficiency and excess of a time required for creation associated with choices of reply information for the first inquiry information of the image S2. The reliability setting function 144 sets a confidence degree based on a reply result for the first inquiry information.
” “[0052] The ‘confidence degree’ is a quality of subjective training data of a training data creator (reliability of trusting that the training data is correct)…” See also paragraphs 64-69: “Specifically, the weighting function 244 sets weight scores such that reliability increases as a creation time of a medical image or training data acquired through the acquisition function 242 becomes later. Accordingly, training data that is correct at this point in time can be acquired with high efficiency and training data with stabilized quality is acquired.” Masubuchi)

As per claim 7, Makoto in view of Masubuchi already teaches “the disease label creation device according to claim 6, wherein, in the first reliability calculation process,” and “second disease label”; however, Makoto also teaches “a visibility of a second disease region corresponding to the second disease label with respect to a normal region of the simple X-ray image is calculated” (See page 4, paragraphs 8-9; they show a visibility of the alignment which is used to present the mapped structure data, as seen in paragraphs 10-12: “The composite image IM4 is obtained by converting the structure data O having position information into a position on the two-dimensional image data IM3, and simply superimposing it on IM2′ aligned with IM3 to output the three-dimensional image. The notable structure O extracted in advance by IM1 is mapped and displayed on the two-dimensional image IM2. Here, the structure data O may be colored and displayed for easy viewing, or the transmittance may be set high so that an object behind the structure can be visually recognized. Priorities may be given to the target of interest and displayed in different colors. This synthesis method may be any method as long as it effectively presents information required by the doctor.” The region includes the processes seen on page 3, paragraphs 8-12.
See also page 4, paragraph 8: “In the present embodiment, the image similarity is calculated using the density of the chest rib region, and if it is equal to or less than a certain threshold T…” Examiner interprets “visibility” as any process utilizing pixel values from the images. Examiner also interprets “second disease region” as the region presented by the position information of the mapped structure O. Makoto)

“using at least one of statistics of pixel values of a normal region and a first disease region of the CT image corresponding to the first disease label or a shape feature of the first disease region of the CT image,” (See page 3, paragraphs 9-12: “The imaging conditions of the input 2D image data IM2 are acquired (step S401). Here, the imaging conditions are positional relationship information of the subject, the X-ray tube, and the sensor for obtaining the orientation and imaging region of the subject in IM2. In the coordinate system of the three-dimensional image data IM1 input using this information, a plane P is constructed with the focal point S (Xs, Ys, Zs) at the X-ray tube position and the projection plane at the sensor position. Here, the plane P has pixel information reflecting the number of pixels and the pixel pitch of the original sensor, and the pixel of interest (Xp, Yp, Zp) on the plane P is in one-to-one correspondence with a pixel of the original sensor.” The process uses information from each region of IM1, IM2 and IM3. See also page 4, paragraphs 1-8: “In the present embodiment, alignment of the thoracic bone region with little deformation even in the time-lapse image is performed, and a rigid registration is used as a technique.” Makoto)

“and the first reliability is calculated from the calculated visibility.” (Masubuchi already teaches reliability, and it would have been obvious to implement the visibility with reliability at least for the same reason presented in the obviousness rationale. See also paragraph 35.
Masubuchi)

As per claim 8, Makoto in view of Masubuchi already teaches “the disease label creation device according to claim 6, wherein, in the information acquisition process, information of an anatomical region in the CT image is acquired, and in the first reliability calculation process,” and Masubuchi already teaches “the first reliability is calculated from the calculated visibility” (See paragraphs 35 and 48: “The first inquiry information displays choices of ‘more than sufficient,’ ‘sufficient,’ ‘slightly insufficient’ and ‘insufficient’ and the second inquiry information displays choices of ‘confident,’ ‘normal’ and ‘not confident.’ The reliability setting function 144 receives one of those choices from an operator. The reliability setting function 144 can acquire awareness information of a training data creator by receiving a reply to the inquiry of the image S2. When a plurality of labels has been set to medical images, the above-described inquiry may be performed for each label or collectively performed for each medical image.” and paragraph 52: “[0052] The ‘confidence degree’ is a quality of subjective training data of a training data creator (reliability of trusting that the training data is correct). The ‘confidence degree’ is set based on a reply result for the second inquiry information of the image S2.
” Examiner interprets visibility as a quality measure of the image according to a label (which shows the region area). Masubuchi) However, Makoto also teaches “a visibility of a second disease region corresponding to the second disease label with respect to a normal region of the simple X-ray image is calculated on the basis of superimposition of the anatomical region and a first disease region of the CT image corresponding to the first disease label in a projection direction,” (See page 4, paragraphs 1-11; paragraph 11 shows a visibility of the second region that corresponds to the mapped structure (label) and is according to the X-ray image: “The composite image IM4 is obtained by converting the structure data O having position information into a position on the two-dimensional image data IM3, and simply superimposing it on IM2′ aligned with IM3 to output the three-dimensional image. The notable structure O extracted in advance by IM1 is mapped and displayed on the two-dimensional image IM2. Here, the structure data O may be colored and displayed for easy viewing, or the transmittance may be set high so that an object behind the structure can be visually recognized. Priorities may be given to the target of interest and displayed in different colors. This synthesis method may be any method as long as it effectively presents information required by the doctor.” See also page 6, paragraphs 2-3: “However, using the fact that this specific structure data is extracted from the 3D image data as information about the anatomical structure and the diagnosis result, further enhancement processing is added to the composite image to emphasize the specific structure. It may be output as an image.” Makoto)
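To make the mechanics disputed across claims 1, 7 and 8 concrete, the pipeline they describe (projecting the CT volume, flattening the 3-D disease label into a 2-D label, and scoring the disease region's visibility from pixel statistics against a normal region) can be sketched as below. This is a generic illustration under an assumed parallel-projection geometry and an invented contrast-style visibility score; it is not the applicant's or Makoto's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
ct = rng.normal(1.0, 0.05, size=(16, 16, 16))   # synthetic CT volume
label_3d = np.zeros(ct.shape, dtype=bool)
label_3d[6:10, 6:10, 6:10] = True               # 3-D "first disease label"
ct[label_3d] += 2.0                             # lesion denser than background

def pseudo_xray(vol: np.ndarray, axis: int = 0) -> np.ndarray:
    """Parallel-projection stand-in for a DRR: integrate along the ray axis."""
    return vol.sum(axis=axis)

def project_label(lbl: np.ndarray, axis: int = 0) -> np.ndarray:
    """3-D label -> 2-D 'second disease label': a pixel is diseased if any
    voxel along its ray is labeled."""
    return lbl.any(axis=axis)

def visibility(img: np.ndarray, mask: np.ndarray) -> float:
    """Contrast of the disease region against the normal region, scaled by
    the normal region's spread (one possible pixel-statistics score)."""
    disease, normal = img[mask], img[~mask]
    return float(abs(disease.mean() - normal.mean()) / (normal.std() + 1e-9))

img = pseudo_xray(ct)           # (16, 16) pseudo X-ray
mask = project_label(label_3d)  # (16, 16) boolean disease mask
score = visibility(img, mask)   # large when the lesion stands out
```

A real system would use perspective ray-casting with the imaging geometry recovered during registration; the sum/any projections above are the simplest stand-ins that preserve the 3-D-to-2-D label correspondence at issue.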
As per claim 10, Makoto in view of Masubuchi already teaches “the disease label creation device according to claim 6, wherein the first processor is configured to: calculate a degree of success of the result of the registration” (See page 4 paragraphs 8-9 “ The accuracy evaluation unit 8 evaluates the alignment result. In the present embodiment, the image similarity is calculated using the density of the chest rib region, and if it is equal to or less than a certain threshold T, it is determined that sufficient accuracy is not obtained…” “By repeating this series of processes in which the output of the alignment unit 4 is fed back to the projection processing unit 3 via the accuracy evaluation unit 8, the conversion process from the three-dimensional image data IM1 to the two-dimensional image data IM3 is performed. Increases accuracy” Makoto) , and in the first reliability calculation process, the first reliability is calculated on the basis of the degree of success.” (See paragraphs 49-55 and 87. Para 48 shows “The first inquiry information displays choices of “more than sufficient,” “sufficient,” “slightly insufficient” and “insufficient” and the second inquiry information displays choices of “confident,” “normal” and “not confident.” The reliability setting function 144 receives one of those choices from an operator.” 51 shows “The reliability setting function 144 sets a confidence degree based on a reply result for the first inquiry information.” See also figs 1-4 and 10-11. The reliability is based on a degree of success. 
) As per claim 18, Makoto already teaches “a simple X-ray image and the second disease label created by the disease label creation device according to claim 1” , however Makoto does not teach “A learning device comprising: a second processor, wherein the second processor is configured to: execute a learning process of training a disease detection model, using first training data consisting of a simple x-ray image and the… disease label… and converging a first error between an output of the disease detection model and the… disease label” Masubuchi teaches “A learning device comprising: a second processor, wherein the second processor is configured to: (See paragraph 33.) execute a learning process of training a disease detection model, using first training data consisting of a simple x-ray image and the… disease label… (See paragraphs 20, 27, 35 “[0035] For example, the training data creation function 142 may generate training data with respect to a medical image on the basis of an input operation… the training data creation function 142 displays each medical image or every related medical image included in the medical image DB 152 on the display 130 and receives set of labels through the input interface 120. A label includes, for example, at least one indication of the presence or absence of a lesion for a medical image, identification of a lesion type, and designation of a specific region such as a region of interest (ROI). 
In addition, a label may include identification of a portion of a test object (e.g., a human body) with respect to a medical image, presence or absence of a disease of the portion, designation of a diseased region (e.g., fibrillogenesis of the lung, a solitary pulmonary nodule, or a region of brain tumor) and the like, a numerical value of seriousness of a disease (e.g., a fatty liver level), and the like.” and 126-127 “[0127] According to at least one of the above-described embodiments, it is possible to clarify the quality of training data by including the training data creation function 142 for acquiring training data of a medical image and the reliability setting function 144 for setting, to the training data acquired through the training data creation function 142, reliability information based on one or both of a creation situation of the training data and information on a creator who has created the training data in the medical image processing apparatus. In addition, according to the first embodiment, it is possible to acquire a more appropriate learned model and learning results by adding the quality of training data thereto.”) (See also paragraph 23 “Medical images include computed tomography (CT) magnetic resonance (MR) images, mammography images, endoscopy images, X-ray images, etc…” Masubuchi) converging a first error between an output of the disease detection model and the… second disease label” (See paragraph 39 “[0039]… For example, the “technical score” may be a technical level based on a cumulative number of creations of training data in the past or a technical level derived on the basis of a difference between a label set to a medical image having a correct answer label in advance without teaching the correct answer and the correct answer level. A label difference is, for example, an error of a label set to a medical image and an ROI position difference, presence or absence of a disease, or a difference between disease details. 
” Therefore, a label difference is within the BRI (broadest reasonable interpretation) of output of the disease detection model (a label showing the disease) and second disease label (the other label used in the difference). See also fig. 8 and paragraphs 76 and 79. See also the loss (error) of paragraphs 79-81, which can also be interpreted as the error. Masubuchi) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Makoto with the teachings of Masubuchi to utilize a learning and training process of the disease along with calculating an error. The modification would have been motivated by the desire to improve the quality and efficiency of data selection/creation; therefore it is an improvement, as suggested by Masubuchi (See paragraphs 3, 106 “[0003]… However, training data items in the medical field are important because they are few in number and are subject to many constraints such as a small number of patients and a necessity of training data to be created under the supervision of doctors. Further, such training data is created at a plurality of sites and by a plurality of doctors in many cases, and the quality of training data collected from a plurality of sources (medical institution terminals, and the like) may not be consistent.” See also paragraphs 70 and 81 “[0070] With respect to the “quality,” variations in the quality of training data are expected to occur for training data due to the presence or absence of artifacts, SNR difference, and different image conditions. 
Accordingly, the weighting function 244 can select learning data in optimal image conditions required for learning with high efficiency by setting a weight score to image quality.” [0081]… it is possible to improve the quality of the learned model 258 by generating the learned model 258 on the basis of the aforementioned weight scores.” Masubuchi) As per claim 19, Makoto in view of Masubuchi already teaches “using second training data consisting of a simple X-ray image, the second disease label created by the disease label creation device according to claim 6, and the first reliability”, however Masubuchi also teaches “A learning device comprising: a second processor, wherein the second processor is configured to: in a case in which a learning process of training a disease detection model,…. and converging a first error between an output of the disease detection model and the second disease label is performed, (See paragraphs 20, 27, 35 “[0035] For example, the training data creation function 142 may generate training data with respect to a medical image on the basis of an input operation… and receives set of labels through the input interface 120… a label may include identification of a portion of a test object (e.g., a human body) with respect to a medical image, presence or absence of a disease of the portion, designation of a diseased region (e.g., fibrillogenesis of the lung, a solitary pulmonary nodule, or a region of brain tumor) and the like, a numerical value of seriousness of a disease (e.g., a fatty liver level), and the like.” and 126-127 “[0127] According to at least one of the above-described embodiments, it is possible to clarify the quality of training data by including the training data creation function 142 for acquiring training data of a medical image and the reliability setting function 144 for setting, to the training data acquired through the training data creation function 142, reliability information based on one or both of a creation 
situation of the training data and information on a creator who has created the training data in the medical image processing apparatus. In addition, according to the first embodiment, it is possible to acquire a more appropriate learned model and learning results by adding the quality of training data thereto.”) (See also paragraph 23 “Medical images include computed tomography (CT) magnetic resonance (MR) images, mammography images, endoscopy images, X-ray images, etc…” Masubuchi). See also paragraph 39 “[0039]… For example, the “technical score” may be a technical level based on a cumulative number of creations of training data in the past or a technical level derived on the basis of a difference between a label set to a medical image having a correct answer label in advance without teaching the correct answer and the correct answer level. A label difference is, for example, an error of a label set to a medical image and an ROI position difference, presence or absence of a disease, or a difference between disease details.” Therefore, a label difference is within the BRI (broadest reasonable interpretation) of output of the disease detection model (a label showing the disease) and second disease label (the other label used in the difference). See also fig. 8 and paragraphs 76 and 79. See also the loss (error) of paragraphs 79-81, which can also be interpreted as an error. Masubuchi) execute the learning process of adjusting the first error according to the first reliability to train the disease detection model.” (See paragraphs 79, 81, 107, and 109-116 “the adjustment function 248 determines whether adjustment of correction of the learned model is required (step S350)”. Paragraph 79 shows “[0079] The learning function 246 includes, for example, a deep neural network (DNN) using a convolutional neural network (CNN). 
For example, the learning function 246 causes the DNN to learn a neural network through a machine learning algorithm such as error back propagation.” ) It would have also been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Makoto with the teachings of Masubuchi to utilize a learning and training process of the disease along with calculating an error. The modification would have been motivated by the desire to improve the quality and efficiency of data selection/creation, therefore it is an improvement, as suggested by Masubuchi (See paragraphs 3, 106 “[0003]… However, training data items in the medical field are important because they are few in number and are subject to many constraints such as a small number of patients and a necessity of training data to be created under the supervision of doctors. Further, such training data is created at a plurality of sites and by a plurality of doctors in many cases, and the quality of training data collected from a plurality of sources (medical institution terminals, and the like) may not be consistent.” See also paragraph 70 and 81 “[0070] With respect to the “quality,” variations in the quality of training data are expected to occur for training data due to the presence or absence of artifacts, SNR difference, and different image conditions. 
Accordingly, the weighting function 244 can select learning data in optimal image conditions required for learning with high efficiency by setting a weight score to image quality.” [0081]… it is possible to improve the quality of the learned model 258 by generating the learned model 258 on the basis of the aforementioned weight scores.” Masubuchi) As per claim 21, Makoto in view of Masubuchi already teaches “the learning device according to claim 19, wherein the second processor is configured to: execute a learning process of directing the disease detection model to output a disease detection result indicating a disease region included in the simple X-ray image and a second reliability of the disease detection result. (See paragraphs 35-38, the data creation unit which is used for learning processes obtains labels which contain the disease region from the medical images (x-ray images paragraph 23-25). They are displayed as seen in fig. 2, therefore it is output. The second reliability is output as seen on fig. 5, there are several reliabilities shown. See also paragraphs 80, 78 and 88 “[0088] Next, the learning function 246 generates a learned model 258 which outputs label information for an input of a medical image using the weighted training data (step S154) and stores the generated learned model 258 in the memory 250 (step S156).” See also paragraphs 48, 51, 52, 55, “[0055] In addition, the reliability setting function 144 outputs training data with reliability stored in the memory 150 to the information processing server 200 through the network NW.” The reliability shown in Masubuchi at least contains a second reliability. 
Masubuchi) As per claim 24, Makoto in view of Masubuchi already teaches “A disease detection model trained by the learning device according to claim 18,”, however Masubuchi also teaches “wherein the disease detection model receives any simple X-ray image as an input image, detects a disease label from the input simple X-ray image, and outputs the disease label.” (See fig. 2, an inputted x-ray image shows the detected disease label and outputs it in the display. See also paragraph 35 “[0035] For example, the training data creation function 142 may generate training data with respect to a medical image on the basis of an input operation received through the input interface 120 in a state in which an input screen including the medical image is displayed on the display 130. Specifically, the training data creation function 142 displays each medical image or every related medical image included in the medical image DB 152 on the display 130 and receives set of labels through the input interface 120. A label includes, for example, at least one indication of the presence or absence of a lesion for a medical image, identification of a lesion type, and designation of a specific region such as a region of interest (ROI). In addition, a label may include identification of a portion of a test object (e.g., a human body) with respect to a medical image, presence or absence of a disease of the portion, designation of a diseased region (e.g., fibrillogenesis of the lung, a solitary pulmonary nodule, or a region of brain tumor) and the like…” See also paragraph 88 “[0088] Next, the learning function 246 generates a learned model 258 which outputs label information for an input of a medical image using the weighted training data (step S154)”. See also paragraph 23-24. Masubuchi) Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Makoto in view of Baka (Baka, Nora, et al. "Oriented Gaussian mixture models for nonrigid 2D/3D coronary artery registration." 
IEEE transactions on medical imaging 33.5 (2014): 1023-1034.) . As per claim 13, Makoto already teaches “The disease label creation device according to claim 12” and ”…. corresponding to the simple X-ray image and the CT image forming the pair; and a process of performing non-rigid registration between the simple X-ray image and the CT image” (page 4 paragraphs 5-7), however Makoto does not teach “further comprising: a database of a statistical deformation model for each patient feature information item, wherein the registration process includes: a process of selecting a corresponding statistical deformation model from the database on the basis of patient feature information of the patient… performing non-rigid registration between the simple X-ray image and the CT image using the selected statistical deformation model.” Baka teaches “further comprising: a database of a statistical deformation model for each patient feature information item, wherein the registration process includes: a process of selecting a corresponding statistical deformation model from the database on the basis of patient feature information of the patient… performing non-rigid registration between the simple X-ray image and the CT image using the selected statistical deformation model.” (See abstract “The oriented GMM registration achieved a median accuracy of 1.06 mm, with a convergence rate of 81% for nonrigid vessel centerline registration on 12 patient datasets, using a statistical shape model. The method thereby outperformed the iterative closest point algorithm, the GMM registration without orientation, and two recently published methods on 2D/3D coronary artery registration.” See also page 5 section III experimental setup paragraph 1-2 “The test datasets were collected from PCI patients for which both preinterventional 4D computed tomography angiography (CTA) and interventional X-ray angiography (XA) were available.…. 
The statistical shape model for the coronary arteries at a given cardiac phase was built from 4D CTA datasets.” Examiner interprets “feature information item” as any information regarding a feature of the patients, e.g., “coronary arteries”. See also page 8 subsection D. Nonrigid registration using a SSM. “In this work, we use statistical shape models for parameterizing nonrigid registration. Such deformation parameterization is more restrictive than B-spline or thin-plate-spline (TPS) based registrations, which is advantageous in the ill-defined problem of mono-plane 2D/3D registration”. See also page 5 subsection C, page 6 column 1 paragraphs 1-3 and column 2 paragraphs 3-4 and page 8 subsection B. Baka) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Makoto with the teachings of Baka to use a statistical deformation model when using non-rigid registration. The modification would have been motivated by the desire to have better performance; therefore it is an improvement, as suggested by Baka (See page 8 column 1 paragraphs 3-4 “Fig. 6 shows the SSM based simulation experiment results. The best median results were achieved by OGMM_.5, DT, and G1, being 0.48 mm, 0.50 mm, and 0.52 mm, respectively. OGMM_.5 and DT were not significantly different. ICP was significantly worse than all others (all )… Fig. 10 shows the results on all frames of all patients for the SSM matching on real data. Unlike the simulation experiments, on real data OGMM_.5 and OGMM_.3 performed significantly different ( ). OGMM_.3 performed best with 81% convergence, and median accuracy of 1.06 mm. It thereby outperformed all other methods.” See also abstract “The oriented GMM registration achieved a median accuracy of 1.06 mm, with a convergence rate of 81% for nonrigid vessel centerline registration on 12 patient datasets, using a statistical shape model. 
The method thereby outperformed the iterative closest point algorithm, the GMM registration without orientation, and two recently published methods on 2D/3D coronary artery registration” Baka) Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Makoto in view of Masubuchi and further in view of Sugimoto et al. (US Pub. No. 2021/0090261 A1). As per claim 22, Makoto in view of Masubuchi already teaches “the learning device according to claim 21, wherein the second processor is configured to: adjust the first error of the disease region, of which the second reliability…”, however Makoto in view of Masubuchi does not teach “reliability output from the disease detection model is low and which is false positive, to a large value and adjust the first error of the disease region, of which the… reliability is low and which is false negative, to a small value.” Sugimoto teaches “reliability output from the disease detection model is low and which is false positive, to a large value and adjust the first error of the disease region, of which the… reliability is low and which is false negative, to a small value.” (See paragraphs 44-45. [0044] The image processing system 100 displays the extraction result (the region segmentation result) acquired in relation to the affected-area region 202 using display means of the imaging apparatus 120. A user confirms whether the displayed region segmentation result is too large or too small. When the result is too large or too small, the user performs an operation to set a reference value of the confidence values of the respective pixels, or in other words a reference value for extracting a region having a confidence with at least a specific numerical value. The image processing system 100 then extracts, or performs region segmentation on, the affected-area region 202 again on the basis of the set confidence reference value and the confidence values of the respective pixels. 
Sugimoto) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Makoto with the teachings of Masubuchi and Sugimoto to adjust the value as needed. The modification would have been motivated by the desire to correctly extract the affected region; therefore it is an improvement, as suggested by Sugimoto (See paragraph 45 “[0045] For example, when the user sets the confidence reference value at 90, the user may wish to include a slightly wider range than that of the extracted affected-area region 202 as the pressure ulcer. In this case, the user can make an adjustment by lowering the confidence to 85 or 80 so that the affected-area region 202 is extracted in the desired range. The user makes repeated adjustments interactively while visually checking the image until s/he determines that the appropriate affected-area region 202 has been extracted, and in so doing, an appropriately corrected region segmentation result can be acquired.” Sugimoto) Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Makoto in view of Aoyagi (US Pub. No. 2020/0380680 A1). As per claim 14, Makoto already teaches “the disease label creation device according to claim 1, wherein, in the information acquisition process,”, however Makoto does not teach “an image-level third disease label of the CT image is acquired, and the first processor is configured to: give the second disease label and the third disease label to the simple X-ray image.” Along with the CT image and X-ray image. Aoyagi teaches “an image-level third disease label… is acquired, and the first processor is configured to: give the second disease label and the third disease label to the… image” (See paragraph 45 “Here, the “virtual projection image” is, for example, an image simulating a 2D general X-ray image.” See fig. 4, multiple labels (name and region) are given to the 2D X-ray image. See also paragraphs 46-52. 
Para 51 shows “[0051] A labeled virtual projection image is illustrated on the upper right side of FIG. 4. In this labeling, the size of the heart observed in the virtual projection image is larger than the normal size, and thus, the label “cardiomegalia” (i.e., cardiac hypertrophy) is attached as the diagnosed disease name. In addition, a label (white broken line) indicating the position of the enlarged heart is attached in the virtual projection image.” Aoyagi) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Makoto with the teachings of Aoyagi to apply multiple labels to an image. The modification would have been motivated by the desire to associate correct data; therefore it is an improvement, as suggested by Aoyagi (See paragraph 49 “[0049] In this specification, the verb of “to label” is used for “to associate correct information/data (i.e., ground truth) with the data for training” such as a training image. Further, the noun of “label”, per se, may refer to the correct information/data (i.e., the ground truth).”) Allowable Subject Matter Claims 5, 9, 11, 15, 20, and 23 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN J MENDEZ MUNIZ, whose telephone number is (703) 756-5672. The examiner can normally be reached M-F, 8AM - 5PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer, can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DYLAN JOHN MENDEZ MUNIZ/Examiner, Art Unit 2675 /ANDREW M MOYER/Supervisory Patent Examiner, Art Unit 2675

Prosecution Timeline

Dec 25, 2023
Application Filed
Feb 20, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597231
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12573053
Image Shadow Detection Method and System, and Image Segmentation Device and Readable Storage Medium
2y 5m to grant Granted Mar 10, 2026
Patent 12573040
IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12567127
MEDICAL USE IMAGE PROCESSING METHOD, MEDICAL USE IMAGE PROCESSING PROGRAM, MEDICAL USE IMAGE PROCESSING DEVICE, AND LEARNING METHOD
2y 5m to grant Granted Mar 03, 2026
Patent 12555175
METHOD FOR EMBEDDING INFORMATION IN A DECORATIVE LABEL
2y 5m to grant Granted Feb 17, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+25.0%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
