Detailed Action
1. Claims 1-14 are pending in this Application.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
3. Applicant’s response to the last Office Action filed on 07/08/2025 has been entered and made of record.
4. Claims 1, 2, 3, 8 and 12 have been amended.
Response to Arguments
5. The Applicant’s arguments filed on 11/03/2025 have been fully considered. For the Examiner's response, see the discussion below.
a). Based on the Applicant’s argument, the objection to claims 1 and 2 under 35 U.S.C. 112(f) is expressly withdrawn.
b). Applicant has amended claim 1 by adding the limitations shown below and substantially argues that the applied prior art, YAMAKAWA, US 20140112562 A1, does not teach the added limitations.
“the various types of medical information including a first type of medical information; and a second type of medical information; the display unit starts displaying the first type of medical information upon generating the first type of medical information while generating the second type of medical information, and the display unit further starts displaying the second type of medical information upon generating the second type of medical information”
The Applicant’s argument is persuasive; thus, the 35 U.S.C. 102 rejection based on YAMAKAWA is expressly withdrawn. However, after further search and consideration, a new prior art reference, US 20210401511 A1 to SEKINE et al., that teaches the added limitations has been found. Specifically, SEKINE teaches: the generation function 321 sequentially generates virtualized laparoscopic images from the varying point of sight in the varying projection direction. The virtualized laparoscopic images sequentially generated are sequentially displayed, and the virtualized laparoscopic images are displayed in conjunction with changes in the endoscopic image (see [0029], Fig. 5). Furthermore, the control function 541 causes the display 52 to display various types of medical information. Specifically, the analysis function 542 detects an event relating to an abnormality on the basis of medical information sequentially acquired by the control function 541. Therefore, when events relating to abnormalities are successively detected on the basis of the sequentially acquired medical information, pieces of association information are successively generated. The control function 541 then causes the timeline information to successively display information allowing the identification of such events (see [0050], [0084] and Fig. 5).
c). Independent claims 2 and 12 have also been amended by adding limitations similar to the limitations added to claim 1 discussed above. Thus, the response to the Applicant’s argument applied to claim 1 also applies to independent claims 2 and 12.
d). Regarding the dependent claims, no additional arguments are presented.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1-7 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over YAMAKAWA et al. (hereafter YAMAKAWA), US 20140112562 A1, pub. 04/24/2014, in view of SEKINE et al. (hereafter SEKINE), US 20210401511 A1, pub. 12/30/2021.
As to claim 1, YAMAKAWA teaches An ophthalmologic image processing system for processing an ophthalmologic image that is an image of a tissue of a subject eye, the system (Abstract, [0002], [0007], an ophthalmic analysis apparatus and program for analyzing a subject eye comprise a processor; and memory for storing computer readable instructions) comprising:
an ophthalmologic imaging device that is configured to capture the ophthalmologic image ([0166], [0122] Also, the controller may output a front image (for example, a front image acquired by a fundus camera or SLO) acquired at the same date as the tomography image as the photography image. Fundus front image acquired by a scanning laser ophthalmoscope (SLO), a fundus camera, etc. may be displayed.);
a display unit (Figs.4-7); and
a control unit that is configured to control the display unit, the control unit includes at least one processor programmed to ([0127] the instruction receiver may receive selection instructions to select at least one two-dimensional image displayed on the display unit as a fixed display image from an examiner. The controller may change and display another two-dimensional image different from a fixed image according to change instructions from the instruction receiver while fixing display of the two-dimensional image selected as the fixed display image.),
acquire the ophthalmologic image captured by the ophthalmologic imaging device ([0166], [0122], Fundus front image acquired by a scanning laser ophthalmoscope (SLO), a fundus camera, etc. may be displayed);
generate various types of medical information to be displayed on the display unit by performing a plurality of mutually different processes on the acquired ophthalmologic image (Figs. 4-11, [0216], [0231] the CPU 20 may output each two-dimensional image 105 for a specific period in time sequence. The two-dimensional image 105 is a two-dimensional image at each examination date for the specific period within the period for which analysis results are outputted on the time-series graph 150a. A two-dimensional image display region (hereinafter a display region) 101a is used as a display region for arranging and displaying plural two-dimensional images 105 for the specific period); and
control the display unit to sequentially display the various types of medical information upon generating each of the various types of medical information ([0216], When the analysis layer is set by the analysis layer selective region 260, the CPU 20 acquires layer thickness
information about the set analysis layer and creates the retina thickness map and the analysis chart based on the acquired layer thickness information. The created retina thickness map and the analysis chart are outputted to the display unit 1);
however, it is noted that YAMAKAWA does not specifically teach
“the various types of medical information including a first type of medical information; and a second type of medical information; the display unit starts displaying the first type of medical information upon generating the first type of medical information while generating the second type of medical information, and the display unit further starts displaying the second type of medical information upon generating the second type of medical information”
On the other hand, in the same field of endeavor, a medical image data processing system of SEKINE teaches the various types of medical information including a first type of medical information; and a second type of medical information; the display unit starts displaying the first type of medical information upon generating the first type of medical information while generating the second type of medical information, and the display unit further starts displaying the second type of medical information upon generating the second type of medical information (Fig. 5, [0029], [0050], [0084], the generation function 321 sequentially generates virtualized laparoscopic images from the varying point of sight in the varying projection direction. The virtualized laparoscopic images sequentially generated are sequentially displayed, and the virtualized laparoscopic images are displayed in conjunction with changes in the endoscopic image (see [0029], Fig. 5). Furthermore, the control function 541 causes the display 52 to display various types of medical information. Specifically, the analysis function 542 detects an event relating to an abnormality on the basis of medical information sequentially acquired by the control function 541. Therefore, when events relating to abnormalities are successively detected on the basis of the sequentially acquired medical information, pieces of association information are successively generated. The control function 541 then causes the timeline information to successively display information allowing the identification of such events).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the point-of-time detection taught by SEKINE into YAMAKAWA.
The suggestion/motivation for doing so would have been to allow the user of YAMAKAWA to instantaneously identify time-sensitive medical events in real time, enabling immediate clinical intervention and reducing diagnostic turnaround time, since it is known that the main advantage of point-of-time detection in medical imaging is the instantaneous, real-time identification of critical, time-sensitive events.
Claim 2 is rejected the same as claim 1 except that claim 2 is directed to a device claim. Thus, argument analogous to that presented above for claim 1 is applicable to claim 2.
As to claim 12, YAMAKAWA teaches A non-transitory, computer readable, storage medium storing an ophthalmologic image processing program for processing an ophthalmologic image that is an image of a tissue of a subject eye, the program (claim 17. A computer readable recording medium storing computer readable instructions, when executed by the processor, causing an ophthalmic analysis apparatus to function as:),
Regarding the remaining limitations of claim 12, the remaining limitations are the same as those of claim 1. Thus, argument analogous to that presented above for claim 1 is also applicable to claim 12.
As to claim 3, YAMAKAWA teaches the at least one processor is further programmed to: control the display unit to display a plurality of display frames each of which is defined for a respective one of the various types of medical information ([0245]-[0248], Figs. 9A-9B the CPU 20 displays two-dimensional images 105a set as the fixed display images mutually adjacently in the display region 101a (see FIG. 9B). Also, the CPU 20 displays other two-dimensional images 105b in time sequence in a display region different from the display region 101a of the fixed display images. As discussed above, the CPU 20 acquires layer thickness information about the set analysis layer and creates the retina thickness map);
control the display unit to sequentially display the various types of medical information by displaying each of the various types of medical information in a corresponding one of the plurality of display frames upon generating each of the various types of medical information ([0164]-[0165], In the analysis result display region 101, for example, analysis results are arranged from left to right in time sequence, and examination date information 102, image evaluation information 103, baseline information 104, a retina thickness map 110, a tomography image 120 and an analysis chart 130 are displayed).
As to claim 13, YAMAKAWA teaches, wherein the at least one processor is further programmed to: control the display unit to display a two-dimensional front view image that was captured or generated in advance on a same subject eye as the subject eye for which the ophthalmologic image is currently captured ([0155], [0164], Of course, follow-up observations of two-dimensional retina thickness information (thickness map) may be made. The acquired retina thickness information is sent to the CPU 20 and is stored in the storage unit 30. );
generate at least one of a first map indicative of a two-dimensional distribution of thickness of a specific layer and a second map indicative of a comparison of the two-dimensional distribution of thickness of the specific layer between the subject eye and a normal eye; and control the display unit to display the first map or the second map by superimposing the first map or the second map onto the two-dimensional front view image that is being displayed on the display unit after generating the first map or the second map ([0166] The retina thickness map 110 is a color map indicating two-dimensional distribution of retina thickness of the eye, and is color-coded according to layer thickness. The retina thickness map 110 includes a thickness map indicating a thickness of a retina layer, a comparison map indicating a result of comparison between a thickness of a retina layer of the eye and a thickness of a retina layer of the normal eye stored in the normal eye database, a deviation map indicating a deviation of a thickness of a retina layer of the eye from a thickness of a retina layer of the normal eye stored in the normal eye database by a standard deviation, an examination date comparison thickness difference map indicating a difference between each examination date and a thickness, etc.).
Claim 7 is rejected the same as claim 13 except that claim 7 is directed to a device claim. Thus, argument analogous to that presented above for claim 13 is applicable to claim 7.
As to claim 14, YAMAKAWA teaches A non-transitory, computer readable, storage medium storing an ophthalmologic image processing program for processing an ophthalmologic image that is an image of a tissue of a subject eye, the program (claim 17. A computer readable recording medium storing computer readable instructions, when executed by the processor, causing an ophthalmic analysis apparatus to function as:),
Regarding the remaining limitations of claim 14, the remaining limitations are the same as those of claim 12. Thus, argument analogous to that presented above for claim 12 is also applicable to claim 14.
As to claim 4, YAMAKAWA teaches the at least one processor is further programmed to: control the display unit to add an in-progress display image or an explanatory display image to each of the plurality of display frames until a corresponding one of the various types of medical information is generated, the in-progress display image indicates that the corresponding one of the various types of medical information is being currently generated, and the explanatory display image indicates an explanation on the corresponding one of the various types of medical information (Fig. 4, [0207], [0208], For example, plural tabs are displayed in the selective region 230 and, more concretely, items are classified by combinations of long term-glaucoma, long term-macular disease, long term-custom, short term-glaucoma, short term-macular disease, retina disease including custom display, long term-short term, singly. When the examiner selects a desired item, a tree corresponding to the selected item is displayed on the scan pattern setting region.); the explanatory display image indicates an explanation on the corresponding one of the various types of medical information (Fig. 4, [0207], [0208], the item about a long term such as long term-glaucoma, long term-macular disease, or long term-custom is an item for making long-term follow-up observations. When this item is selected, analysis results acquired at different dates are simultaneously displayed in the analysis result display region and also, a trend graph created based on these analysis results is displayed on the display unit 1.)
As to claim 5, YAMAKAWA teaches the at least one processor is further programmed to: control the display unit to display at least one of various types of past medical information in at least one of the plurality of display frames until a corresponding one of the various types of medical information is generated, and the various types of past medical information have been generated on a same subject eye as the subject eye for which the ophthalmologic image is currently captured(Fig.4 [0209] the item about a short term such as short term-glaucoma or short term-macular disease is an item for making short-term follow-up observations. When this item is selected, two analysis results acquired at different dates are simultaneously displayed in the analysis result display region. A retina thickness map, an analysis chart, a tomography image, etc. in the two analysis results are displayed relatively larger than the case of selecting the long-term item).
As to claim 6, YAMAKAWA teaches the at least one processor is further programmed to: specify at least one of a plurality of diseases as a specific disease, and control the display unit to display at least one of various types of other subject medical information in at least one of the plurality of display frames until a corresponding one of the various types of medical information is generated (Fig. 4, [0207]-[0209]; this limitation is discussed in claims 4 and 5 above), and the various types of other subject medical information are medical information on another subject suffering from the specific disease ([0207] For example, plural tabs are displayed in the selective region 230 and, more concretely, items are classified by combinations of long term-glaucoma, long term-macular disease, long term-custom, short term-glaucoma, short term-macular disease, retina disease including custom display, long term-short term, singly).
As to claim 11, YAMAKAWA teaches the at least one processor is further programmed to, if the plurality of mutually different processes include an other-medical information using process during which at least one of the various types of medical information is used ([0051] As the analysis results, for example, a thickness of the eye (for example, a thickness of at least one of a cornea, a crystalline lens, a retina layer and a choroid layer), a curvature of the eye (for example, a curvature of at least one of a cornea, anterior and posterior surfaces of a crystalline lens and a retina layer), etc. are acquired):
generate the at least one of the various types of medical information to be used in the other-medical information using process prior to performing the other-medical information using process ([0206] -[0207] In the selective region 230, a result outputted to the analysis result display region is distinguished by a disease and a follow-up observation period, and the disease and the period can be selected. For example, plural tabs are displayed in the selective region 230 and, more concretely, items are classified by combinations of long term-glaucoma, long term-macular disease, long term-custom, short term-glaucoma, short term-macular disease, retina disease including custom display,
long term-short term, singly. When the examiner selects a desired item, a tree corresponding to the selected item is displayed on the scan pattern setting region.).
7. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over YAMAKAWA, US 20140112562 A1, in view of SEKINE, US 20210401511 A1, further in view of HAYASHI TAKESHI (hereafter HAYASHI), JP 2018033717 A, pub. 03/08/2018.
As to claim 8, YAMAKAWA teaches the at least one processor is further programmed to: receive an input of an instruction for specifying a position in a three-dimensional tomographic image that is the ophthalmologic image (Fig. 4, [0107] The analysis chart is calculated from OCT data acquired by a two-dimensional scan (for example, a raster scan) on the eye. Of course, the analysis chart may be calculated based on each two-dimensional OCT data acquired by a multi-scan such as a radial scan);
however, it is noted that modified YAMAKAWA does not specifically teach “generate a two-dimensional tomographic image at the specified position from the three-dimensional tomographic image as one of the various types of medical information upon receiving the input of the instruction for specifying the position”
On the other hand, HAYASHI teaches generate a two-dimensional tomographic image at the specified position from the three-dimensional tomographic image (page 15, par. 7, In the ophthalmologic apparatus according to the embodiment, the image forming unit forms one or more two-dimensional tomographic images orthogonal to the front image based on the three-dimensional data, and the display control unit may display the one or more two-dimensional tomographic images on a display means).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of producing a 2D image from a 3D image taught by HAYASHI into YAMAKAWA.
The suggestion/motivation for doing so would have been to allow the user of YAMAKAWA to create a less complex and easily interpretable medical image. Further, a 2D image is faster to process compared to a 3D image.
8. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over YAMAKAWA, US 20140112562 A1, in view of SEKINE, US 20210401511 A1, further in view of OKAZAKI YOSHIRO (hereafter OKAZAKI), JP 2009125432 A, pub. 06/11/2009.
As to claim 9, YAMAKAWA teaches generate at least one of the various types of medical information by performing at least one of the plurality of processes on an extracted image that is formed of the extracted pixels or pixel rows ([0249], the image analysis and image extraction analysis are carried out based on pixel data);
generate at least one of the various types of medical information by performing at least one of the plurality of processes (Figs. 5-11, [0216], [0231]; this limitation is discussed in claim 1 above);
however, it is noted that modified YAMAKAWA does not specifically teach the limitation of “wherein the at least one processor is further programmed to: partially extract a plurality of pixels or pixel rows in accordance with a predetermined rule from entire pixels or pixel rows that form the ophthalmologic image; generate at least one the various types of medical information by performing at least one of the plurality of processes on an extracted image that is formed of the extracted pixels or pixel rows; and control the display unit to display the at least one of the various types of medical information generated from the extracted image upon generating each of the at least one of the various types of medical information from the extracted image.”
On the other hand, the combination of YAMAKAWA and OKAZAKI teaches wherein the at least one processor is further programmed to: partially extract a plurality of pixels or pixel rows in accordance with a predetermined rule from entire pixels or pixel rows that form the ophthalmologic image (OKAZAKI: the ophthalmologic image processing apparatus according to any one of claims 4 to 6, wherein the color image of the eye to be examined is a color fundus image, and the extraction means includes the B component. Pixel data of pixels corresponding to the image sensor of the color image is extracted from the image data of the color fundus image, and the analyzing means analyzes the extracted pixel data of the B component, and the optic nerve fiber layer in the fundus of the eye to be examined is analyzed. Thickness distribution information is obtained as the predetermined analysis result; see claim 7 and page 3, 4th-6th paragraphs);
generate at least one of the various types of medical information by performing at least one of the plurality of processes on an extracted image that is formed of the extracted pixels or pixel rows (OKAZAKI: see page 3, 4th-6th paragraphs; Figs. 4-11, [0216], [0231]); and
control the display unit to display the at least one of the various types of medical information generated from the extracted image upon generating each of the at least one of the various types of medical information from the extracted image (YAMAKAWA: the CPU 20 displays two-dimensional images 105a set as the fixed display images mutually adjacently in the display region 101a (see FIG. 9B). Also, the CPU 20 displays other two-dimensional images 105b in time sequence in a display region different from the display region 101a of the fixed display images. As discussed above, the CPU 20 acquires layer thickness information about the set analysis layer and creates the retina thickness map. See [0245]-[0248], Figs. 9A-9B).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of extracting the blue color component (B component) from the color fundus image of the eye and analyzing the extracted pixel data of the B component taught by OKAZAKI into modified YAMAKAWA.
The suggestion/motivation for doing so would have been to allow the user of modified YAMAKAWA to determine thickness distribution information of the optic nerve fiber layer.
9. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over YAMAKAWA, US 20140112562 A1, in view of SEKINE, US 20210401511 A1, further in view of OKAZAKI, JP 2009125432 A, pub. 06/11/2009, still further in view of TAKANO et al. (hereafter TAKANO), US 20040042681 A1, pub. 03/04/2004.
As to claim 10, the combination of YAMAKAWA and OKAZAKI teaches the at least one processor is further programmed to: generate at least one of the various types of medical information (YAMAKAWA: this limitation is discussed in claims 1 and 3 above) based on both the extracted image (OKAZAKI: the pixel data of the pixel corresponding to the image sensor is extracted from the image data of the color fundus image, and the analysis means analyzes the extracted pixel data of the B component, and the thickness distribution information of the optic nerve fiber layer in the fundus of the eye to be examined is obtained as the predetermined analysis result); and
display the at least one of the various types of medical information generated based on both the extracted image and the remaining image upon generating each of the at least one of the various types of medical information (Figs. 5-11, [0216], [0231]; this limitation is discussed in claim 1 above).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of extracting the blue color component (B component) from the color fundus image of the eye and analyzing the extracted pixel data of the B component taught by OKAZAKI into modified YAMAKAWA.
The suggestion/motivation for doing so would have been to allow the user of modified YAMAKAWA to determine thickness distribution information of the optic nerve fiber layer.
However, it is noted that modified YAMAKAWA does not specifically teach “a remaining image including pixels or pixel rows that were not extracted and left behind by performing the at least one of the plurality of processes on the remaining image after performing the at least one of the plurality of processes on the extracted image”
On the other hand, TAKANO teaches a remaining image including pixels or pixel rows that were not extracted and left behind by performing the at least one of the plurality of processes on the remaining image after performing the at least one of the plurality of processes on the extracted image ([0278], digital cameras practice a processing to selectively enhance the chroma of specified colors such as red and green).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of selectively enhancing the chroma of specified colors such as red and green taught by TAKANO into modified YAMAKAWA.
The suggestion/motivation for doing so would have been to allow the user of modified YAMAKAWA to enhance the visibility of the medical image and improve image processing by selectively enhancing red and green, which are the dominant colors in a retinal image.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mekonen Bekele, whose telephone number is (469) 295-9077. The examiner can normally be reached Monday-Friday from 9:00 AM to 6:50 PM Eastern Time.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, George Eng, can be reached at (571) 272-7495. The fax phone number for the organization where the application or proceeding is assigned is 571-237-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR.
Status information for unpublished applications is available through Private PAIR only.
For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/MEKONEN T BEKELE/Primary Examiner, Art Unit 2699