Prosecution Insights
Last updated: April 19, 2026
Application No. 18/148,129

IMAGE DISPLAY METHOD, IMAGE DISPLAY DEVICE AND RECORDING MEDIUM

Status: Non-Final Office Action (§103)
Filed: Dec 29, 2022
Examiner: WANG, JIN CHENG
Art Unit: 2617
Tech Center: 2600 (Communications)
Assignee: Screen Holdings Co. Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 59% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 7m
With Interview: 69%

Examiner Intelligence

Career Allow Rate: 59% (492 granted / 832 resolved; -2.9% vs Tech Center average)
Interview Lift: +10.3% across resolved cases with an interview (moderate, roughly +10% lift)
Typical Timeline: 3y 7m average prosecution; 40 applications currently pending
Career History: 872 total applications across all art units
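
The headline figures above are straightforward ratios over the examiner's resolved cases. The sketch below shows the arithmetic; the 492/832 career totals come from this report, while the with/without-interview split is a purely hypothetical illustration (the report's actual lift figure is +10.3%).

    # Sketch of the examiner-metric arithmetic. Career totals come from this
    # report; the with/without-interview split is hypothetical and is only
    # here to show how an interview lift would be computed.
    granted, resolved = 492, 832
    career_allow_rate = granted / resolved      # ~0.591 -> "59%"

    granted_iv, resolved_iv = 25, 36            # hypothetical interview counts
    granted_no_iv = granted - granted_iv
    resolved_no_iv = resolved - resolved_iv

    rate_iv = granted_iv / resolved_iv          # allow rate with interview
    rate_no_iv = granted_no_iv / resolved_no_iv # allow rate without interview
    # The report lists +10.3%; this illustrative split yields about +10.8%.
    interview_lift = rate_iv - rate_no_iv

    print(f"career allow rate: {career_allow_rate:.1%}")
    print(f"interview lift:    {interview_lift:+.1%}")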

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 62.7% (+22.7% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)
Tech Center averages are estimates; figures based on career data from 832 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed 6/12/2025 has been entered. The claims 1, 15 and 16 have been amended. The claims 9 and 14 have been cancelled. The claims 1-8, 10-13 and 15-16 are pending in the current application.

Response to Arguments

Applicant’s arguments filed 6/12/2025 with respect to the new features in the amended claim 1 and similar claims have been considered but are moot in view of the newly cited references with respect to the newly recited features.

Lee does not teach the claim limitation that (f) generating a projection image by synthesizing contour masks of the respective objects to be observed from the three-dimensional image, integrating the projection image with the integration two-dimensional image to generate a second integrated image, and displaying the second integrated image on the display unit. However, Yan teaches the claim limitation that (f) generating a projection image by synthesizing contour masks of the respective objects to be observed from the three-dimensional image, integrating the projection image with the integration two-dimensional image to generate a second integrated image, and displaying the second integrated image on the display unit (Yan teaches at Paragraph 0078 that a CT image is projected at a preset shooting angle (on a focal plane) to obtain the DRR, and the three-dimensional preset contour image is projected at the preset shooting angle to obtain the mask image. Then, a logical AND operation is performed on the DRR and the mask image to obtain the mask DRR, and a logical AND operation is performed on the KV image and the mask image to obtain the mask KV image, so as to perform image registration based on the mask DRR and the mask KV image. Yan teaches at Paragraph 0064 that the two-dimensional projection image at the preset shooting angle corresponding to the KV image can be obtained by projecting the CT image at the preset shooting angle identical to the preset shooting angle at which the radiographing device acquires the KV image. In this embodiment, the two-dimensional projection image may also be referred to as a digitally reconstructed radiograph (DRR). Yan teaches at Paragraph 0117 that the radiotherapy device acquires a CT image obtained by photographing the lung tumor with a CT imaging device before a treatment activity, and projects the CT image at the shooting angles of 0 degree and 90 degrees respectively to obtain two two-dimensional projection images, which are hereinafter referred to as DRR1 and DRR2.
The radiography device invokes an image that only includes a contour line of the lung tumor from a treatment plan system referred to as a 3D RT contour, and projects the 3D RT contour at the shooting angles of 0 degree and 90 degrees to obtain two two-dimensional mask images, MASK1 and MASK2 and a logical AND operation is performed on the mask1 and KV1 image to obtain a mask KV1 image that only includes the lung tumor and a logical AND operation is performed on the MASK2 and the KV2 image to obtain a mask KV2 image that only includes the lung tumor. Yan teaches at Paragraph 0116 that the two radiograph devices can radiograph the lung tumor at the shooting angle of 0 degree to obtain a KV image corresponding to 0 degree and radiograph at the shooting angle 90 degrees to obtain a KV image corresponding to 90 degree as a KV2 image). It would have been obvious to one of the ordinary skill in the art before the filing date of the instant application to have further incorporated Yan’s integrating the projected contours of the 3D RT contour with the KV images to obtain an integrated image into Lee to have modified Lee’s contours of each of the multiple 2D cross-sectional images by integrating the projected contours with each of the multiple 2D cross-sectional images to displayed each modified 2D cross-sectional image as an integrated image with the contours projected from the 3D RT contour. One of the ordinary skill in the art would have been motivated to have shown the contours of the 2D cross-sectional image which is acquired at a focus plane of an imaging system such as OCT acquisition system. However, Gogin teaches the claim limitation that (f) generating a projection image by synthesizing contour masks of the respective objects to be observed from the three-dimensional image, integrating the projection image with the integration two-dimensional image to generate a second integrated image, and displaying the second integrated image on the display unit ( Gogin teaches at FIG. 2 and Paragraph 0034-0035 that the 2D images 204 are obtained (focused) at the axial planes (z-planes) based on the 3D coronal image 202 and at FIGS. 5-6 and Paragraph 0052-0053 generating an interpolated/projected image by synthesizing contours of the respective left/right lungs to be observed from the 3D image 202 and integrating the projection/interpolated image with the integration two-dimensional image (any of the intermediate images are the slice images focused at the different focal depth along the axial planes) to generate a second integrated image (the interpolated image), and displaying the interpolated image on the display unit. Gogin teaches projecting the edited area (the edited contours) of the edited top image 2041 to the intermediate image 2041+M by interpolating the contours of the edited top image 2041 with the contours of the intermediate image 2041+M to generate an interpolated image as a projection image. Gorgin teaches at Paragraph 0042 that the system 102 effectively merges the interpolated segmentation with the initial segmentation for each non-edited image 204N and at Paragraph 0057 that the interpolated segmentation for the intermediate image 2041+M has been selectively merged with the initial segmentation for that image). 
It would have been obvious to one of the ordinary skill in the art before the filing date of the instant application to have further incorporated Gogin’s interpolating the contours of the edited top image obtained from the 3D image 202 to obtain the interpolated intermediate image as a projection image into Lee to have modified Lee’s contours of each of the multiple 2D cross-sectional images and to have displayed each modified 2D cross-sectional image with the contours projected from at least one of the edited top image or the edited bottom image based on the interpolation of the contours. One of the ordinary skill in the art would have been motivated to have shown the contours of a 2D image at a particular axial plane as the 2D cross-sectional image is acquired at a focus plane of an imaging system such as OCT acquisition system. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-8, 10-13 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Matsuda US-PGPUB No. 2023/0281922 (hereinafter Matsuda) in view of Gogin et al. US-PGPUB No. 2023/0196698 (hereinafter Gogin); Yan US-PGPUB No. 2021/0295542 (hereinafter Yan); Lee et al. US-PGPUB No. 2021/0293702 (hereinafter Lee); Buckland et al. US-PGPUB No. 2022/0326495 (hereinafter Buckland); Tang et al. US-PGPUB No. 2022/0151708 (hereinafter Tang); Lynch et al. US-PGPUB No. 2023/0107680 (hereinafter Lynch); Misch et al. US-PGPUB No. 2023/0291992 (hereinafter Misch); Dixon et al. US-PGPUB No. 2018/0307019 (hereinafter Dixon); Gutierrez Medina US-PGPUB No. 2023/0184683 (hereinafter Medina). Re Claim 1: Matsuda teaches an image display method, comprising: (a) obtaining, using a two-dimensional imager, a plurality of two-dimensional images by two-dimensionally imaging a specimen, in which a plurality of objects to be observed are present three-dimensionally, at a plurality of mutually different focus positions ( Matsuda teaches at Paragraph [0085] As illustrated in FIG. 10, in step S10, two-dimensional original image 27 is acquired using microscope 2. Microscope control unit 31 controls the piezoelectric element or motor (not illustrated) connected to one of sample support stage 11 and objective lens 16. One of sample support stage 11 and objective lens 16 is moved in the optical axis direction (z-axis direction) of observation optical system 10 with respect to the other of sample support stage 11 and objective lens 16. Sample 12 (three-dimensional object 12a) is imaged while focal plane 20 of microscope 2 (observation optical system 10) is moved along the optical axis direction (z-axis direction) of observation optical system 10. As illustrated in FIG. 4, two-dimensional original image 27 of sample 12 (three-dimensional object 12a) is obtained at each of the plurality of positions z.sub.1, z.sub.2, . . . , z.sub.n-1, z.sub.n of focal plane 20. Matsuda teaches at Paragraph [0086] As illustrated in FIG. 
10, in step S11, three-dimensional original image 28 of sample 12 is generated from the plurality of two-dimensional original images 27. Specifically, three-dimensional original image generation unit 32 stacks a plurality of two-dimensional original images 27 over the plurality of positions z.sub.1, z.sub.2, . . . , z.sub.n-1, z.sub.n of focal plane 20, and generates three-dimensional original image 27 of sample 12 (three-dimensional object 12a) as the multilayer body (stack image) of the plurality of two-dimensional original images 28). (c) obtaining a three-dimensional image of the specimen based on the image data (Matsuda teaches at Paragraph [0085] as illustrated in FIG. 10, in step S10, two-dimensional original image 27 is acquired using microscope 2. Microscope control unit 31 controls the piezoelectric element or motor (not illustrated) connected to one of sample support stage 11 and objective lens 16. One of sample support stage 11 and objective lens 16 is moved in the optical axis direction (z-axis direction) of observation optical system 10 with respect to the other of sample support stage 11 and objective lens 16. Sample 12 (three-dimensional object 12a) is imaged while focal plane 20 of microscope 2 (observation optical system 10) is moved along the optical axis direction (z-axis direction) of observation optical system 10. As illustrated in FIG. 4, two-dimensional original image 27 of sample 12 (three-dimensional object 12a) is obtained at each of the plurality of positions z.sub.1, z.sub.2, . . . , z.sub.n-1, z.sub.n of focal plane 20. Matsuda teaches at Paragraph [0086] As illustrated in FIG. 10, in step S11, three-dimensional original image 28 of sample 12 is generated from the plurality of two-dimensional original images 27. Specifically, three-dimensional original image generation unit 32 stacks a plurality of two-dimensional original images 27 over the plurality of positions z.sub.1, z.sub.2, . . . , z.sub.n-1, z.sub.n of focal plane 20, and generates three-dimensional original image 27 of sample 12 (three-dimensional object 12a) as the multilayer body (stack image) of the plurality of two-dimensional original images 28); (d) obtaining the two-dimensional image selected from the plurality of two-dimensional images or a two-dimensional image generated to be focused on the plurality of objects to be observed based on the plurality of two-dimensional images as an integration two-dimensional image (Matsuda teaches at Paragraph [0085] As illustrated in FIG. 10, in step S10, two-dimensional original image 27 is acquired using microscope 2. Microscope control unit 31 controls the piezoelectric element or motor (not illustrated) connected to one of sample support stage 11 and objective lens 16. One of sample support stage 11 and objective lens 16 is moved in the optical axis direction (z-axis direction) of observation optical system 10 with respect to the other of sample support stage 11 and objective lens 16. Sample 12 (three-dimensional object 12a) is imaged while focal plane 20 of microscope 2 (observation optical system 10) is moved along the optical axis direction (z-axis direction) of observation optical system 10. As illustrated in FIG. 4, two-dimensional original image 27 of sample 12 (three-dimensional object 12a) is obtained at each of the plurality of positions z.sub.1, z.sub.2, . . . , z.sub.n-1, z.sub.n of focal plane 20. Matsuda teaches at Paragraph [0086] As illustrated in FIG. 
10, in step S11, three-dimensional original image 28 of sample 12 is generated from the plurality of two-dimensional original images 27. Specifically, three-dimensional original image generation unit 32 stacks a plurality of two-dimensional original images 27 over the plurality of positions z.sub.1, z.sub.2, . . . , z.sub.n-1, z.sub.n of focal plane 20, and generates three-dimensional original image 27 of sample 12 (three-dimensional object 12a) as the multilayer body (stack image) of the plurality of two-dimensional original images 28). Matsuda does not teach the claim limitation: (b) obtaining, using a three-dimensional observation device different from the two- dimensional imager, image data representing a three-dimensional shape of the specimen; (c) obtaining a three-dimensional image of the specimen based on the image data; (e) integrating the integration two-dimensional image obtained in the operation (d) with the three-dimensional image obtained in the operation (c) to generate a first integrated image, and displaying the first integrated image on a display unit. Background of OCT imaging. Buckland teaches at Paragraph 0102 that the focal-stacked OCM images are stitched, as known in the art, to create a GDOCM image of the sample under test. The OCT image and GDOCM images may be viewed separately, may be viewed synchronously, or the GDOCM image volume and OCT image volume may be further merged to create an image block with the high lateral resolution volume embedded within the high depth of field survey volume. From within the GDOCM image volume, any layer of interest can be visualized and analyzed. Tang teaches at Paragraph 0005 that OCT can provide 2D cross-sectional images with high axial resolution (˜10 μm), which is 10-100 times higher than conventional medical imaging modalities (e.g., CT and MRI). Owing to the high speed of laser scanning and data processing, 3D images of the detected sample formed by numerous cross-sectional images can be obtained in real time. Lynch teaches at Paragraph [0006] OCT systems are designed to have optimal image quality at the working distance of the microscope objective as shown in FIG. 4. FIG. 4 illustrates an OCT system having an imaging plane set to match the focal plane of the objective lens. OCT uses the principles of low coherence interferometry to obtain three-dimensional (3D) images of a sample. However, Lee in view of the OCT tomographic image of a sample in Lee is formed by plurality of 2D cross-section images according to the background disclosures in Buckland/Tang/Lynch teaches the claim limitation: (b) obtaining, using a three-dimensional observation device different from the two- dimensional imager, image data representing a three-dimensional shape of the specimen; (c) obtaining a three-dimensional image of the specimen based on the image data ( Lee’s stack of tomographic images by OCT imaging are inherently stitched to obtain 3D image of a sample according to Buckland/Lynch. Lee teaches at Paragraph [0049] that, referring to FIG. 1, a dual mode microscope system 1 according to an example embodiment may be a system in which an optical coherence microscope module 11 for observing an optical coherence tomographic image (a 3D OCT image formed by 2D cross-sectional images) of one sample 8 through an optical coherence tomography (OCT), and at the same time, a nonlinear microscope module 12 for acquiring a three-dimensional structure image of the sample 8 are complexly integrated. 
The dual mode microscope system 1 may include a sample holder 13, an optical coherence microscope module 11, a nonlinear microscope module 12, and a controller 14. Lee teaches at Paragraph [0023] the method for controlling a dual mode microscope system according to another aspect may further include a focus adjustment operation of adjusting a focus of an optical coherence tomographic image by selectively positioning one of a plurality of optical path adjustment windows respectively providing different optical path lengths on a path of a light irradiated from a light source of the optical coherence microscope module toward a reference mirror. Lee teaches at Paragraph 0121 that when an optical coherence tomographic image of the biological sample 8 is observed through the optical coherence microscope module 21, in order to solve a problem of failing to properly observing the optical coherence tomographic image due to a change in a thickness of the biological sample 8 or an influence of water contained therein, a traveling distance of a light may be adjusted by replacing the plurality of optical path adjustment windows 3151 provided on the reference unit 315, thereby selecting an optical path adjustment window 3151 where the optical coherence tomographic image is best observed. Lee teaches at Paragraph 0139 that it may be possible to simultaneously photograph an optical coherence tomographic image and a nonlinear image with respect to one sample through the dual mode microscope system according to an example embodiment, and it may be possible to acquire a high-resolution cross-sectional image of an area desired by a user by aligning a mutual optical axis between the optical coherence microscope module and the nonlinear microscope module); (e) integrating the integration two-dimensional image obtained in the operation (d) with the three-dimensional image obtained in the operation (c) to generate a first integrated image, and displaying the first integrated image on a display unit ( Lee’s stack of tomographic images by OCT imaging are inherently stitched to obtain 3D image of a sample according to Buckland/Lynch. Lee teaches at Paragraph [0023] the method for controlling a dual mode microscope system according to another aspect may further include a focus adjustment operation of adjusting a focus of an optical coherence tomographic image by selectively positioning one of a plurality of optical path adjustment windows respectively providing different optical path lengths on a path of a light irradiated from a light source of the optical coherence microscope module toward a reference mirror. Lee teaches at Paragraph 0121 that when an optical coherence tomographic image of the biological sample 8 is observed through the optical coherence microscope module 21, in order to solve a problem of failing to properly observing the optical coherence tomographic image due to a change in a thickness of the biological sample 8 or an influence of water contained therein, a traveling distance of a light may be adjusted by replacing the plurality of optical path adjustment windows 3151 provided on the reference unit 315, thereby selecting an optical path adjustment window 3151 where the optical coherence tomographic image is best observed. 
Lee teaches at Paragraph 0139 that it may be possible to simultaneously photograph an optical coherence tomographic image and a nonlinear image with respect to one sample through the dual mode microscope system according to an example embodiment, and it may be possible to acquire a high-resolution cross-sectional image of an area desired by a user by aligning a mutual optical axis between the optical coherence microscope module and the nonlinear microscope module. Lee teaches at Paragraph [0049] that, referring to FIG. 1, a dual mode microscope system 1 according to an example embodiment may be a system in which an optical coherence microscope module 11 for observing an optical coherence tomographic image (a 3D OCT image formed by 2D cross-sectional images) of one sample 8 through an optical coherence tomography (OCT), and at the same time, a nonlinear microscope module 12 for acquiring a three-dimensional structure image of the sample 8 are complexly integrated. The dual mode microscope system 1 may include a sample holder 13, an optical coherence microscope module 11, a nonlinear microscope module 12, and a controller 14. ). It would have been obvious to one of the ordinary skill in the art before the filing date of the instant application to have incorporated the acquired 2D cross-sectional images at the different focal planes according to Tang into the image integration system of Lee to have to have enabled Lee’s image integration system to have integrated an OCT image formed by the multiple 2D cross-sectional images acquired at the different focal planes and 3D image of the sample acquired by the 3D imaging device. One of the ordinary skill in the art would have integrated an OCT image formed by multiple 2D cross-sectional images with the 3D image as Lee has taught integrating one of the tomographic image of one sample formed by the 2D cross-sectional images acquired at different focal planes with an additional 3D image acquired by a separate 3D imaging device. It would have been obvious to one of the ordinary skill in the art before the filing date of the instant application to have incorporated the acquired 2D images at the different focal planes according to Matsuda into the image integration system of Lee to have to have enabled Lee’s image integration system to have integrated an OCT image formed by the multiple 2D cross-sectional images acquired at the different focal planes and 3D image of the sample acquired by the 3D imaging device. One of the ordinary skill in the art would have integrated an OCT image formed by multiple 2D cross-sectional images with the 3D image as Lee has taught integrating one of the tomographic image of one sample formed by the 2D cross-sectional images acquired at different focal planes with an additional 3D image acquired by a separate 3D imaging device. Lee does not teach the claim limitation that (f) generating a projection image by synthesizing contour masks of the respective objects to be observed from the three-dimensional image, integrating the projection image with the integration two-dimensional image to generate a second integrated image, and displaying the second integrated image on the display unit. 
However, Yan teaches the claim limitation that (f) generating a projection image by synthesizing contour masks of the respective objects to be observed from the three-dimensional image, integrating the projection image with the integration two-dimensional image to generate a second integrated image, and displaying the second integrated image on the display unit ( Yang teaches at Paragraph 0064 that the two-dimensional projection image at the preset shooting angle corresponding to the KV image can be obtained by projecting the CT image at the preset shooting angle identical to the preset shooting angle at which the radiographing device acquires the KV image. In this embodiment, the two-dimensional projection image may also be referred to as a digitally reconstructed radiograph (DRR). Yan teaches at Paragraph 0117 that the radiotherapy device acquires a CT image obtained by photographing the lung tumor with a CT imaging device before a treatment activity, and projects the CT image at the shooting angles of 0 degree and 90 degrees respectively to obtain two two-dimensional projection images, which are hereinafter referred to as DRR1 and DRR2. The radiography device invokes an image that only includes a contour line of the lung tumor from a treatment plan system referred to as a 3D RT contour, and projects the 3D RT contour at the shooting angles of 0 degree and 90 degrees to obtain two two-dimensional mask images, MASK1 and MASK2 and a logical AND operation is performed on the mask1 and KV1 image to obtain a mask KV1 image that only includes the lung tumor and a logical AND operation is performed on the MASK2 and the KV2 image to obtain a mask KV2 image that only includes the lung tumor. Yan teaches at Paragraph 0116 that the two radiograph devices can radiograph the lung tumor at the shooting angle of 0 degree to obtain a KV image corresponding to 0 degree and radiograph at the shooting angle 90 degrees to obtain a KV image corresponding to 90 degree as a KV2 image). It would have been obvious to one of the ordinary skill in the art before the filing date of the instant application to have further incorporated Yan’s integrating the projected contours of the 3D RT contour with the KV images to obtain an integrated image into Lee to have modified Lee’s contours of each of the multiple 2D cross-sectional images by integrating the projected contours with each of the multiple 2D cross-sectional images to displayed each modified 2D cross-sectional image as an integrated image with the contours projected from the 3D RT contour. One of the ordinary skill in the art would have been motivated to have shown the contours of the 2D cross-sectional image which is acquired at a focus plane of an imaging system such as OCT acquisition system. However, Gogin teaches the claim limitation that (f) generating a projection image by synthesizing contour masks of the respective objects to be observed from the three-dimensional image, integrating the projection image with the integration two-dimensional image to generate a second integrated image, and displaying the second integrated image on the display unit ( Gogin teaches at FIG. 2 and Paragraph 0034-0035 that the 2D images 204 are obtained (focused) at the axial planes (z-planes) based on the 3D coronal image 202 and at FIGS. 
5-6 and Paragraph 0052-0053 generating an interpolated/projected image by synthesizing contours of the respective left/right lungs to be observed from the 3D image 202 and integrating the projection/interpolated image with the integration two-dimensional image (any of the intermediate images are the slice images focused at the different focal depth along the axial planes) to generate a second integrated image (the interpolated image), and displaying the interpolated image on the display unit. Gogin teaches projecting the edited area (the edited contours) of the edited top image 2041 to the intermediate image 2041+M by interpolating the contours of the edited top image 2041 with the contours of the intermediate image 2041+M to generate an interpolated image as a projection image). It would have been obvious to one of the ordinary skill in the art before the filing date of the instant application to have further incorporated Gogin’s interpolating the contours of the edited top image obtained from the 3D image 202 to obtain the interpolated intermediate image as a projection image into Lee to have modified Lee’s contours of each of the multiple 2D cross-sectional images and to have displayed each modified 2D cross-sectional image with the contours projected from at least one of the edited top image or the edited bottom image based on the interpolation of the contours. One of the ordinary skill in the art would have been motivated to have shown the contours of a 2D image at a particular axial plane as the 2D cross-sectional image is acquired at a focus plane of an imaging system such as OCT acquisition system. Misch teaches an image display method, comprising: (a) obtaining, using a two-dimensional imager, a plurality of two-dimensional images by two-dimensionally imaging a specimen, in which a plurality of objects to be observed are present three-dimensionally, at a plurality of mutually different focus positions ( Misch teaches at Paragraph 0022 that specimens can be any material that can comprise such a structure of interest and such specimens can be soil samples with nano-plastic particles or chemical compositions like foam with trapped dust particles. Misch teaches at 0052-0053 that Z-stacks are generated by taking multiple source images at different focal distances. The wording image with a number of z-stacks means that multiple source images are taken at different focal distances, i.e., taken in different focal planes within the specimen, and combined to provide the respective image as a composite image with a greater depth of field. The source images are two-dimensional digital images and by superimposing, a number of source images a three-dimensional image with a depth resolution depending on the number of source images is obtained. The resulting three-dimensional image may be an initial image and generally an initial image of a specimen has a smaller depth resolution and an initial image of a specimen is an image with a smaller number of z-stack than a main image of the respective specimen and at Paragraph 0065 that the least one main image of the detected structures of interest is acquired as an image with a variable number of z-stacks and the at least one main image is acquired as a respective 3D image or high magnification image at the same time classification by the machine learning classifying algorithm is done). 
(c) obtaining a three-dimensional image of the specimen based on the image data ( Misch teaches at Paragraph 0020 that the initial image may be an image composed of partial images as a result of stitching together partial images of the respective specimen and the initial image may be a three-dimensional image with a depth resolution and at Paragraph 0051 that at least one initial image is acquired with a predetermined number of z-stacks and the number of z-stacks being predetermined depending on the respective specimen and/or the expected detectable structures of interest); (d) obtaining the two-dimensional image selected from the plurality of two-dimensional images or a two-dimensional image generated to be focused on the plurality of objects to be observed based on the plurality of two-dimensional images as an integration two-dimensional image (Misch teaches at Paragraph 0031 that upon identifying one or more structures of interest the method is continued in step c) by switching to a more complex imaging procedure to acquire at least one main image of the detected structures of interest and the main image has a depth resolution that is usually greater than that of the previously recorded initial image and is composed of a plurality of superimposed two-dimensional digital images and acquiring main images not only from the specimen surface but from multiple focal planes within the specimen and at Paragraph 0040 usually multiple main images of the detected structures of interest are recorded. Misch teaches at Paragraph 0022 that specimens can be any material that can comprise such a structure of interest and such specimens can be soil samples with nano-plastic particles or chemical compositions like foam with trapped dust particles. Misch teaches at 0052-0053 that Z-stacks are generated by taking multiple source images at different focal distances. The wording image with a number of z-stacks means that multiple source images are taken at different focal distances, i.e., taken in different focal planes within the specimen, and combined to provide the respective image as a composite image with a greater depth of field. The source images are two-dimensional digital images and by superimposing, a number of source images a three-dimensional image with a depth resolution depending on the number of source images is obtained. The resulting three-dimensional image may be an initial image and generally an initial image of a specimen has a smaller depth resolution and an initial image of a specimen is an image with a smaller number of z-stack than a main image of the respective specimen and at Paragraph 0065 that the least one main image of the detected structures of interest is acquired as an image with a variable number of z-stacks and the at least one main image is acquired as a respective 3D image or high magnification image at the same time classification by the machine learning classifying algorithm is done). Misch does not teach the claim limitation: (b) obtaining, using a three-dimensional observation device different from the two- dimensional imager, image data representing a three-dimensional shape of the specimen; (c) obtaining a three-dimensional image of the specimen based on the image data; (e) integrating the integration two-dimensional image obtained in the operation (d) with the three-dimensional image obtained in the operation (c) to generate a first integrated image, and displaying the first integrated image on a display unit. Background of OCT imaging. 
Buckland teaches at Paragraph 0102 that the focal-stacked OCM images are stitched, as known in the art, to create a GDOCM image of the sample under test. The OCT image and GDOCM images may be viewed separately, may be viewed synchronously, or the GDOCM image volume and OCT image volume may be further merged to create an image block with the high lateral resolution volume embedded within the high depth of field survey volume. From within the GDOCM image volume, any layer of interest can be visualized and analyzed. Tang teaches at Paragraph 0005 that OCT can provide 2D cross-sectional images with high axial resolution (˜10 μm), which is 10-100 times higher than conventional medical imaging modalities (e.g., CT and MRI). Owing to the high speed of laser scanning and data processing, 3D images of the detected sample formed by numerous cross-sectional images can be obtained in real time. Lynch teaches at Paragraph [0006] OCT systems are designed to have optimal image quality at the working distance of the microscope objective as shown in FIG. 4. FIG. 4 illustrates an OCT system having an imaging plane set to match the focal plane of the objective lens. OCT uses the principles of low coherence interferometry to obtain three-dimensional (3D) images of a sample. However, Lee in view of the OCT tomographic image of a sample in Lee is formed by plurality of 2D cross-section images according to the background disclosures in Buckland/Tang/Lynch teaches the claim limitation: (b) obtaining, using a three-dimensional observation device different from the two- dimensional imager, image data representing a three-dimensional shape of the specimen; (c) obtaining a three-dimensional image of the specimen based on the image data ( Lee’s stack of tomographic images by OCT imaging are inherently stitched to obtain 3D image of a sample according to Buckland/Lynch. Lee teaches at Paragraph [0049] that, referring to FIG. 1, a dual mode microscope system 1 according to an example embodiment may be a system in which an optical coherence microscope module 11 for observing an optical coherence tomographic image (a 3D OCT image formed by 2D cross-sectional images) of one sample 8 through an optical coherence tomography (OCT), and at the same time, a nonlinear microscope module 12 for acquiring a three-dimensional structure image of the sample 8 are complexly integrated. The dual mode microscope system 1 may include a sample holder 13, an optical coherence microscope module 11, a nonlinear microscope module 12, and a controller 14. Lee teaches at Paragraph [0023] the method for controlling a dual mode microscope system according to another aspect may further include a focus adjustment operation of adjusting a focus of an optical coherence tomographic image by selectively positioning one of a plurality of optical path adjustment windows respectively providing different optical path lengths on a path of a light irradiated from a light source of the optical coherence microscope module toward a reference mirror. 
Lee teaches at Paragraph 0121 that when an optical coherence tomographic image of the biological sample 8 is observed through the optical coherence microscope module 21, in order to solve a problem of failing to properly observing the optical coherence tomographic image due to a change in a thickness of the biological sample 8 or an influence of water contained therein, a traveling distance of a light may be adjusted by replacing the plurality of optical path adjustment windows 3151 provided on the reference unit 315, thereby selecting an optical path adjustment window 3151 where the optical coherence tomographic image is best observed. Lee teaches at Paragraph 0139 that it may be possible to simultaneously photograph an optical coherence tomographic image and a nonlinear image with respect to one sample through the dual mode microscope system according to an example embodiment, and it may be possible to acquire a high-resolution cross-sectional image of an area desired by a user by aligning a mutual optical axis between the optical coherence microscope module and the nonlinear microscope module); (e) integrating the integration two-dimensional image obtained in the operation (d) with the three-dimensional image obtained in the operation (c) to generate a first integrated image, and displaying the first integrated image on a display unit ( Lee’s stack of tomographic images by OCT imaging are inherently stitched to obtain 3D image of a sample according to Buckland/Lynch. Lee teaches at Paragraph [0023] the method for controlling a dual mode microscope system according to another aspect may further include a focus adjustment operation of adjusting a focus of an optical coherence tomographic image by selectively positioning one of a plurality of optical path adjustment windows respectively providing different optical path lengths on a path of a light irradiated from a light source of the optical coherence microscope module toward a reference mirror. Lee teaches at Paragraph 0121 that when an optical coherence tomographic image of the biological sample 8 is observed through the optical coherence microscope module 21, in order to solve a problem of failing to properly observing the optical coherence tomographic image due to a change in a thickness of the biological sample 8 or an influence of water contained therein, a traveling distance of a light may be adjusted by replacing the plurality of optical path adjustment windows 3151 provided on the reference unit 315, thereby selecting an optical path adjustment window 3151 where the optical coherence tomographic image is best observed. Lee teaches at Paragraph 0139 that it may be possible to simultaneously photograph an optical coherence tomographic image and a nonlinear image with respect to one sample through the dual mode microscope system according to an example embodiment, and it may be possible to acquire a high-resolution cross-sectional image of an area desired by a user by aligning a mutual optical axis between the optical coherence microscope module and the nonlinear microscope module. Lee teaches at Paragraph [0049] that, referring to FIG. 
1, a dual mode microscope system 1 according to an example embodiment may be a system in which an optical coherence microscope module 11 for observing an optical coherence tomographic image (a 3D OCT image formed by 2D cross-sectional images) of one sample 8 through an optical coherence tomography (OCT), and at the same time, a nonlinear microscope module 12 for acquiring a three-dimensional structure image of the sample 8 are complexly integrated. The dual mode microscope system 1 may include a sample holder 13, an optical coherence microscope module 11, a nonlinear microscope module 12, and a controller 14). It would have been obvious to one of the ordinary skill in the art before the filing date of the instant application to have incorporated the acquired 2D images at the different focal planes according to Misch into the image integration system of Lee to have to have enabled Lee’s image integration system to have integrated an OCT image formed by the multiple 2D cross-sectional images acquired at the different focal planes and 3D image of the sample acquired by the 3D imaging device. One of the ordinary skill in the art would have integrated an OCT image formed by multiple 2D cross-sectional images with the 3D image as Lee has taught integrating one of the tomographic image of one sample formed by the 2D cross-sectional images acquired at different focal planes with an additional 3D image acquired by a separate 3D imaging device. Lee does not teach the claim limitation that (f) generating a projection image by synthesizing contour masks of the respective objects to be observed from the three-dimensional image, integrating the projection image with the integration two-dimensional image to generate a second integrated image, and displaying the second integrated image on the display unit. However, Yan teaches the claim limitation that (f) generating a projection image by synthesizing contour masks of the respective objects to be observed from the three-dimensional image, integrating the projection image with the integration two-dimensional image to generate a second integrated image, and displaying the second integrated image on the display unit ( Yang teaches at Paragraph 0064 that the two-dimensional projection image at the preset shooting angle corresponding to the KV image can be obtained by projecting the CT image at the preset shooting angle identical to the preset shooting angle at which the radiographing device acquires the KV image. In this embodiment, the two-dimensional projection image may also be referred to as a digitally reconstructed radiograph (DRR). Yan teaches at Paragraph 0117 that the radiotherapy device acquires a CT image obtained by photographing the lung tumor with a CT imaging device before a treatment activity, and projects the CT image at the shooting angles of 0 degree and 90 degrees respectively to obtain two two-dimensional projection images, which are hereinafter referred to as DRR1 and DRR2. The radiography device invokes an image that only includes a contour line of the lung tumor from a treatment plan system referred to as a 3D RT contour, and projects the 3D RT contour at the shooting angles of 0 degree and 90 degrees to obtain two two-dimensional mask images, MASK1 and MASK2 and a logical AND operation is performed on the mask1 and KV1 image to obtain a mask KV1 image that only includes the lung tumor and a logical AND operation is performed on the MASK2 and the KV2 image to obtain a mask KV2 image that only includes the lung tumor. 
Yan teaches at Paragraph 0116 that the two radiograph devices can radiograph the lung tumor at the shooting angle of 0 degree to obtain a KV image corresponding to 0 degree and radiograph at the shooting angle 90 degrees to obtain a KV image corresponding to 90 degree as a KV2 image). It would have been obvious to one of the ordinary skill in the art before the filing date of the instant application to have further incorporated Yan’s integrating the projected contours of the 3D RT contour with the KV images to obtain an integrated image into Lee to have modified Lee’s contours of each of the multiple 2D cross-sectional images by integrating the projected contours with each of the multiple 2D cross-sectional images to displayed each modified 2D cross-sectional image as an integrated image with the contours projected from the 3D RT contour. One of the ordinary skill in the art would have been motivated to have shown the contours of the 2D cross-sectional image which is acquired at a focus plane of an imaging system such as OCT acquisition system. However, Gogin teaches the claim limitation that (f) generating a projection image by synthesizing contour masks of the respective objects to be observed from the three-dimensional image, integrating the projection image with the integration two-dimensional image to generate a second integrated image, and displaying the second integrated image on the display unit ( Gogin teaches at FIG. 2 and Paragraph 0034-0035 that the 2D images 204 are obtained (focused) at the axial planes (z-planes) based on the 3D coronal image 202 and at FIGS. 5-6 and Paragraph 0052-0053 generating an interpolated/projected image by synthesizing contours of the respective left/right lungs to be observed from the 3D image 202 and integrating the projection/interpolated image with the integration two-dimensional image (any of the intermediate images are the slice images focused at the different focal depth along the axial planes) to generate a second integrated image (the interpolated image), and displaying the interpolated image on the display unit. Gogin teaches projecting the edited area (the edited contours) of the edited top image 2041 to the intermediate image 2041+M by interpolating the contours of the edited top image 2041 with the contours of the intermediate image 2041+M to generate an interpolated image as a projection image). It would have been obvious to one of the ordinary skill in the art before the filing date of the instant application to have further incorporated Gogin’s interpolating the contours of the edited top image obtained from the 3D image 202 to obtain the interpolated intermediate image as a projection image into Lee to have modified Lee’s contours of each of the multiple 2D cross-sectional images and to have displayed each modified 2D cross-sectional image with the contours projected from at least one of the edited top image or the edited bottom image based on the interpolation of the contours. One of the ordinary skill in the art would have been motivated to have shown the contours of a 2D image at a particular axial plane as the 2D cross-sectional image is acquired at a focus plane of an imaging system such as OCT acquisition system. 
However, Medina teaches the claim limitation: (a) obtaining, using a two-dimensional imager, a plurality of two-dimensional images by two-dimensionally imaging a specimen, in which a plurality of objects to be observed are present three-dimensionally, at a plurality of mutually different focus positions ( Medina teaches at FIG. 4C and Paragraph 0089 that a 3D image of the fungal sample is provided in FIG. 4C. and FIG. 4B shows the MIP images of OSBM being overlaid with the MIP images of LSFM for the fungal sample. Medina teaches at Paragraph 0045 acquiring a Z-stack of raw images of said unstained sample by using said automated change of focus to image a set of different planes of focus and at Paragraph 0057 that the result of OSBM method is a stack of processed images containing optical sections of said sample from where the final 3D image of said sample can be reconstructed by digital means and at Paragraph 0071 that the LSFM modality uses fluorescence to enable three-dimensional visualization of fluorescent samples and at Paragraph 0087 that the same fungal sample was imaged using LSFM is compared with the OSBM results and a Z-stack of images was acquired employing the LSFM modality of said home-made microscope. Medina teaches at FIG. 5C and Paragraph 0095 that a 3D image of the cleared tissue sample is provided by a frame and at Paragraph 0019 that the raw image and final image of the OSBM method are labeled by square boxes. Media teaches at Paragraph 0027-0028 that XY maximum intensity projection image of the OSBM result for the onion sample is shown in FIG. 6A-6B wherein the cell shapes of onion skin cells in both XZ and YZ views are identified and at Paragraph 0047 producing a stack of optical section images of said unstained sample by applying to said Z-stack of raw images a set of digital image processing filters that reject out-of-focus background in the images of said Z-stack of raw images and at Paragraph 0049 applying background subtraction to the resulting image stack and at Paragraph 0057 that the final 3D image of said sample can be reconstructed by digital means. Medina teaches at Paragraph 0094 that a Z-stack of images was acquired employing the LSFM modality of the said home-made microscope and at Paragraph 0095 that the MIP images of OSBM are overlaid with the MIP images of LSFM for the cleared tissue sample, showing the shape of tissue along XY, XZ and YZ views in OSBM overlaps with the LSFM result and a 3D image of the cleared tissue sample is provided in FIG. 5C). (c) obtaining a three-dimensional image of the specimen based on the image data (Medina teaches at Paragraph 0087 that the same fungal sample was imaged using LSFM is compared with the OSBM results and a Z-stack of images was acquired employing the LSFM modality of said home-made microscope and at Paragraph 0094 that a Z-stack of images was acquired employing the LSFM modality of said home-made microscope using the imaging parameters with z-step between frames. Medina teaches at Paragraph 0071 that the LSFM modality uses fluorescence to enable three-dimensional visualization of fluorescent samples); (d) obtaining the two-dimensional image selected from the plurality of two-dimensional images or a two-dimensional image generated to be focused on the plurality of objects to be observed base
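For orientation, the claim steps discussed in the excerpt above follow a common imaging pattern: stack 2D images captured at different focus positions into a 3D volume, derive a single all-in-focus 2D image, then project per-object contour masks from the 3D data and combine them with that 2D image (for example, the logical-AND masking the Office Action attributes to Yan). The NumPy sketch below illustrates that general pattern only; it is not the applicant's claimed method or any cited reference's implementation, and every function and variable name is illustrative.

    # Illustrative sketch only: z-stack -> 3D volume, all-in-focus composite,
    # and projected contour masks AND-ed onto that composite.
    import numpy as np

    def stack_to_volume(slices):
        """Steps (a)/(c): stack 2D images captured at different focus
        positions into one 3D array of shape (z, y, x)."""
        return np.stack(slices, axis=0)

    def all_in_focus(volume):
        """Step (d): build an 'integration' 2D image by keeping, per pixel,
        the z-slice with the strongest local gradient (a simple focus measure)."""
        sharpness = np.abs(np.gradient(volume, axis=1)) + np.abs(np.gradient(volume, axis=2))
        best_z = np.argmax(sharpness, axis=0)             # (y, x) index map
        yy, xx = np.indices(best_z.shape)
        return volume[best_z, yy, xx]

    def project_contour_masks(object_masks_3d):
        """Step (f), first half: project each object's 3D mask along z and
        synthesize the projections into a single 2D mask."""
        combined = np.zeros(object_masks_3d[0].shape[1:], dtype=bool)
        for mask in object_masks_3d:
            combined |= mask.any(axis=0)
        return combined

    def integrate(image_2d, mask_2d):
        """Step (f), second half: a logical-AND style overlay that keeps the
        2D image only where the projected mask is set."""
        return np.where(mask_2d, image_2d, 0.0)

    # Tiny synthetic example: four focus positions of a 32x32 field and two
    # objects segmented at different depths.
    rng = np.random.default_rng(0)
    volume = stack_to_volume([rng.normal(size=(32, 32)) for _ in range(4)])
    composite = all_in_focus(volume)                      # integration 2D image

    obj_a = np.zeros((4, 32, 32), dtype=bool)
    obj_a[1, 4:12, 4:12] = True
    obj_b = np.zeros((4, 32, 32), dtype=bool)
    obj_b[3, 18:28, 18:28] = True

    projection = project_contour_masks([obj_a, obj_b])    # projected mask image
    second_integrated = integrate(composite, projection)  # masked composite
    print(volume.shape, composite.shape, second_integrated.shape)

In the claimed method the masks would come from the three-dimensional observation device's data rather than the hand-placed boxes used here; the boxes only stand in for segmented objects.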

Prosecution Timeline

Dec 29, 2022: Application Filed
Sep 05, 2024: Non-Final Rejection (§103)
Jan 06, 2025: Response Filed
Mar 06, 2025: Final Rejection (§103)
May 29, 2025: Applicant Interview (Telephonic)
May 29, 2025: Examiner Interview Summary
Jun 12, 2025: Request for Continued Examination
Jun 13, 2025: Response after Non-Final Action
Sep 22, 2025: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594883: DISPLAY DEVICE FOR DISPLAYING PATHS OF A VEHICLE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597086: Tile Region Protection in a Graphics Processing System (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592012: METHOD, APPARATUS, ELECTRONIC DEVICE AND READABLE MEDIUM FOR COLLAGE MAKING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586270: GENERATING AND MODIFYING DIGITAL IMAGES USING A JOINT FEATURE STYLE LATENT SPACE OF A GENERATIVE NEURAL NETWORK (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579709: IMAGE SPECIAL EFFECT PROCESSING METHOD AND APPARATUS (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 59% (69% with interview, a +10.3% lift)
Median Time to Grant: 3y 7m
PTA Risk: High
Based on 832 resolved cases by this examiner; grant probability is derived from the career allow rate.
