DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/20/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites “the focus adjustment mechanism” in lines 13-14. There is insufficient antecedent basis for this limitation in the claim. Examiner notes claim 2 recites “a focus adjustment mechanism”.
Claims 2-7 are rejected as being dependent on claim 1.
Claim 8 recites “the focus adjustment mechanism” in lines 15-16. There is insufficient antecedent basis for this limitation in the claim. Examiner notes claim 11 recites “a focus adjustment mechanism”.
Claim 8 recites “the first, second, and third images” in lines 28-29. There is insufficient antecedent basis for this limitation in the claim.
Claim 8 recites “a depth map” in line 26. It is unclear if “a depth map” recited in line 26 is referring to “a depth map” recited in lines 6-7 or if it is a different depth map.
Claims 9-13 are rejected as being dependent on claim 8.
Claim 9 recites “the second image” in line 2. There is insufficient antecedent basis for this limitation in the claim.
Claim 14 recites “an imaging device” in line 5 and line 6. It is unclear if “an imaging device” in line 6 is referring to “an imaging device” recited in line 5 or if it is a different imaging device.
Claim 14 recites “the focus adjustment mechanism” in lines 13-14. There is insufficient antecedent basis for this limitation in the claim. Examiner notes claim 15 recites “a focus adjustment mechanism”.
Claim 14 recites “the imaging device” in line 9 and line 15. It is unclear if they refer to “an imaging device” recited in line 5 or “an imaging device” recited in line 6.
Claim 14 recites “the optical system” in line 8. There is insufficient antecedent basis for this limitation in the claim.
Claims 15-19 are rejected as being dependent on claim 14.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jeon et al. (US 2023/0377177 A1) in view of Blayvas (US 2014/0043435 A1).
Regarding claim 1, Jeon et al. (hereinafter referred to as Jeon) teaches a control device (Jeon, Paragraph 0060) comprising:
one or more processors comprising hardware (Jeon, Paragraph 0060), the one or more processors being configured to:
acquire focal stack images including a set of a plurality of images taken at different focuses (Jeon, Fig. 2, aligned focal stack 20, Paragraph 0062) from an imaging device (Jeon, Fig. 2, photographing device 1, Paragraph 0062),
input the focal stack images to a learned model so as to infer a depth map of the subject indicating a distance from the optical system to the subject (Jeon, Figs. 2 and 9-11, depth estimation neural network 200, Paragraphs 0062 and 0113-0114), and
output the depth map (Jeon Fig. 2, depth map 30, Paragraphs 0113-0114).
However, Jeon does not teach the one or more processors being configured to: acquire first, second and third images from an imaging device; control the imaging device to generate the first image by an image sensor imaging a subject on a near point side of an optical system, control the imaging device to generate the second image by the image sensor imaging the subject during a period in which a focal length of the optical system is changed and moved between the near point side and a far point side of the optical system by the focus adjustment mechanism, control the imaging device to generate the third image by the image sensor imaging the subject on the far point side of the optical system.
In reference to Blayvas, Blayvas teaches one or more processors (Blayvas, Fig. 1, processing circuitry 100, Paragraph 0013) being configured to:
acquire first, second and third images from an imaging device (Blayvas, Fig. 2, Paragraphs 0017-0019);
control the imaging device to generate the first image by an image sensor imaging a subject on a near point side of an optical system (Blayvas, Fig. 2, sub-frame s1, Paragraphs 0017-0019),
control the imaging device to generate the second image by the image sensor imaging the subject during a period in which a focal length of the optical system is changed and moved between the near point side and a far point side of the optical system by the focus adjustment mechanism (Blayvas, Fig. 2, sub-frames s2-s4, Paragraphs 0017-0019),
control the imaging device to generate the third image by the image sensor imaging the subject on the far point side of the optical system (Blayvas, Fig. 2, sub-frame s5, Paragraphs 0017-0019),
input the first, second, and third images to a processor to infer a depth map of the subject indicating a distance from the optical system to the subject (Blayvas, Fig. 1, output frame ISP 104, Paragraph 0028), and
output the depth map (Blayvas, Fig. 1, Depth map 192, Paragraph 0028).
These arts are analogous since they are both related to capturing images at different focal lengths to produce a depth map. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Jeon with the method of capturing images as seen in Blayvas since it is a known method of capturing images at different focal lengths and would produce similar and expected results.
Regarding claim 2, the combination of Jeon and Blayvas teaches the control device according to claim 1 (see claim 1 analysis), wherein the second image is at least one frame image in a video including an image group which is temporally continuous, the image group being generated by the imaging element during a period in which a focus adjustment mechanism changes the focal length between the near point side and the far point side (Blayvas, Fig. 2, sub-frames s2-s4, Paragraphs 0017-0019).
Claim(s) 3-6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jeon et al. (US 2023/0377177 A1) in view of Blayvas (US 2014/0043435 A1), and further in view of Wadhwa et al. (US 2021/0183089 A1).
Regarding claim 3, the combination of Jeon and Blayvas teaches the control device according to claim 2 (see claim 2 analysis), wherein the one or more processors being further configured to: acquire a camera parameter of the optical system (Jeon, Paragraphs 0065-0066 and 0069-0070); wherein the depth map is based on the camera parameter (Jeon, Paragraphs 0069-0070 and 0113-0114, Camera parameters are used to align the images which are then used to create the depth map.).
However, the combination of Jeon and Blayvas does not teach estimate a shape of the subject based on the camera parameter and the depth map of the subject.
In reference to Wadhwa et al. (hereinafter referred to as Wadhwa), Wadhwa teaches estimating a shape of the subject based on a depth map of the subject (Wadhwa, Paragraph 0027).
These arts are analogous since they are all related to imaging devices generating depth maps. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the combination of Jeon and Blayvas with the method of determining object shapes as seen in Wadhwa to allow the device to perform depth-aware image processing or some other image processing on the image (Wadhwa, Paragraph 0027). Further, the limitation “estimate a shape of the subject based on the camera parameter and the depth map of the subject” is met since the depth map is based on the camera parameter.
Regarding claim 4, the combination of Jeon, Blayvas and Wadhwa teaches the control device according to claim 3 (see claim 3 analysis), wherein the one or more processors being configured to control the focus adjustment mechanism so as to stop the optical system at a near point end and a far point end (Blayvas, Fig. 2).
Regarding claim 5, the combination of Jeon, Blayvas and Wadhwa teaches the control device according to claim 3 (see claim 3 analysis), wherein the one or more processors being configured to control the focus adjustment mechanism so as to linearly move the optical system from a near point end to a far point end (Blayvas, Fig. 2).
Regarding claim 6, the combination of Jeon, Blayvas and Wadhwa teaches the control device according to claim 3 (see claim 3 analysis), wherein the one or more processors being configured to control the focus adjustment mechanism so as to stop the optical system at each of a near point end and a far point end within a predetermined period of time (Blayvas, Fig. 2).
Claim(s) 7 and 14-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jeon et al. (US 2023/0377177 A1) in view of Blayvas (US 2014/0043435 A1), and further in view of Nishide (US 2022/0293268 A1).
Regarding claim 7, the combination of Jeon and Blayvas teaches the control device according to claim 1 (see claim 1 analysis). However, the combination of Jeon and Blayvas does not teach an endoscope system comprising: an endoscope configured to image an inside of a body of a subject; and the control device according to claim 1 connected to the endoscope, wherein the endoscope comprises the optical system, the imaging element, and the focus adjustment mechanism.
In reference to Nishide, Nishide teaches an endoscope system (Nishide, Fig. 1) comprising:
an endoscope configured to image an inside of a body of a subject (Nishide, Fig. 1, endoscope 10, Paragraph 0026); and
a control device connected to the endoscope (Nishide, Fig. 1, processor 20, Paragraphs 0025-0026),
wherein the endoscope comprises an optical system, an imaging element, and a focus adjustment mechanism (Nishide, Fig. 2, imaging unit 12, Paragraph 0033).
These arts are analogous since they are all related to capturing images at different focal lengths to produce a depth/distance map (Nishide, Fig. 8, Steps S101 and S108, Paragraphs 0092 and 0099). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the combination of Jeon and Blayvas with its use for an endoscope as seen in Nishide to allow the device to generate depth maps for endoscopic images.
Regarding claim 14, Jeon teaches a control device (Jeon, Paragraph 0060) including one or more processors (Jeon, Paragraph 0060), the one or more processors being configured to:
acquire, by the processor, focal stack images including a set of a plurality of images taken at different focuses (Jeon, Fig. 2, aligned focal stack 20, Paragraph 0062) from an imaging device (Jeon, Fig. 2, photographing device 1, Paragraph 0062),
input, by the processor, the focal stack images as input parameters to a learned model configured to output a depth map indicating a distance from the optical system to the subject as an output parameter, to infer a depth map of the subject (Jeon, Figs. 2 and 9-11, depth estimation neural network 200, Paragraphs 0062 and 0113-0114), and
output, by the processor, the depth map of the subject (Jeon Fig. 2, depth map 30, Paragraphs 0113-0114).
However, Jeon does not teach a medical assistant method executed by a control device including one or more processors, the medical assistant method comprising: acquiring, by the processor, first, second and third images from an imaging device; controlling an imaging device to generate the first image by an image sensor imaging a subject on a near point side of the optical system, controlling the imaging device to generate the second image by the image sensor imaging the subject during a period in which a focal length of the optical system is changed and moved between the near point side and a far point side of the optical system by the focus adjustment mechanism, controlling the imaging device to generate the third image by the image sensor imaging the subject on the far point side of the optical system; inputting, by the processor, the first image, the second image, and the third image as input parameters to a learned model configured to output the depth map.
In reference to Blayvas, Blayvas teaches one or more processors (Blayvas, Fig. 1, processing circuitry 100, Paragraph 0013) being configured to:
acquiring, by the processor, first, second and third images from an imaging device (Blayvas, Fig. 2, Paragraphs 0017-0019);
controlling an imaging device to generate the first image by an image sensor imaging a subject on a near point side of the optical system (Blayvas, Fig. 2, sub-frame s1, Paragraphs 0017-0019),
controlling the imaging device to generate the second image by the image sensor imaging the subject during a period in which a focal length of the optical system is changed and moved between the near point side and a far point side of the optical system by the focus adjustment mechanism (Blayvas, Fig. 2, sub-frames s2-s4, Paragraphs 0017-0019),
controlling the imaging device to generate the third image by the image sensor imaging the subject on the far point side of the optical system (Blayvas, Fig. 2, sub-frame s5, Paragraphs 0017-0019),
inputting, by the processor, the first image, the second image, and the third image as input parameters to a processor configured to output a depth map indicating a distance from the optical system to the subject as an output parameter, to infer a depth map of the subject (Blayvas, Fig. 1, output frame ISP 104, Paragraph 0028), and
outputting, by the processor, the depth map of the subject (Blayvas, Fig. 1, Depth map 192, Paragraph 0028).
However, the combination of Jeon and Blayvas does not teach a medical assistant method executed by a control device including one or more processors.
In reference to Nishide, Nishide teaches a medical assistant method executed by a control device including one or more processors (Nishide, Fig. 1).
These arts are analogous since they are all related to capturing images at different focal lengths to produce a depth/distance map (Nishide, Fig. 8, Steps S101 and S108, Paragraphs 0092 and 0099). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the combination of Jeon and Blayvas with its use for an endoscope as seen in Nishide to allow the device to generate depth maps for endoscopic images.
Regarding claim 15, the combination of Jeon, Blayvas and Nishide teaches the medical assistant method according to claim 14 (see claim 14 analysis), wherein the second image is at least one frame image in a video including an image group which is temporally continuous, the image group being generated by the image sensor during a period in which a focus adjustment mechanism changes the focal length between the near point side and the far point side (Blayvas, Fig. 2, sub-frames s2-s4, Paragraphs 0017-0019).
Claim(s) 16-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jeon et al. (US 2023/0377177 A1) in view of Blayvas (US 2014/0043435 A1), further in view of Nishide (US 2022/0293268 A1), and further in view of Wadhwa et al. (US 2021/0183089 A1).
Regarding claim 16, the combination of Jeon, Blayvas and Nishide teaches the medical assistant method according to claim 15 (see claim 15 analysis), further comprising, acquiring a camera parameter of the optical system (Jeon, Paragraphs 0065-0066 and 0069-0070); wherein the depth map is based on the camera parameter (Jeon, Paragraphs 0069-0070 and 0113-0114, Camera parameters are used to align the images which are then used to create the depth map.).
However, the combination of Jeon, Blayvas and Nishide does not teach estimating a shape of the subject based on the camera parameter and the depth map of the subject.
In reference to Wadhwa et al. (hereinafter referred to as Wadhwa), Wadhwa teaches estimating a shape of the subject based on a depth map of the subject (Wadhwa, Paragraph 0027).
These arts are analogous since they are all related to imaging devices generating depth maps. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the combination of Jeon, Blayvas and Nishide with the method of determining object shapes as seen in Wadhwa to allow the device to perform depth-aware image processing or some other image processing on the image (Wadhwa, Paragraph 0027). Further, the limitation “estimating a shape of the subject based on the camera parameter and the depth map of the subject” is met since the depth map is based on the camera parameter.
Regarding claim 17, the combination of Jeon, Blayvas, Nishide and Wadhwa teaches the medical assistant method according to claim 16 (see claim 16 analysis), further comprising controlling the focus adjustment mechanism so as to stop the optical system at a near point end and a far point end (Blayvas, Fig. 2).
Regarding claim 18, the combination of Jeon, Blayvas, Nishide and Wadhwa teaches the medical assistant method according to claim 16 (see claim 16 analysis), further comprising controlling the focus adjustment mechanism so as to linearly move the optical system from a near point end to a far point end (Blayvas, Fig. 2).
Regarding claim 19, the combination of Jeon, Blayvas, Nishide and Wadhwa teaches the medical assistant method according to claim 16 (see claim 16 analysis), further comprising controlling the focus adjustment mechanism so as to stop the optical system at each of a near point end and a far point end within a predetermined period of time (Blayvas, Fig. 2).
Claim(s) 8-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sabato et al. (US 2022/0383525 A1) in view of Blayvas (US 2014/0043435 A1).
Regarding claim 8, Sabato et al. (hereinafter referred to as Sabato) teaches a learned model generator (Sabato, Fig. 6) comprising:
one or more processors comprising hardware (Sabato, Paragraph 0108), the one or more processors being configured to:
acquire learning data obtained by combining a plurality of first images, a plurality of second images (Sabato, Fig. 6, sequence 600, Paragraph 0235, A first and second image are images of different focus positions for a first scene. A plurality of different scenes are imaged at the different focus positions.), and a correct value of a depth map with each of a plurality of targets (Sabato, Fig. 6, ground truth depth maps 610, Paragraph 0239), from an imaging device (Sabato, Paragraphs 0236-0237),
the correct value of the depth map being related to a distance from the optical system to each of the plurality of targets (Sabato, Paragraphs 0238-0243),
generate, by learning using the learning data, a learned model configured to output a depth map indicating a distance from the optical system to a subject as an output parameter (Sabato, Fig. 6, machine learning algorithm 630, Paragraphs 0238-0243).
However, Sabato does not explicitly state a plurality of third images, and does not teach control the imaging device to generate the plurality of first images by an image sensor imaging the plurality of targets on a near point side of an optical system, control the imaging device to generate the plurality of second images by the image sensor imaging the plurality of targets during a period in which the focus adjustment mechanism changes and moves a focal length of the optical system between the near point side and a far point side of the optical system, control the imaging device to generate the plurality of third images by the image sensor imaging the plurality of targets on a far point side of the optical system, and the learned model configured to output a depth map with the first, second, and third images as input parameters.
In reference to Blayvas, Blayvas teaches one or more processors (Blayvas, Fig. 1, processing circuitry 100, Paragraph 0013) being configured to:
acquire first, second and third images from an imaging device (Blayvas, Fig. 2, Paragraphs 0017-0019);
control the imaging device to generate the first image by an image sensor imaging a target on a near point side of an optical system (Blayvas, Fig. 2, sub-frame s1, Paragraphs 0017-0019),
control the imaging device to generate the second image by the image sensor imaging the target during a period in which a focal length of the optical system is changed and moved between the near point side and a far point side of the optical system by the focus adjustment mechanism (Blayvas, Fig. 2, sub-frames s2-s4, Paragraphs 0017-0019),
control the imaging device to generate the third image by the image sensor imaging the target on the far point side of the optical system (Blayvas, Fig. 2, sub-frame s5, Paragraphs 0017-0019), and
output a depth map with the first, second, and third images as input parameters (Blayvas, Fig. 1, output frame ISP 104, Paragraph 0028).
These arts are analogous since they are both related to capturing images at different focal lengths to produce a depth map. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Sabato with the method of capturing first, second and third images of a scene as seen in Blayvas since it is a known method of capturing images at different focal lengths and would produce similar and expected results. Further, the limitations “a plurality of third images, control the imaging device to generate the plurality of first images by an image sensor imaging the plurality of targets on a near point side of an optical system, control the imaging device to generate the plurality of second images by the imaging image sensor the plurality of targets during a period in which the focus adjustment mechanism changes and moves a focal length of the optical system between the near point side and a far point side of the optical system, control the imaging device to generate the plurality of third images by the image sensor imaging the plurality of targets on a far point side of the optical system” are met by performing the method of image capture of Blayvas at the plurality of different scenes of Sabato.
Regarding claim 9, the combination of Sabato and Blayvas teaches the learned model generator according to claim 8 (see claim 8 analysis), wherein the second image is at least one frame image in a video including an image group which is temporally continuous, the image group being generated by the imaging element during a period in which the focus adjustment mechanism changes the focal length between the near point side and the far point side (Blayvas, Fig. 2, sub-frames s2-s4, Paragraphs 0017-0019).
Claim(s) 10-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sabato et al. (US 2022/0383525 A1) in view of Blayvas (US 2014/0043435 A1), and further in view of Wadhwa et al. (US 2021/0183089 A1).
Regarding claim 10, the combination of Sabato and Blayvas teaches the learned model generator according to claim 9 (see claim 9 analysis), wherein the one or more processors being further configured to: acquire a camera parameter of the optical system (Sabato, Fig. 6, Focus schedule 620, Paragraph 022); wherein the depth map is based on the camera parameter (Sabato, Fig. 6, Paragraph 0238).
However, the combination of Sabato and Blayvas does not teach estimate a shape of the subject based on the camera parameter and the depth map of the subject.
In reference to Wadhwa et al. (hereinafter referred to as Wadhwa), Wadhwa teaches estimating a shape of the subject based on a depth map of the subject (Wadhwa, Paragraph 0027).
These arts are analogous since they are all related to imaging devices generating depth maps. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the combination of Sabato and Blayvas with the method of determining object shapes as seen in Wadhwa to allow the device to perform depth-aware image processing or some other image processing on the image (Wadhwa, Paragraph 0027). Further, the limitation “estimate a shape of the subject based on the camera parameter and the depth map of the subject” is met since the depth map is based on the camera parameter.
Regarding claim 11, the combination of Sabato, Blayvas and Wadhwa teaches the learned model generator according to claim 10 (see claim 10 analysis), wherein the one or more processors being configured to control a focus adjustment mechanism so as to stop the optical system at a near point end and a far point end (Blayvas, Fig. 2).
Regarding claim 12, the combination of Sabato, Blayvas and Wadhwa teaches the learned model generator according to claim 10 (see claim 10 analysis), wherein the one or more processors being configured to control the focus adjustment mechanism so as to linearly move the optical system from a near point end to a far point end (Blayvas, Fig. 2).
Regarding claim 13, the combination of Sabato, Blayvas and Wadhwa teaches the learned model generator according to claim 10 (see claim 10 analysis), wherein the one or more processors being configured to control the focus adjustment mechanism so as to stop the optical system at each of a near point end and a far point end within a predetermined period of time (Blayvas, Fig. 2).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESLEY JASON CHIU whose telephone number is (571)270-1312. The examiner can normally be reached Mon-Fri: 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Twyler Haskins can be reached at (571) 272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WESLEY J CHIU/ Examiner, Art Unit 2639
/TWYLER L HASKINS/ Supervisory Patent Examiner, Art Unit 2639