DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Specification
The disclosure is objected to because of the following informalities:
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 7, 9, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Freedman et al. (US 2021/0280312), hereinafter referred to as Freedman, in view of Lee et al. (US 2017/0200284), hereinafter referred to as Lee.
As per Claim 1, Freedman teaches an image learning device comprising: one or more processors configured to:
acquire at least one observation image obtained by imaging an observation target with an image sensor; (Freedman, Paragraph [0008], “obtaining, by one or more computing devices, a plurality of images captured by an endoscopic device during a gastroenterological procedure for a patient”)
calculate, for the observation image, an estimated distance to the observation target for each of a plurality of locations within an imaging range of the image sensor and (Freedman, Paragraph [0008], “using a machine-learned depth estimation model, the plurality of images to obtain a plurality of depth maps respectively for the plurality of images, wherein the depth map obtained for each image describes one or more depths of the respective portions of the anatomical structure from the endoscopic device”)
Freedman does not explicitly teach calculating the estimated distance by using a distance estimation parameter, or updating the distance estimation parameter based on a difference between an actual distance obtained by measuring a distance to the observation target and the estimated distance for at least one location within the imaging range.
Lee teaches calculating an estimated distance by using a distance estimation parameter and updating the distance estimation parameter based on a difference between an actual distance obtained by measuring a distance to the observation target and the estimated distance for at least one location within the imaging range. (Lee, Paragraph [0067], "Since the trainer 170 is aware of an actual depth of the long-distance image object, the trainer 170 may calculate a difference between the actual depth and the depth of the long-distance image object estimated by the distance estimator 190" and Paragraph [0068], "The trainer 170 may update the distance estimator 190 using a back propagation method in order to decrease the difference"; the distance estimation parameters are the weights of Lee's neural network)
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Lee into Freedman, because utilizing Lee's distance estimator training in conjunction with the depth estimation model of Freedman would provide a back-propagation method for updating the model's parameters, thereby increasing the accuracy of the model of Freedman.
Therefore, it would have been obvious to one of ordinary skill in the art to combine the two references to obtain the invention of Claim 1.
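For illustration only, and not as a characterization of either reference's actual implementation, the claimed arrangement may be sketched as follows in Python (using NumPy): a parameterized model estimates a distance to the observation target at each of several locations in an observation image, and the parameter is updated from the difference between the estimated distance and an actual, measured distance at one of those locations. All names and values below are hypothetical.

```python
import numpy as np

# Hypothetical sketch of the claimed limitation: a linear model whose weight
# vector plays the role of the "distance estimation parameter."
rng = np.random.default_rng(0)
num_locations, num_features = 16, 8                        # locations within the imaging range
features = rng.normal(size=(num_locations, num_features))  # stand-in for per-location image data
weights = rng.normal(size=num_features)                    # distance estimation parameter

def estimate_distances(feats, w):
    """Estimated distance to the observation target at each location."""
    return feats @ w

measured_location = 3      # location at which an actual distance was measured
actual_distance = 2.5      # e.g., a sensor-measured value
learning_rate = 0.01

for step in range(200):
    estimated = estimate_distances(features, weights)
    # Difference between the actual distance and the estimate at one location.
    difference = estimated[measured_location] - actual_distance
    # Gradient of the squared difference with respect to the parameter
    # (back-propagation reduces to this single chain-rule step for a linear model).
    gradient = 2.0 * difference * features[measured_location]
    weights -= learning_rate * gradient   # update the distance estimation parameter
```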
As per Claim 2, Freedman in view of Lee teaches the image learning device according to claim 1, wherein the one or more processors are configured to: calculate the estimated distance only for some locations in the observation image. (Freedman, Paragraph [0008], “using a machine-learned depth estimation model… wherein the depth map obtained for each image describes one or more depths of the respective portions of the anatomical structure from the endoscopic device”)
The rationale applied to the rejection of claim 1 has been incorporated herein.
As per Claim 3, Freedman in view of Lee teaches the image learning device according to claim 2, wherein the one or more processors are configured to: calculate the estimated distance for a location at which the actual distance is measured. (Lee, Paragraph [0067], “Since the trainer 170 is aware of an actual depth of the long-distance image object, the trainer 170 may calculate a difference between the actual depth and the depth of the long-distance image object estimated by the distance estimator 190”)
The rationale applied to the rejection of claim 2 has been incorporated herein.
As per Claim 4, Freedman in view of Lee teaches the image learning device according to claim 1, further comprising: a distance estimation model, wherein the one or more processors are configured to: use the distance estimation model to calculate the estimated distance using the distance estimation parameter and to perform learning to update the distance estimation parameter based on the difference. (Lee, Paragraph [0068], “The trainer 170 may update the distance estimator 190 using a back propagation method in order to decrease the difference. For example, the trainer 170 may propagate the difference in a reverse direction from the output layer to the input layer via hidden layer in the artificial neural network... The aforementioned training operation may be iteratively performed until the difference is less than a predetermined or desired threshold value”)
The rationale applied to the rejection of claim 1 has been incorporated herein.
As per Claim 7, Freedman in view of Lee teaches the image learning device according to claim 1, wherein the one or more processors are configured to: update the distance estimation parameter such that the difference between the actual distance and the estimated distance is a minimum value or is equal to or less than a predetermined threshold value. (Lee, Paragraph [0068], “The trainer 170 may update the distance estimator 190 using a back propagation method in order to decrease the difference. For example, the trainer 170 may propagate the difference in a reverse direction from the output layer to the input layer via hidden layer in the artificial neural network... The aforementioned training operation may be iteratively performed until the difference is less than a predetermined or desired threshold value”)
The rationale applied to the rejection of claim 1 has been incorporated herein.
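Again purely as an illustrative sketch of the claim 4 and claim 7 limitations discussed above (and of the back propagation described by Lee at Paragraph [0068]), the example below trains a small two-layer network and stops once the difference between the actual and estimated distance falls below a predetermined threshold; the network structure, threshold value, and learning rate are hypothetical and are not taken from either reference.

```python
import numpy as np

# Hypothetical sketch: a tiny two-layer network as the distance estimation model;
# its weights serve as the distance estimation parameter.
rng = np.random.default_rng(1)
feature = rng.normal(size=8)                 # features at the location with a measured distance
w_hidden = rng.normal(size=(4, 8)) * 0.5     # input-to-hidden weights
w_out = rng.normal(size=4) * 0.5             # hidden-to-output weights
actual_distance = 2.5
threshold = 1e-3
learning_rate = 0.01

for step in range(10_000):
    hidden = np.tanh(w_hidden @ feature)     # forward pass through the hidden layer
    estimated_distance = w_out @ hidden      # estimated distance at the measured location
    difference = estimated_distance - actual_distance
    if abs(difference) <= threshold:         # iterate until the difference is below the threshold
        break
    # Back-propagate the difference from the output layer to the input layer via
    # the hidden layer, then update the parameters to decrease the difference.
    grad_out = difference * hidden
    grad_hidden = np.outer(difference * w_out * (1.0 - hidden ** 2), feature)
    w_out -= learning_rate * grad_out
    w_hidden -= learning_rate * grad_hidden
```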
As per Claim 9, Freedman in view of Lee teaches the image learning device according to claim 1, wherein the observation target is a digestive tract, and the observation image is an endoscopic image. (Freedman, Paragraph [0008], “obtaining, by one or more computing devices, a plurality of images captured by an endoscopic device during a gastroenterological procedure for a patient”)
The rationale applied to the rejection of claim 1 has been incorporated herein.
As per Claim 11, Claim 11 recites an image learning method performed by the image learning device as claimed in Claim 1. Therefore, the rejection and rationale are analogous to those set forth for Claim 1.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Freedman et al. (US 2021/0280312), hereinafter referred to as Freedman, in view of Lee et al. (US 2017/0200284), hereinafter referred to as Lee, as applied to Claim 1 above, and further in view of Noda et al. (US 2018/0373942), hereinafter referred to as Noda.
As per Claim 10, Freedman in view of Lee teaches the image learning device according to claim 1.
Freedman in view of Lee does not explicitly teach wherein the one or more processors are configured to: use a value acquired through laser-based distance measurement as the actual distance.
Noda teaches wherein the one or more processors are configured to: use a value acquired through laser-based distance measurement as the actual distance. (Noda, Paragraph [0035], "the actual distance that is a value measured by a distance sensor such as a light detection and ranging (LIDAR) sensor, but also a value used for calculating the actual distance from a known value")
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Noda into Freedman in view of Lee, because utilizing a LIDAR sensor to determine the actual distance would provide an accurate, sensor-based distance measurement.
Therefore, it would have been obvious to one of ordinary skill in the art to combine the three references to obtain the invention of Claim 10.
Allowable Subject Matter
Claims 5-6 and 8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MING HON whose telephone number is (571)270-5245. The examiner can normally be reached M-F 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached on 571-270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MING Y HON/Primary Examiner, Art Unit 2666