DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Prior art cited in this Office action:
Ishi et al. (US 20200074231 A1, hereinafter “Ishi”)
Ohba (WO 2019059343 A1, hereinafter “Ohba”)
Boddington et al. (US 20210015560 A1, hereinafter “Boddington”)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5-7, 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Ishi et al. (US 20200074231 A1, hereinafter “Ishi”) in view of Ohba (WO 2019059343 A1, hereinafter “Ohba”).
Regarding claims 1, 14 and 15:
Ishi teaches a recognition model generation method (Ishi Abstract, [0006], where Ishi teaches a recognition model generation method) comprising:
acquiring a plurality of composite images depicting a detection target (Ishi [0030], [0049], [0094]-[0095], figs. 7 and 8, where Ishi teaches an image synthesis unit that provides composite images to a detection processing unit 140);
performing a first training, based on the plurality of composite images, to create a first recognition model configured to output object recognition results for input of an image (Ishi [0049]-[0050], [0056], figs. 7 and 8, wherein Ishi teaches training the recognition model based on the composite images to output object recognition results for the input image);
acquiring captured images of the detection target (Ishi [0056], [0074], [0110]-[0111], figs. 7 and 8, where Ishi teaches using the captured images from sampling unit 112 and the composite images);
acquiring the object recognition results as annotation data, the annotation data outputted by input of the captured images into the first recognition model (Ishi [0056], [0074], [0110]-[0111], figs. 7 and 8, where Ishi teaches using the captured images from sampling unit 112 and the recognition results as annotation data from the detection processing unit 140); and
performing a second training, based on the captured images and the annotation data, to create a second recognition model (Ishi [0006], [0056], [0074], [0088], [0110]-[0111], figs. 7 and 8, where Ishi teaches using the captured images from sampling unit 112 and the recognition results as annotation data from the detection processing unit 140 to perform a second training that updates the recognition model).
Although Ishi teaches using and updating the same model for recognition of the target object, Ishi fails to explicitly teach using the first recognition model as a teacher.
However, Ohba discloses a recognition model generation apparatus that generates a second recognition model by training a first recognition model using a captured image of a detection target as teacher data, wherein the first recognition model is a recognition model generated by training an original recognition model used in object recognition using a composite image generated on the basis of three-dimensional shape data of the detection target as teacher data (Ohba [0024]-[0027] and [0037]-[0038], [0065], figs. 4 and 5).
Therefore, taking the teachings of Ishi and Ohba as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to receive composite images, perform annotation and object detection with a trained model, update the model by learning from the previous training, and further retrain the model accordingly, in order to properly annotate and detect the object in the image.
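For illustration only, the two-stage flow mapped above (first training on composite images, annotation of captured images by the first model, second training on the annotated captures) can be summarized in the following minimal sketch. This is the examiner's own summary of the general technique, not code drawn from Ishi, Ohba, or the application; the toy linear model, tensor shapes, and all names (make_model, train, and so on) are assumptions.

```python
# Minimal sketch of the claimed two-stage pipeline; hypothetical names
# and a toy model throughout (not drawn from the cited references).
import torch
import torch.nn as nn

def make_model(num_classes: int = 2) -> nn.Module:
    # Stand-in recognition model; a real system would use a detector.
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))

def train(model: nn.Module, images: torch.Tensor, labels: torch.Tensor,
          epochs: int = 5) -> nn.Module:
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(images), labels).backward()
        opt.step()
    return model

# First training: composite images (e.g., rendered from 3D shape data)
# with labels known from the synthetic scene.
composite_images = torch.rand(100, 3, 32, 32)
composite_labels = torch.randint(0, 2, (100,))
first_model = train(make_model(), composite_images, composite_labels)

# Annotation: captured images are fed to the first model and its
# outputs are kept as annotation data (the "teacher" step).
captured_images = torch.rand(20, 3, 32, 32)
with torch.no_grad():
    annotation_data = first_model(captured_images).argmax(dim=1)

# Second training: captured images plus the model-generated annotations
# yield the second recognition model.
second_model = train(make_model(), captured_images, annotation_data)
```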
Regarding claim 2:
Ishi in view of Ohba teaches wherein in the second training, the first recognition model is retrained (Ishi [0056], [0074], [0110]-[0111], figs. 7 and 8, where Ishi teaches that the first recognition model is retrained).
Regarding claim 3:
Ishi in view of Ohba teaches wherein the second training is performed with the captured images fewer in number than the composite images used during the first training (Ishi [0056], [0074], [0110]-[0111], figs. 7 and 8).
Regarding claim 5:
Ishi in view of Ohba teaches wherein in the second training, the second recognition model is generated using the captured images to which the annotation data are provided (Ishi [0056], [0074], [0110]-[0111], figs. 7 and 8 wherein the annotation is provided for the captured images).
Regarding claim 6:
Ishi in view of Ohba teaches wherein in the second training, the first recognition model is retrained by performing domain adaptation using first captured images of the detection target to which annotation data have not been provided, and second captured images of the detection target to which the annotation data are provided are used to evaluate the second recognition model (Ishi [0056], [0074], [0110]-[0111], figs. 7 and 8).
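As a point of reference only, one common realization of this arrangement is pseudo-label self-training, sketched below under the assumption that it stands in for whatever domain-adaptation method the claim covers; adapt_and_evaluate and all tensor shapes are hypothetical.

```python
# Sketch of claim 6's split: unannotated captures drive adaptation,
# annotated captures are reserved for evaluation. Hypothetical names;
# pseudo-label self-training stands in for any domain-adaptation method.
import torch
import torch.nn as nn

def adapt_and_evaluate(first_model: nn.Module,
                       unlabeled_captured: torch.Tensor,
                       labeled_captured: torch.Tensor,
                       labels: torch.Tensor):
    # Adaptation: the first model pseudo-labels the unannotated captured
    # images, then is retrained on them (cf. claim 2's retraining).
    with torch.no_grad():
        pseudo = first_model(unlabeled_captured).argmax(dim=1)
    opt = torch.optim.Adam(first_model.parameters(), lr=1e-4)
    for _ in range(5):
        opt.zero_grad()
        nn.functional.cross_entropy(first_model(unlabeled_captured), pseudo).backward()
        opt.step()
    second_model = first_model  # retrained in place

    # Evaluation: the annotated captured images only score the second
    # model; they are never used for the adaptation above.
    with torch.no_grad():
        preds = second_model(labeled_captured).argmax(dim=1)
    return second_model, (preds == labels).float().mean().item()
```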
Regarding claim 7:
Ishi in view of Ohba teaches wherein in a case in which a degree of confidence in annotation of a captured image is equal to or less than a threshold, a composite image of the detection target is generated so as to have a feature identical to that of the captured image, and
the composite image is used in the second training (Ishi [0050]).
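For orientation, the threshold logic recited here reduces to a short branch. The sketch below is illustrative only; CONFIDENCE_THRESHOLD, render_composite_like (a stand-in for whatever renderer matches the capture's features), and the softmax-based confidence score are all assumptions, not features of Ishi.

```python
# Illustrative branch for claim 7: low-confidence captures trigger the
# synthesis of a matching composite image for the second training.
import torch

CONFIDENCE_THRESHOLD = 0.6  # assumed value; the claim only recites "a threshold"

def maybe_synthesize(first_model, captured_image, render_composite_like):
    # Treat the max softmax probability as the degree of confidence
    # in the annotation of this captured image.
    with torch.no_grad():
        probs = torch.softmax(first_model(captured_image.unsqueeze(0)), dim=1)
    if probs.max().item() <= CONFIDENCE_THRESHOLD:
        # Render a composite image with features matching the capture
        # and hand it to the second training.
        return render_composite_like(captured_image)
    return None  # confident enough; no extra composite needed
```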
Regarding claim 11:
Ishi in view of Ohba teaches wherein in the annotation, the annotation data are acquired by having the first recognition model recognize removed images yielded by removing noise from the captured images, and in the second training, the first recognition model is retrained using the captured images (Ishi [0053], fig. 4).
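One way to picture this limitation: annotation runs on denoised copies, while retraining still consumes the raw captures. The sketch below uses a simple 3x3 average blur as a placeholder for noise removal; the blur, the function name, and the shapes are assumptions.

```python
# Sketch for claim 11: annotate denoised ("removed") images, but retrain
# on the original captured images. The average blur is only a placeholder.
import torch

def annotate_with_denoising(first_model, captured_images: torch.Tensor):
    # Noise removal: per-channel 3x3 mean filter (N, 3, H, W input).
    kernel = torch.full((3, 1, 3, 3), 1.0 / 9.0)
    removed = torch.nn.functional.conv2d(captured_images, kernel,
                                         padding=1, groups=3)
    # Annotation data come from the denoised images...
    with torch.no_grad():
        annotation = first_model(removed).argmax(dim=1)
    # ...while the second training receives the untouched captures.
    return captured_images, annotation
```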
Regarding claim 12:
Ishi in view of Ohba teaches wherein the composite images are generated using a texture corresponding to a material of the detection target identified based on an image of the detection target captured by imaging means, or a texture selected from a template corresponding to any material (Ishi [0023], [0047]-[0048]).
Regarding claim 13:
Ishi in view of Ohba teaches wherein the annotation data have at least one of a mask image of the detection target and a bounding box surrounding the detection target in a captured image that is acquired (Ishi [0053], fig. 4).
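For reference, the alternative recited here ("at least one of a mask image and a bounding box") maps onto a small container like the sketch below; the class name and field layout are the examiner's assumptions, not structures from the cited references.

```python
# Hypothetical container for claim 13's annotation data: a mask image
# and/or a bounding box around the detection target.
from dataclasses import dataclass
from typing import Optional, Tuple
import torch

@dataclass
class AnnotationData:
    mask: Optional[torch.Tensor] = None                 # HxW binary mask
    bbox: Optional[Tuple[int, int, int, int]] = None    # (x0, y0, x1, y1)

    def __post_init__(self):
        # The claim requires at least one of the two.
        if self.mask is None and self.bbox is None:
            raise ValueError("annotation needs a mask and/or a bounding box")
```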
Claims 4 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Ishi et al. (US 20200074231 A1, hereinafter “Ishi”) in view of Ohba (WO 2019059343 A1, hereinafter “Ohba”), and further in view of Boddington et al. (US 20210015560 A1, hereinafter “Boddington”).
Regarding claim 4:
Ishi in view of Ohba fails to explicitly teach wherein the composite images are generated based on 3D shape data of the detection target using an imaging guide for capturing the image of the subject.
However, Boddington teaches an artificial-intelligence intra-operative surgical system and method wherein the system estimates three-dimensional anatomical shape information using the 3D Shape Modeling Module 10, followed by a registration (mapping) of an alignment grid to an annotated image using the Image Registration Module 9. The system produces a composite image display of any combination of aligned preoperative image, 3D model, and alignment grid using an image composition module 11 (Boddington [0007], [0008], [0114]-[0115], fig. 5B).
Therefore, taking the teachings of Ishi, Ohba and Boddington as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to generate the composite images based on 3D shape data of the target and to use an imaging guide to guide the capturing of the image, since this is a well-known technique in the art and, when used, produces predictable results, such as obtaining composite images with enough detail to allow viewing of relevant information regarding the object of interest.
Regarding claim 8:
Ishi in view of Ohba and further in view of Boddington teaches wherein at least one of the captured images is captured based on an imaging guide for capturing the at least one of the captured images, the imaging guide being provided based on 3D shape data (Boddington [0007], [0008], [0114]-[0115], fig. 5B).
Regarding claim 9:
Ishi in view of Ohba and further in view of Boddington teaches wherein the at least one of the captured images is captured by controlling, based on the imaging guide, a robot having attached thereto an imaging apparatus configured to acquire the at least one of the captured images of the detection target (Boddington [0006]-[0008], [0073], [0114]-[0115], fig. 5B).
Regarding claim 10:
Ishi in view of Ohba and further in view of Boddington teaches wherein the imaging guide includes an imaging direction of the detection target as determined based on the 3D shape data (Boddington [0007], [0008], [0114]-[0115], fig. 5B).
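As a geometric aside, one plausible way to derive an imaging direction from 3D shape data is to aim the camera along the shape's smallest principal axis, i.e. the normal of its best-fit plane. The sketch below is the examiner's illustration, not Boddington's method; the SVD approach and the function name are assumptions.

```python
# Illustrative only: derive a candidate imaging direction from a point
# cloud by taking the normal of its best-fit plane via SVD.
import numpy as np

def imaging_direction(vertices: np.ndarray) -> np.ndarray:
    centered = vertices - vertices.mean(axis=0)
    # Rows of vt are principal axes; the last one has the smallest
    # spread, i.e. the best-fit-plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    return n / np.linalg.norm(n)

# A plate lying in the x-y plane yields a viewing direction along z.
plate = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
print(imaging_direction(plate))  # ~ [0, 0, +/-1]
```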
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEDNEL CADEAU whose telephone number is (571)270-7843. The examiner can normally be reached Mon-Fri 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chieh Fan can be reached at 571-272-3042. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WEDNEL CADEAU/
Primary Examiner, Art Unit 2632
January 9, 2026