DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6 and 7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 6 and 7 include the limitation of “setting newly parameters,” and it is unclear what the adverb “newly” is describing. Appropriate correction is required.
Claim 7 recites the limitation "the learning process" in line 3. There is insufficient antecedent basis for this limitation in the claim. There is antecedent basis for “the learning processing,” and the examiner will interpret the claim as such.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 10 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Namiki (US 2018/0260628 A1) in view of Drutsa (US 2020/0380410 A1).
Regarding claims 1, 10, and 11, Namiki teaches an information processing method comprising:
acquiring an evaluation result indicating whether an estimation result of a learning model for a first evaluation image is correct or incorrect (paragraphs [0038], [0054]) based on first evaluation data including data of at least one first evaluation image and correct answer data for the first evaluation image (paragraphs [0038], [0054]); and
executing identification processing in which feature information that is likely to cause the estimation result of the learning model to be correct is identified based on the evaluation result (paragraphs [0002], [0009], [0033]). Namiki describes an object detection system which trains an object detection model and subsequently utilizes the parameters and features associated with a correct result in further learning.
Namiki fails to teach that feature information that is likely to cause the estimation result of the learning model to be incorrect is identified based on the evaluation result.
However, Drutsa teaches feature information that is likely to cause the estimation result of the learning model to be incorrect is identified based on the evaluation result (paragraphs [0010], [0014], [0023], [0027]). Drutsa describes a method of machine learning that includes identifying a distinct set of features associated with an incorrect prediction result and further training the machine learning model based on this information. Drutsa is considered analogous to the claimed invention as it is in the same field of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Drutsa with Namiki to identify features corresponding to an incorrect result in order to improve error determination and, in turn, machine learning performance.
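As a purely illustrative aside (not drawn from Drutsa or Namiki; every name below is hypothetical), identification of feature information associated with incorrect estimation results could be sketched in Python as follows:

from collections import Counter

def features_linked_to_errors(evaluation_results, top_k=5):
    """evaluation_results: list of (feature_dict, was_correct) pairs."""
    error_counts = Counter()
    for features, was_correct in evaluation_results:
        if not was_correct:
            # Count each (feature name, value) pair seen in an incorrect prediction.
            error_counts.update(features.items())
    # Return the feature/value pairs most often associated with errors.
    return error_counts.most_common(top_k)

# Example usage with made-up evaluation data.
results = [
    ({"lighting": "dim", "occlusion": "high"}, False),
    ({"lighting": "bright", "occlusion": "low"}, True),
    ({"lighting": "dim", "occlusion": "low"}, False),
]
print(features_linked_to_errors(results))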
Regarding claim 2, Namiki in view of Drutsa teaches the information processing method according to claim 1, further comprising:
generating first learning data including data of at least one first learning image (paragraph [0038]) and correct answer data for a first learning image (Fig. 5, 6, paragraph [0038]), wherein the first learning image is generated based on the feature information (paragraph [0040]).
Regarding claim 3, Namiki in view of Drutsa teaches the information processing method according to claim 2, further comprising:
training the learning model using the first learning data (paragraph [0038]).
Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Namiki in view of Drutsa, and further in view of X. Zhang, J. Feng, H. Xiong and Q. Tian, "Zigzag Learning for Weakly Supervised Object Detection," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 4262-4270, doi: 10.1109/CVPR.2018.00448 (hereinafter "Zhang").
Regarding claim 4, Namiki in view of Drutsa teaches the information processing method according to claim 3. Namiki further teaches wherein the feature information includes at least any one of parameters set for an object for which the estimation result of the learning model was correct (paragraphs [0038], [0054]) and parameters set for an environment of the object for which the estimation result was correct (paragraphs [0037], [0038], [0040]), and generating the first evaluation image based on the parameters (paragraph [0040] – “the machine learning device can perform unsupervised learning by, for example, allowing narrower (stricter) ranges for the parameters to be used in the detection algorithm to obtain only the image data almost certain to turn out to be correct.”). Namiki describes parameters that will almost certainly produce a correct output and describes inputting environment information. While the environment information is not explicitly described as one of the parameters, it is implied that the environment information is used. Drutsa further teaches wherein the feature information includes at least any one of parameters set for an object for which the estimation result of the learning model was incorrect (paragraphs [0002], [0009], [0033]).
Namiki in view of Drutsa fails to teach setting parameters corresponding to a difficulty level of estimating an object in an image.
However, Zhang teaches setting parameters corresponding to a difficulty level of estimating an object in an image (Figures 1 and 2; sections 3.2, 4.2). Zhang describes a system of object detection that uses difficulty in a progressive learning algorithm: Zhang sorts training images by increasing difficulty, so the difficulty is implicitly included as a parameter because the model trains in the given order of increasing difficulty. Zhang further describes the difficulty information as a parameter when describing the analysis of how the learning folds affect performance; the learning folds of Zhang are the sets of images of increasing difficulty. Zhang is considered analogous to the claimed invention as it is in the same field of machine learning and object recognition. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Zhang with the teachings of Namiki in view of Drutsa in order to improve model generalizability.
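As a purely illustrative aside (this sketch is not Zhang's algorithm; the difficulty scoring and training step are assumed placeholders), training on folds of images of increasing difficulty could be outlined as:

def train_by_difficulty(samples, difficulty_of, train_step, num_folds=3):
    """samples: list of training items; difficulty_of: item -> float score;
    train_step: callable that trains on a list of items."""
    ordered = sorted(samples, key=difficulty_of)   # easiest items first
    fold_size = max(1, len(ordered) // num_folds)
    seen = []
    for i in range(0, len(ordered), fold_size):
        seen.extend(ordered[i:i + fold_size])
        # Each pass trains on all folds seen so far, in order of increasing difficulty.
        train_step(seen)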
Claim(s) 5-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Namiki in view of Drutsa and Zhang, and further in view of Han (US 2021/0125042 A1).
Regarding claim 5, Namiki in view of Drutsa and Zhang teach the information processing method according to claim 4, further comprising:
wherein the learning processing includes: first processing of setting the parameters; second processing of generating at least one second learning image based on the parameters (paragraphs [0053], [0059], [0060] – the obtaining of “partial images” can be considered generation of second learning images); and third processing of training the learning model (paragraphs [0056], [0059] – NR times suggests a third processing) using second learning data including data of the at least one second learning image (paragraphs [0053], [0059], [0060]) and correct answer data for the at least one second learning image (paragraphs [0053]-[0055], [0059], [0060]).
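As a purely illustrative aside (not the processing of any cited reference; all functions here are assumed placeholders), the recited first, second, and third processing could be sketched as:

def learning_processing(set_parameters, generate_image_and_label, train_model, num_images=4):
    params = set_parameters()                                 # first processing: set parameters
    second_learning_data = [generate_image_and_label(params)  # second processing: generate
                            for _ in range(num_images)]       # (image, correct answer) pairs
    train_model(second_learning_data)                         # third processing: train the model
    return params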
Namiki in view of Drutsa and Zhang fails to teach executing predetermined learning processing and the identification processing in parallel.
However, Han teaches executing predetermined learning processing and the identification processing in parallel (paragraph [0051]). Han is considered analogous to the claimed invention as it is in the same field of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Namiki in view of Drutsa and Zhang with Han to implement simultaneous training and inference in order to improve the training and inference processing.
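As a purely illustrative aside (this is not Han's implementation; the learning and identification steps are assumed placeholders), running learning processing and identification processing in parallel could be sketched with two threads:

import threading

def run_in_parallel(learning_step, identification_step, iterations=10):
    def learn():
        for _ in range(iterations):
            learning_step()          # one iteration of the learning processing
    def identify():
        for _ in range(iterations):
            identification_step()    # one iteration of the identification processing
    t1 = threading.Thread(target=learn)
    t2 = threading.Thread(target=identify)
    t1.start(); t2.start()           # both loops run concurrently
    t1.join(); t2.join()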
Regarding claim 6, Namiki in view of Drutsa, Zhang, and Han teach the information processing method according to claim 5. Namiki further teaches repeatedly executing the learning processing (Fig. 10, paragraph [0059]), wherein the repeatedly executing the learning processing includes a latest repetition of a first processing (Fig. 10, paragraph [0059]), and generating the first evaluation image includes generating the first evaluation image based on the newly parameters (paragraph [0040]).
Zhang further teaches setting parameters with the difficulty level at least one level higher than a difficulty level corresponding to previous parameters set in a previous repetition of a first processing (Figures 1 and 2; sections 3.2, 4.2).
Regarding claim 7, Namiki in view of Drutsa, Zhang, and Han teach the information processing method according to claim 5, further comprising: repeatedly executing the learning processing (Fig. 10, paragraph [0059]); parameters set for the environment of the object for which the estimation result was correct (paragraphs [0037], [0038], [0040] – as described for claim 4); acquiring a parameter range of a parameter whose type has been identified (paragraphs [0040], [0045], [0053], [0054], [0071] – the range is based on the likelihood of causing the model to be correct or to have a large margin of error); and generating the first learning image based on at least part of the parameter range (paragraph [0040]), the parameter range being a range from a value set in the previous repetition of the first processing to a value set in the latest repetition of the first processing (paragraph [0045] – gradually reducing the range of parameters as learning progresses inherently describes a range using values set in previous repetitions).
Zhang further teaches wherein the repeatedly executing the learning processing includes a latest repetition of a first processing including setting newly parameters with the difficulty level multiple levels higher than a difficulty level corresponding to previous parameters set in a previous repetition of a first processing (Figures 1 and 2; sections 3.2, 4.2).
Zhang also teaches identifying a type of at least any one of parameters set for the object for which the estimation result of the learning model was incorrect (section 4.2). Zhang analyzes the influence of parameters on the performance of the model and does so with two different types of parameters: the learning folds and the masking ratio. While Zhang does not specifically describe doing this for estimation results that are incorrect, it analyzes the influence of each type of parameter across the dataset, which would include incorrect predictions. Thus, the identification of a parameter type's influence on the prediction result in Zhang can be considered analogous to the claimed invention's identification of a type of parameter set solely for an incorrect result.
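As a purely illustrative aside (not drawn from any cited reference; the parameter values and image generator are assumed placeholders), acquiring a parameter range spanning the value set in a previous repetition and the value set in the latest repetition, and generating images within that range, could be sketched as:

import random

def parameter_range(previous_value, latest_value):
    # The acquired range runs from the previously set value to the latest set value.
    low, high = sorted((previous_value, latest_value))
    return low, high

def generate_within_range(previous_value, latest_value, generate_image, count=3):
    low, high = parameter_range(previous_value, latest_value)
    # Generate learning images using parameter values drawn from at least part of the range.
    return [generate_image(random.uniform(low, high)) for _ in range(count)]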
Regarding claim 8, Namiki in view of Drutsa, Zhang, and Han teach the information processing method according to claim 6. Namiki further teaches repeatedly executing the learning processing until an estimation accuracy of the learning model satisfies a set condition (Figs. 13A, 13B; paragraphs [0003], [0009], [0046], [0055], [0067]).
Regarding claim 9, Namiki in view of Drutsa, Zhang, and Han teach the information processing method according to claim 5. Namiki further teaches wherein generating the first evaluation image, the first learning image, and the second learning image includes using a cut-and-paste method on an existing image (paragraph [0059]). The method of generating a second learning image in Namiki involves extracting a region of an input image and using it as a partial image for further learning, which can be considered analogous to a cut-and-paste method on an existing image.
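As a purely illustrative aside (not the generation method of Namiki or Brown; the file paths and coordinates are assumed placeholders, and the sketch assumes the Pillow library is available), a cut-and-paste operation on an existing image could be sketched as:

from PIL import Image  # assumes Pillow is installed

def cut_and_paste(source_path, background_path, box, position, out_path):
    source = Image.open(source_path)
    background = Image.open(background_path)
    patch = source.crop(box)           # "cut" a region given as (left, upper, right, lower)
    background.paste(patch, position)  # "paste" it at (x, y) on the existing background image
    background.save(out_path)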
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Brown (WO 2020/014294 A1) describes a system of segmentation using cut-and-paste.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aidan W McCoy whose telephone number is (571)272-5935. The examiner can normally be reached 8:00 AM-5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at (571)272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AIDAN W MCCOY/Examiner, Art Unit 2611
/TAMMY PAIGE GODDARD/Supervisory Patent Examiner, Art Unit 2611