Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed on 10/13/2025 has been entered.
Claims 1, 2, 4, 5, 7-16 and 18-21 are currently pending and an Office Action on the merits follows.
Response to Arguments
Applicant's arguments filed on 10/13/2025 have been fully considered, but they are not persuasive. The applicant states that Kohli (US 2012/0288186) fails to disclose “wherein the electronic device is configured to obtain the primary object based on the first input”. In particular, the applicant argues that the human annotation described in paragraph [0042] does not result in obtaining the primary object. The examiner respectfully disagrees. The first portion of claim 1 recites that the primary object is obtained based on the first image data, and the later part of the claim states that this primary object is obtained based on the first input. This indicates a broad relationship between “the first image data” and “the first input”. As provided in the Non-Final OA dated 7/11/25 (see page 3), the primary object is obtained based on the manual annotation performed by a user (see paragraph [0020] of Kohli). Since this manual annotation leads to the obtaining of primary objects (an initial box/manual annotation of a tree or a bird), Kohli can be read as teaching the limitation of obtaining the primary object based on the first input. The applicant’s invention may perform the obtaining step in a different way than Kohli; however, this difference is not apparent in the current claim wording. Therefore, the previous rejection under 35 U.S.C. 103 as being unpatentable over Kohli in view of Shrivastava is maintained.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4, 5, 7-16 and 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over Kohli et al. (US 2012/0288186 A1) in view of Shrivastava et al. (“Learning from Simulated and Unsupervised Images through Adversarial Training”).
Regarding claims 1 and 15, Kohli discloses an electronic device comprising: memory circuitry (Kohli, paragraph 82); interface circuitry (Kohli, paragraph 24); and processor circuitry (Kohli, Fig. 13, paragraphs 3 and 82: Kohli discloses an electronic device for providing an enhanced training sample set containing new synthesized training images that are artificially generated from an original training sample set);
wherein the processor circuitry is configured to: obtain first image data associated with a first image, obtain a primary object based on the first image data (Kohli, Figs. 1, 8 and paragraph 20: a training image containing an object (bird or tree) is given as input; a bounding box identifies the object);
generate one or more secondary objects based on a first augmentation operation of the primary object (Kohli, paragraph 22: the object of the original training image is enhanced by generating a variation of the object (flipped bird or rotated tree));
obtain primary background data from the first image data (Kohli, Figs. 1 and 8 and paragraphs 20 and 62: the pixels belonging to the background are extracted);
generate secondary background data based on a second augmentation operation of the primary background data (Kohli, Figs. 1, 8 and paragraphs 20 and 62: a different background is generated from the original background);
provide a first data set by combining the primary object and/or the one or more secondary objects with the primary background data and/or the secondary background data (Kohli, Figs. 1 and 8, case C and paragraph 62: the object (the bird) of the original image is scaled in a transformed image; the background (the tree and the beach) is cropped and scaled in the transformed image; the transformed image is part of an enhanced training sample set);
generate, based on the first data set, a detection model for detecting one or more objects of the same type as the primary object (Kohli, paragraph 23: an object recognition model is trained by the enhanced training sample set; Kohli, Figs. 1 and 8, case C and paragraph 62: the enhanced training sample set contains images with the object (bird) of the original image);
and provide an object detector configured to detect, based on the detection model, the one or more objects of the same type as the primary object (Kohli, paragraph 24: an image processor engine performs object classification, detection, and/or segmentation operations based on the training provided by the enhanced training sample set),
wherein the interface circuitry comprises display circuitry configured to display a user interface and to receive user input; wherein the electronic device is configured to receive a first input from a user via the user interface (GUI in paragraph 42), wherein the electronic device is configured to obtain the primary object based on the first input (Kohli, paragraphs 20 and 42).
Kohli does not explicitly disclose wherein the electronic device is configured to generate, based on the first data set, a training image data set and/or a test image data set.
Shrivastava discloses generating, based on the first data set, a training image data set and/or a test image data set (Shrivastava uses a first data set of synthetic images, which are artificial images generated from real image data, to generate a set of training images that are further refined; see page 2108, left-hand column).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to have modified the invention of Kohli to include generating, based on the first data set, a training image data set and/or a test image data set, as taught by Shrivastava. The suggestion/motivation for doing so would have been to improve the training images by further refining the training data set and making it more realistic, as would be well known to one of ordinary skill in the art. Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to have combined Shrivastava with Kohli.
Regarding claim 2, the combination of Kohli and Shrivastava discloses the electronic device according to claim 1, wherein the electronic device is configured to label the primary object (Kohli, paragraph 20).
Regarding claim 4, the combination of Kohli and Shrivastava discloses the electronic device according to claim 3, wherein the labelling of the primary object is based on the first input (Kohli, paragraph 42).
Regarding claim 5, the combination of Kohli and Shrivastava discloses the electronic device according to claim 4, wherein the display circuitry is configured to display a first user interface object representative of a first object selector associated with the primary object; wherein the electronic device is configured to: detect a selection of the first user interface object; and use the object detector according to a detection model associated with the selected first user interface object (Kohli, paragraph 42).
Regarding claim 7, the combination of Kohli and Shrivastava discloses the electronic device according to claim 1, wherein the electronic device is configured to train the detection model based on the training image data set (Kohli, paragraphs 26 and 27; Kohli, paragraphs 41 and 42).
Regarding claim 8, the combination of Kohli and Shrivastava discloses the electronic device according to claim 1, wherein the electronic device is configured to test the detection model based on the test image data set (Kohli, paragraphs 26 and 27; Kohli, paragraphs 41 and 42).
Regarding claim 9, the combination of Kohli and Shrivastava discloses the electronic device according to claim 8, wherein the electronic device is configured to detect a failed object detection based on the test of the detection model; and determine a cause of the failed object detection (Kohli, paragraphs 26 and 27; Kohli, paragraphs 41 and 42).
Regarding claim 10, the combination of Kohli and Shrivastava discloses the electronic device according to claim 9, wherein the display circuitry is configured to display a second user interface object representative of a guidance to remedy the failure (Kohli, paragraphs 26 and 27; Kohli, paragraphs 41 and 42).
Regarding claim 11, the combination of Kohli and Shrivastava discloses the electronic device according to claim 9, wherein the display circuitry is configured to display a third user interface object representative of a confidence score of the detection (Kohli, paragraphs 26 and 27; Kohli, paragraphs 41 and 42).
Regarding claim 12, the combination of Kohli and Shrivastava discloses the electronic device according to claim 1, wherein the electronic device comprises a camera configured to capture a plurality of images including the first image and to generate the first image data associated with the first image (Kohli, paragraph 82).
Regarding claim 13, the combination of Kohli and Shrivastava discloses the electronic device according to claim 1, wherein the electronic device is a user device (Kohli, paragraph 82).
Regarding claim 14, the combination of Kohli and Shrivastava discloses the electronic device according to claim 1, wherein the electronic device is a server device (Kohli, paragraph 86).
Regarding claim 16, the combination of Kohli and Shrivastava discloses the method according to claim 15, the method comprising: labelling the primary object (Kohli, paragraph 20).
Regarding claim 18, the combination of Kohli and Shrivastava discloses the method according to claim 17, wherein labelling the primary object comprises labelling the primary object based on the first input (Kohli, paragraph 42).
Regarding claim 19, the combination of Kohli and Shrivastava discloses the method according to claim 17, the method comprising: displaying, using the display circuitry, a first user interface object representative of a first object selector associated with the primary object; detecting a selection of the first user interface object; and using the object detector according to a detection model associated with the selected first user interface object (Kohli, paragraph 42).
Regarding claim 20, the combination of Kohli and Shrivastava discloses the method according to claim 15, the method comprising: generating, based on the first data set, a training image data set and/or a test image data set (Kohli, paragraphs 26 and 27; Kohli, paragraphs 41 and 42).
Regarding claim 21, the combination of Kohli and Shrivastava discloses the method according to claim 20, the method comprising: training the detection model based on the training image data set (Kohli, paragraphs 26 and 27; Kohli, paragraphs 41 and 42).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAN S PARK whose telephone number is (571)272-7409. The examiner can normally be reached Monday-Friday 8:30am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHAN S PARK/Supervisory Patent Examiner, Art Unit 2669