Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings Objection
Figures 1 and 3-5 are objected to as depicting block diagrams without “readily identifiable” descriptors of each block, as required by 37 CFR 1.84(n). Rule 84(n) requires “labeled representations” of graphical symbols, such as blocks, and provides that symbols that are “not universally recognized may be used, subject to approval by the Office, if they are not likely to be confused with existing conventional symbols, and if they are readily identifiable.” In the case of Figures 1 and 3-5, the blocks are not readily identifiable per se and therefore require the insertion of text identifying the function of each block. That is, each vacant block should be provided with a corresponding label identifying its function or purpose.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4, 6-13, 15-16, and 18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zhou et al. (US20210387646, hereinafter “Zhou”).
Claim 1. (Currently Amended) Zhou teaches A computer-implemented method ([0086] “computer program is configured to implement any one of the possible visual perception methods”) for providing training image data for training a function, (abstract, “train the perception network to be trained according to the edited image and the label.”) the method comprising:
providing at least one annotated image, wherein the annotated image includes at least one object comprising an annotation assigned to the at least one object, ([0026] “acquiring image data and model data containing the perceived target, where the image data comprises a 2D image and a label”)
wherein the annotation describes an image in which the at least one object is contained, ([0161] “vehicle is labeled with the contents, including: the category of the perceived target (i.e., the vehicle), the corresponding moving component of the perceived target (i.e., the front-right door or trunk cover), the status of the front-right door and trunk (i.e., open),”)
selecting an object in the annotated image, ([0130] “perceiving target vehicle, the moving components of the vehicle can be divided to include: at least one of a front-left door, a rear-left door, a front-right door, a rear-right door, a trunk cover and a bonnet.”)
replacing the image described by the annotation with an area of another image to remove the selected object together with the annotation assigned to the selected object from the annotated image and produce a modified annotated image, ([0182] “In the two edited images, which are the output results in FIG. 7, the filled image is superimposed on the second visible area, and the superimposed image is used to replace the moving component in the 2D image, i.e., the trunk cover or the front-right door,”)
providing the training image data containing the modified annotated image. ([0182] “thus the edited image available for training with the label of the pose of the moving component is finally obtained.”)
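As an illustrative aside (hypothetical code, not drawn from the application or from Zhou), the mapped steps of claim 1, namely selecting an annotated object, overwriting the image area described by its annotation with the corresponding area of another image, and dropping the annotation, can be sketched in Python, assuming annotations are axis-aligned bounding boxes:

```python
import numpy as np

def remove_annotated_object(image, annotations, index, source_image):
    """Hypothetical sketch: remove the object selected by `index` from an
    annotated image by overwriting the image area described by its
    bounding-box annotation with the same-size, same-position area of
    another image, then drop that annotation from the list."""
    x0, y0, x1, y1 = annotations[index]["bbox"]
    modified = image.copy()
    # Replace the annotated area with the corresponding area of the
    # other image (same size and same position).
    modified[y0:y1, x0:x1] = source_image[y0:y1, x0:x1]
    remaining = [a for i, a in enumerate(annotations) if i != index]
    return modified, remaining

# Toy example: a 4x4 image with one annotated 2x2 "object".
img = np.zeros((4, 4), dtype=np.uint8)
img[1:3, 1:3] = 255                       # the object
background = np.full((4, 4), 7, dtype=np.uint8)
anns = [{"bbox": (1, 1, 3, 3), "label": "object"}]
out, remaining = remove_annotated_object(img, anns, 0, background)
```

The modified image, with the object region overwritten and its annotation removed, would then be added to the training image data.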
Claim 2. (Currently Amended) Zhou teaches The method of claim 1, wherein the annotated image has a background (H). (Figs. 3b and 7 show that the annotated image has a background.)
[Reproduced figure from Zhou: media_image1.png (greyscale)]
Claim 4. (Currently Amended) Zhou teaches The method of claim 1, wherein the annotated image comprises a multiplicity of preferably different objects. ([0130] “components of the vehicle can be divided to include: at least one of a front-left door, a rear-left door, a front-right door, a rear-right door, a trunk cover and a bonnet.”)
Claim 6. (Currently Amended) Zhou teaches The method of claim 1, wherein, in order to generate a plurality of different modified annotated images, selecting and replacing are repeated until the modified annotated image has no more objects, ([0182] “the superimposed image is used to replace the moving component in the 2D image, i.e., the trunk cover or the front-right door,” where the components known by the system as moving components, i.e., the trunk cover and the front-right door, are selected and replaced) wherein the different modified annotated images are added to the training image data. ([0182] “thus the edited image available for training with the label of the pose of the moving component is finally obtained.”)
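The iterated variant mapped for claim 6, repeating selection and replacement until no annotated objects remain and collecting each intermediate result as additional training data, can likewise be sketched (hypothetical code; bounding-box annotations and a filler image are assumptions, not disclosures of either reference):

```python
import numpy as np

def strip_objects(image, annotations, filler):
    """Hypothetical sketch: repeatedly remove one annotated object by
    overwriting its bounding box with the corresponding filler-image
    area, collecting each intermediate (image, annotations) pair, until
    no annotated objects remain."""
    results = []
    img, anns = image.copy(), list(annotations)
    while anns:
        x0, y0, x1, y1 = anns.pop()["bbox"]   # select an object
        img = img.copy()
        img[y0:y1, x0:x1] = filler[y0:y1, x0:x1]
        results.append((img, list(anns)))      # a modified annotated image
    return results

# Toy example: two annotated 2x2 "objects" on a 4x4 image.
img = np.zeros((4, 4), dtype=np.uint8)
img[0:2, 0:2] = 1
img[2:4, 2:4] = 2
filler = np.zeros((4, 4), dtype=np.uint8)
anns = [{"bbox": (0, 0, 2, 2)}, {"bbox": (2, 2, 4, 4)}]
out = strip_objects(img, anns, filler)
```

Each pair in the result would be a progressively emptier modified annotated image added to the training image data.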
Claim 7. (Currently Amended) Zhou teaches The method of claim 1, wherein the annotated image and the modified annotated image are processed to generate further annotated images, wherein the further annotated images are added to the training image data. ([0182] “thus the edited image available for training with the label of the pose of the moving component is finally obtained.”)
Claim 8. (Currently Amended) Zhou teaches The method of claim 1, wherein the replacement of the image area described by the annotation comprises overwriting the image. ([0181] “S609, superimposing the filling image with the second visible area, and replacing the moving component in the 2D image with the superimposed image to generate the edited image.”)
Claim 9. (Currently Amended) Zhou teaches The method of claim 8, wherein a color, a random pattern, or an area of another image is used for overwriting. ([0182] “the filled image is superimposed on the second visible area, and the superimposed image is used to replace the moving component in the 2D image, i.e., the trunk cover or the front-right door,”)
Claim 10. (Currently Amended) Zhou teaches The method of claim 1, wherein the area of the other image corresponds to the image, in particular is of the same size and/or is in the same position. ([0167] “Determining the front-right door as the moving component, the pose information of the 6-degrees of freedom corresponding to the front-right door includes: the rotating direction, the final position, the required rotating angle, etc., in a completely open status.” And [0185] “the component-level CAD 3D model aligned with the object in the 2D image is used to guide the 2D component area to perform reasonable motions and changes, so as to enable the 2D object in the image to show different status,”)
Claim 11. (Currently Amended) Zhou teaches The method of claim 1, wherein the annotation contains information about a size and a position of the at least one object in the annotated image and/or about all pixels associated with the object in the annotated image. (Fig. 3f shows an annotation that contains information about the size of the trunk as well as the position of the trunk, which is “open.”)
[Reproduced figure from Zhou: media_image2.png (greyscale)]
Claim 12. (Currently Amended) Zhou teaches The method of claim 1, wherein the annotation contains a border of the object. (Fig. 3f shows an annotation that contains a border of the object.)
Claim 13. (Currently Amended) Zhou teaches The method of claim 12, wherein the border is configured as a rectangle and is optimal in such a way that it delimits the smallest possible image area in which the bordered object can still be contained. (Fig. 3f shows that the border of the object is a rectangle and is the smallest possible rectangle containing the object.)
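The “optimal” border recited in claim 13, the smallest rectangle that still contains the bordered object, can be illustrated with a short hypothetical sketch that derives a tight bounding box from a per-pixel object mask (an assumption for illustration; neither reference discloses this code):

```python
import numpy as np

def tight_bbox(mask):
    """Hypothetical sketch: the smallest axis-aligned rectangle (x0, y0,
    x1, y1) that still contains every pixel of the object, computed from
    a boolean per-pixel object mask."""
    ys, xs = np.nonzero(mask)   # coordinates of all object pixels
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

# Toy example: a 2x3 object inside a 5x5 mask.
mask = np.zeros((5, 5), dtype=bool)
mask[1:3, 2:5] = True
bbox = tight_bbox(mask)   # half-open (x0, y0, x1, y1) convention
```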
Claim 15. (Currently Amended) Zhou teaches The method of claim 1, wherein the selection of an object in the annotated image takes place ([0182] “the superimposed image is used to replace the moving component in the 2D image, i.e., the trunk cover or the front-right door,”) based on the annotation assigned to this object. ([0161] “the 2D image of the vehicle is labeled with the contents, including: the category of the perceived target (i.e., the vehicle), the corresponding moving component of the perceived target (i.e., the front-right door or trunk cover), the status of the front-right door and trunk (i.e., open),”)
Claim 16. (Currently Amended) Zhou teaches The method of claim 1, wherein the annotation of each object in the annotated image contains information about identification comprising at least one of a type, description, or a nature of the object, the object's position on the image, or a segmentation. ([0161] “the 2D image of the vehicle is labeled with the contents, including: the category of the perceived target (i.e., the vehicle), the corresponding moving component of the perceived target (i.e., the front-right door or trunk cover), the status of the front-right door and trunk (i.e., open),”)
Claim 18. (Currently Amended) Zhou teaches A system for generating and providing training image data for training a function, (abstract, “train the perception network to be trained according to the edited image and the label.”) the system comprising: a first interface configured to receive at least one annotated image, ([0068] “an acquisition module, configured to acquire image data and model data containing a perceived target, where the image data includes a 2D image and a label,”) wherein the annotated image has at least one object with an annotation assigned to the at least one object, ([0026] “acquiring image data and model data containing the perceived target, where the image data comprises a 2D image and a label”) wherein the annotation defines an image in which the at least one object is contained, ([0161] “vehicle is labeled with the contents, including: the category of the perceived target (i.e., the vehicle), the corresponding moving component of the perceived target (i.e., the front-right door or trunk cover), the status of the front-right door and trunk (i.e., open),”)
a computing facility configured to select an object in the annotated image, ([0130] “perceiving target vehicle, the moving components of the vehicle can be divided to include: at least one of a front-left door, a rear-left door, a front-right door, a rear-right door, a trunk cover and a bonnet.”)
and to replace the image described by the annotation with an area of another image in order to remove the selected object together with the annotation assigned to the selected object from the annotated image and to generate a modified annotated image, ([0182] “In the two edited images, which are the output results in FIG. 7, the filled image is superimposed on the second visible area, and the superimposed image is used to replace the moving component in the 2D image, i.e., the trunk cover or the front-right door,”) a second interface ([0070] “a training module, configured to train the perception network to be trained according to the editing image and the label”) configured to provide the training image data containing the modified annotated image. ([0182] “thus the edited image available for training with the label of the pose of the moving component is finally obtained.”)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US20210387646, hereinafter “Zhou”) in view of Kent et al. (US20210082101, hereinafter “Kent”).
Claim 3. (Currently Amended) Zhou teaches The method of claim 2,
Zhou does not explicitly teach wherein the background has a digital image of a printed circuit board.
Kent teaches wherein the background has a digital image of a printed circuit board. ([0032] “visible image of the entire PCB”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou such that the background of the image is a digital image of a printed circuit board, as taught by Kent, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that doing so “allows the MSI system to be expanded to incorporate images of the electronic item acquired by other imaging or scanning modalities” (Kent, [0039]).
Claim 5. (Currently Amended) Zhou teaches The method of claim 1,
Zhou does not explicitly teach wherein the at least one object is designed as a digital image of an electronic component for populating a printed circuit board.
Kent teaches wherein the at least one object is designed as a digital image of an electronic component for populating a printed circuit board. ([0032] “visible image is analyzed to identify electronic components on the PCB 14.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou such that the at least one object is a digital image of an electronic component on a printed circuit board, as taught by Kent, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that doing so “allows the MSI system to be expanded to incorporate images of the electronic item acquired by other imaging or scanning modalities” (Kent, [0039]).
Claims 14 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US20210387646, hereinafter “Zhou”) in view of Lipson et al. (US7167583, hereinafter “Lipson”).
Claim 14. (Currently Amended) Zhou teaches The method of claim 1 ,
Zhou does not explicitly teach wherein at least one image in which no objects are contained is added to the training image data.
Lipson teaches wherein at least one image in which no objects are contained is added to the training image data. (col. 15, line 49: “The models may need to be trained on board specific images… examples of bare pads and the surround without the part (known as the "bare" image).”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou to add a training image that contains no objects, as taught by Lipson, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that “accurate inspection test results with the usefulness and desirability of performing rapid image analysis can be achieved” (Lipson, col. 4, line 30).
Claim 17. (Currently Amended) Zhou teaches The method of claim 1 ,
Zhou does not explicitly teach wherein each object has partial objects.
Lipson teaches wherein each object has partial objects. (col. 18, line 28: “the geometry model can measure the true dimensions of the part and its subparts from the placed image and the snapshot.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou such that each object has partial objects, as taught by Lipson, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that “accurate inspection test results with the usefulness and desirability of performing rapid image analysis can be achieved” (Lipson, col. 4, line 30).
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Lipson et al. (US7167583, hereinafter “Lipson”) in view of Zhou et al. (US20210387646, hereinafter “Zhou”).
Claim 21. (Currently Amended) Lipson teaches A method for checking an accuracy of a population of printed circuit boards (Abstract, “printed circuit board inspection system,” and col. 4, line 30, “accurate inspection test results”) in a production of printed circuit boards, (col. 19, line 22, “to be used in full production board inspection.”) the method comprising: providing at least one image of a printed circuit board produced according to a specific process step of a process (col. 10, line 5, “image capture system 22 may be provided, for example, as one or more cameras or sensors which capture an image of an object to be inspected.”); and analyzing the at least one image by a trained function in order to check the accuracy of the population of the printed circuit board, (col. 11, line 46, “inspection system 10 (FIG. 1) to inspect printed circuit boards (PCBs).”) wherein the function has been trained with training image data, wherein the training image data is provided according to a method comprising: providing at least one annotated image, (col. 13, line 65, “Each group should have labeled positive (part present) and negative (part absent) examples. The system will train the models on the learn group and verify they are working properly on the test set.”) wherein the annotated image includes at least one object comprising an annotation assigned to the at least one object, wherein the annotation describes an image in which the at least one object is contained, (col. 12, line 59, “grouping of parts based on their structure or appearance. Thus, it is useful to annotate any part libraries with a definition of visual classes,” and col. 13, line 60, “For each type of part on the board, we want to train the models to be able to detect when the part is present and when it is absent. … Each group should have labeled positive (part present) and negative (part absent) examples.”)
Lipson does not explicitly teach selecting an object in the annotated image, and replacing the image described by the annotation with an area of another image to remove the selected object together with the annotation assigned to the selected object from the annotated image and produce a modified annotated image, and wherein the training image data contains the modified annotated image.
Zhou teaches selecting an object in the annotated image, ([0130] “perceiving target vehicle, the moving components of the vehicle can be divided to include: at least one of a front-left door, a rear-left door, a front-right door, a rear-right door, a trunk cover and a bonnet.”)
and replacing the image described by the annotation with an area of another image to remove the selected object together with the annotation assigned to the selected object from the annotated image and produce a modified annotated image, ([0182] “In the two edited images, which are the output results in FIG. 7, the filled image is superimposed on the second visible area, and the superimposed image is used to replace the moving component in the 2D image, i.e., the trunk cover or the front-right door,”)
and wherein the training image data contains the modified annotated image. ([0182] “thus the edited image available for training with the label of the pose of the moving component is finally obtained.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Lipson to select an annotated object from an image and replace it with an area of another image, generating a new training image as part of the training image data, as taught by Zhou, to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been that “perceived targets can be accurately analyzed while guaranteeing that the perception process is neither too complicated or time-consuming” (Zhou, [0148]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
Nakajima et al. (US20200397346) teaches that a component can be omitted, replaced, or added.
Buggenthin et al. (US20200167510) teaches an AI system utilizing turbine blade image data with various characteristics changed to generate images to train the system.
Wang et al., NPL “Defect Simulation in SEM Images using Generative Adversarial Networks,” teaches simulating defects using generative adversarial networks for training.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OWAIS MEMON whose telephone number is (571)272-2168. The examiner can normally be reached M-F (7:00am - 4:00pm) CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OWAIS I MEMON/Examiner, Art Unit 2663