DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 5, and 7-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tomomi Hishinuma et al., US 2020/0151450 A1 (hereinafter "Hishinuma").
Regarding independent claim 1, Hishinuma discloses a method, comprising:
creating a training set for training a machine learning algorithm of a machine vision system that detects objects in an environment, the training set including multiple images of an object (i.e. capture and store images for inspection – Para 58; Fig. 4); and
applying one or more training set augmentations to each of a plurality of images included in the multiple images of the object to generate additional images that include the object for inclusion in the training set, wherein the one or more training set augmentations include an object motion augmentation, a camera motion augmentation, an object clumping augmentation, an object size reduction augmentation, a first diversified background augmentation (i.e. insert an AR tag to display a modified background of an image of an object, e.g. component/instrument – Fig. 11), or a second diversified background augmentation with one or more synthetic background images.
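For illustration only, a minimal Python sketch of the second diversified background augmentation follows, assuming NumPy; the helper composite_onto_background and all data are hypothetical and are not drawn from Hishinuma. It shows one way pixels belonging to an object can be composited onto a synthetic background image to yield an additional training image.

    import numpy as np

    def composite_onto_background(obj_img, obj_mask, background):
        """Hypothetical sketch of a diversified background augmentation:
        pixels belonging to the object (per a binary mask) are inserted
        into a new background image to yield an additional training image."""
        # obj_img, background: HxWx3 uint8 arrays of equal shape
        # obj_mask: HxW boolean array, True where the object appears
        out = background.copy()
        out[obj_mask] = obj_img[obj_mask]
        return out

    # Example: paste an object onto random noise standing in for a
    # synthetic background image.
    rng = np.random.default_rng(0)
    obj = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    mask = np.zeros((64, 64), dtype=bool)
    mask[16:48, 16:48] = True                      # object occupies the center
    synthetic_bg = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    augmented = composite_onto_background(obj, mask, synthetic_bg)

The first diversified background augmentation of claim 5, which inserts object pixels into an image that includes background clutter, could proceed in the same manner with a cluttered rather than synthetic background.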
Regarding claim 5, Hishinuma discloses the method of claim 1, wherein the applying the first diversified background augmentation to generate an additional image that includes the object comprises inserting image pixels in an image that corresponds to the object into the additional image that includes background clutter (i.e. visual presentation includes inserted AR tags that correspond to the recognized object, e.g. instrument – Fig. 11 “49”).
Regarding claim 7, Hishinuma discloses the method of claim 1, wherein the creating the training set includes:
capturing the multiple images of the object using at least one of a variety of different cameras, different camera angles, different distances of the different cameras from the object, different light conditions, and different image backgrounds (i.e. capture instrument/component using different cameras – Para 147);
labeling the object as captured in the multiple images with corresponding labels by at least segmenting the object in each of the multiple images from a corresponding background based on an inputted polygon with a perimeter that corresponds to one or more boundaries of the object and associate the object that is segmented with a corresponding label (i.e. classify recognized images captured via camera – Para 262 – using visual recognition of a bounded region of an instrument/component – Fig. 8, 10);
annotating each of the multiple images with additional annotation information about the object (i.e. insert AR tags that correspond to the recognized object, e.g. instrument – Fig. 11 “49”);
compiling the multiple images of the object, the corresponding labels, and the additional annotation information into the training set for training the machine learning algorithm of the machine vision system (i.e. management database maintains image data, AR support information and image classification information – Fig. 4 – for use in AI analysis – Para 248, 255).
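To illustrate the pipeline mapped above for claim 7, a minimal Python sketch follows, assuming Pillow; the polygon_to_mask helper, the file path, and the annotation fields are hypothetical and are offered only to show how a polygon-bounded segmentation, a label, and annotation information could be compiled into one training record.

    from PIL import Image, ImageDraw

    def polygon_to_mask(polygon, size):
        """Rasterize an inputted polygon whose perimeter follows the object
        boundary into a binary segmentation mask (hypothetical helper)."""
        mask = Image.new("1", size, 0)
        ImageDraw.Draw(mask).polygon(polygon, outline=1, fill=1)
        return mask

    # Hypothetical training record of the kind claim 7 compiles: the image,
    # a label from the structured knowledge representation, the segmentation
    # mask, and additional annotation information.
    polygon = [(10, 10), (50, 12), (48, 55), (12, 50)]
    record = {
        "image_path": "captures/cam2_angle30_lowlight.png",  # assumed path
        "label": "instrument",                               # class label
        "mask": polygon_to_mask(polygon, (64, 64)),
        "annotations": {"camera": "cam2", "distance_m": 1.5},
    }
    training_set = [record]  # compiled for training the vision model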
Regarding claim 8, Hishinuma discloses the method of claim 7, wherein the corresponding label of the object in an image of the multiple images is a label from a structured knowledge representation, and the structured knowledge representation includes labels that are members of multiple object classes (i.e. management database maintains image data, AR support information and image classification information – Fig. 4 – for use in AI analysis – Para 248, 255).
Regarding claim 9, Hishinuma discloses the method of claim 1, wherein the machine vision system is used by an augmented reality procedural guidance system to guide an operator in completing one or more steps for one or more objects using an augmented reality environment (i.e. the system provides information for completing maintenance work – Para 180 – and displays support information, e.g. AR tags – Para 29; Fig. 11).
Regarding claim 10, Hishinuma discloses the method of claim 1, wherein the machine learning algorithm includes a neural network (i.e. the system includes a neural network – Para 256).
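A minimal sketch, assuming PyTorch, of the kind of convolutional neural network such a machine vision system might train on the compiled set follows; the architecture and layer sizes are illustrative choices, not Hishinuma's.

    import torch
    from torch import nn

    class TinyObjectClassifier(nn.Module):
        """Illustrative convolutional classifier; not the claimed network."""
        def __init__(self, num_classes: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, num_classes)

        def forward(self, x):  # x: (N, 3, 64, 64)
            f = self.features(x)
            return self.head(f.flatten(1))

    model = TinyObjectClassifier(num_classes=5)
    logits = model(torch.zeros(1, 3, 64, 64))  # smoke test on a blank image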
Regarding claim 11, Hishinuma discloses the method of claim 1, further comprising:
training the machine vision system to recognize optically distinguishable markers (i.e. using VSLAM to recognize markers – Para 40, 55);
associating the optically distinguishable markers with particular objects in a knowledge base or a structured knowledge representation (i.e. markers are associated with objects and stored in a management database – Para 3; Fig. 4); and
recognizing, at least via the machine vision system, an additional object in the environment as a particular object based at least on an optically distinguishable marker that is affixed to the additional object and an association of the particular object with the optically distinguishable marker in the knowledge base or the structured knowledge representation (i.e. automatically recognize the objects, e.g. instrument/components, in the environment using the marker data – Para 32, 47 – identified using machine vision – Para 40, 248, 255).
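The marker-based recognition of claim 11 can be sketched in Python as a lookup from a detected marker identifier into a knowledge base; detect_marker below is a hypothetical stand-in for a fiducial-marker detector, and the marker IDs and object names are invented for illustration.

    # Hypothetical knowledge base associating optically distinguishable
    # markers with particular objects.
    knowledge_base = {
        "marker_017": "torque_wrench",   # marker ID -> particular object
        "marker_042": "pressure_gauge",
    }

    def detect_marker(image) -> str:
        """Hypothetical: returns the ID of the optically distinguishable
        marker affixed to the object visible in `image`."""
        return "marker_017"              # placeholder detection result

    def recognize_object(image):
        marker_id = detect_marker(image)
        # Resolve the detected marker to the associated particular object.
        return knowledge_base.get(marker_id, "unknown_object")

    print(recognize_object(image=None))  # -> "torque_wrench"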
Regarding claim 12, Hishinuma discloses the method of claim 11, wherein the optically distinguishable markers are generated by a generative cooperating network (GCN) (i.e. a generative adversarial network, which reads on the claimed GCN – Para 263).
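The mapping of the claimed GCN onto a generative adversarial network can be sketched, assuming PyTorch, as a generator/discriminator pair producing candidate marker images; the latent size, layer widths, and 32x32 output are arbitrary illustrative assumptions.

    import torch
    from torch import nn

    # Generator emits 32x32 single-channel marker-like images from noise;
    # discriminator scores them as real or generated. Illustrative only.
    generator = nn.Sequential(
        nn.Linear(64, 256), nn.ReLU(),
        nn.Linear(256, 32 * 32), nn.Tanh(),
    )
    discriminator = nn.Sequential(
        nn.Linear(32 * 32, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),       # real/fake logit
    )

    z = torch.randn(8, 64)                 # latent noise
    fake_markers = generator(z)            # candidate marker images
    scores = discriminator(fake_markers)   # adversarial feedback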
Regarding independent claim 13, the rationale as applied in the rejection of claim 7 applies herein.
Regarding independent claim 17, the rationale as applied in the rejection of claim 11 applies herein.
Regarding claim 18, Hishinuma discloses the method of claim 17, wherein the optically distinguishable markers are generated by a generative cooperating network (GCN) (i.e. a generative adversarial network, which reads on the claimed GCN – Para 263).
Regarding claims 14-16, 19, and 20, the corresponding rationale as applied in the rejections of claims 1, 9, and 11 applies herein.
Claims 2-4, and 6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANTE HARRISON whose telephone number is (571)272-7659. The examiner can normally be reached Monday - Friday 8:00 am to 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHANTE E HARRISON/Primary Examiner, Art Unit 2615