DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Atsmon (US 2024/0403717).
As per claims 1 and 17, Atsmon teaches a method comprising:
generating a simulated object based at least on extracting at least a portion of the simulated object from a simulated textured representation (simulated objects, [0020], [0087]);
combining the simulated object with an existing image to generate a training image (inserting objects, [0045], [0065]); and
updating one or more parameters of a machine learning model based at least on the training image and ground truth data corresponding to the training image (training a model with generated data and ground truth, [0063]-[0064], [0093]; data describing new scenes, [0065]).
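For illustration only, the mapped pipeline can be sketched as follows (hypothetical Python; the array shapes, function names, and paste logic are assumptions of this sketch, not code from Atsmon or the claims):

```python
import numpy as np

def generate_simulated_object(textured: np.ndarray, size: int = 32) -> np.ndarray:
    """Extract a random square portion of a simulated textured representation."""
    h, w = textured.shape[:2]
    y, x = np.random.randint(0, h - size + 1), np.random.randint(0, w - size + 1)
    return textured[y:y + size, x:x + size].copy()

def combine(existing: np.ndarray, obj: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Paste the simulated object into the existing image; return the
    training image and a binary ground-truth mask marking the object."""
    img = existing.copy()
    mask = np.zeros(existing.shape[:2], dtype=np.uint8)
    h, w = obj.shape[:2]
    y = np.random.randint(0, existing.shape[0] - h + 1)
    x = np.random.randint(0, existing.shape[1] - w + 1)
    img[y:y + h, x:x + w] = obj
    mask[y:y + h, x:x + w] = 1
    return img, mask
```

The (training image, mask) pair produced here is what the final limitation consumes when updating the model's parameters.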
As per claim 2, Atsmon teaches wherein the simulated object comprises at least the first portion combined with a second portion (inserting objects, [0045], [0065]).
As per claims 3 and 13, Atsmon teaches wherein the portion of the simulated object is retrieved from a portion of the simulated textured representation based at least on one of a random number of randomly distributed points or a convex polygon (randomly selected simulation objects with shapes and sizes, [0020], [0087]).
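A minimal sketch of the convex-polygon retrieval recited here, assuming a color textured image and SciPy/Matplotlib for the hull and the point-in-polygon test (all names illustrative, not drawn from Atsmon):

```python
import numpy as np
from matplotlib.path import Path
from scipy.spatial import ConvexHull

def convex_polygon_patch(textured: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Mask a color textured image (H, W, 3) with a convex polygon built
    from a random number of randomly distributed points."""
    h, w = textured.shape[:2]
    n = np.random.randint(3, 10)                     # random number of points
    pts = np.column_stack([np.random.uniform(0, w, n),
                           np.random.uniform(0, h, n)])
    polygon = Path(pts[ConvexHull(pts).vertices])    # convex polygon over the points
    ys, xs = np.mgrid[0:h, 0:w]
    inside = polygon.contains_points(np.column_stack([xs.ravel(), ys.ravel()]))
    mask = inside.reshape(h, w)
    return textured * mask[..., None], mask
```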
As per claim 4, Atsmon teaches wherein:
the simulated object comprises the first portion that is randomly rotated and randomly translated and a second portion that is randomly rotated and randomly translated; and
the first portion is joined with the second portion based at least on joining a first randomly selected pixel from the first portion with a second randomly selected pixel from the second portion (placing rotated objects in different locations, [0045], [0067]).
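A sketch of the rotate-translate-join limitation, assuming nearest-neighbor rotation and a shared canvas sized to fit both portions (hypothetical code; the anchor-alignment scheme is an assumption of this sketch):

```python
import numpy as np
from scipy.ndimage import rotate

def join_portions(p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    """Randomly rotate two portions, then join them by aligning one
    randomly selected pixel from each on a common canvas."""
    p1 = rotate(p1, angle=np.random.uniform(0, 360), reshape=True, order=0)
    p2 = rotate(p2, angle=np.random.uniform(0, 360), reshape=True, order=0)
    a1 = np.array([np.random.randint(p1.shape[0]), np.random.randint(p1.shape[1])])
    a2 = np.array([np.random.randint(p2.shape[0]), np.random.randint(p2.shape[1])])
    off = a1 - a2                          # translation that aligns the anchors
    top, left = min(0, off[0]), min(0, off[1])
    bottom = max(p1.shape[0], off[0] + p2.shape[0])
    right = max(p1.shape[1], off[1] + p2.shape[1])
    canvas = np.zeros((bottom - top, right - left) + p1.shape[2:], dtype=p1.dtype)
    canvas[-top:-top + p1.shape[0], -left:-left + p1.shape[1]] = p1
    canvas[off[0] - top:off[0] - top + p2.shape[0],
           off[1] - left:off[1] - left + p2.shape[1]] = p2
    return canvas
```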
As per claim 5, Atsmon teaches wherein the existing image includes an object mask associated therewith that directs one or more object locations within the existing image at which the simulated object may be located (placing objects based on outline, [0076]).
As per claim 6, Atsmon teaches wherein the existing image includes a freespace representation associated therewith that directs one or more object locations within the existing image at which the simulated object may be located (placing objects based on outline, [0076]; ground truth, [0064]).
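One way to honor such an object mask or freespace representation, sketched under the assumption that the mask is binary and nonzero pixels mark permitted locations (brute-force scan; illustrative only):

```python
import numpy as np

def sample_location(freespace: np.ndarray, obj_h: int, obj_w: int) -> tuple[int, int]:
    """Pick a top-left corner whose full object footprint lies inside the
    permitted (nonzero) region of the mask."""
    h, w = freespace.shape
    candidates = [(y, x)
                  for y in range(h - obj_h + 1)
                  for x in range(w - obj_w + 1)
                  if freespace[y:y + obj_h, x:x + obj_w].all()]
    if not candidates:
        raise ValueError("no valid placement inside the mask")
    return candidates[np.random.randint(len(candidates))]
```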
As per claim 7, Atsmon teaches further comprising, in response to the training image being generated, generating a bounding shape around the simulated object to be included in the ground truth data (identified size and shape, [0020], [0075]).
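The bounding shape can be as simple as an axis-aligned box over the object's mask, e.g. (illustrative sketch, not Atsmon's implementation):

```python
import numpy as np

def bounding_box(obj_mask: np.ndarray) -> tuple[int, int, int, int]:
    """Axis-aligned (x_min, y_min, x_max, y_max) around the nonzero mask
    pixels; assumes the mask is nonempty."""
    ys, xs = np.nonzero(obj_mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```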
As per claims 8 and 14, Atsmon teaches wherein:
a size of the simulated object comprises an upper threshold and a lower threshold; and
the size, the upper threshold, and the lower threshold are determined based at least on one or more characteristics of an environment as depicted in the existing image (identified size and shape, [0020], [0075]).
As per claims 9 and 15, Atsmon teaches wherein the size of the simulated object is determined using heuristics based on at least one of a focal distance and a relative size of at least one existing object in the environment (size and shape relative to base distance, [0075]).
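Read together with claims 8 and 14, the sizing heuristic amounts to a pinhole-camera projection clamped to environment-derived bounds; a sketch under that assumption (all parameters illustrative):

```python
def object_pixel_size(real_size_m: float, distance_m: float, focal_px: float,
                      lower_px: float, upper_px: float) -> float:
    """Pinhole-projected size in pixels, clamped to environment-derived bounds."""
    size_px = focal_px * real_size_m / distance_m    # apparent-size heuristic
    return max(lower_px, min(upper_px, size_px))     # clamp between thresholds
```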
As per claims 10 and 18, Atsmon teaches further comprising:
segmenting the training image; and
labelling a portion of the segmented training image that includes the simulated object as the anomaly object (not adhering to constraints, [0063]),
wherein a resulting label from the labelling is included in the ground truth data (ground truths and data describing new scenes, [0064]-[0065]).
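A sketch of the segment-and-relabel step, assuming an integer segmentation map and a binary object mask (the anomaly label id is an assumption of this sketch):

```python
import numpy as np

ANOMALY_CLASS = 255   # illustrative label id, an assumption of this sketch

def label_anomaly(segmentation: np.ndarray, obj_mask: np.ndarray) -> np.ndarray:
    """Relabel every segment that overlaps the simulated object as anomaly."""
    labeled = segmentation.copy()
    for seg_id in np.unique(segmentation[obj_mask > 0]):
        labeled[segmentation == seg_id] = ANOMALY_CLASS
    return labeled
```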
As per claim 11, Atsmon teaches wherein the simulated object is automatically combined with the existing image to generate the training image (generating synthetic training data, [0020], [0057]).
As per claim 12, Atsmon teaches a system comprising:
one or more processing units to:
generate one or more simulated objects from one or more textured images based at least on one or more randomly generated shapes (simulated objects, [0020], [0087]; placing objects based on outline, [0076]); and
update one or more parameters of a machine learning model using the one or more simulated objects and ground truth data corresponding to the one or more simulated objects (training a model with generated data and ground truth, [0063]-[0064], [0093]; data describing new scenes, [0065]).
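The parameter update itself might look like one gradient step on a generated pair; a sketch assuming a PyTorch segmentation model (nothing here is from Atsmon or the claims):

```python
import torch
import torch.nn.functional as F

def update_parameters(model, optimizer, image: torch.Tensor, gt_mask: torch.Tensor) -> float:
    """One gradient step on a generated (image, ground-truth mask) pair.
    image: (C, H, W) float tensor; gt_mask: (H, W) integer class map."""
    optimizer.zero_grad()
    logits = model(image.unsqueeze(0))               # (1, num_classes, H, W)
    loss = F.cross_entropy(logits, gt_mask.unsqueeze(0).long())
    loss.backward()
    optimizer.step()                                 # parameters updated
    return loss.item()
```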
As per claims 16 and 20, Atsmon teaches wherein the computing system comprises one or more of:
a control system for an autonomous or semi-autonomous machine;
a perception system for an autonomous or semi-autonomous machine;
a system for performing simulation operations;
a system for performing digital twin operations;
a system for performing light transport simulation;
a system for performing collaborative content creation for 3D assets;
a system for performing deep learning operations;
a system implemented using an edge device;
a system for generating or presenting at least one of virtual reality content, augmented reality content, or mixed reality content;
a system implemented using a robot;
a system for performing conversational AI operations;
a system for generating synthetic data;
a system incorporating one or more virtual machines (VMs) (autonomous system, [0098]).
As per claim 19, Atsmon teaches further comprising, in response to the forming of the training image, generating a bounding shape around the simulated anomaly object, and associating the bounding shape with the training image (generating synthetic training data, [0020], [0057]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUGBENGA O IDOWU whose telephone number is (571) 270-1450. The examiner can normally be reached Monday through Friday, 8:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jung Kim, can be reached at (571) 272-3804. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLUGBENGA O IDOWU/Primary Examiner, Art Unit 2494