DETAILED ACTION
This action is in response to the RCE filing of 9-8-2025. Claims 1-11 and 13-19 are pending and have been considered below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The rejections of claims 1-11 and 13-19 under 35 U.S.C. 101 have been withdrawn.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-11 and 13-19 are rejected under 35 U.S.C. 103 as being unpatentable over Heisele (20140176551 A1) in view of Avidan et al. (“Avidan” 20190205667 A1), Gordon et al. (“Gordon” 20140101090 A1) and Heisele (“Heisele-1” 20140177911 A1).
Claim 1: Heisele discloses a method of generating annotated synthetic training data for training a machine learning module for processing an operational data set, the method comprising: creating a first procedural model for an object, the first procedural model having a first set of parameters relating to the object (Figure 4a and Paragraphs 49-54; create model with first parameters);
creating a second procedural model for background related to the object, the second procedural model having a second set of parameters relating to the background (Paragraph 35, parameters of background modified, and Paragraphs 49-54, create multiple models with adjustments to background);
training the machine learning module using the annotated synthetic training data (abstract (classifiers trained), Figure 4b and Paragraphs 10, 26, 56-58 and 61);
processing the operational data set using the trained machine learning module (Figure 4b and Paragraphs 10, 26, 56-58 and 61);
evaluating a performance score of the machine learning module when processing the operational data set, based on an annotation of the processed operational data set and an output of the machine learning module (Paragraphs 49 and 62-63; score for accuracy);
and further training the machine learning module using the annotated synthetic training data (Figure 4b and Paragraphs 10, 26 and 56-58; trained classifiers further trained through process).
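For orientation only, the train/process/evaluate/retrain cycle recited in claim 1 could be sketched as follows. This is a hypothetical toy sketch; ToyModel, evaluate and train_until are names invented here for illustration, and the code is not drawn from any cited reference.

```python
# Hypothetical sketch of the claim 1 cycle: train on annotated synthetic data,
# process the operational data set, evaluate a performance score, and further
# train. ToyModel is an invented stand-in for the machine learning module.

class ToyModel:
    """Toy stand-in for the machine learning module: predicts the mean label seen."""
    def __init__(self):
        self.labels = []
        self.mean = 0.0

    def train(self, data):                       # data: iterable of (features, label)
        self.labels += [label for _, label in data]
        self.mean = sum(self.labels) / len(self.labels)

    def process(self, inputs):                   # apply the trained module
        return [self.mean for _ in inputs]

def evaluate(outputs, annotations):
    """Performance score: negative mean absolute error (higher is better)."""
    return -sum(abs(o - a) for o, a in zip(outputs, annotations)) / len(outputs)

def train_until(model, synthetic_batches, operational, annotations, target):
    """Train, score on operational data, and further train until the score is met."""
    score = float("-inf")
    for batch in synthetic_batches:              # further-training rounds
        model.train(batch)
        score = evaluate(model.process(operational), annotations)
        if score >= target:
            break
    return score
```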
Heisele may not explicitly disclose creating a task environment model by combining the first and the second procedural models;
however, Avidan discloses creating a task environment to simulate action and creating the synthetic data to train the model (abstract, Figure 4 and Paragraphs 35, 42-43 and 64).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve similar devices in the same way and provide a task environment with the synthetic data of Heisele. One would have been motivated to provide the functionality as a way to enhance the training data for a more robust evaluation of the model.
Heisele also may not explicitly disclose optimizing the annotated synthetic training data using Bayesian maximization by modifying values of parameters of the task environment model used when generating the annotated synthetic training data based on the evaluation of the performance score;
however, Gordon discloses a method for modeling synthetic data that utilizes a Bayesian model, where errors are evaluated and a sampler provides additional synthetic data used in the model retraining process (Paragraphs 15 and 65 (sampler generating synthetic data), 44, 73 and 84).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve similar devices in the same way and provide a Bayesian model for the synthetic data of Heisele. One would have been motivated to provide the functionality as a way to enhance the training method for a more robust and refined model.
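As context for the Bayesian discussion above, one simple Bayesian-style optimizer of a data-generation parameter can be sketched as below. It uses Thompson sampling over a small parameter grid as a crude stand-in for the "Bayesian maximization" recited in the claim; the grid, score function and every name here are assumptions invented for illustration, not the method of any cited reference.

```python
import math
import random

# Sketch only: Thompson sampling over a parameter grid as a simple Bayesian-style
# optimizer of a synthetic-data generation parameter. All names are illustrative.

def thompson_optimize(score_fn, grid, rounds=20, seed=0):
    """Keep a Gaussian posterior over each candidate's performance score and
    repeatedly evaluate the candidate whose posterior sample is highest."""
    rng = random.Random(seed)
    n = {g: 1 for g in grid}                 # evaluations per candidate so far
    mean = {g: score_fn(g) for g in grid}    # seed each posterior with one score
    for _ in range(rounds):
        # sample a plausible score per candidate; less-tried ones get wider noise
        draws = {g: rng.gauss(mean[g], 1.0 / math.sqrt(n[g])) for g in grid}
        g = max(draws, key=draws.get)
        s = score_fn(g)                      # regenerate data, retrain, re-score
        n[g] += 1
        mean[g] += (s - mean[g]) / n[g]      # running-mean posterior update
    return max(mean, key=mean.get)           # best parameter value by posterior mean
```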
Last, Heisele also may not explicitly disclose the features below; however, Heisele-1 discloses creating a synthetic data set by repeatedly using the task environment model with varied values of the first set of parameters related to the object and the second set of parameters related to the background, and generating data items of the synthetic data set according to each parameter value combination (Heisele-1: Paragraphs 34-39 and 50; data created where different parameters are combined for an environment (including background));
generating the annotated synthetic training data by allocating the parameter of the first set of parameters varied to generate the data items as an annotation for the data (Heisele-1: Figure 3a and Paragraphs 32, 39 and 50; annotation module for data).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve similar devices in the same way and provide a task environment with generated data, along with annotation functionality, within the modified Heisele. One would have been motivated to provide the functionality as a way to enhance the training data for a more robust evaluation of the model.
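The parameter-sweep generation and annotation attributed to Heisele-1 above might be sketched as follows; the parameter names and the render callable are hypothetical placeholders for the task environment model, invented here for illustration.

```python
from itertools import product

# Sketch only: emit one annotated synthetic data item per combination of object
# and background parameter values; the varied object parameters become the
# annotation. Parameter names and the render stand-in are invented.

def generate_annotated(object_params, background_params, render):
    """Yield (data_item, annotation) for every parameter value combination.

    object_params / background_params map parameter names to lists of values;
    render stands in for the task environment model and produces a data item."""
    obj_keys = list(object_params)
    bg_keys = list(background_params)
    for obj_vals in product(*object_params.values()):
        annotation = dict(zip(obj_keys, obj_vals))   # object parameters as label
        for bg_vals in product(*background_params.values()):
            combo = {**annotation, **dict(zip(bg_keys, bg_vals))}
            yield render(combo), annotation
```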
Claim 2: Heisele, Avidan, Gordon and Heisele-1 disclose a computer-implemented method according to claim 1, wherein the annotated synthetic training data comprises a first set of annotated synthetic data items, wherein each of the first set of annotated synthetic data items are generated by varying at least one of a parameter among the first set of parameters or the second set of parameters (Heisele: Figure 4a and Paragraph 54 and Avidan: Paragraphs 37, 54-55 (change background objects), 59-61 (parameters specified and random for different generations) and 64 (parameter used for labeling)).
Claim 3: Heisele, Avidan, Gordon and Heisele-1 disclose a computer-implemented method according to claim 2, wherein the annotated synthetic training data further comprises a second set of annotated synthetic data items, wherein each of the second set of annotated synthetic data items are generated by varying at least one of a parameter among the first set of parameters, the second set of parameters or a third set of parameters, wherein the third set of parameters relate to creating the synthetic data set from the task environment model (Heisele: Figure 4a and Paragraph 54 and Avidan: Paragraphs 34, 37, 54-55 (change background objects) and 59-61; parameters specified and random for different generations).
Claim 4: Heisele, Avidan, Gordon and Heisele-1 disclose a computer-implemented method according to claim 1, wherein the machine learning module is further trained based upon the set of operational data (Heisele: Paragraphs 22, 26-27 and 42 and Avidan: Paragraphs 34 (depict action for training) and 64; designated action).
Claim 5: Heisele, Avidan, Gordon and Heisele-1 disclose a computer-implemented method according to claim 1, wherein processing the operational data set comprises performing at least one of: classification, recognition, segmentation and regression (Heisele: Paragraphs 22, 26-27 and 42; classification and SVM; Avidan: Paragraph 30; CNN for recognition).
Claim 6: Heisele, Avidan, Gordon and Heisele-1 disclose a computer-implemented method according to claim 1, wherein the first set of parameters relating to the object comprises at least one of: a position of the object in the task environment, an orientation of the object in the task environment, a shape of the object, a colour of the object, a size of the object, a texture of the object (Heisele: Paragraphs 33-35 and Avidan: Paragraphs 53-55 and 60 ("variable"; change background objects)).
Claim 7: Heisele, Avidan, Gordon and Heisele-1 disclose a computer-implemented method according to claim 1, wherein the second set of parameters relating to the background comprises at least one of: elements in the background, a position of the elements in the background, orientation of the elements in the background, shape of the elements, a colour of the elements, a size of the elements, a texture of the elements (Heisele: Paragraphs 25, 33-35).
Claim 8: Heisele, Avidan, Gordon and Heisele-1 disclose a computer-implemented method according to claim 1, wherein the third set of parameters relating to the creating the synthetic data set for the task environment model comprises at least one of: point of view, illumination level, zoom level, camera settings (Heisele: Paragraphs 33-34 and Avidan: Paragraph 73; point of view).
Claim 9: Heisele, Avidan, Gordon and Heisele-1 disclose a computer-implemented method according to claim 1, wherein selecting the parameter values for the first, second and third set of parameters is based on at least one of: principles of experimental design (Heisele: Paragraphs 35 and 54; layout incorporation uses parameters for layout (design); Avidan: Paragraphs 37, 54-55 (change background objects) and 59-61; parameters specified and random for different generations).
Claim 10: Heisele, Avidan, Gordon and Heisele-1 disclose a computer-implemented method according to claim 1, wherein at least one of a parameter from among the first set of parameters and the second set of parameters is varied based upon at least one of: the object and the background (Heisele: Paragraphs 35-36 and Avidan: Paragraphs 37, 54-55 (change background objects) and 59-61; parameters specified and random for different generations).
Claim 11 is similar in scope to claim 1 and therefore rejected under the same rationale.
Additionally, regarding the system comprising a server arrangement (Heisele: Paragraph 29; couples to other computer systems (i.e., external/server)).
Claim 13 is similar in scope to claim 4 and therefore rejected under the same rationale.
Claim 14 is similar in scope to claim 5 and therefore rejected under the same rationale.
Claim 15: Heisele, Avidan, Gordon and Heisele-1 disclose a system according to claim 11 wherein the server arrangement is further configured to select the values of the parameters of at least one of the first procedural model for the object and the second procedural model for the background based on at least one principle of design of experiments (Heisele: Paragraphs 35 and 54; layout incorporation uses parameters for layout (design); Avidan: Paragraphs 37, 54-55 (change background objects) and 59-61; parameters specified and random for different generations).
Claim 16: Heisele, Avidan, Gordon and Heisele-1 disclose a system according to claim 11 wherein at least one of a parameter from among the first set of parameters and the second set of parameters is varied based upon at least one of the object and the background (Heisele: Paragraphs 35-36 and 53-54 and Avidan: Paragraphs 37, 54-55 (change background objects) and 59-61; parameters specified and random for different generations).
Claim 17: Heisele, Avidan, Gordon and Heisele-1 disclose a system according to claim 11 wherein the annotated synthetic training data comprises a set of annotated synthetic data items, wherein each of the annotated synthetic data items are generated by varying at least one of a parameter among the first set of parameters or the second set of parameters (Heisele: Figure 4a and Paragraph 54 and Avidan: Paragraphs 37, 55 and 59-61; parameters specified and random for different generations).
Claim 18: Heisele, Avidan, Gordon and Heisele-1 disclose a system according to claim 11 wherein the allocated annotation for the synthetic data set is metadata associated with the first procedural model for the object and the created first procedural model for the object is a 3D graphical object (Heisele: abstract, Paragraphs 33 and 49 and Avidan: Paragraphs 37, 55 and 59; 3D object output 98 (associated metadata for synthetic image)).
Claim 19: Heisele, Avidan, Gordon and Heisele-1 disclose a system according to claim 11 where the system is configured to communicate the results of processing the operational data with the machine learning module as a visual output or via a communication interface (Heisele: Paragraphs 27 and 63; output and Avidan: Paragraphs 63-65; visual output of designated task).
Response to Arguments
Applicant's arguments have been fully considered, and Heisele-1 has been applied to address the limitation of creating synthetic data through a combination.
The arguments regarding Gordon and the Bayesian model are not persuasive. Applicant argues that Gordon uses observed data; however, paragraphs 15 and 65 state that a sampler creates datasets to be utilized. The sampler would be understood to modify parameters in order to provide data sets that achieve a specified objective.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
20180260793 A1 [0329]
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHERROD L KEATON whose telephone number is (571)270-1697. The examiner can normally be reached MONDAY-FRIDAY 9:30-5.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MICHELLE BECHTOLD, can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-3800. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHERROD L KEATON/Primary Examiner, Art Unit 2148
1-19-2026