DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after 16 March 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the application received on 13 June 2024.
Election
The election of Group I without traverse in the reply filed on 11 December 2025 is acknowledged.
Priority
The claim for foreign priority under 35 U.S.C. 119(a)-(d) is acknowledged. Certified copies of the priority applications have been received.
The claims for the benefit of prior-filed applications under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) are acknowledged. The prior art of record in the parent application has been reviewed.
Information Disclosure Statement
The information disclosure statements received on 14 October 2024, 28 March 2025, and 6 October 2025 have been considered.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 34, 37-39, 49, and 52-53 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 10,643,320 B2 (Lee et al.).
As to claim 34, Lee discloses a computer-implemented method of training a scenario generator to generate driving scenarios, in which a training set of real driving scenarios is extracted from real-world driving scenario data, and the training set is used to train the scenario generator to generate artificial driving scenarios corresponding to the training set (col 4 ln 2-9 - "the system 100 may be employed on a computing device 102. The computing device 102 includes a processor 104, input/output hardware 106, the network interface hardware 108, a data storage component 110 [...] a memory component 120, and a local communications interface 140"), the method comprising:
receiving, at a scenario classifier, real driving scenarios from the training set and artificial driving scenarios generated by the scenario generator (Fig 2, Fig 3, col 5 ln 8-10 - "the generator logic 132 may cause the processor 104 to train a generator to generate photorealistic synthetic image data", col 6 ln 23-27 - "The real-world data may be collected from a moving platform (e.g., a vehicle or robot) as it navigates an environment. The collection of real-world data may be further annotated with external meta-data about the scene", col 6 ln 64-66 - "the real-world image 224 is provided to the discriminator 230 from a real-world image dataset 250 selected by a sampling module 260", col 7 ln 1-3 - "The discriminator 230 learns to distinguish between a synthetic-to-real image 222 and a real-world image 224"); and
in a process of training the scenario generator and the scenario classifier, incentivising the scenario classifier to accurately classify the received driving scenarios as real or artificial, whilst also incentivising the scenario generator to generate artificial driving scenarios which the scenario classifier classifies as real (col 6 ln 50-55 - "The generator 220 of the [Simulator Privileged Information Generative Adversarial Network] SPIGAN model 200 learns (e.g., a pixel-level mapping function) to map the synthetic image 212 to a synthetic-to-real image 222 such that the discriminator 230 is unable to discern the synthetic-to-real image 222 from a real-world image 224", col 7 ln 1-4 - "The discriminator 230 learns to distinguish between a synthetic-to-real image 222 and a real-world image 224 by playing an adversarial game with the generator 220 until an equilibrium point is reached").
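For purposes of illustration only, the adversarial incentive structure mapped above (the classifier rewarded for distinguishing real from artificial scenarios, the generator rewarded for fooling the classifier) can be sketched as a pair of binary cross-entropy loss functions. This sketch is hypothetical and does not appear in Lee; all function names and values are assumptions:

```python
import math

def classifier_loss(scores_real, scores_fake):
    # Incentive for the scenario classifier (discriminator): the loss
    # is minimized when real scenarios are scored near 1 and artificial
    # (generated) scenarios are scored near 0.
    n = len(scores_real) + len(scores_fake)
    return -(sum(math.log(s) for s in scores_real)
             + sum(math.log(1.0 - s) for s in scores_fake)) / n

def generator_loss(scores_fake):
    # Incentive for the scenario generator: the loss is minimized when
    # the classifier scores the generator's artificial scenarios near 1,
    # i.e., mistakes them for real.
    return -sum(math.log(s) for s in scores_fake) / len(scores_fake)
```

Training would alternate between minimizing these two objectives until, in the words Lee uses for the adversarial game, "an equilibrium point is reached."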
As to claim 37, Lee discloses the method of claim 34, and further discloses wherein incentivising the scenario generator and the scenario classifier comprises applying a loss function to outputs of the scenario generator and the scenario classifier (col 8 ln 38-49 - "To achieve good performance when training the SPIGAN model, in some embodiments, a consistent set of loss functions and domain specific constraints related to the main prediction task need to be designed and optimized. [...] For example, the minimax objective includes a set of loss functions [...] characterized by Equation 1, where α, β, γ, δ are weighting parameters and θ.sub.G, θ.sub.D [...] represent the parameters of the generator, discriminator").
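For purposes of illustration only, a weighted combination of loss terms of the kind quoted above might be sketched as follows. This is a hypothetical example; it does not reproduce Equation 1 of Lee, and the function name and argument conventions are assumptions:

```python
def weighted_objective(loss_terms, weights):
    # Weighted sum of per-task loss terms, in the spirit of the
    # weighting parameters (alpha, beta, gamma, delta) that Lee
    # describes for Equation 1. Both arguments are sequences of
    # floats of equal length.
    if len(loss_terms) != len(weights):
        raise ValueError("one weight per loss term is required")
    return sum(w * l for w, l in zip(weights, loss_terms))
```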
As to claim 38, Lee discloses the method of claim 34, and further discloses the method comprising training an autonomous vehicle agent based on a scenario generated by the scenario generator (col 6 ln 4-7 - "in an application such as training a vision system for an autonomous vehicle it may be advantageous for the simulator 210 to create a synthetic image 212 from the point of view of a vehicle on a street").
As to claim 39, Lee discloses the method of claim 34, and further discloses wherein the scenario generator and the scenario classifier form a generative adversarial network (GAN) (col 6 ln 50-55, col 7 ln 1-4).
As to claim 49, the limitations recited by claim 49 correspond to the limitations recited by claim 34. Therefore, claim 49 is rejected on the same grounds as claim 34.
As to claim 52, the limitations recited by claim 52 correspond to the limitations recited by claim 37. Therefore, claim 52 is rejected on the same grounds as claim 37.
As to claim 53, the limitations recited by claim 53 correspond to the limitations recited by claim 38. Therefore, claim 53 is rejected on the same grounds as claim 38.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 35-36 and 50-51 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of US 10,546,201 B2 (Kang et al.).
As to claim 35, Lee discloses the method of claim 34.
Kang teaches the limitations not expressly further disclosed by Lee, namely:
wherein the training set comprises examples of driving behaviour data classified as abnormal with respect to a normal driving behaviour model (col 5 ln 28-31 - "the abnormal object is a vehicle that interferes with the driving of the host vehicle and has a driving pattern that differs from that of another vehicle travelling normally", col 7 ln 18-21 - "In training the neural network, a whole or part of the 2D image, a box of the target object, a class of the target object, such as, for example, a vehicle or a person, and whether the target object is abnormal may be used as learning data").
As of the effective filing date of the claimed invention, one of ordinary skill in the art would have been motivated to combine Lee and Kang because both relate to methods of training an autonomous system to control a vehicle. The combination would yield predictable results according to the teachings of Kang by training the autonomous vehicle control system to distinguish objects that may interfere with the host vehicle from objects that do not interfere.
As to claim 36, Lee discloses the method of claim 34.
Kang teaches the limitations not expressly further disclosed by Lee, namely:
wherein the training set comprises examples of driving behaviour data classified as normal with respect to a normal driving behaviour model (col 5 ln 28-31, col 7 ln 18-21).
See the rejection of claim 35 above for the statement of the obviousness rationale.
As to claim 50, the limitations recited by claim 50 correspond to the limitations recited by claim 35. Therefore, claim 50 is rejected on the same grounds as claim 35.
As to claim 51, the limitations recited by claim 51 correspond to the limitations recited by claim 36. Therefore, claim 51 is rejected on the same grounds as claim 36.
Conclusion
The prior art made of record on Form 892 (Notice of References Cited) and not relied upon is considered pertinent to applicant's disclosure, because the references generally relate to methods of generating images for training an autonomous vehicle or to methods of training an autonomous vehicle using images.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Todd Melton, whose telephone number is (571) 270-3871. The examiner can normally be reached weekdays, 9:30 am - 6:00 pm (Eastern time). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Navid Mehdizadeh, can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in the USA or Canada) or 571-272-1000.
/TODD MELTON/Primary Examiner, Art Unit 3669