DETAILED ACTION
Remarks
This Office Action is issued in response to the communication filed on 2/27/2026. Claims 6-8 and 10-26 are pending in this Office Action.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments/Arguments
Applicant’s arguments with respect to the claim objection have been fully considered but are not persuasive. Accordingly, the examiner maintains the claim objection.
Applicant’s arguments with respect to the claims rejected under 35 U.S.C. 103 have been fully considered but are moot in view of the new ground of rejection.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 8, 10-13, 16 and 19-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 8, 12-13, 16 and 19-20 recite the phrase “the data”. There is insufficient antecedent basis for the recited “the data” in these claims. It is unclear which data the claims refer back to, since the independent claim recites several different types of data, namely “receiving ..data”, “scene data”, “output data” and “data representing conditions of an environment”. Appropriate correction is required (e.g., amending the claims to recite “the received data”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6-7, 10-18, 20-24 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US Patent 12,322,068 B1, hereinafter “Kim”) in view of Heiser et al. (US Patent Application Publication 2021/0105339 A1, hereinafter “Heiser”).
As to claim 6, Kim teaches one or more non-transitory computer-readable media storing instructions that, when executed, cause one or more processors to perform operations comprising:
receiving, by a diffusion model, data representing a condition of an environment (Kim col 4, lines 5-20 teaches that a generative model takes in a sequence of images, relevant camera parameters, orientation, and other relevant information; Kim col 5, lines 19-20 teaches that a diffusion model can be used for generation); [determining a cost associated with a component of a vehicle computing device, the cost indicating an impact on a processor resource or a memory resource for the component to determine output data that controls a vehicle associated with the vehicle computing device, and determining that the cost meets or exceeds a cost threshold];
generating, by the diffusion model and based at least in part on the data [and the cost meeting or exceeding the cost threshold], one of: first scene data for simulating potential interactions between the vehicle and one or more objects in the environment, or an intermediate output for input into a decoder that is configured to output second scene data including the one or more objects (Kim col 5, line 65 - col 6, line 27 teaches that images can be passed to one or more encoders 204, which can each produce a set of 2D features and a density prediction; the output of volume renderer 210 can then be provided to a decoder 212, which can generate output image 214 corresponding to the scene as viewed from the desired view).
Kim fails to expressly teach determining a cost associated with a component of a vehicle computing device, the cost indicating an impact on a processor resource or a memory resource for the component to determine output data that controls a vehicle associated with the vehicle computing device, and determining that the cost meets or exceeds a cost threshold.
However, Heiser teaches determining a cost associated with a component of a vehicle computing device, the cost indicating an impact on a processor resource or a memory resource for the component to determine output data that controls a vehicle associated with the vehicle computing device, and determining that the cost meets or exceeds a cost threshold (Heiser par [0039] teaches that the cost module may function to determine computing resource costs to run specific application run requests; Heiser par [0040] teaches that the cost module functions to determine or calculate computing resource costs for the application run requests; Heiser par [0071] teaches that the computing resource cost may be based on costs exceeding a minimum cost threshold).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kim and Heiser to achieve the claimed invention. One would have been motivated to make such a combination to alleviate the perpetual strain on the computing resources and capabilities of an autonomous vehicle, often caused by a combination of the normal operational requirements of the vehicle and the extra operational processing of thousands to millions of routines of the applications and programs in communication with, or being operated by, the autonomous vehicle (Heiser par [0029]).
As to claim 7, Kim and Heiser teach the one or more non-transitory computer-readable media of claim 6, wherein the first scene data or the second scene data comprises discrete data associated with the one or more objects (Kim col 6, lines 1-10 teaches density voxels 206 and feature voxels 208).
As to claim 10, Kim and Heiser teach the one or more non-transitory computer-readable media of claim 6, wherein the intermediate output represents an object of the one or more objects absent from the data received by the diffusion model (Kim col 6, lines 1-10 teaches density voxels 206 and feature voxels 208).
As to claim 11, Kim and Heiser teach the one or more non-transitory computer-readable media of claim 6, wherein the data comprises text data describing an intersection type, a number of objects, or a scene characteristic to include in the first scene data (Kim col 3, lines 29-40 teaches that encoder 106 can analyze images 102, extract representative features of those images, and encode those features into a latent representation 108 of a scene represented in images 102).
As to claim 12, Kim and Heiser teach the one or more non-transitory computer-readable media of claim 6, wherein the data further represents a first action for a first object of the one or more objects and a second action for a second object of the one or more objects (Kim col 41, lines 36-40 teaches that the server 1078 may receive, from vehicles, image data representative of images showing unexpected or changed road conditions).
As to claim 13, Kim and Heiser teach the one or more non-transitory computer-readable media of claim 6, wherein the data is based at least in part on input from a user specifying the condition of the environment at a previous time (Kim col 7, lines 50-60 teaches that a user who wants to remove a car from a scene can cause corresponding density voxel values to be set to zero to effectively remove that car from consideration).
As to claim 14, Kim and Heiser teach the one or more non-transitory computer-readable media of claim 6, wherein the diffusion model is configured to apply a denoising algorithm to generate the first scene data or the second scene data (Kim col 7, lines 20-24 teaches that this process attempts to remove the random noise in order to generate images that are very similar in appearance to corresponding original input images).
As to claim 15, Kim and Heiser teach the one or more non-transitory computer-readable media of claim 6, wherein the diffusion model generates at least one object that does not exist in sensor data from a sensor associated with the vehicle (Kim col 3, lines 32-36 teaches that one or more images 102 can be input to a content generation system 104 that can generate 3D image content 112, such as may correspond to the scene or other object or representation).
As to claim 16, Kim and Heiser teach the one or more non-transitory computer-readable media of claim 6, wherein the data comprises one of: a first vector representation of an object of the one or more objects or a second vector representation of the environment (Kim col 3, lines 30-50 teaches a latent representation 108 of a scene represented in images 102; latent representation 108 may take the form of a latent space or latent vector).
Claims 17-18, 20, 21 and 22 merely recite a computer method performed by the one or more non-transitory computer-readable media of claims 6-7, 11, 10 and 12, respectively. Accordingly, Kim and Heiser teach every limitation of claims 17-18, 20, 21 and 22, as indicated in the above rejections of claims 6-7, 11, 10 and 12, respectively.
Claims 23-24 and 26 merely recite a system to execute the instructions of the one or more non-transitory computer-readable media of claims 6-7 and 14, respectively. Accordingly, Kim and Heiser teach every limitation of claims 23-24 and 26, as indicated in the above rejections of claims 6-7 and 14, respectively.
Claims 8, 19 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Kim and Heiser, and further in view of Suo et al. (US Patent Application Publication 2022/0153314 A1, hereinafter “Suo”).
As to claim 8, Kim and Heiser teach the one or more non-transitory computer-readable media of claim 6 but fail to teach wherein the data comprises a node or a token to represent a potential action of an object of the one or more objects.
However, Suo teaches wherein the data comprises a node or a token to represent a potential action of an object of the one or more objects (Suo par [0103] teaches a fully connected interaction graph with objects as nodes).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kim, Heiser and Suo to achieve the claimed invention. One would have been motivated to make such a combination to generate synthetic testing data which can be used to massively scale evaluation of autonomous systems, enabling rapid development and deployment (Suo par [0004]).
As to claims 19 and 25, see the above rejection of claim 8.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HIEN DUONG whose telephone number is (571)270-7335. The examiner can normally be reached Monday-Friday 8:00AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HIEN L DUONG/Primary Examiner, Art Unit 2147