Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 11-14, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2021/0383616 A1 (hereinafter Rong) in view of “AutoShape: Real-Time Shape-Aware Monocular 3D Object Detection” by Zongdai Liu, et al. (hereinafter Liu) and further in view of “PerMO: Perceiving More at Once from a Single Image for Autonomous Driving” by Feixiang Lu, et al. (hereinafter Lu).
Regarding claim 1, the limitations “A method comprising: generating a target object model for a target object … rendering, by a differential rendering engine, an object image from a target object model; computing, by a loss function of the differential rendering engine, a loss based on a comparison of the object image with an actual image and a comparison of the target object model with a corresponding lidar point cloud; updating the target object model by the differential rendering engine according to the loss” are taught by Rong (Rong, e.g. abstract, paragraphs 24-183, discloses a system for generating and rendering augmented autonomous driving scenes by compositing environment data collected by an autonomous vehicle with one or more 3D objects/assets stored in an object bank, e.g. paragraphs 24-57. Rong teaches details of generating the 3D objects/assets stored in the object bank, e.g. paragraphs 31-42, including using a 3D mesh model representing the shape of the object to predict images and/or shapes of a target object, e.g. paragraphs 36, 37, calculating a loss based on the difference between the predicted images and object features in the input image(s), e.g. paragraphs 34, 37, 38, and based on the difference between the predicted shape and the object features in the input LiDAR point cloud data, e.g. paragraphs 35, 37, 39, and updating the parameters of the 3D object/asset model based on the loss function, e.g. paragraph 37. Rong, e.g. paragraph 161, clarifies that the predicted silhouette images are generated using differentiable neural rendering, i.e. Rong’s 3D object/asset modeling technique corresponds to the claimed differential rendering engine. That is, as claimed, the target object model, i.e. 
Rong’s 3D object/asset model from the object bank, is updated by rendering, by a differential rendering technique, an object image from the target object model, computing a loss based on comparing the predicted object image with an actual image and comparing the target object model with the corresponding lidar point cloud, and updating the target object model according to the loss.)
The limitation “rendering, after updating the target object model, a target object in a virtual world using the target object model” is taught by Rong (Rong, as noted above, teaches generating and rendering augmented autonomous driving scenes by compositing environment data collected by an autonomous vehicle with one or more 3D objects/assets stored in the object bank, i.e. Rong, e.g. paragraphs 27, 28, 42-54, teaches that the environment data is processed to identify locations for inserting selected objects from the object bank, followed by rendering the augmented scene comprising the collected environment data and the selected 3D objects/assets from the object bank, where the selected 3D objects/assets correspond to the target object models produced by the 3D object/asset generation technique. That is, as claimed, the augmented scene is a virtual world comprising one or more target object models which is rendered after the target object model(s) is(are) updated, i.e. generated by the 3D object/asset generation technique.)
The limitations “generating a target object model for a target object from a decomposed object model, the decomposed object model comprising: a body component model for a body component of the target object, and a first auxiliary component model for a first auxiliary component of the target object” are not explicitly taught by Rong (Rong, e.g. paragraph 36, teaches that a 3D mesh that is parameterized as a category-specific mean shape in a canonical pose with a 3D deformation for each vertex may be used to represent the 3D objects/assets stored in the object bank, i.e. a CAD model is deformed to match the pre-recorded data in the library model of a corresponding object/asset, corresponding to the target object model. Rong does not explicitly address using target object models generated from a decomposed object model comprising body and auxiliary component models for the body and auxiliary components, respectively.) However, these limitations are taught by Liu in view of Lu (Liu, e.g. abstract, sections 1, 3-7, describes the AutoShape system, a 3D keypoint neural network for recovering the 3D bounding box of vehicle(s) captured in input images, e.g. sections 3, 4, figure 2. Liu, e.g. section 4, paragraph 1, section 5, trains the network using a set of reconstructed 3D vehicle objects having annotated ground truth 3D keypoints, where the reconstructed 3D vehicle objects are modeled using a deformable vehicle template model fit to pre-recorded data of a target vehicle. That is, analogous to the reconstructed models in Rong’s 3D object/asset bank, Liu’s reconstructed 3D vehicle objects are modeled by matching a mesh to input 2D images and 3D lidar point clouds, e.g. section 5, paragraph 1, section 5.2, figures 3, 4, where the matching of the mesh is performed by deforming a mean shape template model, e.g. 
section 5.1, and is guided using a loss function calculated based on differences between the deformed model, images of the deformed model rendered using a differentiable rendering function, and the pre-recorded data, e.g. section 5.2, equations 10-12. Finally, Liu, sections 5, 5.1, indicates the use of a deformable vehicle 3D model from reference 25, i.e. Lu, which is prior work by most of the same authors of the Liu reference. Lu, sections 1, 3-9, Appendix A, describes said deformable vehicle 3D model, including the deformable 3D object model, per se, section 4, where a dense 2D/3D mapping can be recovered between the deformable 3D object model and a 3D CAD model for the target vehicle, e.g. sections 5, 5.2, figure 6(b), allowing for fully reconstructing the pose, shape, and appearance of the vehicle, e.g. section 7, figures 8, 9. Further, Lu, sections 4-4.2, Appendix A, describes the deformable vehicle 3D model, which is a decomposed model comprising 18 separate parts, e.g. figure 6(b), whose UV map shows the parts, and further decomposed into 2 groups, a first group, the body, having all the parts except for the tires, and a second group comprising the tires, where the body is non-rigidly deformed to represent a target object model, and the tires are separately deformed using rotation, translation, and scaling operations, i.e. rigid deformations used to avoid causing the tires to become elliptical. That is, as claimed, Lu’s decomposed object models comprise a body component model for the body component of the target object, and a first auxiliary component model for a first auxiliary component of the target object.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Rong’s autonomous vehicle simulation system to use Liu’s reconstructed 3D vehicle modelling technique, comprising Lu’s deformable vehicle 3D model, to generate 3D object/asset models for the 3D object/asset model bank because the resulting 3D object/asset models would have better visual reconstruction quality, e.g. Rong, paragraph 47, selects target objects in part based on having pre-recorded data at similar distances and viewpoints as it will be viewed in the simulated scene, but Lu, e.g. section 7, figure 8, teaches that the deformable vehicle 3D model can recover the full vehicle texture, making it suitable for a wider variety of simulated scenes, i.e. the fully textured model can be rendered at a wider range of novel viewpoints without loss of quality than Rong’s inverse texture warping operation technique of paragraph 47. Further it is noted that Rong’s 3D object/asset model bank is not limited to the disclosed 3D mesh reconstruction technique, e.g. paragraph 32 indicates the bank may include systems or methods, plural, such that one of ordinary skill in the art would be motivated to include additional 3D object/asset modelling techniques, as well as additional pre-recorded datasets, which as taught by Lu, could include CAD models for the target vehicles. In Rong’s modified system, Liu’s reconstructed 3D vehicle modelling technique would be used to generate deformed decomposed object models to match pre-recorded data in the object bank/library model for a target vehicle, corresponding to the claimed target object models generated from decomposed object models comprising a body component model and a first auxiliary component model.
The limitations “wherein: the body component model and the first auxiliary component model are individual and separate models, and the body component model comprises a first set of parameters comprises an identifier of the first auxiliary component model, a location parameter detailing a connection point of the first auxiliary component on the body component model, a scaling parameter detailing a scaling factor in at least one direction, and an amount of translation offset of the first auxiliary component model, and wherein generating the target object model comprises selecting the first auxiliary component model and applying scaling and rotation to the first auxiliary component model according to the first set of parameters to obtain a revised first auxiliary component model, and adding the revised first auxiliary component model to the body component model at the connection point to generate the target object model” are taught by Rong in view of Lu (Lu, Appendix A, teaches that the model is further decomposed into 2 groups, a first group, the body, having all the parts except for the tires, and a second group comprising the tires, where the body is non-rigidly deformed to represent a target object model, and the tires are separately deformed using rotation, translation, and scaling operations, i.e. rigid deformations used to avoid causing the tires to become elliptical. That is, as claimed, the decomposed object models comprise a first component model for the body component of the target object which is individual and separate from the second component model for the second tire component(s) of the target object. Further, Lu’s model comprises the additionally recited parameters, i.e. 
the template tire model corresponds to the auxiliary component model, and there are 4 separate sets of parameters identifying connection points of the tire model to the deformed body model, corresponding to the claimed identifier of the first auxiliary component model and a location parameter detailing its connection point, where a global alignment algorithm is applied to deform each template tire model using rotation, translation, and scaling operations prior to assembly with the deformed body model, i.e. the claimed scaling and translation parameters for each first auxiliary component/connection point. Finally, as noted, Lu, Appendix A, indicates that the template tire models for each connection point are deformed by rotation, scaling, and translation, and assembled with the corresponding connection point to generate the deformed 3D model, i.e. as claimed, the target object model is generated by applying scaling and rotation to the first auxiliary component model to obtain a revised first auxiliary component model, and adding the revised first auxiliary component model to the body component model at the connection point.)
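For illustration only, the decomposed-model assembly described above (a shared auxiliary template instantiated per connection point from a parameter set of scaling, rotation, and translation, then attached to the body) can be sketched as follows. The parameter names (`component_id`, `connection_point`, etc.) and the dictionary layout are illustrative assumptions, not Lu's actual data structures:

```python
import numpy as np

def rigid_similarity(points, scale, yaw_deg, translation):
    """Apply scaling, a yaw rotation about the vertical (z) axis, and a
    translation; a rigid similarity, so a round tire stays round."""
    t = np.radians(yaw_deg)
    R = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0,        0.0,       1.0]])
    return scale * np.asarray(points, dtype=float) @ R.T + np.asarray(translation)

def assemble_target_model(body_vertices, component_models, body_params):
    """Instantiate each auxiliary component from its parameter set and
    attach the revised component to the body at its connection point."""
    parts = [np.asarray(body_vertices, dtype=float)]
    for p in body_params:
        template = component_models[p["component_id"]]   # e.g. the shared tire template
        revised = rigid_similarity(template, p["scale"], p["yaw_deg"], p["translation"])
        parts.append(revised + np.asarray(p["connection_point"]))
    return np.concatenate(parts, axis=0)
```

In a vehicle model, `body_params` would hold four entries (front left/right, back left/right), each referencing the same tire template but carrying its own connection point and deformation parameters.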
Regarding claim 2, the limitations “further comprising: obtaining an … CAD model; obtaining a library … model for the target object; deforming, by a CAD transformer engine, the … CAD model to match the library … model to generate a deformed … CAD model; [using] the deformed annotated CAD model to generate an [improved] library model, wherein the target object model is generated from the [improved] library model” are taught by Rong (Rong, e.g. paragraphs 31, 32, teaches that the object bank includes pre-recorded data for objects stored in the object bank, i.e. each 3D object/asset starts as a set of pre-recorded data, which may include 3D bounding boxes, corresponding to a library model(s) of a target object, i.e. the object bank is a library and the objects/assets initially correspond to pre-recorded data which are library model(s) for target object(s). Rong, e.g. paragraph 36, further teaches that a 3D mesh that is parameterized as a category-specific mean shape in a canonical pose with a 3D deformation for each vertex may be used to represent the 3D objects/assets stored in the object bank, i.e. a CAD model is deformed to match the pre-recorded data in the library model of a corresponding object/asset to generate a deformed library 3D model representing the target object, corresponding to obtaining and deforming a CAD model to match the library model of a target object, where the deformed CAD model is combined with the pre-existing data in the library object to generate a target object model, i.e. the target object model is generated from the improved library model comprising the deformed CAD model matching the library model data for the target object.)
The limitations “further comprising: obtaining an annotated CAD model; obtaining a library CAD model for the target object; deforming, by a CAD transformer engine, the annotated CAD model to match the library CAD model to generate a deformed annotated CAD model; annotating the library CAD model with an annotation from the deformed annotated CAD model to generate an annotated library model, wherein the target object model is generated from the annotated library model” are not explicitly taught by Rong in view of Liu and Lu (While, as noted above, Rong teaches that a CAD model is deformed to match the library model data for a target object, where the deformed CAD model is used to improve the library model from which the target object model is generated, Rong does not address whether the deformed CAD model/3D mesh is annotated, or by extension, using an annotated CAD model/3D mesh deformed to match the library model data to generate annotations for the library model. In the modification of Rong’s autonomous vehicle simulation system to use Liu’s reconstructed 3D vehicle modelling technique, comprising Lu’s deformable vehicle 3D model, as discussed in the claim 1 rejection, the target object model/deformed CAD model may be Lu’s deformable vehicle model having separate body and tire component models. While not specifically addressed in the claim 1 rejection, Liu’s reconstructed 3D vehicle modeling technique includes the claimed feature of generating annotations for the library model(s). That is, Liu, section 4, paragraph 1, section 5, paragraph 1, teaches that the resulting reconstructed 3D vehicle objects have the 3D keypoints defined on the CAD models.
That is, as claimed, an annotated CAD model is deformed to match a library model of the target object to generate a deformed annotated CAD model, where the annotations of the deformed annotated CAD model are stored as part of the reconstructed 3D vehicle model in the 3D object bank/library, where the reconstructed 3D vehicle models in the 3D object bank/library are, analogous to the claimed use for generating the target vehicle model, used as rendering assets for a related system. While Liu’s reconstructed 3D vehicle modelling technique corresponds to the claimed features of annotating library models of rendering assets, Liu does not explicitly deform the annotated CAD model to match a library CAD model of the target object from the pre-recorded data. However, Liu, sections 5, 5.1, indicates the use of Lu’s deformable vehicle 3D model as discussed in the claim 1 rejection, wherein Lu, sections 4.2, 5.1, paragraph 1, 5.2, teaches that the ApolloCar3D dataset provides, in addition to the pre-recorded image and pose data, CAD models for the respective target vehicles, which can be used directly to generate the dense 2D/3D UV mapping, i.e. the annotated deformable CAD template model is deformed to match a library CAD model as part of the reconstruction process when said library CAD model is provided in the input dataset.
That is, as claimed, in Rong’s modified system, Liu’s reconstructed 3D vehicle modelling technique would be used to generate annotated library models of target vehicles by deforming an annotated CAD model to match pre-recorded data in the object bank/library model for a target vehicle, including deforming the annotated CAD model to match a pre-recorded CAD model in the object bank/library model for a target vehicle when available, as in Lu’s use of the ApolloCar3D dataset, corresponding to the claimed annotated library models generated from the deformed annotated CAD model, which are used to generate the target object models for rendering the target objects in a virtual world.)
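For illustration only, the annotation-transfer step discussed above (carrying annotations from a deformed annotated CAD model onto a library model) can be sketched as a nearest-neighbour transfer. This is an illustrative simplification; the actual references use dense 2D/3D UV correspondences rather than the naive nearest-vertex matching assumed here:

```python
import numpy as np

def transfer_annotations(deformed_annotated_vertices, annotations, library_vertices):
    """Label each library-model vertex with the annotation of its nearest
    vertex on the deformed annotated CAD model. After deformation has
    aligned the two meshes, nearby vertices correspond to the same
    semantic part, so annotations can be carried across."""
    d = np.linalg.norm(library_vertices[:, None, :]
                       - deformed_annotated_vertices[None, :, :], axis=2)
    nearest = d.argmin(axis=1)          # index of closest annotated vertex
    return [annotations[i] for i in nearest]
```

The deformation step is what makes this transfer meaningful: matching the annotated template to the library model first ensures the nearest-neighbour lookup pairs semantically corresponding vertices.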
Regarding claim 3, the limitations “generating, with a parameterization engine, the decomposed object model from the annotated library model; and storing the decomposed object model” are taught by Rong in view of Liu and Lu (As discussed in the claim 2 rejection above, in Rong’s modified system, Liu’s reconstructed 3D vehicle modelling technique would be used to generate annotated library models of target vehicles, where Liu’s reconstructed 3D vehicle modelling technique comprises Lu’s deformable vehicle 3D model. Lu, sections 4-4.2, Appendix A, describes the deformable vehicle 3D model, which is a decomposed model comprising 18 separate parts, as well as separate body and tire component models as discussed in the claim 1 rejection above. That is, as claimed, the annotated library model is stored as a decomposed object model.)
Regarding claims 12 and 19, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above, with Rong, e.g. paragraphs 109-113, teaching that the system may be implemented using a processor executing a program stored in a non-transitory memory.
Regarding claims 13 and 20, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 2 above.
Regarding claim 14, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 3 above.
Claims 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2021/0383616 A1 (hereinafter Rong) in view of “AutoShape: Real-Time Shape-Aware Monocular 3D Object Detection” by Zongdai Liu, et al. (hereinafter Liu) and “PerMO: Perceiving More at Once from a Single Image for Autonomous Driving” by Feixiang Lu, et al. (hereinafter Lu) as applied to claim 1 above, and further in view of “Generic, Deformable Models for 3-D Vehicle Surveillance” by Matthew J. Leotta (hereinafter Leotta).
Regarding claim 21, the limitations “wherein the target object is a vehicle, and wherein the body component model comprises: the first set of parameters for a first non-steered tire and a second non-steered tire, a second set of parameters for a first steered tire and a second steered tire, wherein each set of parameters identifies a corresponding connection point on the body component model, a corresponding amount of scaling, a corresponding translation offset, and wherein the second set of parameters further each comprise a yaw-relative orientation to the vehicle” are partially taught by Rong in view of Lu (As discussed in the claim 1 rejection above, Lu’s model comprises the additionally recited parameters, i.e. the template tire model corresponds to the auxiliary component model, and there are 4 separate sets of parameters identifying connection points of the tire model to the deformed body model, corresponding to the claimed identifier of the first auxiliary component model and a location parameter detailing its connection point, where a global alignment algorithm is applied to deform each template tire model using rotation, translation, and scaling operations prior to assembly with the deformed body model, i.e. the claimed scaling and translation parameters for each first auxiliary component/connection point. More specifically with respect to the limitations of claim 21, said 4 separate sets of parameters correspond to the claimed first set(s) of parameters for the non-steered (rear) tires and second set(s) of parameters for the steered (front) tires, each set of parameters identifying a corresponding connection point, i.e. front left, front right, back left, or back right, an amount of scaling, translation, and rotation. None of the references, including Lu, address including a steered tire yaw-relative orientation as part of the deformed vehicle model.) However, this limitation is taught by Leotta (Leotta, e.g. 
abstract, chapters 1-10, discloses a system for reconstructing vehicles from surveillance cameras using a deformable 3D model, e.g. chapter 3 describes the deformable vehicle model, and chapters 4-9 describe fitting the model to surveillance video and experiments performed with the model. Leotta, e.g. sections 3.2, 3.3, teaches that the model comprises a plurality of vehicle parts, and a separate body component model and wheel component model, with the wheels being deformed separately from the body component model, e.g. page 95, final paragraph, analogous to Lu’s model. Leotta, e.g. section 10.1.4, paragraph 6, teaches that the vehicle model could be improved with a more complex motion model, accounting not only for the position of the wheels in the shape model, but also modeling the changing orientation of the wheels relative to the vehicle body, corresponding to the claimed steered tire yaw-relative orientation parameter, i.e. ground vehicles steer by rotating the steered wheels relative to the body in a yaw orientation, i.e. left and right, such that Leotta’s teaching to include a parameter modeling the orientation of the wheels relative to the body would be the claimed steered tire parameter indicating the yaw-relative orientation of the steered tire(s) to the body component of the vehicle.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Rong’s autonomous vehicle simulation system, using Liu’s reconstructed 3D vehicle modelling technique, comprising Lu’s deformable vehicle 3D model, to include Leotta’s tire modelling parameter indicating the orientation of the tire(s) relative to the vehicle body because Leotta expressly suggests improving the motion modeling in analogous deformed vehicle models used for reconstructing vehicle models from surveillance video, and further because one of ordinary skill in the art would recognize that it would increase the fidelity of Rong’s autonomous vehicle simulation, i.e. real vehicle wheels rotate relative to the vehicle body while turning, and Leotta’s improvement would allow Rong’s simulation to model the wheel-body relative rotation.
Regarding claim 22, the limitation “wherein the first auxiliary component model is for a first auxiliary component and a second auxiliary component of the target object” is taught by Rong in view of Lu (As discussed in the claim 1 rejection above, Lu’s model comprises the parameters for four tire components, i.e. the template tire model corresponds to the auxiliary component model, and there are 4 separate sets of parameters identifying connection points of the tire model to the deformed body model, corresponding to the first auxiliary component and second auxiliary component both using the first auxiliary component model.)
The limitation “and wherein the decomposed object model further comprises a second auxiliary component model” is not explicitly taught by Rong in view of Lu (None of the references, including Lu, address including a second auxiliary component model as part of the deformed vehicle model, i.e. the only additional auxiliary component model is the template tire model.) However, this limitation is taught by Leotta (Leotta, e.g. abstract, chapters 1-10, discloses a system for reconstructing vehicles from surveillance cameras using a deformable 3D model, e.g. chapter 3 describes the deformable vehicle model, and chapters 4-9 describe fitting the model to surveillance video and experiments performed with the model. Leotta, e.g. sections 3.2, 3.3, teaches that the model comprises a plurality of vehicle parts, and a separate body component model and wheel component model, with the wheels being deformed separately from the body component model, e.g. page 95, final paragraph, analogous to Lu’s model. Leotta, e.g. page 46, paragraph 2, pages 94-95, figure 5.6, teaches that in fitting the template mesh to an input CAD model, protruding structures like side mirrors, luggage racks, and wiper blades are removed, analogous to the wheel components, but teaches that the protruding structures could be modeled using separate component models disconnected from the body, just like the wheel components, i.e. Leotta teaches that the deformable vehicle model could include a first auxiliary model for the wheels/tires, analogous to Lu’s model and the claimed first/second auxiliary component as discussed above, and further second auxiliary model(s) for the protruding structures like side mirrors, luggage racks, and/or wiper blades.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Rong’s autonomous vehicle simulation system, using Liu’s reconstructed 3D vehicle modelling technique, comprising Lu’s deformable vehicle 3D model, to include Leotta’s protruding part separate component models for modeling smaller protruding parts because Leotta indicates it is known to ignore these smaller parts during modeling to reduce complexity and effort, but can be modeled analogously to the wheels/tires if necessary, and one of ordinary skill in the art would recognize that modeling the protruding parts would increase the fidelity of Rong’s autonomous vehicle simulation, i.e. modeling the finer details of the vehicles results in a higher fidelity representation.
Response to Arguments
Applicant’s arguments, see page 11, filed 9/24/25, with respect to 35 U.S.C. 112(b) rejections of claims 2-6, 11, 13-16, and 20 have been fully considered and are persuasive. The 35 U.S.C. 112(b) rejections of claims 2-6, 11, 13-16, and 20 have been withdrawn.
Applicant's arguments filed 9/24/25 have been fully considered but they are not persuasive.
Applicant’s remarks, e.g. pages 12, 13, suggest that Lu does not describe the particular storage required by the amended claims. However, Applicant’s remarks do not explain how Lu’s disclosed system does not include the claimed parameters, i.e. Lu’s system includes the body and tire, i.e. auxiliary, components/component models, where each tire is separately scaled, rotated, and translated prior to being assembled at its corresponding connection point on the model, necessitating storage of parameters for scaling, rotation, and translation, as well as a separate set of said parameters for each tire/connection point, where each set of parameters is applied to the same template tire model to generate a corresponding deformed tire model assembled to the corresponding connection point. That is, Lu’s system necessarily stores the claimed parameters: an auxiliary model identifier, i.e. said template tire model; a connection point location, i.e. front left, front right, back left, or back right; scaling factor(s); translation offset(s); and the rotation to be applied to the template tire model. As Applicant’s remarks do not suggest how Lu’s system would function without storing the claimed parameters for each of the tires, Applicant’s remarks cannot be considered persuasive.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT BADER whose telephone number is (571)270-3335. The examiner can normally be reached Monday through Friday, 11:00 am to 7:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT BADER/Primary Examiner, Art Unit 2611