DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 10, 11, and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li (CN 115468778 A, hereinafter "Li").
Regarding claim 1, Li discloses a method of generating sensor-realistic sensor data (Li, para [0004]), comprising the steps of:
-obtaining background sensor data from sensor data of a sensor (Li, para [0046]-[0047]; Reference at [0046] discloses Step 202: Based on the parameter information of the vehicle-mounted sensors, determine the simulated parameter information of the vehicle-mounted sensors of the target vehicle among multiple vehicles (i.e. sensor data of sensor). Para [0047] discloses based on any one of the multiple simulated pose information, determine the traffic static element image that matches any one of the simulated pose information (i.e. static element image interpreted as representing background sensor data));
-augmenting the background sensor data with one or more objects to generate an augmented background sensor output (Li, para [0085]; Reference discloses Step 405: Perform augmented reality synthesis on any static traffic element image (i.e. background sensor data) and any simulated dynamic traffic element image (i.e. one or more objects) that matches the static traffic element to obtain the synthesized image (i.e. augmented background sensor output)),
-wherein the augmenting the background sensor data includes determining a two-dimensional (2D) representation of each of the one or more objects based on a pose of the sensor (Li, para [0081], [0084], and [0085]; Reference at [0081] discloses Step 402: Based on the parameter information of the vehicle-mounted sensors, determine the simulated parameter information of the vehicle-mounted sensors of the target vehicle among multiple vehicles. Para [0084] discloses each of the multiple static traffic element images can correspond to a simulated pose information, and each simulated dynamic traffic element image can correspond to a simulated pose information (i.e. images are 2D representations of sensor data regarding one or more static objects in relation to pose). Para [0085] discloses Step 405: Perform augmented reality synthesis on any static traffic element image (i.e. sensor background data) and any simulated dynamic traffic element image that matches the static traffic element to obtain the synthesized image (i.e. augmented background sensor output));
-and generating sensor-realistic augmented sensor data based on the augmented background sensor output through use of a domain transfer network that takes, as input, the augmented background sensor output and generates, as output, the sensor-realistic augmented sensor data (Li, para [0076] and [0091]; Reference at [0076] discloses in summary, by acquiring sample static traffic element images, inputting the pose information of the vehicle-mounted sensors marked on the sample static traffic element images into the initial static element image generation model, a traffic static element prediction image output by the initial static element image generation model is obtained; based on the difference between the traffic static element prediction image and the sample static traffic element images, the initial traffic static element image generation model is trained. Thus, the traffic static element image generation model can be trained so that it learns the correspondence between pose information and traffic static element images (i.e. pose information from sensors of static traffic element images into the model interpreted as generating sensor-realistic augmented sensor data based on the augmented background sensor output through use of a domain transfer network). Para [0091] discloses by taking any one of multiple static traffic element images and determining the corresponding simulated dynamic traffic element image based on the simulated pose information of that static traffic element image, augmented reality synthesis is performed between the static traffic element image and the simulated dynamic traffic element image that matches it to obtain a synthesized image…by synthesizing virtual dynamic traffic elements with real static traffic elements, the interaction between simulated dynamic traffic elements and real static traffic elements is realized, improving the realism of the vehicle driving environment (i.e. the augmented background sensor output and generates, as output, the sensor-realistic augmented sensor data) in vehicle testing and thereby improving the accuracy of vehicle testing).
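For illustration, the claimed pipeline can be sketched as compositing a rendered object over background sensor data and then passing the result through a pre-trained domain transfer generator. This is a minimal sketch under assumptions, not Li's implementation; the `generator` model, array shapes, and normalization are all assumed for the example:

```python
# Illustrative sketch only: composite an object rendering onto background
# sensor data, then refine the composite with a domain transfer network.
# `generator` is a hypothetical pre-trained image-to-image model (PyTorch).
import numpy as np
import torch

def composite(background: np.ndarray, obj_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend an (H, W, 4) RGBA object rendering onto an (H, W, 3) RGB background."""
    alpha = obj_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * obj_rgba[..., :3] + (1.0 - alpha) * background
    return blended.astype(np.uint8)  # the augmented background sensor output

def make_sensor_realistic(background, obj_rgba, generator):
    augmented = composite(background, obj_rgba)
    x = torch.from_numpy(augmented).permute(2, 0, 1).unsqueeze(0)  # HWC -> NCHW
    x = x.float() / 127.5 - 1.0                                    # scale to [-1, 1]
    with torch.no_grad():
        y = generator(x)                                           # domain transfer step
    y = (y.squeeze(0).permute(1, 2, 0) + 1.0) * 127.5
    return y.clamp(0, 255).byte().numpy()                          # sensor-realistic output
```

Any image-to-image model with this tensor interface could serve as `generator`; the toy translator sketched under claim 11 below is one example.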
Regarding claim 2, Li discloses the method of claim 1.
Li further discloses
-further comprising receiving traffic simulation data providing trajectory data for the one or more objects, and determining an orientation and frame position of the one or more objects within the augmented background sensor output based on the trajectory data (Li, para [0126] and [0128]; Reference discloses based on historical traffic flow data from large-scale real roads, generate highly realistic traffic flow information such as the position and movement of the main vehicle and obstacle vehicles (i.e. receiving traffic simulation data providing trajectory data for the one or more objects), which is used to drive the movement of the main vehicle and obstacle vehicles in the scene and simulate real traffic scenarios. Para [0128] discloses As shown in Figure 8, the pose data of the sensor is used as the input of the NeRF model to generate a highly realistic environmental rendering image of the corresponding viewpoint (i.e. objects and vehicle controlled by position and movement data such as pose data within the scene interpreted as determining an orientation and frame position of the one or more objects within the augmented background sensor output based on the trajectory data)).
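For illustration, a generic sketch (not Li's method) of turning one trajectory sample into an orientation and a frame position follows; the intrinsics K and the world-to-sensor pose (R_ws, t_ws) are assumed inputs, and the yaw extraction assumes a z-up convention:

```python
# Generic sketch: from a trajectory sample (world position + heading), derive
# the object's orientation relative to the sensor and its pixel position.
import numpy as np

def object_in_frame(p_world, yaw_world, K, R_ws, t_ws):
    p_sensor = R_ws @ p_world + t_ws                  # world -> sensor coordinates
    u, v, w = K @ p_sensor                            # pinhole projection
    frame_position = (u / w, v / w)                   # pixel location in the frame
    sensor_yaw = np.arctan2(R_ws[1, 0], R_ws[0, 0])   # sensor heading (z-up assumption)
    orientation = yaw_world - sensor_yaw              # object orientation w.r.t. sensor
    return frame_position, orientation
```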
Regarding claim 3, Li discloses the method of claim 1.
Li further discloses
-wherein the augmented background sensor output includes the background sensor data with the one or more objects incorporated therein in a manner that is physically consistent with the background sensor data (Li, para [0142]; Reference discloses the fusion module 940 is configured to: for any one of the multiple static traffic element images, determine a simulated dynamic traffic element image that matches the static traffic element image based on the simulated pose information corresponding to the static traffic element image; perform augmented reality synthesis on the static traffic element image and the simulated dynamic traffic element image that matches the static traffic element image to obtain a synthesized image; and determine multiple target fusion images based on each synthesized image (i.e. augmented reality synthesis on static and dynamic traffic image via images that match interpreted as augmented background sensor output includes the background sensor data with the one or more objects incorporated therein in a manner that is physically consistent with the background sensor data)).
Regarding claim 4, Li discloses the method of claim 3.
Li further discloses
-wherein the orientation and the frame position of each of the one or more objects is determined based on a sensor pose of the sensor, wherein the sensor pose of the sensor is represented by a position and rotation of the sensor, wherein each object of the one or more objects is rendered over and/or incorporated into the background sensor data as a part of the augmented background sensor data output, and wherein the two-dimensional (2D) representation of each object of the objects is determined based on a three-dimensional (3D) model representing the object and the sensor pose (Li, para [0076]; Reference discloses by acquiring sample static traffic element images, inputting the pose information of the vehicle-mounted sensors marked on the sample static traffic element images (i.e. wherein the orientation and the frame position of each of the one or more objects is determined based on a sensor pose of the sensor, wherein the sensor pose of the sensor is represented by a position and rotation of the sensor) into the initial static element image generation model, a traffic static element prediction image output by the initial static element image generation model is obtained (i.e. wherein each object of the one or more objects is rendered over and/or incorporated into the background sensor data as a part of the augmented background sensor data output); based on the difference between the traffic static element prediction image and the sample static traffic element images, the initial traffic static element image generation model is trained. Thus, the traffic static element image generation model can be trained so that it learns the correspondence between pose information and traffic static element images (i.e. and wherein the two-dimensional (2D) representation of each object of the objects is determined based on a three-dimensional (3D) model representing the object and the sensor pose)).
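For context, the 2D representation recited here corresponds to standard pinhole projection of the object's 3D model through the sensor pose. A minimal sketch under assumed conventions (row-vector points, rotation R and translation t mapping world to camera, intrinsics K), not code from Li:

```python
# Minimal pinhole-projection sketch: a 2D representation of an object is the
# perspective projection of its 3D model vertices under the sensor pose (R, t).
import numpy as np

def project_model(vertices_world, R, t, K):
    """vertices_world: (N, 3) model points; returns (N, 2) pixel coordinates."""
    cam = vertices_world @ R.T + t      # world -> sensor (camera) frame
    uvw = cam @ K.T                     # apply camera intrinsics
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide
```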
Regarding claim 5, Li discloses the method of claim 4.
Li further discloses
-wherein the sensor-realistic image data includes photorealistic renderings of one or more graphical objects, each of which is one of the one or more objects (Li, para [0055]; Reference discloses by determining a static traffic element image that matches any of the simulated pose information from multiple simulated pose information, and using simulated traffic flow information and multiple simulated parameter information for image rendering, multiple simulated traffic dynamic element images can be obtained. Thus, based on simulated traffic flow information and/or simulated parameter information, multiple simulated traffic dynamic element images including multiple traffic dynamic elements and multiple traffic static element images with high realism can be generated (i.e. high realism traffic dynamic and static images interpreted as sensor-realistic image data includes photorealistic renderings of one or more graphical objects, each of which is one of the one or more objects)).
Regarding claim 10, Li discloses the method of claim 1.
Li further discloses
-wherein the target sensor is an image sensor (Li, para [0032]; Reference discloses The on-board sensors may include onboard cameras), and wherein the sensor-realistic augmented sensor data is photorealistic augmented image data for the image sensor (Li, para [0148]; Reference discloses it determines the simulated parameter information of the on-board sensors of a target vehicle among the multiple vehicles based on the parameter information of the on-board sensors; it determines multiple static traffic element images and multiple simulated dynamic traffic element images based on the simulated traffic flow information and/or the simulated parameter information; it fuses the multiple static traffic element images and multiple simulated dynamic traffic element images to obtain multiple target fused images…improves the realism of the vehicle driving environment in vehicle testing (i.e. sensor-realistic augmented sensor data is photorealistic augmented image data for the image sensor)).
Regarding claim 11, Li discloses the method of claim 1.
Li further discloses
-wherein the domain transfer network is used for performing an image-to-image translation of image data representing the one or more objects within the augmented background sensor output to sensor-realistic graphical image data representing the one or more objects as one or more sensor-realistic objects according to a target domain (Li, para [0076] and [0091]; Reference at [0076] discloses in summary, by acquiring sample static traffic element images, inputting the pose information of the vehicle-mounted sensors marked on the sample static traffic element images into the initial static element image generation model, a traffic static element prediction image output by the initial static element image generation model is obtained; based on the difference between the traffic static element prediction image and the sample static traffic element images, the initial traffic static element image generation model is trained (i.e. the model interpreted as the domain transfer network is used for performing an image-to-image translation of image data representing the one or more objects within the augmented background sensor output). Thus, the traffic static element image generation model can be trained so that it learns the correspondence between pose information and traffic static element images (i.e. pose information from sensors of static traffic element images into the model interpreted as generating sensor-realistic augmented sensor data based on the augmented background sensor output through use of a domain transfer network). Para [0091] discloses by taking any one of multiple static traffic element images and determining the corresponding simulated dynamic traffic element image based on the simulated pose information of that static traffic element image, augmented reality synthesis is performed between the static traffic element image and the simulated dynamic traffic element image that matches it to obtain a synthesized image…(i.e. sensor-realistic graphical image data representing the one or more objects as one or more sensor-realistic objects according to a target domain)).
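To make the "domain transfer network" interface concrete, a toy image-to-image translator is sketched below; neither Li nor the claim specifies an architecture, so everything here is an illustrative assumption:

```python
# Toy translator, for illustration only; real domain transfer networks
# (e.g., GAN generators) are far deeper, but the interface is the same.
import torch.nn as nn

class TinyTranslator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, x):  # x: (N, 3, H, W) augmented input; returns translated image
        return self.net(x)
```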
Regarding claim 18, Li discloses a data generation computer system (Li, para [0005]), comprising:
-at least one processor (Li, para [0157]; Reference discloses examples of computing units 1001 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs));
-and memory storing computer instructions; wherein the data generation computer system is, upon execution of the computer instructions by the at least one processor (Li, para [0157]; Reference discloses the computing unit 1001 executes the various methods and processes described above, such as the vehicle testing method. For example, in some embodiments, the vehicle testing method may be implemented as a computer software program that is tangibly contained in a machine-readable medium, such as storage unit 1008), configured to:
-obtain background sensor data from sensor data of a sensor (Li, para [0046]-[0047]; Reference at [0046] discloses Step 202: Based on the parameter information of the vehicle-mounted sensors, determine the simulated parameter information of the vehicle-mounted sensors of the target vehicle among multiple vehicles (i.e. sensor data of sensor). Para [0047] discloses based on any one of the multiple simulated pose information, determine the traffic static element image that matches any one of the simulated pose information (i.e. static element image interpreted as representing background sensor data));
-augment the background sensor data with one or more objects to generate an augmented background sensor output (Li, para [0085]; Reference discloses Step 405: Perform augmented reality synthesis on any static traffic element image (i.e. background sensor data) and any simulated dynamic traffic element image (i.e. one or more objects) that matches the static traffic element to obtain the synthesized image (i.e. augmented background sensor output)),
-wherein the augmenting the background sensor data includes determining a two-dimensional (2D) representation of each of the one or more objects based on a pose of the sensor (Li, para [0081], [0084], and [0085]; Reference at [0081] discloses Step 402: Based on the parameter information of the vehicle-mounted sensors, determine the simulated parameter information of the vehicle-mounted sensors of the target vehicle among multiple vehicles. Para [0084] discloses each of the multiple static traffic element images can correspond to a simulated pose information, and each simulated dynamic traffic element image can correspond to a simulated pose information (i.e. images are 2D representations of sensor data regarding one or more static objects in relation to pose). Para [0085] discloses Step 405: Perform augmented reality synthesis on any static traffic element image (i.e. sensor background data) and any simulated dynamic traffic element image that matches the static traffic element to obtain the synthesized image (i.e. augmented background sensor output));
-and generate sensor-realistic augmented sensor data based on the augmented background sensor output through use of a domain transfer network that takes, as input, the augmented background sensor output and generates, as output, the sensor-realistic augmented sensor data (Li, para [0076] and [0091]; Reference at [0076] discloses in summary, by acquiring sample static traffic element images, inputting the pose information of the vehicle-mounted sensors marked on the sample static traffic element images into the initial static element image generation model, a traffic static element prediction image output by the initial static element image generation model is obtained; based on the difference between the traffic static element prediction image and the sample static traffic element images, the initial traffic static element image generation model is trained. Thus, the traffic static element image generation model can be trained so that it learns the correspondence between pose information and traffic static element images (i.e. pose information from sensors of static traffic element images into the model interpreted as generating sensor-realistic augmented sensor data based on the augmented background sensor output through use of a domain transfer network). Para [0091] discloses by taking any one of multiple static traffic element images and determining the corresponding simulated dynamic traffic element image based on the simulated pose information of that static traffic element image, augmented reality synthesis is performed between the static traffic element image and the simulated dynamic traffic element image that matches it to obtain a synthesized image…by synthesizing virtual dynamic traffic elements with real static traffic elements, the interaction between simulated dynamic traffic elements and real static traffic elements is realized, improving the realism of the vehicle driving environment (i.e. the augmented background sensor output and generates, as output, the sensor-realistic augmented sensor data) in vehicle testing and thereby improving the accuracy of vehicle testing).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Li (CN 115468778 A) in view of Yangxue (CN 109613974 B, hereinafter "Xue").
Regarding claim 6, Li discloses the method of claim 4.
Li does not explicitly disclose but Xue teaches
-wherein the sensor is a camera, and the sensor pose of the camera is determined by a perspective-n-point (PnP) technique (Xue, para [0041]; Reference discloses estimate the camera pose using the PnP method and complete the feature point matching between the current frame's feature points and the keyframes in the home environment map).
Li and Xue are combinable because they are in the same field of endeavor regarding augmented reality implementation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the vehicle testing system of Li to include the augmented reality image fusion features of Xue. Doing so would provide the user with a system that performs driving simulations on multiple vehicles based on set historical traffic flow information to obtain simulated traffic flow information corresponding to the multiple vehicles, and that determines simulated parameter information of the on-board sensors of a target vehicle for subsequent fusion of different types of traffic images for AR synthesis, as taught by Li, while incorporating the augmented reality image fusion features of Xue to allow image recognition of selected items to complete 3D registration of virtual objects via binocular cameras, in which video frame images containing the identification images of the selected home items are identified and 3D-registered using natural feature identification technology to determine the initial pose of the camera, thereby improving AR object visualization, applicable to the AR synthesis features taught in Li.
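For illustration, the PnP technique recited in claim 6 is commonly solved with OpenCV's solvePnP; a minimal sketch with placeholder correspondences and intrinsics (not values from Xue):

```python
# Sketch of perspective-n-point (PnP) camera pose estimation with OpenCV.
import numpy as np
import cv2

# Placeholder 3D-2D correspondences (coplanar marker corners) and intrinsics.
object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float64)
image_pts = np.array([[320, 240], [420, 238], [424, 330], [318, 334]], dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, distCoeffs=None)
R, _ = cv2.Rodrigues(rvec)  # camera rotation matrix; tvec is the translation
```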
Regarding claim 7, Li discloses the method of claim 4.
Li does not explicitly disclose but Xue teaches
-wherein homography data is generated as a part of determining the sensor pose of the sensor, and wherein the homography data provides a correspondence between sensor data coordinates within a sensor data frame of the sensor and geographic locations of a real-world environment shown within a field of view (FOV) of the sensor (Xue, para [0016]-[0018]; Reference at para [0016] discloses Step 2.4: Calculate the homography matrix between the video frame image containing the marker image captured by the camera and the marker image (i.e. homography data is generated as a part of determining the sensor pose of the sensor). Para [0017] discloses Step 2.5: Use the RANSAC algorithm to establish the homography relationship between key points in the current video frame image and key points in the marker image to achieve refined matching; Para [0018] discloses Step 2.6: Calculate the coordinates of the four corner points of the marker image in the video frame image using the homography matrix, set the coordinates of the four corner points in the normalized world coordinate system, combine the coordinates in the image coordinate system, and use the 2D-3D correspondence to estimate the camera pose (i.e. wherein the homography data provides a correspondence between sensor data coordinates within a sensor data frame of the sensor and geographic locations of a real-world environment shown within a field of view (FOV) of the sensor)).
Li and Xue are combinable because they are in the same field of endeavor regarding augmented reality implementation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the vehicle testing system of Li to include the augmented reality image fusion features of Xue. Doing so would provide the user with a system that performs driving simulations on multiple vehicles based on set historical traffic flow information to obtain simulated traffic flow information corresponding to the multiple vehicles, and that determines simulated parameter information of the on-board sensors of a target vehicle for subsequent fusion of different types of traffic images for AR synthesis, as taught by Li, while incorporating the augmented reality image fusion features of Xue to allow image recognition of selected items to complete 3D registration of virtual objects via binocular cameras, in which video frame images containing the identification images of the selected home items are identified and 3D-registered using natural feature identification technology to determine the initial pose of the camera, thereby improving AR object visualization, applicable to the AR synthesis features taught in Li.
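For illustration, homography fitting with RANSAC in the manner of Xue's Steps 2.4-2.5 can be sketched as follows; the point correspondences are placeholders, not values from Xue:

```python
# Sketch of homography estimation: four (or more) matched points between
# the marker image and the video frame, refined with RANSAC.
import numpy as np
import cv2

src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=np.float32)    # marker-image corners
dst = np.array([[12, 8], [118, 14], [110, 122], [6, 112]], dtype=np.float32)  # corners found in frame

H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # 3x3 homography matrix
```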
Regarding claim 8, Li in view of Xue teaches the method of claim 7.
Li does not explicitly disclose but Xue teaches
-wherein the homography data is used to determine a geographic location of at least one object of the one or more objects based on a frame location of the at least one object (Xue, para [0018] and [0020]; Reference at para [0018] discloses Step 2.6: Calculate the coordinates of the four corner points of the marker image in the video frame image using the homography matrix, set the coordinates of the four corner points in the normalized world coordinate system, combine the coordinates in the image coordinate system, and use the 2D-3D correspondence to estimate the camera pose (i.e. the homography data is used to determine a geographic location). Para [0020] discloses fuse the model with the pose updated in Step 2.7 with the video frame image containing the marker image captured by the binocular camera and render it to the screen to complete the initial 3D registration of the real home scene and the 3D model of the selected home items (i.e. of at least one object of the one or more objects based on a frame location of the at least one object)).
Li and Xue are combinable because they are in the same field of endeavor regarding augmented reality implementation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the vehicle testing system of Li to include the augmented reality image fusion features of Xue. Doing so would provide the user with a system that performs driving simulations on multiple vehicles based on set historical traffic flow information to obtain simulated traffic flow information corresponding to the multiple vehicles, and that determines simulated parameter information of the on-board sensors of a target vehicle for subsequent fusion of different types of traffic images for AR synthesis, as taught by Li, while incorporating the augmented reality image fusion features of Xue to allow image recognition of selected items to complete 3D registration of virtual objects via binocular cameras, in which video frame images containing the identification images of the selected home items are identified and 3D-registered using natural feature identification technology to determine the initial pose of the camera, thereby improving AR object visualization, applicable to the AR synthesis features taught in Li.
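Once a frame-to-ground homography is in hand, mapping a frame location to a planar world location is a single projective transform. A generic sketch follows, interpreting "geographic location" as a ground-plane coordinate (an assumption for the example; H is assumed to come from a prior calibration like the one above):

```python
# Generic sketch: map a pixel location to a ground-plane location using a
# frame -> ground homography H (3x3 float matrix).
import numpy as np
import cv2

def pixel_to_ground(H, pixel):
    pt = np.array([[pixel]], dtype=np.float32)        # shape (1, 1, 2), as cv2 expects
    gx, gy = cv2.perspectiveTransform(pt, H)[0, 0]
    return float(gx), float(gy)
```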
Regarding claim 9, Li in view of Xue teaches the method of claim 8.
Li further discloses
-wherein the sensor is a camera (Li, para [0032]; Reference discloses The on-board sensors may include onboard cameras) and at least one of the objects is a graphical object, and wherein the graphical object includes a vehicle and the frame location of the vehicle corresponds to a pixel location of a vehicle bottom center position of the vehicle (Li, para [0133]; Reference discloses by simulating the driving of multiple vehicles based on established historical traffic flow information, simulated traffic flow information corresponding to multiple vehicles is obtained. Based on the parameter information of the onboard sensors, the simulated parameter information of the onboard sensors of the target vehicle among the multiple vehicles is determined (i.e. graphical objects including a vehicle). Based on the simulated traffic flow information and/or simulated parameter information, multiple static traffic element images and multiple simulated dynamic traffic element images are determined. These multiple static traffic element images and multiple simulated dynamic traffic element images are then fused to obtain multiple target fused images (i.e. frame location of the vehicle corresponds to a pixel location of a vehicle bottom center position of the vehicle)).
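The "vehicle bottom center" convention recited in claim 9 can be illustrated with a small helper (a sketch of the common ground-contact proxy, not code from Li):

```python
# Sketch: take the frame location of a vehicle as the bottom-center pixel
# of its 2D bounding box, a common proxy for the ground-contact point.
def bottom_center(box):
    """box = (x_min, y_min, x_max, y_max) in pixels."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, float(y_max))  # (u, v) at the vehicle's base

print(bottom_center((100, 50, 180, 140)))  # -> (140.0, 140.0)
```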
Claims 12-17 are rejected under 35 U.S.C. 103 as being unpatentable over Li (CN 115468778 A) in view of LIU (US 2024/0328772 A1, hereinafter "LIU").
Regarding claim 12, Li discloses the method of claim 11.
Li does not explicitly disclose but LIU teaches
-wherein the target domain is a photorealistic vehicle style domain that is generated by performing a contrastive learning technique on one or more datasets having photorealistic images of vehicles (LIU, para [0054] and [0061]; Reference at [0054] discloses In the present disclosure, photorealistic rendering is implemented through machine learning using neural networks through the means of style transfer. Style transfer is a process of presenting images in a first domain in another style from a second domain through machine learning. Para [0061] discloses FIG. 5 illustrates an input image and its corresponding output image generated using contrastive learning in accordance with some implementations of the present disclosure. As shown in FIG. 5, image 501 is an input image of a neural network, and image 502 is the corresponding output image by using only contrastive learning).
Li and LIU are combinable because they are in the same field of endeavor regarding augmented reality implementation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the vehicle testing system of Li to include the article measurement features of LIU. Doing so would provide the user with a system that performs driving simulations on multiple vehicles based on set historical traffic flow information to obtain simulated traffic flow information corresponding to the multiple vehicles, and that determines simulated parameter information of the on-board sensors of a target vehicle for subsequent fusion of different types of traffic images for AR synthesis, as taught by Li, while incorporating the article measurement features of LIU to allow use of a camera configured to obtain image data associated with an article, a neural network configured to identify a plurality of pixel locations in the image, and a processor configured to calculate 3D coordinates for each of the pixel locations, allowing for more accurate and reliable object modeling within AR technology, applicable to the AR synthesis features taught in Li.
Regarding claim 13, Li in view of LIU teaches the method of claim 12.
Li does not explicitly disclose but LIU teaches
-wherein the contrastive learning technique is performed on input photorealistic vehicle image data in which portions of images corresponding to depictions of vehicles within the photorealistic images of vehicles of the one or more datasets are excised and the excised portions are used for the contrastive learning technique (LIU, para [0060] and [0061]; Reference at [0060] discloses In the present disclosure, unsupervised learning is achieved through use of contrastive learning. After obtaining the corresponding output image, a contrastive loss is calculated based on a generated output patch selected from the output image, its corresponding input patch, and other patches in the input image, so that the output patch closely resembles its corresponding input patch, while also differing from any other patches in the image. Para [0061] discloses FIG. 5 illustrates an input image and its corresponding output image generated using contrastive learning in accordance with some implementations of the present disclosure. As shown in FIG. 5, image 501 is an input image of a neural network, and image 502 is the corresponding output image by using only contrastive learning).
Li and LIU are combinable because they are in the same field of endeavor regarding augmented reality implementation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the vehicle testing system of Li to include the article measurement features of LIU. Doing so would provide the user with a system that performs driving simulations on multiple vehicles based on set historical traffic flow information to obtain simulated traffic flow information corresponding to the multiple vehicles, and that determines simulated parameter information of the on-board sensors of a target vehicle for subsequent fusion of different types of traffic images for AR synthesis, as taught by Li, while incorporating the article measurement features of LIU to allow use of a camera configured to obtain image data associated with an article, a neural network configured to identify a plurality of pixel locations in the image, and a processor configured to calculate 3D coordinates for each of the pixel locations, allowing for more accurate and reliable object modeling within AR technology, applicable to the AR synthesis features taught in Li.
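The patch-based contrastive loss LIU describes (a query patch feature from the output, its positive from the matching input patch, and other input patches as negatives) corresponds to an InfoNCE objective. A minimal sketch with assumed feature shapes, for orientation only:

```python
# Sketch of a patch-wise contrastive (InfoNCE) loss over extracted patch features.
import torch
import torch.nn.functional as F

def patch_nce_loss(query, positive, negatives, tau=0.07):
    """query: (D,) output-patch feature; positive: (D,); negatives: (N, D)."""
    q = F.normalize(query, dim=0)
    pos = F.normalize(positive, dim=0)
    negs = F.normalize(negatives, dim=1)
    logits = torch.cat([(q @ pos).view(1), negs @ q]) / tau  # 1 positive + N negatives
    target = torch.zeros(1, dtype=torch.long)                # the positive is class 0
    return F.cross_entropy(logits.unsqueeze(0), target)
```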
Regarding claim 14, Li in view of LIU teaches the method of claim 12.
Li does not explicitly disclose but LIU teaches
-wherein the contrastive learning technique is used to perform unpaired image-to-image translation that maintains structure of the one or more objects and modifies an appearance of the one or more objects according to the photorealistic vehicle style domain (LIU, para [0061] and [0062]; Reference discloses by using contrastive learning only, the identity information of the digital avatar cannot be retained in the output image because during training, the images from the first domain and the second domain contains different identity information (i.e. contrastive learning technique is used to perform unpaired image-to-image translation that maintains structure of the one or more objects). FIG. 5 illustrates an input image and its corresponding output image generated using contrastive learning in accordance with some implementations of the present disclosure. As shown in FIG. 5, image 501 is an input image of a neural network, and image 502 is the corresponding output image by using only contrastive learning. Para [0062] discloses to solve such problem, a patch-based method is implemented in some examples. During the training process, instead of using a whole image as the input data to the neural network, a patch is obtained by cropping the image in a dataset. At an iteration, the patch is used as input data and inputted into the neural network. Thus, the network may pay more attention to style transformation (i.e. and modifies an appearance of the one or more objects according to the photorealistic vehicle style domain) in local areas and leave some global information unchanged).
Li and LIU are combinable because they are in the same field of endeavor regarding augmented reality implementation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the vehicle testing system of Li to include the article measurement features of LIU. Doing so would provide the user with a system that performs driving simulations on multiple vehicles based on set historical traffic flow information to obtain simulated traffic flow information corresponding to the multiple vehicles, and that determines simulated parameter information of the on-board sensors of a target vehicle for subsequent fusion of different types of traffic images for AR synthesis, as taught by Li, while incorporating the article measurement features of LIU to allow use of a camera configured to obtain image data associated with an article, a neural network configured to identify a plurality of pixel locations in the image, and a processor configured to calculate 3D coordinates for each of the pixel locations, allowing for more accurate and reliable object modeling within AR technology, applicable to the AR synthesis features taught in Li.
Regarding claim 15, Li in view of LIU teaches the method of claim 14.
Li does not explicitly disclose but LIU teaches
-wherein the contrastive learning technique is a contrastive unpaired translation (CUT) technique (LIU, para [0061]; Reference discloses using contrastive learning only, the identity information of the digital avatar cannot be retained in the output image because during training, the images from the first domain and the second domain contains different identity information).
Li and LIU are combinable because they are in the same field of endeavor regarding augmented reality implementation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the vehicle testing system of Li to include the article measurement features of LIU. Doing so would provide the user with a system that performs driving simulations on multiple vehicles based on set historical traffic flow information to obtain simulated traffic flow information corresponding to the multiple vehicles, and that determines simulated parameter information of the on-board sensors of a target vehicle for subsequent fusion of different types of traffic images for AR synthesis, as taught by Li, while incorporating the article measurement features of LIU to allow use of a camera configured to obtain image data associated with an article, a neural network configured to identify a plurality of pixel locations in the image, and a processor configured to calculate 3D coordinates for each of the pixel locations, allowing for more accurate and reliable object modeling within AR technology, applicable to the AR synthesis features taught in Li.
Regarding claim 16, Li discloses the method of claim 1.
Li does not explicitly disclose but LIU teaches
-wherein the domain transfer network is a generative adversarial network (GAN) model that includes a generative network that generates output image data and an adversarial network that evaluates the output image data to determine adversarial loss (LIU, para [0069]; Reference discloses in addition to the contrastive loss calculated based on the selected query sub-patch, multiple negative sub-patches, and positive sub-patch, an adversarial loss, such as a GAN loss, is also calculated, and model parameters of the neural network are updated based on both the contrastive loss and the adversarial loss. The use of the adversarial loss improves visual similarity between output patch and patch in a target domain, such as the second domain).
Li and LIU are combinable because they are in the same field of endeavor regarding augmented reality implementation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the vehicle testing system of Li to include the article measurement features of LIU. Doing so would provide the user with a system that performs driving simulations on multiple vehicles based on set historical traffic flow information to obtain simulated traffic flow information corresponding to the multiple vehicles, and that determines simulated parameter information of the on-board sensors of a target vehicle for subsequent fusion of different types of traffic images for AR synthesis, as taught by Li, while incorporating the article measurement features of LIU to allow use of a camera configured to obtain image data associated with an article, a neural network configured to identify a plurality of pixel locations in the image, and a processor configured to calculate 3D coordinates for each of the pixel locations, allowing for more accurate and reliable object modeling within AR technology, applicable to the AR synthesis features taught in Li.
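A minimal sketch of the GAN structure claim 16 recites, with a generative network producing output image data and an adversarial network evaluating it for an adversarial loss; architectures and batches are toy placeholders, not anything from Li or LIU:

```python
# Sketch: generator G makes output images; discriminator D scores real vs.
# generated images, yielding adversarial losses for both networks.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())          # generative network
D = nn.Sequential(nn.Conv2d(3, 1, 4, stride=2), nn.Flatten(),
                  nn.LazyLinear(1))                                   # adversarial network
bce = nn.BCEWithLogitsLoss()

x = torch.randn(2, 3, 64, 64)     # augmented input batch (placeholder)
real = torch.randn(2, 3, 64, 64)  # target-domain batch (placeholder)

fake = G(x)                       # generated output image data
d_loss = bce(D(real), torch.ones(2, 1)) + bce(D(fake.detach()), torch.zeros(2, 1))
g_loss = bce(D(fake), torch.ones(2, 1))  # adversarial loss for the generator
```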
Regarding claim 17, Li in view of LIU teaches the method of claim 16.
Li further discloses
-wherein the features of the claim, apart from the GAN model addressed below, are taught by Li (Li, para [0076] and [0091]; Reference at [0076] discloses in summary, by acquiring sample static traffic element images, inputting the pose information of the vehicle-mounted sensors marked on the sample static traffic element images into the initial static element image generation model, a traffic static element prediction image output by the initial static element image generation model is obtained; based on the difference between the traffic static element prediction image and the sample static traffic element images, the initial traffic static element image generation model is trained (i.e. the model interpreted as the domain transfer network is used for performing an image-to-image translation of image data representing the one or more objects within the augmented background sensor output). Thus, the traffic static element image generation model can be trained so that it learns the correspondence between pose information and traffic static element images (i.e. pose information from sensors of static traffic element images into the model interpreted as generating sensor-realistic augmented sensor data based on the augmented background sensor output through use of a domain transfer network). Para [0091] discloses by taking any one of multiple static traffic element images and determining the corresponding simulated dynamic traffic element image based on the simulated pose information of that static traffic element image, augmented reality synthesis is performed between the static traffic element image and the simulated dynamic traffic element image that matches it to obtain a synthesized image…(i.e. sensor-realistic graphical image data representing the one or more objects as one or more sensor-realistic objects according to a target domain)).
Li does not explicitly disclose but LIU teaches
-GAN (model) (LIU, para [0041]; Reference discloses FIG. 19 is a flow chart illustrating steps of calculating a generative adversarial network (GAN) loss and updating model parameters of a neural network).
Li and LIU are combinable because they are in the same field of endeavor regarding augmented reality implementation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the vehicle testing system of Li to include the article measurement features of LIU. Doing so would provide the user with a system that performs driving simulations on multiple vehicles based on set historical traffic flow information to obtain simulated traffic flow information corresponding to the multiple vehicles, and that determines simulated parameter information of the on-board sensors of a target vehicle for subsequent fusion of different types of traffic images for AR synthesis, as taught by Li, while incorporating the article measurement features of LIU to allow use of a camera configured to obtain image data associated with an article, a neural network configured to identify a plurality of pixel locations in the image, and a processor configured to calculate 3D coordinates for each of the pixel locations, allowing for more accurate and reliable object modeling within AR technology, applicable to the AR synthesis features taught in Li.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the Notice of References Cited (PTO-892).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TERRELL M ROBINSON whose telephone number is (571)270-3526. The examiner can normally be reached 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KENT CHANG can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TERRELL M ROBINSON/Primary Examiner, Art Unit 2614