DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/16/2025 has been entered.
Response to Amendment
Received 09/29/2025
Claims 1-20 are pending.
Claims 1, 2, 9, 11, 15, and 19 have been amended.
The objection to the Specification is maintained in view of the amendments received 09/29/2025.
The 35 U.S.C. § 103 rejections of claims 1-20 have been fully considered in view of the amendments received on 09/29/2025 and are fully addressed in the prior art rejection below.
Response to Arguments
Received 09/29/2025
Regarding independent claim 1:
Applicant’s arguments (Remarks; Page 7: ¶ 3), filed 09/29/2025, with respect to the rejection of claim 1 under 35 U.S.C. § 103 have been fully considered and are persuasive. The newly proposed amendments incorporate subject matter/limitations that further limit the determined one or more differences, which are between generated images (i.e., 2nd images) and perspective images (i.e., 3rd images). Moreover, the 2nd images are generated based on data from the 1st images and checked against the 3rd images (which are perspective images different from the 1st images). Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection, necessitated by Applicant's amendments, is made in view of Chernov et al. (US PGPUB No. 20170046868 A1), in view of Jin et al. (US PGPUB No. 20130124148 A1), in view of Trenholm et al. (US PGPUB No. 20190138786 A1), and further in view of Kawahara (US PGPUB No. 20190335162 A1).
Regarding independent claims 9 and 15:
Applicant’s arguments (Remarks; Page 8: ¶ 1), filed 09/29/2025, with respect to the rejections of claims 9 and 15 under 35 U.S.C. § 103 have been fully considered and are persuasive due to claim 9’s and claim 15’s similarity to claim 1. Therefore, the rejections have been withdrawn. However, upon further consideration, a new ground of rejection, necessitated by Applicant's amendments, is made in view of the prior art as mentioned above.
Regarding dependent claims 2-8, 10-14, and 16-20:
Applicant’s arguments (Remarks; Page 8: ¶ 3), filed 09/29/2025, with respect to the rejections of claims 2-8, 10-14, and 16-20 under 35 U.S.C. § 103 have been fully considered and are persuasive due to their dependency upon claims 1, 9, and 15, respectively. Therefore, the rejections have been withdrawn. However, upon further consideration, a new ground of rejection, necessitated by Applicant's amendments, is made in view of the prior art as mentioned above.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words. The form and legal phraseology often used in patent claims, such as "means" and "said," should be avoided. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, "The disclosure concerns," "The disclosure defined by this invention," "The disclosure describes," etc.
The abstract of the disclosure is objected to because it recites “… embodiment …”. Correction is required. See MPEP § 608.01(b).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chernov et al., US PGPUB No. 20170046868 A1, hereinafter Chernov, in view of Jin et al., US PGPUB No. 20130124148 A1, hereinafter Jin, in view of Trenholm et al., US PGPUB No. 20190138786 A1, hereinafter Trenholm, and further in view of Kawahara, US PGPUB No. 20190335162 A1, hereinafter Kawahara.
Regarding claim 1, Chernov discloses one or more processors (Chernov; processor(s) [¶ 0048 and ¶ 0159-0160], as illustrated within Fig. 18), comprising:
circuitry to use one or more neural networks (processor(s) [as addressed above], comprises circuitry to use one or more processes [¶ 0160-0161, ¶ 0182, and ¶ 0189]) to:
project one or more first images depicting a three-dimensional (3D) object from a first perspective onto one or more 3D mesh representations (Chernov; the processes [as addressed above] (configured) to project one or more 1st images depicting a 3D object from an implicit 1st perspective (given images of a capture position/angle) onto one or more 3D mesh representations [¶ 0057-0058], as illustrated within Fig. 2A; moreover, mapping textures on to a surface mesh [¶ 0066-0069]; wherein, high quality 3D model reconstruction may be provided via accurate surface reconstruction and texture mapping [¶ 0070]);
determine a region within the one or more 3D mesh representations corresponding to the first perspective (Chernov; the processes [as addressed above] (configured) to determine a region/surface within the one or more 3D mesh representations corresponding to the implicit 1st perspective (given images/textures are based on the capture position/angle) [¶ 0127-0130]; wherein, a surface corresponds to a number of polygons [¶ 0124-0125]; and wherein, assigning mesh faces as visible or invisible from a camera position [¶ 0134-0135 and ¶ 0139-0140]);
determine one or more differences (Chernov; the processes [as addressed above] (configured) to determine one or more differences between images [¶ 0128-0130]); and
modify the one or more 3D mesh representations based, at least in part, on the one or more differences (Chernov; the processes [as addressed above] (configured) to modify the one or more 3D mesh representations based (at least in part) on the one or more differences [¶ 0128-0130]).
Chernov fails to disclose circuitry to use one or more neural networks;
project the region to generate one or more second images depicting the 3D object from a second perspective that is a different perspective from the first perspective; and
determine one or more differences between the one or more second images and one or more third images that captures the 3D object from the second perspective.
However, Jin teaches project the region to generate one or more second images depicting the 3D object from a second perspective that is a different perspective from the first perspective (Jin; project the region [¶ 0063-0066] to generate one or more 2nd images depicting the 3D object from a 2nd perspective that is a different perspective from the 1st perspective [¶ 0040-0042]; wherein, image data is associated with mesh construction [¶ 0044-0045] and image analysis [¶ 0051-0052, ¶ 0055, and ¶ 0059-0060]; moreover, mesh tessellation [¶ 0035 and ¶ 0073-0074]);
determine one or more differences (Jin; determine one or more differences between images [¶ 0068-0070]); and
modify the one or more 3D mesh representations based, at least in part, on the one or more differences (Jin; modify the one or more 3D mesh representations based (at least in part) on the one or more differences [¶ 0072-0074]; moreover, 3D surface approximation [¶ 0045 and ¶ 0063]; wherein, reconstructing a 3D model [¶ 0083-0086]).
Chernov and Jin are considered to be analogous art because both pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov, to incorporate: project the region to generate one or more second images depicting the 3D object from a second perspective that is a different perspective from the first perspective; determine one or more differences; and modify the one or more 3D mesh representations based, at least in part, on the one or more differences (as taught by Jin), in order to provide improved modeling while reducing system resources (Jin; [¶ 0002-0003 and ¶ 0005-0006]).
Chernov as modified by Jin fails to disclose circuitry to use one or more neural networks; and
determine one or more differences between the one or more second images and one or more third images that captures the 3D object from the second perspective.
However, Trenholm teaches circuitry to use one or more neural networks (Trenholm; circuitry to use one or more NNs [¶ 0064-0066]; additionally, NN operating in at least two modes [¶ 0067 and ¶ 0073]).
Chernov in view of Jin and Trenholm are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, to incorporate circuitry to use one or more neural networks (as taught by Trenholm), in order to provide improved imaging information while accurately and reliably identifying objects within images (Trenholm; [¶ 0002-0005]).
Chernov as modified by Jin and Trenholm fails to disclose determine one or more differences between the one or more second images and one or more third images that captures the 3D object from the second perspective.
However, Kawahara teaches to: determine one or more differences between the one or more second images and one or more third images that captures the 3D object from the second perspective (Kawahara; determine one or more differences between the one or more 2nd images and one or more 3rd images that captures the 3D object from the 2nd perspective [¶ 0054-0055 and ¶ 0058-0060]; moreover, difference detection [¶ 0045]; additionally, 3D object data [¶ 0042 and ¶ 0056]).
Chernov in view of Jin and Trenholm and Kawahara are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin and Trenholm, to incorporate: determine one or more differences between the one or more second images and one or more third images that captures the 3D object from the second perspective (as taught by Kawahara), in order to provide improved modeling while reducing system resources (Kawahara; [¶ 0002-0004 and ¶ 0019]).
Regarding claim 2, Chernov in view of Jin, Trenholm, and Kawahara further discloses the one or more processors of claim 1, wherein using the one or more neural networks to determine the one or more differences further comprise determining an error between the one or more images (Trenholm; using the one or more NNs to determine the one or more differences further comprise determining an error between the one or more images [¶ 0064 and ¶ 0066]; wherein, images depicting the 3D object from the multiple perspectives [¶ 0047 and ¶ 0058]; and moreover, bundle adjustment poses [¶ 0062] and pose estimation [¶ 0071-0072]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, Trenholm, and Kawahara, to incorporate using the one or more neural networks to determine the one or more differences further comprising determining an error between the one or more images (as taught by Trenholm), in order to provide improved imaging information while accurately and reliably identifying objects within images (Trenholm; [¶ 0002-0005]).
Kawahara further teaches to determine the one or more differences further comprise determining an error between the one or more second images and the one or more third images depicting the 3D object from the second perspective (Kawahara; using tests to determine the one or more differences further comprise determining an implicit error (given the nature of a test) between the one or more 2nd images and the one or more 3rd images depicting the 3D object from the 2nd perspective [¶ 0058-0060]; moreover, difference detection in relation with testing and matching [¶ 0045, ¶ 0055-0056 and ¶ 0062]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, Trenholm, and Kawahara, to incorporate determining the one or more differences further comprising determining an error between the one or more second images and the one or more third images depicting the 3D object from the second perspective (as taught by Kawahara), in order to provide improved modeling while reducing system resources (Kawahara; [¶ 0002-0004 and ¶ 0019]).
Regarding claim 3, Chernov in view of Jin, Trenholm, and Kawahara further discloses the one or more processors of claim 1, wherein the using the one or more neural networks to modify the one or more 3D mesh representations further comprise updating one or more latent vector values in latent space based, at least in part, on the one or more differences (Trenholm; the using the one or more NNs to modify the one or more 3D mesh representations [as addressed within the parent claim(s)] further comprise updating one or more latent vector values in latent space based (at least in part) on the one or more differences [¶ 0064 and ¶ 0066-0067]; wherein, a NN uses weights within layers [¶ 0079 and ¶ 0081-0082]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, Trenholm, and Kawahara, to incorporate using the one or more neural networks to modify the one or more 3D mesh representations further comprising updating one or more latent vector values in latent space based, at least in part, on the one or more differences (as taught by Trenholm), in order to provide improved imaging information while accurately and reliably identifying objects within images (Trenholm; [¶ 0002-0005]).
Kawahara further teaches updating based, at least in part, on the one or more differences (Kawahara; updating [¶ 0062-0064] based, at least in part, on the one or more differences [¶ 0055-0056 and ¶ 0058-0059]; wherein, differences are detected [¶ 0045 and ¶ 0060] in relation with changing or updating [¶ 0065-0066]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, Trenholm, and Kawahara, to incorporate updating based, at least in part, on the one or more differences (as taught by Kawahara), in order to provide improved modeling while reducing system resources (Kawahara; [¶ 0002-0004 and ¶ 0019]).
Regarding claim 4, Chernov in view of Jin, Trenholm, and Kawahara further discloses the one or more processors of claim 1, wherein using the one or more neural networks to modify the one or more 3D mesh representations further comprise obtaining one or more geometric constraints corresponding to a 3D object (Trenholm; the one or more NNs to modify the one or more 3D mesh representations [as addressed within the parent claim(s)] further comprise obtaining one or more geometric constraints corresponding to a 3D object [¶ 0051-0054]; additionally, segmentation algorithm [¶ 0068-0070]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, Trenholm, and Kawahara, to incorporate using the one or more neural networks to modify the one or more 3D mesh representations further comprising obtaining one or more geometric constraints corresponding to a 3D object (as taught by Trenholm), in order to provide improved imaging information while accurately and reliably identifying objects within images (Trenholm; [¶ 0002-0005]).
Regarding claim 5, Chernov in view of Jin, Trenholm, and Kawahara further discloses the one or more processors of claim 1, wherein the one or more 3D mesh representations are generated by at least selecting an initial latent value (Trenholm; the one or more 3D mesh representations are generated by at least selecting an initial latent value [¶ 0064, ¶ 0066-0067, and ¶ 0076]; moreover, CNN machine learning models [¶ 0081-0082]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, Trenholm, and Kawahara, to incorporate the one or more 3D mesh representations being generated by at least selecting an initial latent value (as taught by Trenholm), in order to provide improved imaging information while accurately and reliably identifying objects within images (Trenholm; [¶ 0002-0005]).
Regarding claim 7, Chernov in view of Jin, Trenholm, and Kawahara further discloses the one or more processors of claim 1, wherein resolution of the one or more 3D mesh representations is increased as a result of modification of the one or more 3D mesh representations (Chernov; resolution of the one or more 3D mesh representations is increased as a result of modification (i.e. up-sampling) of the one or more 3D mesh representations [¶ 0109-0111 and ¶ 0116]; additionally, estimated ambiguity of pixels [¶ 0114]).
Regarding claim 8, Chernov in view of Jin, Trenholm, and Kawahara further discloses the one or more processors of claim 1, wherein the region corresponds to a meshlet that is combined with another meshlet to modify the one or more 3D mesh representations (Chernov; the region corresponds to a meshlet that is combined with another meshlet to modify the one or more 3D mesh representations [¶ 0058 and ¶ 0068-0069]; moreover, a number of patches [¶ 0093 and ¶ 0128]).
Regarding claim 9, the rejection of claim 9 is addressed within the rejection of claim 1 due to the similarities claim 9 and claim 1 share; therefore, refer to the rejection of claim 1 regarding the rejection of claim 9.
Regarding claim 10, Chernov in view of Jin, Trenholm, and Kawahara further discloses the method of claim 9, further comprising extracting a value representing one or more features of the 3D object and using the value to generate the one or more 3D mesh representations (Chernov; extracting a value representing one or more features of the 3D object and using the value to cause the one or more processes to generate the one or more 3D mesh representations [¶ 0079-0080 and ¶ 0084]; moreover, aligning feature point(s) [¶ 0087], creating a feature vector [¶ 0091], and feature tracking [¶ 0093]).
Jin further teaches feature processing (Jin; constraints representing a 3D model associated with indicating features [¶ 0025-0027]; moreover, identifying features [¶ 0043-0044]; moreover, analyzing individual image data to determine object feature [¶ 0049-0051]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, Trenholm, and Kawahara, to incorporate feature processing (as taught by Jin), in order to provide improved modeling while reducing system resources (Jin; [¶ 0002-0003 and ¶ 0005-0006]).
Trenholm further teaches extracting a value representing one or more features of the 3D object and using the value to cause the one or more neural networks to generate the one or more 3D mesh representations (Trenholm; extracting a value representing one or more features of the 3D object and using the value to cause the one or more neural networks to generate the one or more 3D mesh representations [¶ 0044-0045 and ¶ 0052-0054]; wherein features are detected/extracted [¶ 0061]; moreover, detecting a plurality of features of the object [¶ 0022]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, Trenholm, and Kawahara, to incorporate extracting a value representing one or more features of the 3D object and using the value to cause the one or more neural networks to generate the one or more 3D mesh representations (as taught by Trenholm), in order to provide improved imaging information while accurately and reliably identifying objects within images (Trenholm; [¶ 0002-0005]).
Regarding claim 11, the rejection of claim 11 is addressed within the rejection of claim 2 due to the similarities claim 11 and claim 2 share; therefore, refer to the rejection of claim 2 regarding the rejection of claim 11.
Regarding claim 12, the rejection of claim 12 is addressed within the rejection of claim 8 due to the similarities claim 12 and claim 8 share; therefore, refer to the rejection of claim 8 regarding the rejection of claim 12. Although claim 12 and claim 8 may not be identical, they are considerably comparable or substantially equivalent given their overlapping subject matter. Thus, it is reasonable to reject claim 12 based on the teachings and rationale in relation with the prior art within the rejection of claim 8.
Regarding claim 13, the rejection of claim 13 is addressed within the rejection of claim 8 due to the similarities claim 13 and claim 8 share; therefore, refer to the rejection of claim 8 regarding the rejection of claim 13. Although claim 13 and claim 8 may not be identical, they are considerably comparable or substantially equivalent given their overlapping subject matter. Thus, it is reasonable to reject claim 13 based on the teachings and rationale in relation with the prior art within the rejection of claim 8.
Regarding claim 14, the rejection of claim 14 is addressed within the rejection of claim 7 due to the similarities claim 14 and claim 7 share; therefore, refer to the rejection of claim 7 regarding the rejection of claim 14. Although claim 14 and claim 7 may not be identical, they are considerably comparable or substantially equivalent given their overlapping subject matter. Thus, it is reasonable to reject claim 14 based on the teachings and rationale in relation with the prior art within the rejection of claim 7.
Regarding claim 15, the rejection of claim 15 is addressed within the rejection of claim 1 due to the similarities claim 15 and claim 1 share; therefore, refer to the rejection of claim 1 regarding the rejection of claim 15.
Chernov teaches a non-transitory computer readable medium storing instructions that, when executed by a processor, cause the processor to perform operations (Chernov; a non-transitory computer readable medium storing instructions that, when executed by a processor, perform a method/operations [¶ 0012 and ¶ 0189]).
(further refer to the rejection of claim 1)
Regarding claim 17, Chernov in view of Jin, Trenholm, and Kawahara further discloses the non-transitory computer readable medium of claim 15, wherein using the one or more processes to modify the one or more 3D mesh representations further comprises increasing a resolution of the one or more 3D mesh representations using one or more geometric constraints imposed on the one or more objects (Chernov; using the one or more processes to modify the one or more 3D mesh representations [as addressed within the parent claim(s)] further comprises increasing a resolution of the one or more 3D mesh representations [¶ 0109-0111 and ¶ 0116] using one or more implicit geometric constraints imposed on the one or more objects [¶ 0058 and ¶ 0068-0070]), and wherein the one or more geometric constraints correspond to one or more conditions of a set of conditions under which the one or more first images were captured (Chernov; the one or more implicit geometric constraints correspond to one or more conditions/parameters of a set of conditions under which the one or more 1st images were captured [¶ 0061-0062 and ¶ 0064-0066]).
Trenholm further teaches wherein using the one or more neural networks to modify the one or more 3D mesh representations further comprises increasing a resolution of the one or more 3D mesh representations using one or more geometric constraints imposed on the one or more objects (Trenholm; using the one or more NNs [as addressed within the parent claim(s)] to modify the one or more 3D mesh representations further comprises increasing an implicit resolution (given dynamic range) of the one or more 3D mesh representations using one or more geometric constraints imposed on the one or more objects [¶ 0053-0054 and ¶ 0059]; moreover, segmentation [¶ 0068-0070]), and wherein the one or more geometric constraints correspond to one or more conditions of a set of conditions under which the one or more first images were captured (Trenholm; the one or more geometric constraints correspond to one or more conditions of a set of conditions under which the one or more 1st images were captured [¶ 0058-0059]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, Trenholm, and Kawahara, to incorporate using the one or more neural networks to modify the one or more 3D mesh representations further comprising increasing a resolution of the one or more 3D mesh representations using one or more geometric constraints imposed on the one or more objects, and wherein the one or more geometric constraints correspond to one or more conditions of a set of conditions under which the one or more first images were captured (as taught by Trenholm), in order to provide improved imaging information while accurately and reliably identifying objects within images (Trenholm; [¶ 0002-0005]).
Regarding claim 18, the rejection of claim 18 is addressed within the rejection of claim 8 due to the similarities claim 18 and claim 8 share; therefore, refer to the rejection of claim 8 regarding the rejection of claim 18. Although claim 18 and claim 8 may not be identical, they are considerably comparable or substantially equivalent given their overlapping subject matter. Thus, it is reasonable to reject claim 18 based on the teachings and rationale in relation with the prior art within the rejection of claim 8.
Regarding claim 19, the rejection of claim 19 is addressed within the rejection of claim 2 due to the similarities claim 19 and claim 2 share; therefore, refer to the rejection of claim 2 regarding the rejection of claim 19.
Regarding claim 20, Chernov in view of Jin, Trenholm, and Kawahara further discloses the non-transitory computer readable medium of claim 19, wherein the error comprises a silhouette error or photometric error (Chernov; the error comprises a photometric error [¶ 0127-0129]).
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Chernov in view of Jin, Trenholm, and Kawahara as applied to claims 1 and 15 above, and further in view of Florez Choque, US Patent No. 10311334 B1, hereinafter Florez-Choque.
Regarding claim 6, Chernov in view of Jin, Trenholm, and Kawahara further discloses the one or more processors of claim 1, wherein the one or more neural networks comprise a variational autoencoder (VAE) (Trenholm; the one or more NNs comprise an autoencoder [¶ 0054]; moreover, the NN implicitly comprises a VAE (given supervised or unsupervised learning) [¶ 0066-0067]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, Trenholm, and Kawahara, to incorporate the one or more neural networks comprising a variational autoencoder (VAE) (as taught by Trenholm), in order to provide improved imaging information while accurately and reliably identifying objects within images (Trenholm; [¶ 0002-0005]).
Chernov in view of Jin, Trenholm, and Kawahara fails to disclose a variational autoencoder (VAE).
However, Florez-Choque teaches the one or more neural networks comprise a variational autoencoder (VAE) (Florez-Choque; the one or more NNs comprise a VAE [Col. 3, line 15 to Col. 4, line 3]).
Chernov in view of Jin, Trenholm, and Kawahara and Florez-Choque are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, Trenholm, and Kawahara, to incorporate one or more neural networks comprising a variational autoencoder (VAE) (as taught by Florez-Choque), in order to provide improved recognition and reconstruction of image data that utilizes an optimized neural network (Florez-Choque; [Col. 1, lines 7-50]).
Regarding claim 16, Chernov in view of Jin, Trenholm, and Kawahara further discloses the non-transitory computer readable medium of claim 15, wherein the instructions, when executed by the processor, further cause the processor (Chernov; the instructions, when executed by the processor, further cause the processor [¶ 0189]) to at least:
generate the one or more 3D mesh representations of the 3D object (Chernov; the processor [as addressed above] is configured to use a processing stage to generate the one or more 3D mesh representations of the 3D object [¶ 0058 and ¶ 0068-0070]).
Chernov as modified by Jin, Trenholm, and Kawahara fails to disclose use a decoder to generate the one or more representations of the object, wherein the decoder receives one or more latent vector values as input and generates a decoded representation of the one or more latent vector values.
However, Florez-Choque teaches use a decoder to generate the one or more representations of the object (Florez-Choque; use an implicit decoder (given VAE) to generate the one or more representations of the object [Col. 7, lines 12-60 and Col. 8, lines 1-35]; wherein, the VAE comprises an encoder/compression and decoder/reconstruction [Col. 4, line 66 to Col. 5, line 35]), wherein the decoder receives one or more latent vector values as input and generates a decoded representation of the one or more latent vector values (Florez-Choque; the implicit decoder (given VAE, comprises a reconstruction) [as addressed above] receives one or more latent vector values [Col. 3, lines 28-45 and Col. 4, line 66 to Col. 5, line 35] as input and generates a decoded/reconstructed representation of the one or more latent vector values [Col. 7, lines 12-60 and Col. 8, lines 1-35]).
Chernov in view of Jin, Trenholm, and Kawahara and Florez-Choque are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chernov as modified by Jin, Trenholm, and Kawahara, to incorporate use of a decoder to generate the one or more representations of the object, wherein the decoder receives one or more latent vector values as input and generates a decoded representation of the one or more latent vector values (as taught by Florez-Choque), in order to provide improved recognition and reconstruction of image data that utilizes an optimized neural network (Florez-Choque; [Col. 1, lines 7-50]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to the PTO-892, Notice of References Cited, for a listing of analogous art.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Charles Lloyd Beard whose telephone number is (571)272-5735. The examiner can normally be reached Monday - Friday, 8:00 AM - 5:00 PM, alternate Fridays EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached at (571) 272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
CHARLES LLOYD BEARD
Primary Examiner
Art Unit 2616
/CHARLES L BEARD/ Primary Examiner, Art Unit 2611