DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 11-13, 16, and 18-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 11-13, and 16-19 of U.S. Patent No. 11521358 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because all the limitations of claim 1 are anticipated by claim 1 of U.S. Patent No. 11521358 B2.
Application 18737560
Claim 1
US 11521358 B2
Claim 1
1. A computer-implemented method for providing third party data assets to clients, the method comprising:
1. A computer-implemented method for providing third party data assets to clients, the method comprising:
sending, by a computing system, a software development kit to a third party computing system,
sending, by a computing system, a software development kit to a third party computing system,
wherein the software development kit comprises a template for building one or more rendering effect shaders,
wherein the software development kit comprises a template for building one or more rendering effect shaders,
wherein the software development kit comprises one or more per-product presets,
wherein the software development kit comprises one or more per-product presets,
wherein the template and the one or more per-product presets are associated with products in a particular product class,
wherein the template and the one or more per-product presets are associated with products in a particular product class,
wherein the one or more per-product presets comprise one or more parameters associated with uniform values and textures for the particular product class;
wherein the one or more per-product presets comprise one or more parameters associated with uniform values and textures for the particular product class;
receiving, by the computing system, data assets from the third party computing system,
receiving, by the computing system, data assets from the third party computing system,
wherein the data assets comprise one or more rendering effect shaders built using the software development kit,
wherein the data assets comprise one or more rendering effect shaders built using the software development kit,
wherein the data assets are associated with one or more products of the particular product class;
wherein the data assets are associated with one or more products of the particular product class;
processing, by the computing system, the data assets to generate obfuscated code;
processing, by the computing system, the data assets to generate obfuscated code,
wherein generating the obfuscated code comprises: determining, by the computing system, one or more comments in a code comprise text descriptive of code semantics, wherein the code is associated with the data assets; and
removing, by the computing system, the one or more comments that comprise text descriptive of code semantics;
storing, by the computing system, the obfuscated code associated with the data assets; and
storing, by the computing system, the obfuscated code associated with the data assets; and
providing, by the computing system, an augmented reality rendering experience,
providing, by the computing system, an augmented reality rendering experience,
wherein augmented reality renderings are based at least in part on the data assets.
wherein augmented reality renderings are based at least in part on the data assets.
Claims 11, 12, 13, 18, and 20
Claims 1, 11, 12, 13, and 17-18
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-7, 10-14, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Gentilin (US 20170206708 A1) in view of Abreu (US 20160196665 A1), and further in view of Yehuda (US 20200150933 A1).
Regarding claim 1, Gentilin discloses a computer-implemented method for providing third party data assets to clients ([0005]: third party with respect to the user and the virtual reality application provider; [0028]: generate a virtual reality environment for displaying content; [0029]: create a modified virtual reality environment from the design data; Fig. 1; [0032]; [0042]: send one or more applications to client computing device 110 over network; [0085]: receive one or more files of virtual reality environments created by third party software development kits), the method comprising:
sending, by a computing system, a software development kit to a third party computing system (Fig. 5; [0118]: virtual reality content management system 170 sends a software development kit, i.e., SDK, with publisher identifying information; [0120]: application publisher computing devices 160 inserts and sends a software development kit, SDK, into a virtual reality application;
[image: media_image1.png, 350 x 484, greyscale]
; [0124]: application publisher computing devices 160 sends the VR application to client computing device 110), wherein the software development kit comprises a template for building one or more rendering effect shaders (Fig. 5; [0118]: the software development kit includes tools for generating and building an environment within the particular application by making calls to virtual reality content management system 170 for design data and content), wherein the software development kit comprises one or more per-product presets ([0078]: proportional parameters allow the application publisher to better customize the experience to a particular application, i.e. per-product presets; Fig. 5; [0118]: application publisher computing devices 160 receives a software development kit with publisher identifying information, such as an application ID, and placement ID; application ID is per-product preset), wherein the template and the one or more per-product presets are associated with products in a particular product class ([0036]: executes a particular application 120 to render a virtual reality game; [0074]: an SDK includes tools for selecting texture types, three dimensional models, movement logic, and other graphical elements; the other graphical elements include uniform; [0079]: an open desert is a particular product; if the environment was an open desert, while the environment may include models and textures beyond the movement boundaries, the viewer would be unable to move past the movement boundaries; [0120]: an SDK renders a virtual reality environment on demand by making one or more calls to virtual reality content management system; the virtual reality environment with particular content, i.e. 
particular product, may be loaded in advance of being displayed; display the content in the particular virtual reality environment, a particular product), wherein the one or more per-product presets comprise one or more parameters associated with uniform values and textures for the particular product class ([0070]: the design data comprises one or more environment textures; [0074]: an SDK includes tools for selecting texture types, three dimensional models, movement logic, and other graphical elements; the other graphical elements include uniform; [0079]: if the environment was an open desert, while the environment may include models and textures beyond the movement boundaries, the viewer would be unable to move past the movement boundaries; [0099]: a uniform resource locator; execute an application with a parameter of uniform);
receiving, by the computing system, data assets from the third party computing system ([0085]: receive one or more files of virtual reality environments, created by third party software development kits; [0092]: the particular content is received at any time before the execution of the virtual reality environment; virtual reality content management system 170 is initially receive content from one or more content provider servers), wherein the data assets comprise one or more rendering effect shaders built using the software development kit ([0070]: permit generating a fully rendered three dimensional virtual reality environment; Fig. 5; [0118]: the software development kit may include tools for generating and building an environment within the particular application by making calls to virtual reality content management system 170 for design data and content; [0120]: the virtual reality environment library is an SDK configured to render a virtual reality environment on demand by making one or more calls to virtual reality content management system 170);
storing, by the computing device, the data assets ([0049] store data defining one or more parameters for a digital graphical virtual reality environment; [0073]: store data defining one or more parameters for a digital graphical virtual reality environment; [0076]: virtual reality content management system 170 stores data for one or more default environments; [0122]: store the virtual reality environment); and
providing, by the computing system, a reality rendering experience ([0123]: application publisher computing device(s) 160 sends the VR application to client computing device 110; Fig. 5; [0124]: client computing device 110 plays a three dimensional movie or initiate a virtual reality video game; [0126]: virtual reality content management system 170 causes displaying the virtual reality environment on client computing device 110), wherein reality renderings are based at least in part on the data assets ([0126]: virtual reality content management system 170 causes displaying the virtual reality environment on client computing device 110).
Gentilin fails to explicitly disclose:
rendering effect environment is rendering effect shaders;
wherein the data assets are associated with one or more products of the particular product class;
processing, by the computing system, the data assets to generate obfuscated code;
storing, by the computing system, the obfuscated code associated with the data assets;
reality rendering is an augmented reality rendering.
In the same field of endeavor, Abreu teaches:
rendering effect environment is rendering effect shaders ([0073]: a plurality of shader modules 22 determines and applies image color to selected regions; the output of a custom makeup shader module 22 is sent to a renderer 24 that augments the underlying user's face in the captured image; [0076]: After all of the regions and color parameters are processed by the transform module 30 and defined shader modules 22, the renderer 24 overlays the selected optimized meshes; [0142]: the renderer 24 performs an alpha blend of the adjusted texture data regions associated with each of the layered optimized meshes 18, as output by the respective shader modules 22);
wherein the data assets are associated with one or more products of the particular product class ([0073]: the output of a custom makeup shader module 22 is sent to a renderer 24; determine and apply image color to represent virtual application of lipstick, blusher, eye shadow and foundation to the captured image data; Fig. 2; [0074]: a database of beauty product details; each product or group of products is associated with a respective set of color parameters);
providing, by the computing device, an augmented reality rendering experience ([0015]: an augmentation representation is generated by retrieving data defining a plurality of polygonal regions; [0021]: output the augmented captured image data; [0076]: the renderer 24 overlays the selected optimized meshes 18 according to the common reference plane), wherein augmented reality renderings are based at least in part on the data assets ([0015]; [0021]; [0073]: a plurality of shader modules 22 determine and apply image color to selected regions of texture data files 20; shader is part of rendering subgraph; [0076]: the renderer 24 overlays the selected optimized meshes 18 according to the common reference plane; Fig. 15; [0114]; Fig. 19; [0138]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gentilin to include rendering effect environment is rendering effect shaders; wherein the data assets are associated with one or more products of the particular product class; providing, by the computing device, an augmented reality rendering experience, wherein augmented reality renderings are based at least in part on the data assets as taught by Abreu. The motivation for doing so would have been to output the augmented captured image data; to determine and apply image color to selected regions of texture data files 20 by shader modules; to output effects by a custom makeup shader module 22; to improve computation speed; to improve processing efficiency; to improve the quality of the generated histograms; as taught by Abreu in paragraphs [0021], [0073], [0104], [0106], and [0118].
Gentilin in view of Abreu fails to explicitly disclose:
processing, by the computing system, the data assets to generate obfuscated code;
storing, by the computing system, the obfuscated code associated with the data assets.
In the same field of endeavor, Yehuda teaches:
processing, by the computing system, the data assets to generate obfuscated code ([0069]: preparation module 601 performs an in-binary file modification process on the APK files; obfuscated files; [0097]: renames are applied to both the plugin invocation and the import table that is attached to each plugin; [0099]: obfuscate the data; the AFP extends standard stripping tools strip away symbols and metadata referring to symbols such as load commands, debug lines, decorations, etc.);
storing, by the computing system, the obfuscated code associated with the data assets ([0031]: store an unlimited number of mobile apps in a stateful repository; [0069]: obfuscated files; [0071]: programs for Android are commonly written in Java and compiled to bytecode for the Java VM, which is then translated to Dalvik bytecode and stored in Dalvik Executable file and Optimized Dalvik Executable files).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gentilin in view of Abreu to include processing, by the computing system, the data assets to generate obfuscated code; storing, by the computing system, the obfuscated code associated with the data assets as taught by Yehuda. The motivation for doing so would have been to rename both the plugin invocation and the import table that is attached to each plugin; to obfuscate the data; rename all SDK classes and resources as taught by Yehuda in paragraphs [0097], [0099], and [0107].
Regarding claim 2, Gentilin in view of Abreu and Yehuda discloses the method of claim 1, wherein receiving the data assets comprises receiving one or more rendering subgraphs for the one or more particular products (Abreu; Fig. 3a-d; [0065]: a mesh generator 6 retrieves at least one reference image 8 from the training image database; [0073]: a plurality of shader modules 22 determine and apply image color to selected regions of texture data files 20; shader is part of rendering subgraph; [0076]: the renderer 24 overlays the selected optimized meshes 18 according to the common reference plane; renderer is part of rendering subgraph; [0077]: shader modules 22 and renderer 24 are part of rendering subgraph).
The same motivation as set forth for claim 1 applies here.
Regarding claim 3, Gentilin in view of Abreu and Yehuda discloses the method of claim 2, wherein storing, by the computing system, the obfuscated code associated with the data assets (Yehuda; [0031]: store an unlimited number of mobile apps in a stateful repository; [0069]: obfuscated files; [0071]: programs for Android are commonly written in Java and compiled to bytecode for the Java VM, which is then translated to Dalvik bytecode and stored in Dalvik Executable file and Optimized Dalvik Executable files) comprises:
storing, by the computing system, the data assets including the one or more rendering subgraphs (Abreu; Fig. 2; Fig. 3a-d; [0065]: a mesh generator 6 retrieves at least one reference image 8 from the training image database 5, and generates data defining a plurality of polygonal regions based on the retrieved reference image 8; a mesh generator, mesh, and reference image 8 are a perception subgraph;
[image: media_image2.png, 799 x 730, greyscale]
; [0067]: the normalized mesh 10 data are stored in an object model database 7 of the system 1; [0072]: a plurality of texture data files 20 are stored in the object model database).
Regarding claim 4, Gentilin in view of Abreu and Yehuda discloses the method of claim 2, wherein the augmented reality rendering is generated by:
receiving a camera feed (Abreu; [0021]: receive captured image data from a camera);
processing the camera feed with a perception subgraph to generate a user mesh (Abreu; Fig. 2; Fig. 3a-d; [0065]: a mesh generator 6 retrieves at least one reference image 8 from the training image database 5, and generates data defining a plurality of polygonal regions based on the retrieved reference image 8; a mesh generator, mesh, and reference image 8 are a perception subgraph;
[image: media_image2.png, 799 x 730, greyscale]
); and
processing the user mesh with the one or more rendering subgraphs to generate the augmented reality rendering, wherein the augmented reality rendering comprises a rendering of the one or more particular products within the camera feed (Abreu; [0015]: an augmentation representation is generated by retrieving data defining a plurality of polygonal regions; [0021]: output the augmented captured image data; Fig. 2; Fig. 3a-d; [0065]: a mesh generator 6 retrieves at least one reference image 8 from the training image database 5, and generates data defining a plurality of polygonal regions based on the retrieved reference image 8; a mesh generator, mesh, and reference image 8 are a perception subgraph;
[image: media_image2.png, 799 x 730, greyscale]
; [0076]: the renderer 24 overlays the selected optimized meshes 18 according to the common reference plane).
Regarding claim 6 (objected), Gentilin in view of Abreu and Yehuda discloses the method of claim 1, wherein processing, by the computing system, the data assets to generate obfuscated code comprises:
determining, by the computing system, one or more terms to rename (Yehuda; [0097]: renames are applied to both the plugin invocation and the import table that is attached to each plugin);
indexing, by the computing system, the one or more terms (Yehuda; [0097]: index by the import table);
determining, by the computing system, one or more assigned terms for the one or more terms (Yehuda; [0097]: import table; [0107]: all SDK classes and resources are automatically renamed to avoid conflicts with the fused app); and
renaming, by the computing system, the one or more terms in the code based on the one or more assigned terms, wherein renaming is uniform across multiple instances of the one or more terms (Yehuda; [0097]: renames are applied to both the plugin invocation and the import table that is attached to each plugin; [0107]: all SDK classes and resources are automatically renamed to avoid conflicts with the fused app).
The same motivation as set forth for claim 1 applies here.
Regarding claim 7, Gentilin in view of Abreu and Yehuda discloses the method of claim 6, wherein renaming, by the computing system, the one or more terms in the code comprises uniform renaming across files (Yehuda; [0097]: renames are applied to both the plugin invocation and the import table that is attached to each plugin; [0107]: all SDK classes and resources are automatically renamed to avoid conflicts with the fused app).
The same motivation as set forth for claim 1 applies here.
Regarding claim 10, Gentilin in view of Abreu and Yehuda discloses the method of claim 1, wherein providing the augmented reality rendering experience (same as rejected in claim 1) comprises:
obtaining image data (Abreu; [0021]: receive captured image data from a camera);
processing the image data with a perception subgraph of an augmented-reality rendering model to generate a first output, wherein the perception subgraph was generated with the computing system (Abreu; [0015]: an augmentation representation is generated by retrieving data defining a plurality of polygonal regions; [0021]: output the augmented captured image data; Fig. 2; Fig. 3a-d; [0065]: a mesh generator 6 retrieves at least one reference image 8 from the training image database 5, and generates data defining a plurality of polygonal regions based on the retrieved reference image 8; a mesh generator, mesh, and reference image 8 are a perception subgraph;
[image: media_image2.png, 799 x 730, greyscale]
);
processing the first output with a rendering subgraph of the augmented-reality rendering model to generate augmented-reality media, wherein the rendering subgraph is obtained from the data assets obtained from the third party computing system (Abreu; [0079]: the main elements of the shape training module 3 as well as the data elements processed and generated by the shape training module 3 for the trained shape models; [0086]: data that is processed and generated by the texture model training module 4 during the training process; [0092]: the plurality of training images 23; [0100]: utilizing stored training data 5).
The same motivation as set forth for claim 1 applies here.
Regarding claim 11, Gentilin in view of Abreu and Yehuda discloses the method of claim 1, wherein the reality rendering is a video game reality rendering (Gentilin; [0003]: the game or movie; [0005]: play a video of the tower defense game in a different virtual reality game; [0007]: a user plays a racing game in a virtual reality application; [0036]: modified reality game, 360 degree video, or 3D video; [0038]: a two dimensional game or game demonstration, a three dimensional video, a 360 degree video, a virtual reality game or game demonstration, and/or a three dimensional game or game demonstration).
Gentilin in view of Abreu and Yehuda further discloses augmented reality rendering (Abreu; [0015]: an augmentation representation is generated by retrieving data defining a plurality of polygonal regions; [0021]: output the augmented captured image data; [0076]: the renderer 24 overlays the selected optimized meshes 18 according to the common reference plane).
Regarding claim 12, Gentilin in view of Abreu and Yehuda discloses the method of claim 1, wherein the data assets comprise product data descriptive of a product sold by a third party (Gentilin; [0004]: video games are sold by third party; [0005]: means content from third parties with respect to the user and the virtual reality application provider; many modern HMDs, such as the Samsung Gear VR™ and the Google Cardboard™, make use of the technology in modern smartphones to provide virtual reality applications; [0085]: virtual reality content management system 170 receives one or more files of virtual reality environments created by third party software development kits, such as Maya®, Blender, or Unity).
Regarding claim 13, Gentilin in view of Abreu and Yehuda discloses the method of claim 1, wherein the augmented reality rendering is generated by (same as rejected in claim 1):
receiving, by the computing device, user data (Abreu; [0028]: receive data of an image captured by a camera, the captured image including a facial feature portion corresponding to at least one feature of a user's face; [0075]: the user's face; [0090]: processes user input to define a plurality of labelled feature points 25 in the training images of the training image database; Fig. 15; [0129]: receive captured image data from the camera);
processing, by the computing device, the user data with an encoder model to generate a user mesh (Abreu; Fig. 3a; [0065]: a mesh generator 6 that retrieves at least one reference image 8 from the training image database; [0066]: the mesh generator 6 may further prompt the user for input to optimize the normalized mesh; [0068]: the normalized mesh 10 retrieved from the object model database 7 and data defining one or more user-defined masks; Fig. 15; [0129]: process and determine if an object, a subject's face); and
processing, by the computing device, the user mesh with an augmentation model to generate the augmented reality rendering (Abreu; [0087]: overlays the retrieved mask 14a on the retrieved normalized object mesh; overlaid on the normalized mesh), wherein the augmentation model comprises shaders based at least in part on the data assets (Abreu; [0073]: a plurality of shader modules 22 that determine and apply image colorization to selected regions of texture data files 20; the output of a custom makeup shader module 22 is sent to a renderer 24 that augments the underlying user's face in the captured image from the camera 9 with the specified virtual makeup; [0076]: after all of the regions and color parameters are processed by the transform module 30 and defined shader modules 22, the renderer 24 overlays the selected optimized meshes; [0142]: the renderer 24 performs an alpha blend of the adjusted texture data regions associated with each of the layered optimized meshes 18, as output by the respective shader modules 22).
The same motivation as set forth for claim 1 applies here.
Regarding claim 14, Gentilin in view of Abreu and Yehuda discloses the method of claim 1, wherein the one or more rendering effect shaders comprise a texture shader, a uniforms shader, and a filtering shader (or is optional; Abreu; [0073]: a plurality of shader modules 22 that determine and apply image color to selected regions of texture data files 20; each shader module 22 can be based on predefined sets of sub-shader modules; Fig. 2; [0074]; [0135]: Kalman filtering; [0153]: a simple Gaussian noise).
Regarding claim 16, Gentilin discloses a computing system ([0005]: third party; [0028]: generate a virtual reality environment for displaying content; [0029]: create a modified virtual reality environment from the design data for the purpose of displaying inserted content; Fig. 1; [0032]: client computing device 110, application publisher computing devices 160, virtual reality content management system 170, and content provider servers 180, which are communicatively coupled over network 100; [0042]: send one or more applications to client computing device 110 over network; [0085]: receive one or more files of virtual reality environments created by third party software development kits), comprising:
one or more processors (Fig. 1; [0034]: one or more processor cores, co-processors);
one or more non-transitory computer readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising ([0036]: one or more computer readable media storing instructions which, when executed by client computing device 110, cause client computing device 110 to execute the particular application):
The remaining claim limitations are similar to the claim limitations recited in claim 1. Therefore, the same rationale used to reject claim 1 is also used to reject claim 16.
Regarding claim 19, Gentilin discloses one or more non-transitory computer readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising ([0005]: third parties with respect to the user and the virtual reality application provider; [0028]: generate a virtual reality environment for displaying content; [0029]: create a modified virtual reality environment from the design data for the purpose of displaying inserted content; Fig. 1; [0032]; [0036]: one or more computer readable media storing instructions which, when executed by client computing device 110, cause client computing device 110 to execute the particular application; [0042]: send one or more applications to client computing device 110 over network; [0085]: receive one or more files of virtual reality environments, created by third party software development kits):
The remaining claim limitations are similar to the claim limitations recited in claim 1. Therefore, the same rationale used to reject claim 1 is also used to reject claim 19.
Claims 5, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gentilin (US 20170206708 A1) in view of Abreu (US 20160196665 A1), in view of Yehuda (US 20200150933 A1), and further in view of Hofmann (US 20150040074 A1).
Regarding claim 5, Gentilin in view of Abreu and Yehuda discloses the method of claim 1.
Gentilin in view of Abreu and Yehuda fails to explicitly disclose wherein the software development kit comprises one or more preview tools.
In the same field of endeavor, Hofmann teaches wherein the software development kit comprises one or more preview tools ([0086]: edit augmented reality content; [0111]: a user may preview the augmented reality content locally by moving into augmented reality view to see the augmented reality content 2014 first and decides whether to continue editing; return to content editor view).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gentilin in view of Abreu and Yehuda to include wherein the software development kit comprises one or more preview tools, as taught by Hofmann. The motivation for doing so would have been to edit augmented reality content, to preview the augmented reality content locally, and to improve the user experience, as taught by Hofmann in paragraphs [0086], [0111], and [0148].
Regarding claim 15, Gentilin in view of Abreu and Yehuda discloses the method of claim 1, wherein the augmented reality rendering experience comprises a product-specific augmented reality experience (Gentilin; Fig. 5; [0124]: client computing device 110 plays a three dimensional movie or initiates a virtual reality video game).
Gentilin in view of Abreu and Yehuda fails to explicitly disclose:
wherein the product-specific augmented reality experience comprises an augmented-reality try-on experience that renders the one or more products in user image data.
In the same field of endeavor, Hofmann teaches:
wherein the product-specific augmented reality experience comprises an augmented-reality try-on experience that renders the one or more products in user image data ([0086]: edit augmented reality content; [0111]: a user previews and tries on the augmented reality content locally by moving into augmented reality view to see the augmented reality content 2014 first and decides whether to continue editing; return to content editor view).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gentilin in view of Abreu and Yehuda to include wherein the product-specific augmented reality experience comprises an augmented-reality try-on experience that renders the one or more products in user image data, as taught by Hofmann. The motivation for doing so would have been to edit augmented reality content, to preview the augmented reality content locally, and to improve the user experience, as taught by Hofmann in paragraphs [0086], [0111], and [0148].
Regarding claim 20, Gentilin in view of Abreu and Yehuda discloses the one or more non-transitory computer readable media of claim 19, wherein the software development kit comprises a joint interface for editing a reality experience (Gentilin; [0113]: display the content in the modified virtual reality environment; display an option to view three dimensional content; [0114]: selects a type of content to display in the modified virtual reality environment based on one or more capabilities of the client computing device; [0125]: requests a modified virtual reality environment from virtual reality content management system 170).
Gentilin in view of Abreu and Yehuda further discloses an augmented reality experience (Abreu; [0015]: an augmentation representation is generated by retrieving data defining a plurality of polygonal regions; [0021]: output the augmented captured image data; [0076]: the renderer 24 overlays the selected optimized meshes 18 according to the common reference plane).
Gentilin in view of Abreu and Yehuda fails to explicitly disclose: previewing an augmented reality experience.
In the same field of endeavor, Hofmann teaches:
previewing an augmented reality experience ([0086]: edit augmented reality content; [0111]: a user may preview the augmented reality content locally by moving into augmented reality view to see the augmented reality content 2014 first and decides whether to continue editing; return to content editor view).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gentilin in view of Abreu and Yehuda to include previewing an augmented reality experience, as taught by Hofmann. The motivation for doing so would have been to edit augmented reality content, to preview the augmented reality content locally, and to improve the user experience, as taught by Hofmann in paragraphs [0086], [0111], and [0148].
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Gentilin (US 20170206708 A1) in view of Abreu (US 20160196665 A1), in view of Yehuda (US 20200150933 A1), and further in view of Krutsch (US 20160379381 A1).
Regarding claim 8, Gentilin in view of Abreu and Yehuda discloses the method of claim 6, wherein renaming the one or more terms in the code of the data assets comprises renaming using a hashing function, wherein generated hashes are indexed in a global registry (Yehuda; [0097]: renames are applied to both the plugin invocation and the import table that is attached to each plugin; [0107]: all SDK classes and resources are automatically renamed to avoid conflicts with the fused app).
Gentilin in view of Abreu and Yehuda fails to explicitly disclose:
a hashing function.
In the same field of endeavor, Krutsch teaches:
a hashing function ([0059]: a non-cryptographic hash function algorithm; [0065]: determine hash values).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gentilin in view of Abreu and Yehuda to include a hashing function as taught by Krutsch. The motivation for doing so would have been to determine hash values as taught by Krutsch in paragraphs [0059] and [0065].
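By way of illustration only, the hash-based renaming described above (automatic renaming of SDK identifiers per Yehuda, using a hash function per Krutsch, with generated hashes indexed in a global registry) could be sketched as follows. All names here (rename_term, GLOBAL_REGISTRY) are hypothetical and appear in no cited reference; this is a minimal sketch, not the claimed implementation.

```python
import hashlib

# Hypothetical global registry that indexes each generated hash back to
# the original identifier, so the rename can later be resolved.
GLOBAL_REGISTRY = {}

def rename_term(term: str, namespace: str = "sdk") -> str:
    """Rename an identifier to a collision-resistant hash-derived name."""
    digest = hashlib.sha256(f"{namespace}:{term}".encode()).hexdigest()[:12]
    new_name = f"_{digest}"          # underscore prefix keeps it a valid identifier
    GLOBAL_REGISTRY[new_name] = term  # index the generated hash globally
    return new_name

renamed = rename_term("VRContentManager")
```

Because the hash is derived deterministically from the namespaced identifier, the same term always maps to the same renamed symbol, while distinct terms are kept from colliding with identifiers in the fused application.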
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Gentilin (US 20170206708 A1) in view of Abreu (US 20160196665 A1), in view of Yehuda (US 20200150933 A1), and further in view of Ding (US 20190129715 A1).
Regarding claim 9, Gentilin in view of Abreu and Yehuda discloses the method of claim 1, wherein providing the augmented reality rendering experience (same as rejected in claim 1) comprises:
Gentilin in view of Abreu and Yehuda fails to explicitly disclose:
utilizing a rendering subgraph associated with the data asset and a perception subgraph usable with a plurality of different data assets associated with a plurality of different renderings.
In the same field of endeavor, Ding teaches:
utilizing a rendering subgraph associated with the data asset and a perception subgraph usable with a plurality of different data assets associated with a plurality of different renderings (Ding; Fig. 2; [0021]: a position of the marker decides a position of the AR renderer; different render subgraphs are shown on the right side of Fig. 2; [0022]: the AR marker is the perception subgraph as illustrated in Fig. 2; services stored by the AR marker and the AR renderer are used for background services; manage AR marker image information uploaded by the AR developer; [0024]: an AR renderer management module manages an AR renderer model file that is uploaded by the AR developer and that corresponds to the AR marker image information; upload the AR renderer model file to the third party AR background service system; a relative positional relationship between the AR renderer and the AR marker).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gentilin in view of Abreu and Yehuda to include utilizing a rendering subgraph associated with the data asset and a perception subgraph usable with a plurality of different data assets associated with a plurality of different renderings, as taught by Ding. The motivation for doing so would have been to improve the flexibility of AR application development, and to enable a mobile terminal executing the AR application to interact with the preset third party AR background service system to obtain and display a real-time AR rendering effect, as taught by Ding in paragraphs [0046] and [0050].
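By way of illustration only, the claimed split could be sketched as one shared perception subgraph (marker detection, per Ding's AR marker) reused across many data assets, each paired with its own rendering subgraph. All names and the dictionary-based frame format below are hypothetical assumptions, not drawn from Ding or the examined application.

```python
# One perception subgraph is usable with any data asset; each data asset
# carries its own rendering subgraph, positioned by the detected marker.

def perception_subgraph(frame: dict) -> dict:
    """Shared step: locate the AR marker pose in an input frame."""
    return {"marker_pose": frame.get("marker_pose", (0.0, 0.0, 0.0))}

RENDERING_SUBGRAPHS = {
    # Asset-specific rendering subgraphs, keyed by a hypothetical asset id.
    "sofa": lambda pose: f"sofa rendered at {pose}",
    "lamp": lambda pose: f"lamp rendered at {pose}",
}

def render(frame: dict, asset_id: str) -> str:
    percept = perception_subgraph(frame)  # shared across all assets
    draw = RENDERING_SUBGRAPHS[asset_id]  # rendering subgraph for this asset
    return draw(percept["marker_pose"])   # renderer positioned by the marker
```

The design choice mirrors the citation: the marker's position decides the renderer's position, while the perception step itself never changes per asset.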
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Gentilin (US 20170206708 A1) in view of Abreu (US 20160196665 A1), in view of Yehuda (US 20200150933 A1), and further in view of Wang (US 20150082298 A1).
Regarding claim 17, Gentilin in view of Abreu and Yehuda discloses the computing system of claim 16,
wherein the data assets were generated based at least in part on the one or more inputs received from the third party computing system, wherein generating the data assets based at least in part on the one or more inputs (Abreu; [0028]: receive data of an image captured by a camera, the captured image including a facial feature portion corresponding to at least one feature of a user's face; Fig. 3a; [0065]: a mesh generator 6 retrieves at least one reference image 8 from the training image database; [0066]: the mesh generator 6 may further prompt the user for input to optimize the normalized mesh; [0068]: the normalized mesh 10 retrieved from the object model database 7 and data defining one or more user-defined masks) comprises:
Gentilin in view of Abreu and Yehuda fails to explicitly disclose generating a renderable compressed file that comprises the data assets that are associated with rendering a product-specific rendering effect.
In the same field of endeavor, Wang teaches generating a renderable compressed file that comprises the data assets that are associated with rendering a product-specific rendering effect ([0031]: the package is compressed to obtain an even smaller file size to be deployed to the target device or device simulator for installation; [0048]: the package may further be compressed using a compression algorithm; a platform SDK 170 generates the compressed package for installation at the target platform; [0049]: the package may be automatically executed on the target device or device simulator).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gentilin in view of Abreu and Yehuda to include generating a renderable compressed file that comprises the data assets that are associated with rendering a product-specific rendering effect, as taught by Wang. The motivation for doing so would have been to identify whitespaces, line breaks, comments, and/or unnecessary statements that can be removed, and long variable names, function names, and/or statements that can be optimized to reduce the size, and to remove any whitespace, line break, comment, and/or unnecessary statement identified by code scanner 123, as taught by Wang in paragraphs [0030]-[0031].
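By way of illustration only, packaging data assets into a single compressed, deployable file (per Wang's compressed package) could be sketched as below. The archive layout, JSON payloads, and function names are hypothetical assumptions, not taken from Wang or the examined application.

```python
import io
import json
import zipfile

def package_assets(assets: dict) -> bytes:
    """Bundle rendering data assets into one compressed archive in memory."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for name, payload in assets.items():
            zf.writestr(name, json.dumps(payload))  # one entry per data asset
    return buf.getvalue()

def unpack_assets(blob: bytes) -> dict:
    """Restore the data assets from the compressed archive at the target."""
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        return {n: json.loads(zf.read(n)) for n in zf.namelist()}
```

Compression before deployment yields a smaller file to transmit to the target device, and a single archive keeps all assets for a rendering effect together for installation.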
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Gentilin (US 20170206708 A1) in view of Abreu (US 20160196665 A1), in view of Yehuda (US 20200150933 A1), and further in view of Urtasun (US 20200160117 A1).
Regarding claim 18, Gentilin in view of Abreu and Yehuda discloses the computing system of claim 16, further comprising:
testing the augmented reality experience, wherein testing the augmented reality experience (Abreu; [0022]: test and refine the location of the object in the image; [0029]: the location of the lips in the captured image is iteratively refined and tested) comprises:
obtaining training data (Abreu; [0028]: receive data of an image captured by a camera, the captured image including a facial feature portion corresponding to at least one feature of a user's face; [0060]: a process of augmenting image data of the tracked objects based on trained object texture models; [0061]: receives image data captured by the camera; [0064]: data that are processed and generated by the texture model training module 4 during the training process);
processing the training data with the augmented reality experience to generate augmented reality media (Abreu; [0079]: the main elements of the shape training module 3 as well as the data elements processed and generated by the shape training module 3 for the trained shape models; [0086]: data that is processed and generated by the texture model training module 4 during the training process; [0092]: the plurality of training images 23; [0100]: utilizing stored training data).
Gentilin in view of Abreu and Yehuda fails to explicitly disclose:
evaluating a loss function based at least in part on a comparison between the augmented reality media and ground truth data; and
adjusting one or more parameters based at least in part on the loss function.
In the same field of endeavor, Urtasun teaches:
evaluating a loss function based at least in part on a comparison between the augmented reality media and ground truth data (Urtasun; [0041]: the machine-learned feature extraction models are trained based on evaluation of a loss function associated with training data; [0042]: evaluation of a loss function associated with the accuracy of the localized state with respect to the ground-truth state); and
adjusting one or more parameters based at least in part on the loss function (Urtasun; [0042]: adjust parameters of the machine-learned feature extraction models based on the loss; [0085]: the computing system adjusts one or more parameters of the one or more machine-learned feature extraction models based at least in part on the loss).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gentilin in view of Abreu and Yehuda to include evaluating a loss function based at least in part on a comparison between the augmented reality media and ground truth data; and adjusting one or more parameters based at least in part on the loss function as taught by Urtasun. The motivation for doing so would have been to evaluate the loss function associated with the accuracy of the localized state with respect to the ground-truth state; to adjust parameters of the machine-learned feature extraction models based on the loss as taught by Urtasun in paragraph [0042].
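By way of illustration only, the claimed training steps (generate augmented media, evaluate a loss against ground truth, adjust a parameter based on the loss) could be sketched with a deliberately tiny linear stand-in for the rendering step. The model, learning rate, and all names are hypothetical assumptions, not the machine-learned models of Urtasun or the examined application.

```python
# Minimal sketch of the claimed loop: render -> compare to ground truth
# via a loss function -> adjust the parameter based on the loss.

def augment(x: float, param: float) -> float:
    """Stand-in for generating augmented reality media from training data."""
    return param * x

def squared_loss(media: float, ground_truth: float) -> float:
    """Loss from comparing generated media against ground truth data."""
    return (media - ground_truth) ** 2

def train_step(x: float, ground_truth: float, param: float,
               lr: float = 0.1) -> float:
    media = augment(x, param)
    grad = 2 * (media - ground_truth) * x  # d(loss)/d(param) for squared loss
    return param - lr * grad               # adjust parameter based on the loss

param = 0.0
for _ in range(100):
    param = train_step(x=1.0, ground_truth=2.0, param=param)
```

With a single training pair (x = 1.0, ground truth 2.0), repeated updates drive the parameter toward 2.0, at which point the loss evaluated between the generated media and the ground truth approaches zero.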
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hai Tao Sun whose telephone number is (571)272-5630. The examiner can normally be reached 9:00AM-6:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAI TAO SUN/Primary Examiner, Art Unit 2616