Prosecution Insights
Last updated: April 19, 2026
Application No. 18/527,660

User Configurable and Editable Real time CGI Image Rendering

Non-Final OA: §103, §112

Filed: Dec 04, 2023
Examiner: SHENG, XIN
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Katana Spooka Zoo
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 72% (290 granted / 401 resolved; +10.3% vs TC avg; above average)
Interview Lift: +17.3% (resolved cases with interview; a strong lift)
Avg Prosecution: 2y 5m (typical timeline; 17 currently pending)
Total Applications: 418 (career history, across all art units)
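
These tiles reduce to simple ratios over the examiner's resolved cases. Below is a minimal sketch of how such figures can be computed; the Case record and its fields are assumptions for illustration, not this page's actual data schema.

```python
from dataclasses import dataclass

# Hypothetical resolved-case record; the fields are illustrative
# assumptions, not the analytics product's actual schema.
@dataclass
class Case:
    granted: bool        # resolved as an allowance
    had_interview: bool  # at least one examiner interview on record

def allow_rate(cases: list[Case]) -> float:
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[Case]) -> float:
    """Allow-rate gap between interviewed and non-interviewed cases."""
    with_iv = [c for c in cases if c.had_interview]
    without = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without)

# 290 grants over 401 resolved cases gives 290 / 401 ≈ 72.3%,
# matching the 72% Career Allow Rate tile above.
```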

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 75.0% (+35.0% vs TC avg)
§102: 2.2% (-37.8% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 401 resolved cases.
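
The statute figures reduce to the share of office actions that raise each statute, compared against a Tech Center baseline. A sketch under the same assumed record layout; the baseline values below are placeholders, not real Tech Center data.

```python
from collections import Counter

def statute_rates(oa_statutes: list[list[str]]) -> dict[str, float]:
    """Fraction of office actions that raise each statute at least once."""
    counts = Counter(s for tags in oa_statutes for s in set(tags))
    return {s: n / len(oa_statutes) for s, n in counts.items()}

def vs_tc_average(examiner: dict[str, float],
                  tc_avg: dict[str, float]) -> dict[str, float]:
    """Signed delta against the Tech Center average, as displayed above."""
    return {s: examiner.get(s, 0.0) - tc_avg.get(s, 0.0) for s in tc_avg}

# Example: a 75.0% §103 rate against an assumed 40.0% TC average
# yields the +35.0% delta shown for §103.
print(vs_tc_average({"103": 0.75}, {"103": 0.40}))  # ≈ {'103': 0.35}
```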

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claim 12 recites the limitation "the state data" in “The system of claim 1 wherein the state data is indicative of at least…”. There is insufficient antecedent basis for this limitation in the claim.

Claim 14 recites the limitation "the state data" in “The system of claim 10 wherein the state data is metadata”. There is insufficient antecedent basis for this limitation in the claim.

Claim Objections

Claim 21 is objected to under 37 CFR 1.75 as being a substantial duplicate of claim 16. Applicant is advised that should claim 16 be found allowable, claim 21 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 7-10, 15, 17, 22, 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Haveman et al (US20170200302).

Regarding Claim 1. Haveman teaches A Computer Generated Imagery (CGI) system (Haveman, abstract, the invention describes a method and system for high-performance real-time adjustment of colors and patterns on one or more pre-selected elements in a playing video, interactive 360° content, or other image, using graphics processing unit ("GPU") shader code to process the asset per-pixel and blend in a target color or pattern based on prepared masks and special metadata lookup tables encoded visually in the video file. The method and system can generate asset-specific optimized GPU code that is generated from templates. Pixels are blended into the target asset using the source image for shadows and highlights, one or more masks, and various metadata lookup-tables, such as a texture lookup-table that allows for changing or adding patterns, z-depth to displace parts of the image, or normals to calculate light reflection and refraction.) comprising:

a processor having software executing thereon, said software providing a user interface which allows a user to select two or more asset files which are pre-configured CGI asset files to combine the two or more asset files into a CGI asset (Haveman, [0017] With reference to FIG. 1, the hardware associated with various aspects of the present invention will be described.
Assets, such as videos, 360° content or images for use with the present invention, are generated at an asset generation computer 1, as will be described in more detail in connection with FIGS. 2A-2C through 5A-5B. The assets are delivered to a user computer 2, typically through a standard Internet connection…. The user computer 2 contains a standard CPU 3, and a graphics processing unit, or GPU 4. [0082] Another application of the present invention to the display of 360° content will be described with reference to FIGS. 4A-4B, 7 and 11A-11E. Here, an asset, namely a bottle, is prepared as described with reference to FIGS. 4A-4E. Playback is commenced, at block 34, FIG. 7, and the bottle is displayed within a user interface, an example of which is shown in FIG. 11A. The user interface allows the user to select different patterns or textures (in this case, labels) at input 77. The user interface also allows the user to select different backgrounds and settings at input 78, and different orientations (rotations through the two axes discussed with reference to FIGS. 4C-4E, as well as a rotation through a third axis, normal to the plane of the screen) and positions (translations), at input 79. [0083] For example, the user may select a particular label 80A-80C, at input 77, which will be applied to the region and in the manner specified by regions 20E-20I,);

a first asset is a background and is one of the two or more asset files, and a second asset is an object and is one of the two or more asset files (Haveman, [0082] Another application of the present invention to the display of 360° content will be described with reference to FIGS. 4A-4B, 7 and 11A-11E. Here, an asset, namely a bottle, is prepared as described with reference to FIGS. 4A-4E. Playback is commenced, at block 34, FIG. 7, and the bottle is displayed within a user interface, an example of which is shown in FIG. 11A. The user interface allows the user to select different patterns or textures (in this case, labels) at input 77. The user interface also allows the user to select different backgrounds and settings at input 78, and different orientations (rotations through the two axes discussed with reference to FIGS. 4C-4E, as well as a rotation through a third axis, normal to the plane of the screen) and positions (translations), at input 79. Unlike the full-frame video described with reference to FIGS. 10A-10C, the display of FIGS. 11A-11E can be static until the user modifies a parameter through the user interface. The player will check to see if such has occurred, in block 35, FIG. 7, and if so, processing proceeds through blocks 37-40, FIG. 7, to update the frame of the display.);

the first asset includes image content and perspective information which perspective information is indicative of: ground plane, surface information, point of view position, lighting or combinations thereof associated with the background (Haveman, [0082] Another application of the present invention to the display of 360° content will be described with reference to FIGS. 4A-4B, 7 and 11A-11E. Here, an asset, namely a bottle, is prepared as described with reference to FIGS. 4A-4E. Playback is commenced, at block 34, FIG. 7, and the bottle is displayed within a user interface, an example of which is shown in FIG. 11A. The user interface allows the user to select different patterns or textures (in this case, labels) at input 77.
The user interface also allows the user to select different backgrounds and settings at input 78, and different orientations (rotations through the two axes discussed with reference to FIGS. 4C-4E, as well as a rotation through a third axis, normal to the plane of the screen) and positions (translations), at input 79. Unlike the full-frame video described with reference to FIGS. 10A-10C, the display of FIGS. 11A-11E can be static until the user modifies a parameter through the user interface. The player will check to see if such has occurred, in block 35, FIG. 7, and if so, processing proceeds through blocks 37-40, FIG. 7, to update the frame of the display. Although Haveman didn’t explicitly teach perspective information, Haveman did describe the rotation parameter with reference to a plane. It is obvious to a person with ordinary skill in the art that perspective information can be calculated from the rotation parameters. Therefore, rotation parameters implies perspective information.);

wherein the software uses the perspective information to modify how image content of the second asset is displayed relative to the first asset on the user interface and the user interface allows the second asset to be user manipulated consistent with the perspective information to create the CGI asset (Haveman, [0084] The user can also select different backgrounds and settings for the bottle, essentially to place it anywhere he or she desires. For example, in FIG. 11D, the user has selected a background for the bottle on a beach, and its appearance will be a function of the reflection and refraction characteristics defined by the neutral, transparency, diffusion, intensity and normal regions 20A, 20C, 20D, 20E and 20I, in combination with the background, as described above. This can be compared to FIG. 11E, where the user has translated the bottle, at input 79, so that it obscures the jogger in the selected background. Different refraction effects can be seen at 82, for example, where the jogger and his shadow can be seen as refracted through the semitransparent bottle. Again, this effect is highly realistic and is produced automatically for any background setting selected by the user. Therefore, in Fig 11D & 11E, the object bottle is put in different perspective relative to background image.).

Regarding Claim 2. Haveman further teaches The system of claim 1 wherein: the user interface allows the second asset to be user manipulated by moving a position of the second asset relative to the background such that a change in position of the second asset modifies an orientation of the second asset, lighting of the asset, shadow of the asset or combinations thereof (Haveman, [0084] The user can also select different backgrounds and settings for the bottle, essentially to place it anywhere he or she desires. For example, in FIG. 11D, the user has selected a background for the bottle on a beach, and its appearance will be a function of the reflection and refraction characteristics defined by the neutral, transparency, diffusion, intensity and normal regions 20A, 20C, 20D, 20E and 20I, in combination with the background, as described above. This can be compared to FIG. 11E, where the user has translated the bottle, at input 79, so that it obscures the jogger in the selected background. Different refraction effects can be seen at 82, for example, where the jogger and his shadow can be seen as refracted through the semitransparent bottle.
Again, this effect is highly realistic and is produced automatically for any background setting selected by the user. Therefore, in Fig 11D & 11E, the object bottle is put in different perspective relative to background image, when the bottle is moved to a position that obscures the jogger in the background.).

Regarding Claim 7. Haveman further teaches The system of claim 1 wherein the second asset can be user manipulated to change a color thereof (Haveman, [0079] In conclusion, the system and method described above allow a user, by simply setting a new property (e.g., the color for a region) in a user interface, to change, in real-time, with no latency, the color or other aspect of the configurable element, without requiring multiple, separate videos for each color/pattern combination, or the reloading of an entire new video from a server. This is demonstrated with reference to the examples shown in FIGS. 10A-10C and 11A-11C.).

Regarding Claim 8. Haveman further teaches The system of claim 1 wherein the second asset can be user manipulated to change a configuration thereof (Haveman, [0080] With reference to FIGS. 3A-3B, 7 and 10A-10C, playback of a video allowing real-time adjustment of color will be described. In this case, playback is commenced, at block 34, FIG. 7, of a video of an asset, namely an athletic shoe, which has been prepared as described in connection with FIGS. 3A-3B. The player displays a user interface 75, FIGS. 10A-10C, in a well-known manner, which allows the user to change the color of the modifiable regions of the shoe, as desired. On a frame by frame basis, video playback is implemented at blocks 35-36, FIG. 7. At block 35, the player checks to see whether the modifiable region has changed based upon the user input of the desired color for the region, based upon the user-controlled cursor, 76, FIGS. 10A-10B. If this is the case, processing proceeds through blocks 37-40, FIG. 7, to update the frame of the playing video, and a new frame is displayed with the selected color, as shown in the sequence of displays in FIGS. 10A-10C. [0081] Thus, by simply moving the cursor 76 of the user interface 75, the color of the configurable region can be changed in real-time, with no latency, in a highly realistic manner, and without requiring multiple, separate videos for each color/pattern combination, or the reloading of an entire new video from a server. Such facility is extremely useful in many applications, for example, in allowing a potential purchaser to view an item in many different colors or styles, or allowing a product designer to develop new products.).

Regarding Claim 10. Haveman further teaches The system of claim 1 wherein the user interface is displayed on a display which is remote to the processor via a network (Haveman, [0017] With reference to FIG. 1, the hardware associated with various aspects of the present invention will be described. Assets, such as videos, 360° content or images for use with the present invention, are generated at an asset generation computer 1, as will be described in more detail in connection with FIGS. 2A-2C through 5A-5B. The assets are delivered to a user computer 2, typically through a standard Internet connection. It is obvious to a person with ordinary skill in the art that a user computer includes a display and connects to a server which includes processor(s), through internet connection.).

Regarding Claim 15.
Haveman further teaches The system of claim 1 wherein the user interface is a web based user interface and the processor includes a server (Haveman, [0017] With reference to FIG. 1, the hardware associated with various aspects of the present invention will be described. Assets, such as videos, 360° content or images for use with the present invention, are generated at an asset generation computer 1, as will be described in more detail in connection with FIGS. 2A-2C through 5A-5B. The assets are delivered to a user computer 2, typically through a standard Internet connection…. The user computer 2 contains a standard CPU 3, and a graphics processing unit, or GPU 4. [0079] In conclusion, the system and method described above allow a user, by simply setting a new property (e.g., the color for a region) in a user interface, to change, in real-time, with no latency, the color or other aspect of the configurable element, without requiring multiple, separate videos for each color/pattern combination, or the reloading of an entire new video from a server. This is demonstrated with reference to the examples shown in FIGS. 10A-10C and 11A-11C.).

Regarding Claim 17. Haveman further teaches The system of claim 1 wherein the software receives commands via the user interface and converts those commands into commands for the CGI software such that the CGI software generates the CGI asset for display on the user interface based on the commands (Haveman, [0082] Another application of the present invention to the display of 360° content will be described with reference to FIGS. 4A-4B, 7 and 11A-11E. Here, an asset, namely a bottle, is prepared as described with reference to FIGS. 4A-4E. Playback is commenced, at block 34, FIG. 7, and the bottle is displayed within a user interface, an example of which is shown in FIG. 11A. The user interface allows the user to select different patterns or textures (in this case, labels) at input 77. The user interface also allows the user to select different backgrounds and settings at input 78, and different orientations (rotations through the two axes discussed with reference to FIGS. 4C-4E, as well as a rotation through a third axis, normal to the plane of the screen) and positions (translations), at input 79. [0083] For example, the user may select a particular label 80A-80C, at input 77, which will be applied to the region and in the manner specified by regions 20E-20I. Therefore, user’s selection commands are used to generate the virtual environment shown in FIGS. 4A-4B and 11A-11E.).

Claim 22 is similar in scope as Claim 1, and thus is rejected under same rationale. Claim 25 is similar in scope as Claim 1, and thus is rejected under same rationale. Claim 26 is similar in scope as Claim 2, and thus is rejected under same rationale.

Claims 3-6, 18, 27-29 are rejected under 35 U.S.C. 103 as being unpatentable over Haveman et al (US20170200302) in view of Wang et al (US20190082118).

Regarding Claim 3. Haveman fails to explicitly teach, however, Wang teaches The system of claim 1 wherein the first asset is a three-dimensional background (Wang, abstract, the invention describes methods for generating AR self-portraits or "AR selfies."
In an embodiment, a method comprises: capturing, by a first camera of a mobile device, live image data, the live image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating a composite image data, using the image data, a matte and virtual background content selected based on the virtual camera orientation. [0042] FIGS. 3C and 3D illustrate graphical user interfaces with different background scenes selected and showing a recording view and full-screen playback view, according to an embodiment. In FIG. 3C, a recording view is shown where user 302c has selected a virtual background 303c. [0069] Process 900 continues by receiving a virtual background content (903) from storage. For example, the virtual background content can be a 2D image, 3D image or 360° video. The virtual background content can be selected by the user through a GUI. The virtual background content can be extracted or sampled from any desired virtual environment, such as a famous city or cartoon environment with animated cartoon characters and objects. Haveman, [0016] FIGS. 11A-11E illustrate a display of 360° content and examples of adjustments to texture, reflection and refraction based on user-selected patterns, orientations and backgrounds, in accordance with the present invention.).

Haveman and Wang are analogous art, because they both teach method of generating virtual image by selecting background image and foreground object. Wang further teaches selecting 3D background image. Therefore, it would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify the virtual image generation method (taught by Haveman), to further use 3D background image (taught in Wang), so as to create a live view of physical, real-world environment whose elements are “augmented” by computer generated sensory input such as sound, video or graphics (Wang, [0003]).

Regarding Claim 4. The combination of Haveman and Wang further teaches The system of claim 3 wherein the user interface allows the perspective information to be user modifiable to modify how the background is displayed (Haveman, [0082] Another application of the present invention to the display of 360° content will be described with reference to FIGS. 4A-4B, 7 and 11A-11E. Here, an asset, namely a bottle, is prepared as described with reference to FIGS. 4A-4E. Playback is commenced, at block 34, FIG. 7, and the bottle is displayed within a user interface, an example of which is shown in FIG. 11A. The user interface allows the user to select different patterns or textures (in this case, labels) at input 77. The user interface also allows the user to select different backgrounds and settings at input 78, and different orientations (rotations through the two axes discussed with reference to FIGS. 4C-4E, as well as a rotation through a third axis, normal to the plane of the screen) and positions (translations), at input 79. Wang, [0037] FIGS. 3A and 3B are graphical user interfaces for recording AR selfies, according to an embodiment. Referring to FIG. 3A, AR selfie GUI 300 includes viewport 301 displaying a composite video frame that includes selfie subject 302a and virtual background content 303a. A "cartoon" special effect has been applied to the composite video to create an interesting effect and to hide artifacts from the alpha compositing process. Although a single composite video frame is shown, it should be understood that viewport 301 is displaying a live video feed (e.g., 30 frames/second), and if the orientation of the real-world camera view direction changes, virtual background 303a will also seamlessly change to show a different portion of the virtual environment. This allows the user to "look around" the visual environment by changing the view direction of the real-world camera.). The reasoning for combination of Haveman and Wang is the same as described in Claim 3.

Regarding Claim 5. The combination of Haveman and Wang further teaches The system of claim 4 wherein modification of the perspective information modifies how the second asset is displayed relative to the first asset (Haveman, [0082] Another application of the present invention to the display of 360° content will be described with reference to FIGS. 4A-4B, 7 and 11A-11E. Here, an asset, namely a bottle, is prepared as described with reference to FIGS. 4A-4E. Playback is commenced, at block 34, FIG. 7, and the bottle is displayed within a user interface, an example of which is shown in FIG. 11A. The user interface allows the user to select different patterns or textures (in this case, labels) at input 77. The user interface also allows the user to select different backgrounds and settings at input 78, and different orientations (rotations through the two axes discussed with reference to FIGS. 4C-4E, as well as a rotation through a third axis, normal to the plane of the screen) and positions (translations), at input 79. [0084] The user can also select different backgrounds and settings for the bottle, essentially to place it anywhere he or she desires. For example, in FIG. 11D, the user has selected a background for the bottle on a beach, and its appearance will be a function of the reflection and refraction characteristics defined by the neutral, transparency, diffusion, intensity and normal regions 20A, 20C, 20D, 20E and 20I, in combination with the background, as described above. This can be compared to FIG. 11E, where the user has translated the bottle, at input 79, so that it obscures the jogger in the selected background. Different refraction effects can be seen at 82, for example, where the jogger and his shadow can be seen as refracted through the semitransparent bottle. Again, this effect is highly realistic and is produced automatically for any background setting selected by the user. Therefore, in Fig 11D & 11E, the object bottle is put in different perspective relative to background image, when the bottle is moved to a position that obscures the jogger in the background.).

Regarding Claim 6. The combination of Haveman and Wang further teaches The system of claim 3 wherein the three dimensional background and perspective information is used to modify how the second asset is displayed in order to provide a reflection of at least part of the background on the second asset (Haveman, [0084] The user can also select different backgrounds and settings for the bottle, essentially to place it anywhere he or she desires. For example, in FIG. 11D, the user has selected a background for the bottle on a beach, and its appearance will be a function of the reflection and refraction characteristics defined by the neutral, transparency, diffusion, intensity and normal regions 20A, 20C, 20D, 20E and 20I, in combination with the background, as described above. This can be compared to FIG. 11E, where the user has translated the bottle, at input 79, so that it obscures the jogger in the selected background. Different refraction effects can be seen at 82, for example, where the jogger and his shadow can be seen as refracted through the semitransparent bottle. Again, this effect is highly realistic and is produced automatically for any background setting selected by the user. Therefore, in Fig 11D & 11E, the object bottle is put in different perspective relative to background image, when the bottle is moved to a position that obscures the jogger (part of the background) in the background. The shadow of the jogger can be seen through the bottle in Fig 11E.).

Claim 18 is similar in scope as Claim 1 & 3-5, and thus is rejected under same rationale. Claim 27 is similar in scope as Claim 3, and thus is rejected under same rationale. Claim 28 is similar in scope as Claim 4, and thus is rejected under same rationale. Claim 29 is similar in scope as Claim 5, and thus is rejected under same rationale.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Haveman et al (US20170200302) in view of Kaizen (“Basic Car Animation in Blender”, 2021).

Regarding Claim 9. Haveman fails to explicitly teach, however, Kaizen teaches The system of claim 8 wherein the second asset is a vehicle and the configuration defines: a vehicle feature, vehicle element position or combinations thereof (Kaizen, The video describes how to create 3D car animation using Blender®. Page 2-3, user import a 3D car model to the GUI application. Page 4, a 3D road background is created. Page 5-6, a 3D tunnel background is created on top of the road background. Page 7, the final rendering car animation.). Haveman and Kaizen are analogous art, because they both teach method of generating virtual image by selecting background image and foreground object. Kaizen further teaches creating 3D car animation. Therefore, it would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify the virtual image generation method (taught by Haveman), to further use Blender® to create 3D car animation (taught in Kaizen), so as to create vehicle video such as car commercial.

Claims 11-12, 14 are rejected under 35 U.S.C. 103 as being unpatentable over Haveman et al (US20170200302) in view of Cheng et al (US11004246).

Regarding Claim 11. Haveman fails to explicitly teach, however, Cheng teaches The system of claim 1 wherein the CGI asset is downloadable as an image or video file containing state data such that the CGI asset, when opened at the user interface, is editable such that the second asset can be manipulated relative to the first asset from a starting point based on the CGI asset as opened (Cheng, abstract, the invention describes methods and devices for a jewelry generation service, and, more particularly, to jewelry generation services for providing an instant real time view of a piece of personalized laser cut jewelry.
Col 1, line 43-58, For example, a method for generating a personalized jewelry item is provided that may include, for a jewelry style type, receiving, at a server subsystem, from a user electronic device, user device data indicative of a plurality of user selected letters, in response to the receiving, automatically generating, at the server subsystem, a vector graphics jewelry image of a personalized jewelry item design that includes the plurality of user-selected letters integrated into a shape of the jewelry style type while satisfying a plurality of manufacturing constraints associated with the jewelry style type, after the automatically generating, automatically enabling, with the server subsystem, a presentation of a visual representation of the generated vector graphics jewelry image by the user electronic device, and, after the automatically generating, producing a physical representation of the generated vector graphics jewelry image. Col 17, line 32-49, After the parameters have been initialized and populated into the correct places, the download of a nameplate source SVG file may take place. A particular source SVG file to be downloaded may only be selected (e.g., variable) based on a particular style code provided. Each source SVG file may be actually saved with a file name related to its associated style code, which may be how a particular file may be selected and downloaded (e.g., by image server subsystem 220 (e.g., from static data server subsystem 230 (e.g., as static response data 226) of system 201 of FIG. 2 (e.g., static data server subsystem 230 may store any suitable number of template nameplate source files 23012/ (e.g., SVG files) for selective loading and use by subsystem 220))). A particular template nameplate source file may include several required and/or optional pieces, which may be broken into groups and objects (e.g., SVG file broken into SVG groups and SVG objects), where all naming conventions for such groups and objects may assume N to be a unique number to that name. Therefore, the user customized jewelry design data is saved as SVG file in the system. And the SVG file can be later selected and loaded for further processing.).

Haveman and Cheng are analogous art, because they both teach method of generating virtual image based on user’s selecting/configuring virtual object. Cheng further teaches saving the modified virtual object data into SVG file. Therefore, it would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify the virtual image generation method (taught by Haveman), to further use the saving and loading method for the modified virtual object data (taught in Cheng), so as to provide a GUI for user to revisit the Jewelry design at a later time.

Regarding Claim 12. The combination of Haveman and Cheng further teaches The system of claim 1 wherein the state data is indicative of at least: a position relative to the background, a configuration of the second asset or combinations thereof (Cheng, col 20, line 8-40, A noticeable characteristic may be that a cut guide 704 may be found on a LEFT side of each capital letter 702 of FIG. 7, and that left and right eyelets may be found for each capital letter 702 of FIG. 7. The eyelets, guides, and letter may all be separate paths, each of which may be identified with their own identifier ("ID").
A convention may be as follows for a <letter> (group): <letter>-path (the letter path), <letter>-left-guide (left guide), <letter>-right-guide (right guide), <letter>-cut-guide (cut guide), <letter>-left-eye (left eyelet position), and/or <letter>-right-eye (right eyelet position). The <letter> portion of the name may be always repeated as SVG path and group IDs may be (e.g., must be) unique. Col 20, line 21-40, SVG may be an eXtensible markup language ("XML"), which may use tags with attributes to characterize shapes and visual presentation. Ways to show a shape may be <line> and <path> tags. These tags may contain attributes that may characterize the shape of the line or path, and other styling references that may change the visual appearance of the shape, such as fill color, stroke color, stroke width, and/or gradients.). The reasoning for combination of Haveman and Cheng is the same as described in Claim 11.

Regarding Claim 14. The combination of Haveman and Cheng further teaches The system of claim 10 wherein the state data is metadata (Cheng, Col 17, line 32-49, After the parameters have been initialized and populated into the correct places, the download of a nameplate source SVG file may take place. A particular source SVG file to be downloaded may only be selected (e.g., variable) based on a particular style code provided. Each source SVG file may be actually saved with a file name related to its associated style code, which may be how a particular file may be selected and downloaded (e.g., by image server subsystem 220 (e.g., from static data server subsystem 230 (e.g., as static response data 226) of system 201 of FIG. 2 (e.g., static data server subsystem 230 may store any suitable number of template nameplate source files 23012/ (e.g., SVG files) for selective loading and use by subsystem 220))). A particular template nameplate source file may include several required and/or optional pieces, which may be broken into groups and objects (e.g., SVG file broken into SVG groups and SVG objects), where all naming conventions for such groups and objects may assume N to be a unique number to that name. Therefore, the user customized jewelry design data is saved as SVG file in the system. And the SVG file can be later selected and loaded for further processing. SVG (Scalable Vector Graphics) is an XML-based vector graphics format for defining virtual object/graphics. SVG includes ‘metadata’ elements which stores information including state information.). The reasoning for combination of Haveman and Cheng is the same as described in Claim 11.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Haveman et al (US20170200302) in view of Sheeler et al (US20120210262).

Regarding Claim 13. Haveman fails to explicitly teach, however, Sheeler teaches The system of claim 1 wherein the user interface allows for rendering of the CGI asset in a user specified quality, resolution, aspect ratio or combinations thereof (Sheeler, abstract, the invention describes a media editing application that enables an author-user to create a rig graphically. A rig includes a group of snapshots of one or more parameters at different instances of time. A rig is created by selecting one or more objects and creating snapshots of one or more parameters of the selected objects to create an effect. In some embodiments animation is added to some of the snapshots.
Some embodiments provide an edit mode where all parameters that are changed during the edit mode are automatically added to the current snapshot at the end of editing mode. [0008] Each snapshot in a rig can have an animation (or animated object). That is, the representative parameter of the rig is associated with the properties of the animation. In some embodiments, values for different display aspect ratios are converted into snapshots in a rig. The user by choosing a snapshot can modify the aspect ratio parameters and change the aspect ratio of the video.).

Haveman and Sheeler are analogous art, because they both teach method of generating virtual image based on user’s selecting/configuring virtual object. Sheeler further teaches modifying video based on adjusting parameters including aspect ratio. Therefore, it would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify the virtual image generation method (taught by Haveman), to further use video parameter adjusting method (taught in Sheeler), so as to provide an intuitive GUI for user to change the setting for the video (Sheeler, [0004]).

Claims 16, 21, 23 are rejected under 35 U.S.C. 103 as being unpatentable over Haveman et al (US20170200302) in view of Wang et al (US20190082118) further in view of Chapman et al (US12094072).

Regarding Claim 16. The combination of Haveman and Wang further teaches The system of claim 1 further comprising a second software executing on the processor or a second processor, wherein the second software is a CGI software which accesses a storage containing the at least two assets, at least one of said at least two assets is a three dimensional model and (Wang, [0042] FIGS. 3C and 3D illustrate graphical user interfaces with different background scenes selected and showing a recording view and full-screen playback view, according to an embodiment. In FIG. 3C, a recording view is shown where user 302c has selected a virtual background 303c. [0069] Process 900 continues by receiving a virtual background content (903) from storage. For example, the virtual background content can be a 2D image, 3D image or 360° video. The virtual background content can be selected by the user through a GUI. The virtual background content can be extracted or sampled from any desired virtual environment, such as a famous city or cartoon environment with animated cartoon characters and objects. Haveman, [0016] FIGS. 11A-11E illustrate a display of 360° content and examples of adjustments to texture, reflection and refraction based on user-selected patterns, orientations and backgrounds, in accordance with the present invention.)

The combination of Haveman and Wang fails to explicitly teach, however, Chapman teaches the CGI software manipulates the three dimensional model based on instructions from the software which are generated in response to user commands input via the user interface (Chapman, abstract, the invention describes Controllable three-dimensional (3D) virtual dioramas in a rendered 3D environment such as a virtual reality or augmented reality environment including one or more rendered objects. 3D diorama is associated with a spatial computing content item such as a downloadable application executable by a computing device. 3D diorama assets may include visual and/or audio content and are presented with rendered 3D environment objects in a composite view, which is presented to a user through a display of computing device.
3D diorama is rotatable in composite view, and at least one 3D diorama asset at least partially occludes, or is at least partially occluded by, at least one rendered 3D environment object. 3D diorama may depict or provide a preview of a spatial computing user experience generated by the downloadable application. Col 13, line 28-55, 3D diorama 150 can be shown in browse and/or detail pages of portal 120, and assets 152 included in 3D diorama 150 may occlude 190 objects presented in the rendered 3D environment 160, as the assets 152 move in their animation and/or as the scene is rotated 153. For example, if astronaut asset 711 of FIG. 7 moves in its animation to be in front of sofa object 703 of rendered living room, astronaut asset 711 may occlude 190 the view of at least a portion of sofa object 703.).

Haveman, Wang and Chapman are analogous art, because they all teach method of generating virtual image based on user’s selecting/configuring virtual object. Chapman further teaches GUI for manipulating the 3D virtual model(s). Therefore, it would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify the virtual image generation method (taught by Haveman and Wang), to further use the GUI for modifying 3D virtual model(s) (taught in Chapman), so as to provide user with a collaborative 3D virtual environment (Chapman, col 1, line 21-55).

Claim 21 is similar in scope as Claim 16, and thus is rejected under same rationale. Claim 23 is similar in scope as Claim 1 & 16, and thus is rejected under same rationale.

Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Haveman et al (US20170200302) in view of Wang et al (US20190082118) further in view of Cheng et al (US11004246). Claim 19 is similar in scope as Claim 11 & 14, and thus is rejected under same rationale. Claim 20 is similar in scope as Claim 11, and thus is rejected under same rationale.

Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Haveman et al (US20170200302) in view of Wang et al (US20190082118), Chapman et al (US12094072) further in view of Kaizen (“Basic Car Animation in Blender”, 2021).

Regarding Claim 24. The combination of Haveman, Wang and Chapman fails to explicitly teach, however, Kaizen teaches The system of claim 23 wherein the second asset is a three dimensional background and the perspective information is user modifiable via controls on the user interface (Kaizen, The video describes how to create 3D car animation using Blender®. Page 2-3, user import a 3D car model to the GUI application. Page 4, a 3D road background is created. Page 5-6, a 3D tunnel background is created on top of the road background. Page 7, the final rendering car animation. From the video, it is clear that user can manually change the camera viewing angle when editing/rendering the car animation.). Haveman, Wang, Chapman and Kaizen are analogous art, because they all teach method of generating virtual image by selecting background image and foreground object. Kaizen further teaches creating 3D car animation. Therefore, it would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify the virtual image generation method (taught by Haveman, Wang and Chapman), to further use Blender® to create 3D car animation (taught in Kaizen), so as to create vehicle video such as car commercial.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIN SHENG whose telephone number is (571)272-5734. The examiner can normally be reached M-F 9:30AM-3:30PM and 6:00PM-8:30PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Xin Sheng/
Primary Examiner, Art Unit 2619
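
The claim 12 and 14 rationale above hinges on "state data" carried inside an SVG file's metadata element. A minimal, hypothetical illustration of that concept follows; it is not code from the cited Cheng patent, and the application namespace and element names are invented for this sketch.

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
APP_NS = "https://example.com/cgi-state"  # hypothetical application namespace

ET.register_namespace("", SVG_NS)
ET.register_namespace("app", APP_NS)

# Build an SVG whose <metadata> element carries editable state data:
# the second asset's position relative to the background and its configuration.
svg = ET.Element(f"{{{SVG_NS}}}svg", width="800", height="600")
meta = ET.SubElement(svg, f"{{{SVG_NS}}}metadata")
state = ET.SubElement(meta, f"{{{APP_NS}}}state")
ET.SubElement(state, f"{{{APP_NS}}}position", x="120", y="340")
ET.SubElement(state, f"{{{APP_NS}}}configuration", color="#c0392b", label="80A")

# A viewer that understands the app namespace could reopen the file and
# resume editing from this saved starting point.
print(ET.tostring(svg, encoding="unicode"))
```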

Prosecution Timeline

Dec 04, 2023: Application Filed
Nov 07, 2025: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603971
PROVIDING AWARENESS OF WHO CAN HEAR AUDIO IN A VIRTUAL CONFERENCE, AND APPLICATIONS THEREOF
2y 5m to grant • Granted Apr 14, 2026
Patent 12602861
IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE AND COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant • Granted Apr 14, 2026
Patent 12592030
INTERACTIVE THREE-DIMENSION AWARE TEXT-TO-IMAGE GENERATION
2y 5m to grant • Granted Mar 31, 2026
Patent 12579920
ADAPTING A USER INTERFACE RESPONSIVE TO SCREEN SIZE ADJUSTMENT
2y 5m to grant • Granted Mar 17, 2026
Patent 12555343
3D MODEL GENERATION USING MULTIMODAL GENERATIVE AI
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 90% (+17.3%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 401 resolved cases by this examiner. Grant probability derived from career allow rate.
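
The interview figure is consistent with a simple additive model: the career allow rate plus the interview lift, capped at 100%. A one-function sketch of that assumed model:

```python
def grant_probability(base: float, lift: float = 0.0) -> float:
    """Baseline allow rate plus an (assumed additive) interview lift, capped at 1."""
    return min(base + lift, 1.0)

# 290 / 401 ≈ 0.723 baseline; 0.723 + 0.173 ≈ 0.896, displayed as ~90%.
```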

Free tier: 3 strategy analyses per month