DETAILED ACTION
This action is in response to the claim amendments received 01/15/2026. Claims 1-20 are pending with claims 1-3, 13-14 and 19-20 currently amended.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claims 1 and 19 recite the claim limitation “wherein the parameterization is agnostic to the different viewing angles”. A thorough review of the specification does not disclose this claimed limitation. Instead, the specification discloses in multiple places that the parameterization changes with the different viewing angles: “change the parameterization of the virtual 3D space to represent the user input within the virtual 3D space from different viewing angles” (abstract); and “Then at block 540, based on the 2D user input, the device may change the parameterization of the 3D space to represent the 2D user input within the virtual 3D space in 3D from different viewing angles” ([0066]). Claims 1-19 are therefore rejected as failing to comply with the written description requirement.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1 and 19 recite the claim limitation “wherein the parameterization is agnostic to the different viewing angles”. This limitation conflicts with the specification's disclosure: “change the parameterization of the virtual 3D space to represent the user input within the virtual 3D space from different viewing angles” (abstract); and “Then at block 540, based on the 2D user input, the device may change the parameterization of the 3D space to represent the 2D user input within the virtual 3D space in 3D from different viewing angles” ([0066]). This contradiction renders the claim limitation indefinite.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 10, 12, 13, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over XIAO et al. [US20230334806], hereinafter XIAO, in view of Morishita [US11602692].
Regarding claim 1, XIAO discloses an apparatus (Figs. 1 and 2), comprising:
at least one processor assembly configured to:
access a parameterization of a virtual three-dimensional (3D) space ([0023], “Also, in some examples, a three-dimensional (3D) representation of a scene may be displayed based on at least one of the first neural network, the second neural network, the feature volume, or the transformation parameter”);
receive user input modifying an object represented in the virtual 3D space ([0055], “In some examples, the various sensors 350a-350e may be used as input devices to control or influence the displayed content of the near-eye display 300, and/or to provide an interactive virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) experience to a user of the near-eye display 300” and [0089], “In some instances, techniques using hybrid representations may be used for reconstructing single objects”); and
based on the user input, modify the parameterization of the virtual 3D space from different viewing angles ([0090], “In some examples, to produce a color c(x, v) attached to the ray point (x, v), the neural network module 410 may concatenate the position x, the input view direction v, the surface normal n, and the same encoding z and provide the resulting vector to the shading neural network”), wherein the parameterization is agnostic to the different viewing angles.
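For context, the shading scheme quoted from XIAO [0090] concatenates a 3D position, a view direction, a surface normal, and a latent encoding before shading. The following is a minimal sketch of such a concatenation-based shading network (illustrative only; the dimensions, layer sizes, and names are assumptions for exposition, not XIAO's actual implementation):

    import torch
    import torch.nn as nn

    class ShadingNetwork(nn.Module):
        """Maps a concatenated (position x, view direction v, normal n, encoding z)
        vector to an RGB color, in the manner of the passage quoted from XIAO [0090]."""
        def __init__(self, encoding_dim=64, hidden=128):
            super().__init__()
            # Input: x (3) + v (3) + n (3) + z (encoding_dim); sizes are assumed.
            self.mlp = nn.Sequential(
                nn.Linear(9 + encoding_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 3),   # RGB color c(x, v)
                nn.Sigmoid(),           # keep color channels in [0, 1]
            )

        def forward(self, x, v, n, z):
            # Concatenate position, view direction, surface normal, and encoding, then shade.
            return self.mlp(torch.cat([x, v, n, z], dim=-1))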
However, XIAO does not explicitly disclose including a graphical representation of the user input within the virtual 3D space.
Nevertheless, Morishita teaches, in a like invention, including a graphical representation of the user input within the virtual space (Fig. 2, col. 4, lines 8-10, “According to FIG. 2, the effect object 24-1 is varied to the size of the effect object 24-2 by the operation input by the player.”).
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus disclosed by XIAO to include a graphical representation of the user input within the virtual space, as taught by Morishita, in order to make it easier for the player to see the effect of the input.
Regarding claim 2, the combination of XIAO and Morishita discloses the apparatus of Claim 1, wherein the at least one processor assembly is configured to: output a visualization of the virtual 3D space on a display according to the modification(s) in the parameterization of the virtual 3D space (XIAO, Figs. 2 and 3, and [0023], “Also, in some examples, a three-dimensional (3D) representation of a scene may be displayed based on at least one of the first neural network, the second neural network, the feature volume, or the transformation parameter”).
Regarding claim 3, the combination of XIAO and Morishita discloses the apparatus of Claim 1, wherein the parameterization is established at least in part by a neural representation (XIAO, [0023], “Also, in some examples, a three-dimensional (3D) representation of a scene may be displayed based on at least one of the first neural network, the second neural network, the feature volume, or the transformation parameter”).
Regarding claim 4, the combination of XIAO and Morishita discloses the apparatus of Claim 3, wherein the neural representation comprises one or more volumetric representations, the volumetric representations comprising one or more Gaussian representations and/or one or more Wavelet volumetric representations (XIAO, [0095], “In some examples, the feature vector z_v∈Z may be initialized with a Gaussian prior with a zero mean and σ=0.01”).
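The Gaussian-prior initialization quoted from XIAO [0095] can be expressed compactly. A one-line sketch follows (the feature dimensionality is an assumption; only the zero mean and σ=0.01 come from the quoted passage):

    import torch

    feature_dim = 256                        # assumed dimensionality, for illustration only
    z_v = 0.01 * torch.randn(feature_dim)    # zero-mean Gaussian prior with sigma = 0.01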
Regarding claim 5, the combination of XIAO and Morishita discloses the apparatus of Claim 3, wherein the neural representation comprises a neural network (XIAO, [0023], “Also, in some examples, a three-dimensional (3D) representation of a scene may be displayed based on at least one of the first neural network, the second neural network, the feature volume, or the transformation parameter”).
Regarding claim 6, the combination of XIAO and Morishita discloses the apparatus of Claim 5, wherein the neural network is a neural radiance field (XIAO, [0079], “In some examples, a neural radiance field (NeRF) may be a continuous volumetric representation characterized as ƒ: R6→R4 mapping a point x∈R3 and a view direction v∈R3 to an red, blue, and green (RGB) color c=(r, g, b) and a volumetric density σ≥0”).
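Restated in conventional notation, the NeRF mapping quoted from XIAO [0079] is:

    f : \mathbb{R}^6 \to \mathbb{R}^4, \qquad f(x, v) = (c, \sigma), \quad x \in \mathbb{R}^3,\ v \in \mathbb{R}^3,\ c = (r, g, b),\ \sigma \ge 0.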
Regarding claim 10, the combination of XIAO and Morishita discloses the apparatus of Claim 1, wherein the at least one processor assembly is configured to: generate the parameterization from computer game video (XIAO, [0040], “Example input devices may include a keyboard, a mouse, a game controller, a glove, a button, a touch screen, or any other suitable device for receiving action requests and communicating the received action requests to the optional console 110” and [0091], “A transform module (e.g., the transform module 412 of FIG. 4) may implement a signed distance field (SDF)-to-density converter (e.g., the signed distance field (SDF)-to-density convertor 508 of FIG. 5) and may use a parameterization”).
Regarding claim 12, the combination of XIAO and Morishita discloses the apparatus of Claim 10, wherein the parameterization is generated using one or more signed distance functions determined from the game video (XIAO, [0040], “Example input devices may include a keyboard, a mouse, a game controller, a glove, a button, a touch screen, or any other suitable device for receiving action requests and communicating the received action requests to the optional console 110” and [0091], “A transform module (e.g., the transform module 412 of FIG. 4) may implement a signed distance field (SDF)-to-density converter (e.g., the signed distance field (SDF)-to-density convertor 508 of FIG. 5) and may use a parameterization”).
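For context, one well-known instance of an SDF-to-density conversion of the kind the quoted passage references is the Laplace-CDF parameterization used in VolSDF-style methods. The sketch below illustrates only that general technique; it is not asserted to be the converter XIAO implements, and the parameter values are assumptions:

    import torch

    def sdf_to_density(sdf, alpha=10.0, beta=0.1):
        """Convert signed distances to volume density via a Laplace CDF (VolSDF-style).

        Density approaches alpha inside the surface (sdf < 0) and decays smoothly
        to zero outside it. alpha and beta are illustrative scale parameters.
        """
        s = -sdf  # positive inside the surface
        # CDF of a zero-mean Laplace distribution with scale beta, evaluated at s
        cdf = torch.where(s <= 0,
                          0.5 * torch.exp(s / beta),
                          1.0 - 0.5 * torch.exp(-s / beta))
        return alpha * cdf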
Regarding claim 13, the combination of XIAO and Morishita discloses the apparatus of Claim 1, wherein the parameterization is established at least in part by a 3D mesh (XIAO, [0083], “In some examples, to recover a surface, a mesh extraction algorithm, such as marching cubes, may be used to convert the learned density field into a triangle mesh based on a user-defined σ-threshold”).
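The mesh-extraction step quoted from XIAO [0083] corresponds to a standard marching-cubes pass over a sampled density grid. A minimal sketch using scikit-image follows (the grid source and threshold value are assumptions; only the density-to-mesh conversion at a user-defined σ-threshold comes from the quoted passage):

    import numpy as np
    from skimage import measure

    # Sampled 3D grid of learned density values (hypothetical precomputed file).
    density = np.load("density_grid.npy")

    # Extract a triangle mesh at a user-defined sigma-threshold.
    sigma_threshold = 10.0  # assumed value for illustration
    verts, faces, normals, values = measure.marching_cubes(density, level=sigma_threshold)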
Regarding claim 19, please refer to the claim rejection of claim 1.
Regarding claim 20, please refer to the claim rejection of claim 1. Regarding the “base video” limitation, XIAO discloses reconstruction from base video ([0018], “Neural implicit three-dimensional (3D) representations may be used for surface reconstruction from input images. However, some neural implicit three-dimensional (3D) representations have limitations. Neural volume methods based on neural radiance fields (NeRF) may synthesize novel views. For example, given multiple views of a scene, techniques based on neural radiance fields may determine the appearance of the scene from a different point of view”).
Claims 7-9, 11, and 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over XIAO, in view of Morishita, further in view of van Welzen et al. [US20220410004], hereinafter van Welzen.
Regarding claim 7, the combination of XIAO and Morishita discloses the apparatus of Claim 1. However, the combination of XIAO and Morishita does not explicitly disclose wherein the user input comprises drawing input directed to the object.
Nevertheless, van Welzen teaches, in a like invention, that the user input comprises drawing input directed to the object ([0056], “For example, the locations of one or more players, NPCs, and/or objects may be overlaid over or otherwise used to annotate the map 204 to indicate where those objects or entities were located in the virtual environment at particular times in the gameplay session(s)”).
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus disclosed by the combination of XIAO and Morishita to have the user input comprise drawing input directed to the object, as taught by van Welzen, in order to provide the user with more input options and thereby increase flexibility.
Regarding claim 8, the combination of XIAO, Morishita and van Welzen discloses the apparatus of Claim 7, wherein the user input comprises voice input associated with the object (van Welzen, [0105], “The I/O components 914 may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user”).
Regarding claim 9, the combination of XIAO, Morishita and van Welzen discloses the apparatus of Claim 8, wherein the voice input indicates a way in which the drawing input is to be incorporated into the virtual 3D space (van Welzen, [0105], “The I/O components 914 may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user. In some instances, inputs may be transmitted to an appropriate network element for further processing. An NUI may implement any combination of speech recognition, stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition (as described in more detail below) associated with a display of the computing device 900”).
Regarding claim 11, the combination of XIAO and Morishita discloses the apparatus of Claim 10. However, the combination of XIAO and Morishita does not explicitly disclose wherein the parameterization is generated without access to metadata from a game engine used to generate the computer game video.
Nevertheless, van Welzen teaches in a like invention, the parameterization is generated without access to metadata from a game engine used to generate the computer game video ([0022], “In at least one embodiment, it may additionally or alternatively trigger further analysis of the corresponding video data to extract one or more elements of the metadata or at least some of the metadata may be derived by detection of a corresponding visual cue”).
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus disclosed by the combination of XIAO and Morishita to have the parameterization generated without access to metadata from a game engine, as taught by van Welzen, in order to make generation of the parameterization independent of metadata from game engines.
Regarding claim 14, the combination of XIAO and Morishita discloses the apparatus of Claim 1. However, the combination of XIAO and Morishita does not explicitly disclose wherein the at least one processor assembly is configured to: responsive to a threshold amount of time expiring from the modification of the parameterization, use the parameterization without the modification(s) to subsequently render video of the virtual 3D space.
Nevertheless, van Welzen teaches in a like invention that the at least one processor assembly is configured to: responsive to a threshold amount of time expiring from the modification of the parameterization, use the parameterization without the modification(s) to subsequently render video of the virtual 3D space ([0056], “Corresponding information used to update the map 204 may be captured in the metadata and one or more updates may occur periodically and/or continuously” and [0070], “Based on values of the interest level algorithm over time, time segments can be identified that correlate with potentially high in-game activity. For example, a time segment having data points (e.g., continuously) above a threshold (e.g., an average) may be identified as correlating with a potentially highlight-worthy clip or screenshot. A start of the time segment may be based on when the running activity measurement exceeds the threshold and an end of the time segment may be based on when the running activity measurement falls below the threshold”).
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus disclosed by the combination of XIAO and Morishita to apply a threshold amount of time to the modification of the parameterization, as taught by van Welzen, in order to guarantee stable rendering of video of the virtual 3D space.
Regarding claim 15, the combination of XIAO and Morishita discloses the apparatus of Claim 1. However, the combination of XIAO and Morishita does not explicitly disclose wherein the parameterization is changed to represent the user input within the virtual 3D space as a drawing anchored to the object, the drawing assigned a translated or rotated geometry within the virtual 3D space and in relation to the object.
Nevertheless, van Welzen teaches in a like invention, the parameterization is changed to represent the user input within the virtual 3D space as a drawing anchored to the object, the drawing assigned a translated or rotated geometry within the virtual 3D space and in relation to the object ([0020], “Based on the selection of the interface element, the map may be annotated using a location and time associated with the in-game event in the metadata. The location may correspond to the player and/or another in-game object at the time and/or during or immediately after the in-game event. In various embodiments, annotating may comprise displaying and/or updating a path of the in-game object on the map, where an end-point may be based on the location and time. For example, paths or other indicators of one or more locations of the player, enemies, Non-Player Characters (NPCs), the in-game event, items, and/or other elements may be displayed on the map to reflect the state of the game leading up to, during, and/or following the in-game event”).
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus disclosed by the combination of XIAO and Morishita to have the parameterization changed to represent the user input within the virtual 3D space as a drawing anchored to the object, as taught by van Welzen, in order to better align the rendered video with the in-game events.
Regarding claim 16, the combination of XIAO, Morishita and van Welzen discloses the apparatus of Claim 15, wherein the drawing is represented as digital chalk (van Welzen, [0105], “The I/O components 914 may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user. In some instances, inputs may be transmitted to an appropriate network element for further processing. An NUI may implement any combination of speech recognition, stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition (as described in more detail below) associated with a display of the computing device 900”).
Regarding claim 17, the combination of XIAO, Morishita and van Welzen discloses the apparatus of Claim 15, wherein the object is a computer game character, and wherein the drawing follows the computer game character as the computer game character moves within the virtual 3D space (van Welzen, [0020], “For example, paths or other indicators of one or more locations of the player, enemies, Non-Player Characters (NPCs), the in-game event, items, and/or other elements may be displayed on the map to reflect the state of the game leading up to, during, and/or following the in-game event”).
Regarding claim 18, the combination of XIAO, Morishita and van Welzen discloses the apparatus of Claim 15, wherein the object is a game world path, and wherein the drawing is virtually laid along the game world path within the virtual 3D space (van Welzen, [0020], “For example, paths or other indicators of one or more locations of the player, enemies, Non-Player Characters (NPCs), the in-game event, items, and/or other elements may be displayed on the map to reflect the state of the game leading up to, during, and/or following the in-game event”).
Response to Arguments
Applicant’s arguments with respect to the rejections of the claims under 35 U.S.C. 102 and 103 have been considered but are moot in view of the new grounds of rejection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YINGCHUAN ZHANG whose telephone number is (571)272-1375. The examiner can normally be reached 8:00 - 4:30 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YINGCHUAN ZHANG/Primary Examiner, Art Unit 3715