DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is in response to Applicant's RCE amendment filed 1/21/2026, which has been entered and made of record. Claims 1, 14 and 27 have been amended. No claim has been cancelled or newly added. Claims 1-39 are pending in the application.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 14 and 27 have been considered but are
moot because the new ground of rejection does not rely on any reference applied in the prior
rejection of record for any teaching or matter specifically challenged in the argument.
The arguments regarding the dependent claims, by virtue of their dependency, are moot because the independent claims are not allowable.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 9, 11, 14-16, 22, 24, 27-29, 35, and 37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kipman et al. (U.S. Patent Application Publication No. 2010/0302253), hereinafter referenced as Kipman, in view of Simic et al. (U.S. Patent Application Publication No. 2023/0120437), hereinafter referenced as Simic, and Muniz Simas et al. (U.S. Patent Application Publication No. 2013/0022950 A1), hereinafter referenced as Muniz.
Regarding claim 1, Kipman teaches a method for generating a virtual representation of a user, comprising: (Abstract teaches "Techniques for generating an avatar model during the runtime of an application" and figs. 3-4 and paragraph 51 teaches "The computing environment 304 may also use the audiovisual device 320 to provide a visual representation of a player avatar 324 that the user 302 may control with his or her movements"); an avatar is the same as a virtual representation, since paragraph 5 of Applicant's specification mentions "as a virtual representation of the user, sometimes referred to as an avatar," and this shows the avatar is of a user; obtaining information describing a virtual representation of the user, (Abstract teaches "avatar model can be generated from an image captured by a capture device" and figs. 3-4 and paragraph 51 teaches "the user 302 in physical space such that the punch may be interpreted as a game control of the player avatar 324 in game space and/or the motion of the punch may be used to animate the player avatar 324 in game space."); the user corresponding to the avatar shows that information/data must be obtained to describe the avatar of the user; the information including a hierarchical set of nodes, (paragraph 69 teaches "a user model can be generated that includes nodes"…"Nodes can be connected by interconnects, e.g., bones, and hierarchical relationships that define a parent-child system similar to that of a tree can be established."); wherein a first node of the hierarchical set of nodes comprises a root node for the hierarchical set of nodes, (paragraph 70 teaches "As mentioned above in an embodiment the avatar model 700 can have at least one root node and a relationship can be established using the root node of the avatar and corresponding root node (nodes) of the user model 600"); since the relationship is established using the root node, this indicates the root node as a first node, and one of ordinary skill in the art would understand the root node to be the first node, hence the term "root"; wherein the first node includes a mapping configuration for mapping child nodes of the hierarchical set of nodes to segments of the virtual representation of the user, (paragraph 69 teaches "Each node in the user model can be associated with a part of the user, for example, some nodes can be joint nodes, e.g., nodes the represent a location where two or more bones interact, or appendages such as hands"… "In a specific example, a wrist can be a child of an elbow, and the elbow can be a child of a shoulder. This recursive relationship can continue to one or more root nodes, which can be used as a frame of reference for mapping nodes from a user model to an avatar model."); the root node being used for mapping nodes shows the mapping configuration, and this mapping is done for the child nodes listed, each of which is mapped to a segment/part of a user corresponding to an avatar; wherein a child node of the hierarchical set of nodes includes data associated with a segment of the virtual representation of the user (paragraph 77 teaches "the model 600 may include one or more data structures that may represent, for example, a human target as a three-dimensional model. Each body part may be characterized as a mathematical vector defining nodes and interconnects of the model 600. As shown in FIG. 6, the model 600 may include one or more nodes such as joints j1-j18. According to an example embodiment, each of the joints j1-j18 may enable one or more body parts defined therebetween to move relative to one or more other body parts"); the one or more data structures and each body part characterized as a mathematical vector defining nodes indicate the child nodes with data associated with parts/segments of the user model corresponding to the avatar; identifying a portion of the data associated with the child node (paragraph 79 teaches "Pixels in the synthesized image may be compared to pixels associated with the human target in each of the received images to determine whether the human target in a received image has moved."); pixels in the synthesized image would be a portion of data associated with a child node, because the pixels would be for a portion of the image and would correspond to a specific part/segment of the avatar, which is of a child node; and processing the portion of the data associated with the child node to generate the segment of the virtual representation of the user (paragraph 80 teaches "According to an example embodiment, one or more force vectors may be computed based on the pixels compared between the synthesized image and a received image. The one or more force may then be applied or mapped to one or more force-receiving aspects such as joints of the model to adjust the model into a pose that more closely corresponds to the pose of the human target or user in physical space. For example, a model may be adjusted based on movements or gestures of the user at various points observed and captured"); the force vectors being computed and applied to joints of the model show processing of the portion of data associated with child nodes, and this is done to adjust the model into a pose, meaning it generates a segment/part of the avatar in a different pose.
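As a purely illustrative aside (not part of the record or the cited references), the hierarchical parent-child node structure discussed above, with a root node whose mapping configuration associates child nodes with segments of a virtual representation, can be sketched as follows; all names and fields are hypothetical.

```python
# Illustrative sketch only: a hierarchical set of nodes whose root node
# carries a mapping configuration from child nodes to avatar segments.

class Node:
    def __init__(self, name, data=None):
        self.name = name          # e.g., "shoulder", "elbow", "wrist"
        self.data = data or {}    # data carried by the node
        self.children = []

    def add_child(self, child):
        self.children.append(child)
        return child

# The recursive parent-child chain described in Kipman paragraph 69:
# a wrist is a child of an elbow, the elbow a child of a shoulder,
# continuing up to a root node used as a frame of reference.
root = Node("root", data={"mapping": {"shoulder": "upper_arm_segment",
                                      "elbow": "forearm_segment",
                                      "wrist": "hand_segment"}})
shoulder = root.add_child(Node("shoulder"))
elbow = shoulder.add_child(Node("elbow"))
wrist = elbow.add_child(Node("wrist"))

def map_segments(node, mapping, out):
    """Walk the hierarchy, mapping each child node to a segment."""
    for child in node.children:
        if child.name in mapping:
            out[child.name] = mapping[child.name]
        map_segments(child, mapping, out)
    return out

segments = map_segments(root, root.data["mapping"], {})
```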
However, Kipman fails to teach wherein the information describing the virtual representation of the user is included in a scene description for rendering a view of a virtual environment, and wherein the scene description describes a scene for rendering, and wherein the virtual representation of the user is indicated by a field in the scene description.
However, Simic teaches and wherein the information describing the virtual representation of the user is included in a scene description for rendering a view of a virtual environment (Simic, paragraph 38 teaches "The background creation component 122 stores digital files, models, textures, skins, data files, etc., necessary to define the environment and background features of the video." and paragraph 40 teaches "environment and background creation component 122 is utilized to generate backgrounds, models, textures, environmental elements, etc., necessary for the video…Thus, the animation may include elements, environment, artifacts, characters, etc., or combinations thereof."); a file storing the environment features of a video shows a scene description [Note: this definition is consistent with Applicant's disclosure, paragraph 47, which defines a scene description as "scene description is a file or document that includes information describing or defining a 3D scene."], and this file/scene description is for rendering a video/view of a virtual environment, which would include the information describing the virtual representation/character [of the user, when viewed in combination with Kipman] mentioned in paragraph 40 quoted above. Simic is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of storing information for and rendering a field of view, which can be used with VR. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kipman's invention with the scene description/file storing information techniques of Simic to improve the technology of virtual reality rendering by enabling a content creator to direct a user to a particular focus, while simultaneously allowing the user to retain full virtual reality control of determining where, within the currently viewable scene, to focus a field of view to be displayed (Simic, paragraph 28). This would be due to the configurations provided with a file/scene description.
However, the combination of Kipman and Simic fails to explicitly teach and wherein the scene description describes a scene for rendering, and wherein the virtual representation of the user is indicated by a field in the scene description.
However, Muniz explicitly teaches and wherein the scene description describes a scene for rendering, (Muniz, paragraph 128 teaches "Each scene in the virtual environment is represented by a scene description containing all the elements needed to generate the content of the scene"); this shows the virtual environment has scenes (for rendering) which are described by a scene description; and wherein the virtual representation of the user is indicated by a field in the scene description (Muniz, paragraph 58 teaches "allows the representation of the user in the virtual environment" and paragraph 59 teaches "a container with descriptions of scenes of the virtual environment and scripts for events in these scenes, these scenes includes descriptions of static and dynamic elements"); the representation of the user in the virtual environment shows the virtual representation of the user, and since a container with descriptions of scenes of the virtual environment exists [the virtual environment still contains the virtual representation of the user], the virtual representation of the user is in a description of the scene (a field in the scene description). Muniz is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of including the virtual representation of the user in scene descriptions and of scene descriptions describing scenes. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Kipman and Simic with the user's representation in a field of a scene description techniques of Muniz so that the user is as much immersed in this experience as possible (Muniz, paragraph 117). This would be done by allowing an accurate reflection of information by including the user's virtual representation's information in the scene description.
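As a purely illustrative aside (not part of the record or the cited references), the notion of a virtual representation indicated by a field in a scene description can be sketched as a hypothetical data file; every field name below is invented for illustration only.

```python
import json

# Hypothetical scene description: a file describing a scene for rendering,
# with the user's virtual representation indicated by a dedicated field.
scene_description = {
    "scene": {
        "elements": ["background", "lighting", "props"],  # static elements
        "scripts": ["on_enter", "on_collision"],          # event scripts
    },
    "avatar": {                # field indicating the user's representation
        "root_node": "pelvis",
        "nodes": ["pelvis", "spine", "head"],
    },
}

# A scene description is a file/document, so round-trip it as JSON.
serialized = json.dumps(scene_description)
loaded = json.loads(serialized)
```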
Regarding claim 2, the combination of Kipman, Simic and Muniz teaches wherein the segment includes at least one body part of the virtual representation of the user (Kipman, paragraph 69 teaches "Each node in the user model can be associated with a part of the user"… "In a specific example, a wrist can be a child of an elbow, and the elbow can be a child of a shoulder"); a node (such as the elbow being a child node of the shoulder) associated with a part of the user indicates a segment including a body part of the user model corresponding to the avatar.
Regarding claim 3, the combination of Kipman, Simic and Muniz teaches wherein the at least one body part of the virtual representation of the user comprises a humanoid component (Kipman, paragraph 101 teaches "For example, in an embodiment an avatar model can be loaded from the model library 570 when the videogame is executed. In this example embodiment the videogame can send a signal to the computing environment 304 that indicates what kind of avatar model it uses, e.g., a humanoid model"); humanoid model would mean the body part of the avatar would be a humanoid component.
Regarding claim 9, the combination of Kipman, Simic and Muniz teaches wherein a sub-node of the child node includes data associated with a sub-segment of the segment of the virtual representation of the user (Kipman, paragraph 69 teaches "In a specific example, a wrist can be a child of an elbow, and the elbow can be a child of a shoulder"); a child of another child node indicates a sub-node, and the data associated with the sub-node for a wrist shows a sub-segment, since the wrist is a portion of the segment (elbow/arm) of the avatar/virtual representation.
Regarding claim 11, the combination of Kipman, Simic and Muniz teaches wherein the generated segment of the virtual representation of the user comprises mesh information (Kipman, paragraph 76 teaches "According to an example embodiment, a model such as a skeletal model, a mesh model, or the like may then be generated based on the scan"); mesh model would mean the segments have mesh information.
Regarding claim 14, the apparatus claim 14 recites similar limitations as method claim 1, and thus is rejected under similar rationale. In addition, Kipman, fig. 1 teaches a console/apparatus 100 with memory 112 and processor 101.
Regarding claim 15, the apparatus claim 15 recites similar limitations as method claim 2, and thus is rejected under similar rationale.
Regarding claim 16, the apparatus claim 16 recites similar limitations as method claim 3, and thus is rejected under similar rationale.
Regarding claim 22, the apparatus claim 22 recites similar limitations as method claim 9, and thus is rejected under similar rationale.
Regarding claim 24, the apparatus claim 24 recites similar limitations as method claim 11, and thus is rejected under similar rationale.
Regarding claim 27, the non-transitory computer-readable medium claim 27 recites similar limitations as method claim 1, and thus is rejected under similar rationale. In addition, Kipman, paragraph 26 teaches "The logical processor(s) in this example can be configured by software instructions embodying logic operable to perform function(s) that are loaded from memory, e.g., RAM, ROM, firmware, etc. In example embodiments where circuitry includes a combination of hardware and software".
Regarding claim 28, the non-transitory computer-readable medium claim 28 recites similar limitations as method claim 2, and thus is rejected under similar rationale.
Regarding claim 29, the non-transitory computer-readable medium claim 29 recites similar limitations as method claim 3, and thus is rejected under similar rationale.
Regarding claim 35, the non-transitory computer-readable medium claim 35 recites similar limitations as method claim 9, and thus is rejected under similar rationale.
Regarding claim 37, the non-transitory computer-readable medium claim 37 recites similar limitations as method claim 11, and thus is rejected under similar rationale.
Claim(s) 8, 21 and 34 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Kipman, Simic and Muniz as applied to claim 1, 14 and 27 above, and further in view of Lebaredian et al. (U.S. Patent Application Publication No. 2022/0134222), hereinafter referenced as Lebaredian.
Regarding claim 8, the combination of Kipman, Simic and Muniz fails to teach wherein the first node includes source information, and wherein the portion of the data associated with the child node is identified based on source information.
However, Lebaredian teaches wherein the first node includes source information, (Lebaredian, fig. 11 shows object 1102 with nodes 1114, with Node 1114A including parent identifier 1118; paragraph 182 teaches "each object (e.g., the object 1102) may represent a scene graph, a root of a hierarchical data structure" and paragraph 184 teaches "The parent identifier 1118 may comprise a node ID 1116 of a parent node of the node"); this displays Node 1114A as the first/root node, and the parent identifier is source information because it shows where the node is sourced from by identifying the parent; and wherein the portion of the data associated with the child node is identified based on source information (Lebaredian, paragraph 213 teaches "table may use object ID\node ID as a key. The value may represent a structure with the NODE IDs of children in the child list"); since the parent ID comprises the node ID, as mentioned in paragraph 184, this indicates the child node (and its data) being identified based on the parent ID/source information. Lebaredian is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of using nodes in content management and having the nodes contain type and source information. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Kipman, Simic and Muniz with the type and source information of nodes techniques of Lebaredian so that content may be stored using a hierarchy of versions of objects and storage space may be reduced (Lebaredian, paragraph 6). Reduced data would mean less latency and a more efficient system overall.
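As a purely illustrative aside (hypothetical, not drawn from Lebaredian's actual data structures), a node table keyed by node ID, where each node carries a parent identifier, can be sketched as follows; the IDs and helper function are invented for illustration.

```python
# Illustrative sketch only: nodes stored in a table keyed by node ID,
# each carrying a parent identifier (source information) so that a
# child node and its data can be identified from that source.
nodes = {
    "n1": {"parent": None, "data": "root mapping"},  # root / first node
    "n2": {"parent": "n1", "data": "torso segment"},
    "n3": {"parent": "n1", "data": "arm segment"},
}

def children_of(table, parent_id):
    """Identify child nodes (and their data) based on the parent/source ID."""
    return {nid: n["data"] for nid, n in table.items()
            if n["parent"] == parent_id}
```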
Regarding claim 21, the apparatus claim 21 recites similar limitations as method claim 8, and thus is rejected under similar rationale.
Regarding claim 34, the non-transitory computer-readable medium claim 34 recites similar limitations as method claim 8, and thus is rejected under similar rationale.
Claim(s) 10, 23, and 36 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Kipman, Simic and Muniz as applied to claims 1, 14 and 27 above, and further in view of Hirtzlin et al. (U.S. Patent Application Publication No. 2025/0200881), hereinafter referenced as Hirtzlin.
Regarding claim 10, the combination of Kipman, Simic and Muniz fails to teach wherein the portion of the data associated with the child node includes interactivity information indicating whether the segment of the virtual representation of the user can interact with other objects.
However, Hirtzlin teaches wherein the portion of the data associated with the child node includes interactivity information indicating whether the segment of the virtual representation of the user can interact with other objects (Hirtzlin, paragraph 71 teaches "At the scene level, a single interactivity behavior is defined and is composed of one collision trigger between the first and second node,"); the collision trigger indicates the segment/second node interacting with other objects, and the interactivity behavior shows the interactivity information indicating whether it can do so. Hirtzlin is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of interactivity behavior/information in nodes. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Kipman, Simic and Muniz with the interactivity of nodes techniques of Hirtzlin to simulate frictions at low processing resources cost (Hirtzlin, paragraph 47). This would mean more realistic representations without the need for extra processing capability, leading to a more efficient system overall.
Regarding claim 23, the apparatus claim 23 recites similar limitations as method claim 10, and thus is rejected under similar rationale.
Regarding claim 36, the non-transitory computer-readable medium claim 36 recites similar limitations as method claim 10, and thus is rejected under similar rationale.
Claim(s) 12, 25, and 38 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Kipman, Simic and Muniz as applied to claims 11, 24 and 37 above, and further in view of Chou (U.S. Patent Application Publication No. 2014/0160122), hereinafter referenced as Chou.
Regarding claim 12, the combination of Kipman, Simic and Muniz fails to teach further comprising processing the mesh information to render the generated segment of the virtual representation of the user.
However, Chou teaches further comprising processing the mesh information to render the generated segment of the virtual representation of the user (Chou, paragraph 81 teaches "representation corresponding to an object may generated. The representation may include a mesh and a set of bones. For example, in FIG. 1, the computing device may generate the representations 128 corresponding to the participants 110, 116. To illustrate, the representation 206 (e.g., one of the representations 128) of FIG. 2 may include a mesh (e.g., the skin 202)"); this shows the representation for an object (the avatar from Kipman) can be generated using the mesh information, and the representation must include segments, meaning the mesh information is processed/rendered to generate the segments in order to create the representation of the object. Chou is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of processing a mesh to render a representation. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Kipman, Simic and Muniz with the processing mesh to render representation techniques of Chou so the representation may be refined (e.g., improved accuracy) as more and more data is collected (Chou, paragraph 45). This would be done due to more data from the mesh, leading to a more refined model/representation overall.
Regarding claim 25, the apparatus claim 25 recites similar limitations as method claim 12, and thus is rejected under similar rationale.
Regarding claim 38, the non-transitory computer-readable medium claim 38 recites similar limitations as method claim 12, and thus is rejected under similar rationale.
Claim(s) 13, 26 and 39 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Kipman, Simic and Muniz as applied to claims 1, 14 and 27 above, and further in view of Nake et al. (U.S. Patent No. 6,512,520), hereinafter referenced as Nake.
Regarding claim 13, the combination of Kipman, Simic and Muniz fails to teach wherein the data comprises one or more data streams, and wherein the one or more data streams is based on a format for the virtual representation of the user.
However, Nake teaches wherein the data comprises one or more data streams, (Nake, col. 5, lines 58-64 teaches "The 3-DCG data stream transmitting portion 1 includes a skeletal structure stream transmitting portion 11 for generating a skeletal data stream Ss of the avatar, a motion stream transmitting portion 12 for generating a motion data stream Sm of the avatar, an audio stream"); this shows data comprising multiple data streams; and wherein the one or more data streams is based on a format for the virtual representation of the user (Nake, col. 5, lines 65-67 teaches "converting the skeletal data stream Ss, the motion data stream Sm and the audio data stream Sa of the avatar into a predetermined format"); since the data streams are of the avatar, they are for the virtual representation, and the predetermined format indicates the format for them, ensuring efficient parsing and processing, as one of ordinary skill in the art would understand. Nake is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of having multiple data streams based on a specific format. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Kipman, Simic and Muniz with the multiple data streams of specific format techniques of Nake so the quantity of the data to be transferred can be reduced greatly (Nake, col. 3, line 57). This would be done by utilizing only the type of data stream needed in a scenario where just one type of data is needed, making the system faster and more efficient.
Regarding claim 26, the apparatus claim 26 recites similar limitations as method claim 13, and thus is rejected under similar rationale.
Regarding claim 39, the non-transitory computer-readable medium claim 39 recites similar limitations as method claim 13, and thus is rejected under similar rationale.
Claim(s) 4-7, 17-20, and 30-33 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Kipman, Simic and Muniz as applied to claims 1, 14 and 27 above, and further in view of Lebaredian and Ha et al. (U.S. Patent No. 11,724,705), hereinafter referenced as Ha.
Regarding claim 4, the combination of Kipman, Simic and Muniz fails to teach wherein the first node includes type information, and further wherein the portion of the data associated with the child node is processed based on the type information.
However, Lebaredian teaches wherein the first node includes type information, (Lebaredian, fig. 11 teaches node type 1122 of Node 1114A of object 1102 and paragraph 182 teaches "each object (e.g., the object 1102) may represent a scene graph, a root of a hierarchical data structure"); this shows that Node 1114A is a root node and has type information. Lebaredian is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of using nodes in content management and having the nodes contain type and source information. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Kipman, Simic and Muniz with the type and source information of nodes techniques of Lebaredian so that content may be stored using a hierarchy of versions of objects and storage space may be reduced (Lebaredian, paragraph 6). Reduced data would mean less latency and a more efficient system overall.
However, the combination of Kipman, Simic, Muniz and Lebaredian fails to teach and further wherein the portion of the data associated with the child node is processed based on the type information.
However, Ha teaches and further wherein the portion of the data associated with the child node is processed based on the type information (Ha, col. 26, lines 9-18 teaches "data processing capability of the second node is larger than a data processing capability of the first node, wherein transmitting the first data and the second data comprises transferring the first data and the second data, which are classified according to a type of data, to the first node and the second node, respectively, wherein receiving the first data and the second data comprises acquiring the first data processed in the first node and acquiring the second data processed in the second node"); this shows the data of the second/child node being classified according to a type of data and processed accordingly by the corresponding node. Ha is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of data processing of nodes based on classification according to type information. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Kipman, Simic, Muniz and Lebaredian with the data of node processed based on type information techniques of Ha so it is possible to improve a data processing speed and improve security through distributed processing (Ha, col. 11, lines 63-65). This would be done since processing based on type information is a technique used in distributed processing, making data processing faster and more efficient.
Regarding claim 5, the combination of Kipman, Simic, Muniz, Lebaredian and Ha teaches wherein the type information comprises information indicating how the virtual representation of the user is represented (Lebaredian, paragraph 68 teaches "structural element is a declaration of the asset 222" and paragraph 184 teaches "The node type 1122 may specify a type of the node (e.g., whether the type of structural element, examples of which are described herein)"); if the type of node specifies a structural element (which is a declaration of the asset), then the type information indicates how the virtual representation/asset (or avatar of Kipman) is represented, since the declaration of the asset is specified by the type. The same motivations used in claim 4 apply here in claim 5.
Regarding claim 6, the combination of Kipman, Simic, Muniz, Lebaredian and Ha teaches wherein the type information comprises a universal resource name indicating a format for the virtual representation of the user (Lebaredian, paragraph 98 teaches "a subscription to a content item (e.g., a layer or other asset type)"..."In some examples, content items and/or resources thereof may be identified within the operating environment 100 using... a Uniform Resource Name (URN)"); this shows the asset type (inclusive of format, since paragraph 99 mentions a JSON format) being identified using a URN, and this asset can be the avatar of Kipman since it is of a 3D virtual environment in the data store. The same motivations used in claim 4 apply here in claim 6.
Regarding claim 7, the combination of Kipman, Simic, Muniz, Lebaredian and Ha teaches wherein processing the portion of the data associated with the child node comprises processing the portion of the data associated with the child node based on the indicated format for the virtual representation of the user (Ha, col. 26, lines 17-18 teach "acquiring the second data processed in the second node" and lines 11-14 teach "transmitting the first data and the second data comprises transferring the first data and the second data, which are classified according to a type of data, to the first node and the second node"); one of ordinary skill in the art would understand that data type is inclusive of format (since paragraph 99 of Lebaredian mentions a JSON format), and since this applies to nodes, it would correspond to the nodes of the virtual representation/avatar of Kipman, which would thus be processed based on the indicated format. The same motivations used in claim 4 apply here in claim 7.
Regarding claim 17, the apparatus claim 17 recites similar limitations as method claim 4, and thus is rejected under similar rationale.
Regarding claim 18, the apparatus claim 18 recites similar limitations as method claim 5, and thus is rejected under similar rationale.
Regarding claim 19, the apparatus claim 19 recites similar limitations as method claim 6, and thus is rejected under similar rationale.
Regarding claim 20, the apparatus claim 20 recites similar limitations as method claim 7, and thus is rejected under similar rationale.
Regarding claim 30, the non-transitory computer-readable medium claim 30 recites similar limitations as method claim 4, and thus is rejected under similar rationale.
Regarding claim 31, the non-transitory computer-readable medium claim 31 recites similar limitations as method claim 5, and thus is rejected under similar rationale.
Regarding claim 32, the non-transitory computer-readable medium claim 32 recites similar limitations as method claim 6, and thus is rejected under similar rationale.
Regarding claim 33, the non-transitory computer-readable medium claim 33 recites similar limitations as method claim 7, and thus is rejected under similar rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Deffeyes (U.S. Patent Application Publication No. 2010/0283795), paragraph 6, teaches "An avatar is a graphical representation of a user that other users in the virtual world can see and interact with." and paragraph 73 teaches "information that affects the visual or audio representation of the virtual world as rendered in the viewable field of a particular user's avatar are sent in the scene description"; this shows the avatar of a user in a scene description.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAUMAN U AHMAD whose telephone number is (703)756-5306. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.U.A./Examiner, Art Unit 2611
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611