Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Amendment
This is in response to applicant’s amendment/response filed on 01/12/2026, which has been entered and made of record. Claims 1, 9, 10, 11, and 19 have been amended. No claims have been cancelled or added. Claims 1-20 are pending in the application.
Response to Arguments
Applicant's arguments filed on 01/12/2026 regarding the rejection of the claims under 35 U.S.C. § 102 have been fully considered but they are not persuasive.
Applicant submits “Amended claim 1 is indeed not anticipated by or obvious over the cited references. For example, the cited references are all silent regarding at least the quoted claim limitation.” (Remarks, Page 9.)
The examiner disagrees with Applicant’s premises and conclusion. The pending claim limitation differs from the claim language presented during the interview. As the examiner explained during the interview, the claimed features are well known in the art; many prior art references disclose displaying advertisements in a virtual world. Please see below for the mapping of Upadhyay to the claimed limitation, and refer to Terrano for additional details.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-7, 10-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Upadhyay et al. (US Pub 2020/0279429 A1) in view of Terrano (US Patent 10,699,488 B1).
As to claim 1, Upadhyay discloses a method, comprising:
receiving a request for visual content, the visual content to illustrate a visual scene (¶0037, “A user device 106 is an electronic device that is capable of requesting and receiving resources (e.g., 3D applications) over the network 102.” ¶0038, “A 3D resource may include data that defines one or more 3D environments and virtual objects within the 3D environments. 3D resources can be provided to user devices 106 by 3D application servers 104. For example, the 3D application servers 104 can include servers that host publisher websites. In this example, the user device 106 can initiate a request for a given 3D resource, and the 3D application server 104 that hosts the given 3D resource can respond to the request by transmitting the 3D resource to the user device 106. In some implementations, the 3D application server can provide one or more definition files to the user device 106. A definition file includes data that represents a 3D environment that can be processed by a 3D application installed on the user device 106 to render the 3D environment.”);
determining dimensions of the visual scene, wherein the dimensions include a spatial dimension and a temporal dimension (¶0004, “The memory subsystem is configured to store first data that (i) defines a three-dimensional (3D) environment and (ii) identifies a virtual object in the 3D environment.” ¶0045, “each given computing device in the set 114 can analyze a different data dimension (or set of dimensions) and pass results (Res 1-Res 3) 118a-118c of the analysis back to the third-party content distribution system 110.” ¶0049, “different display modes provide different types or formats of third-party content to display on the virtual object (or otherwise in connection with the virtual object).” ¶0055, “the client computing system 300 is configured to present other types of 3D environments, such as an augmented reality environment, a mixed reality environment, or a gaming environment on a conventional two-dimensional screen.” ¶0059, “how the object is displayed in the 3D environment, where the object is located in the 3D environment, the eligible types or geometries of the object, constraints on third-party content that can be selected for display on the object, the interaction model associated with the object, the transition-state model associated with the object, or a combination of two or more of these.”),
the spatial dimension indicating a size of the visual scene within a multi-dimensional space, the temporal dimension indicating different times (¶0003, “The 3PCE virtual object can be displayed at any given time in one of multiple display modes associated with the object.” ¶0004, “The memory subsystem is configured to store first data that (i) defines a three-dimensional (3D) environment and (ii) identifies a virtual object in the 3D environment.” ¶0066, “the virtual object manager 312 can add an entry to the log that captures salient information about the interaction, such as a timestamp for the interaction, the targeted object of the interaction, and context associated with the interaction.” The timestamp establishes the temporal dimension of the visual scene.),
determining a viewpoint in the visual scene (¶0016, “The virtual object manager can be configured to transition the virtual object from the first mode to the second mode based on identifying that the user's gaze has been maintained in the direction of the virtual object for a predetermined length of time.” ¶0061, “The orientation and motion sensors can generate signals indicative of the direction of a user's gaze within the 3D VR environment in real time, and these signals can be interpreted by the input handler 310 to track the direction of the user's gaze in real time.”);
generating one or more visual objects based on the request, dimensions of the visual scene, and the viewpoint (¶0062, “if the input handler 310 detects that the reticle has been held in position over a virtual object in the 3D environment for a predetermined length of time (e.g., a pre-specified threshold time interval), then the input handler 310 may register the occurrence of a user interaction with the virtual object and push a notification of the user interaction with the object to the virtual object manager 312.” “The rendering engine 308 may also draw a progress bar over the 3PCE virtual object, near the object, or otherwise in the 3D environment that indicates how much time the user has to gaze at the object to trigger an action, e.g., how long until the object transitions display modes.” ¶0092, “A transition state selector at the client system or the content distribution system may determine which transition model applies for a given virtual object based, for example, on any explicit constraints specified by the end user, 3D environment developer, and/or third-party content provider that submitted the primary set of third-party content. In addition, or alternatively, the transition state selector may determine which transition model applies for a given virtual object randomly or based on implicit context associated with the object, the 3D environment, and technical capabilities of the client system. For example, under a first model (416), the virtual object may transition from the first display mode by replacing the primary set of third-party content with a different, secondary set of third-party content. At stage 418, the client system obtains the secondary set of third-party content from content distribution system. At stage 420, the rendering engine of the client system re-renders the virtual object to show the secondary set of third-party content. In some implementations, the type or format of the secondary set of third-party content may differ from the type or format of the primary set of third-party content. In some implementations, the 3D model that defines the geometry of the virtual object may change along with the set of third-party content presented by the object. For example, a more complex virtual object having more polygonal surfaces may be rendered in the second display mode than in the first display mode. The client system may request and receive the new 3D model from the content distribution system. In some implementations, the transition between the first and second display modes of the virtual object includes animating the virtual object to smoothly change the visual appearance of the object.” ¶0093-0094.);
generating a visual content item by building the visual scene based on the one or more visual objects, wherein a state of the visual content item is different at the different times (¶0003, “a virtual reality system or an augmented reality system, can present a 3D environment that includes one or more third-party content eligible (3PCE) virtual objects. A 3PCE virtual object is an object (e.g., a 3D object such as a cube, a sphere, a cylinder, or other geometric shape) that is configured to present third-party content (e.g., content provided by an entity different than an entity that provides the 3D environment) at a specified location of the 3D environment. The 3PCE virtual object can be displayed at any given time in one of multiple display modes associated with the object. When a triggering event is detected, such as a user interaction with the 3PCE virtual object, the system may update the virtual object, such as by transitioning the object from one display mode to another. In some implementations, a user's interactions with a 3PCE virtual object causes the object to display a new set of third-party content, to change the type of third-party content displayed by the object, to open a portal to an external resource (also referred to as a “portal resource”) associated with displayed third-party content (e.g., a website or an application), or to generate notifications related to displayed third-party content in an external environment outside of the 3D environment.” ¶0004, “The memory subsystem is configured to store first data that (i) defines a three-dimensional (3D) environment and (ii) identifies a virtual object in the 3D environment. The communications interface is configured to transmit requests over a network for third-party content to display with the virtual object in the 3D environment and to receive third-party content responsive to the requests. The rendering engine is configured to use the first data from the memory subsystem to render the 3D environment for presentation on a display device that is coupled to the computing system, including rendering the virtual object at a specified location of the 3D environment in a first mode in which the virtual object displays a first set of third-party content. The input handling apparatus is configured to detect user interactions with the virtual object in the 3D environment. The virtual object manager is configured to receive an indication from the input handling apparatus of a first user interaction with the virtual object in the 3D environment, and in response, to instruct the rendering engine to transition the virtual object from the first mode in which the virtual object displays the first set of third-party content to a second mode in which the virtual object displays a second set of third-party content.” ¶0094, “identify the resource for the client system when the virtual object is first instantiated or at another time, e.g., in response to detecting a user's interaction with the object.”); and
transmitting the visual content item to a client device associated with a user, the client device to display the visual content item to the user (Fig. 3, ¶0031, “an example client computing system configured to render a 3D environment showing third-party content specified by a content distribution system.” ¶0054, “The client computing system 300 communicates with the content distribution system 350 over a network (e.g., the Internet, a local area network, a wireless broadband network). Although not shown in FIG. 3, the client computing system 300 can communicate with other systems in addition to content distribution system 350 for various purposes. For example, the client computing system 300 may communicate with servers for an online application store or developer servers to obtain virtual reality, augmented reality, and/or mixed reality applications that enable the system 300 to render a 3D environment. Likewise, the client computing system 300 may communicate the servers for an online application store or developer servers to obtain definition files for a 3D environment, e.g., an immersive virtual reality game.”).
The examiner maintains that Upadhyay teaches the claim in its entirety.
Assuming, arguendo, that Upadhyay does not disclose that a state of the visual content item is different at the different times, Terrano remedies any such deficiency.
Terrano teaches wherein a state of the visual content item is different at the different times (Terrano, Fig. 3B, Col 7, lines 45-67, “the system may determine that the user is looking out of an office building near a lunch time and infer that the user is likely to be hungry.” “The system may determine to render the illustration 316 (related to the nearby restaurant) on the road surface 312 to the user based on above information.” Col 9, lines 35-54, “the digital content to be displayed to a user may be determined based on user information associated with that user. The user information that may be used for determining the digital content may include, for example, but is not limited to, a social network profile, a search history, a browsing history, a purchase history, a user interest, a preference, a user setting, an emotion state, a behavioral state of the user, a time related to the behavioral state of the user, an interaction of the user with an object, an indication of the user, a user input, a gesture, a command, a navigation route, a target place of a navigation system” Col 10, lines 65-67, “the system may send the digital content to the user on a daily basis, on a pre-determined time interval, or a time interval determined by the user. In particular embodiments, the system may send to the digital content to the user based on a time interval adaptively determined by the system (e.g., based on user mood or emotional status).”).
Upadhyay and Terrano are considered analogous art because both pertain to viewing virtual content in a virtual environment. It would have been obvious before the effective filing date of the claimed invention to have modified Upadhyay with the feature of “a state of the visual content item is different at the different times” as taught by Terrano. The claim would have been obvious because the technique for improving a particular class of devices was part of the ordinary capabilities of a person of ordinary skill in the art in view of the teaching of the technique for improvement in other situations, and the combination may render the same content at the same places of the virtual space or replace it with other display content such as art, daily inspirations, or replacement advertisements (Terrano, Summary).
As to claim 2, claim 1 is incorporated and the combination of Upadhyay and Terrano discloses the request for visual content comprises a different visual content item having two spatial dimensions (Upadhyay, ¶0020, “The type of the first set of third-party content that the virtual object displays in the first mode can be images. The type of the second set of third-party content that the virtual object displays in the second mode can be videos or 3D models.”), and
determining the dimensions of the visual scene comprises: determining three spatial dimensions of the visual scene, the three spatial dimensions comprising the two spatial dimensions of the different visual content item and a new spatial dimension (Upadhyay, Fig. 2A, ¶0022, “The first set of third-party content can include a collection of images. Rendering the virtual object in the first mode can include rendering a cube that shows a respective image from the collection of images on each face of the cube” ¶0049, “in a first display mode, a set of third-party images may be displayed on the surfaces of a 3D model that defines a shape of the virtual object. In contrast, a second display mode may provide text, video, or content in another media format to be presented with the virtual object.”).
As to claim 3, claim 1 is incorporated and the combination of Upadhyay and Terrano discloses generating the one or more visual objects comprises: selecting the one or more visual objects from a plurality of candidate visual objects, wherein the plurality of candidate visual objects is generated before the request for visual content is generated (Upadhyay, ¶0044, “the distribution parameters (e.g., selection criteria) for a particular third-party content can include distribution keywords that must be matched (e.g., by 3D resources or terms specified in the request 112) in order for the third-party content to be eligible for presentation.” ¶0045, “different computing devices in the set 114 can each analyze a different portion of the third-party corpus database 116 to identify various third-party content having distribution parameters that match information included in the request 112. In some implementations, each given computing device in the set 114 can analyze a different data dimension (or set of dimensions) and pass results (Res 1-Res 3) 118a-118c of the analysis back to the third-party content distribution system 110. For example, the results 118a-118c provided by each of the computing devices in the set may identify a subset of third-party content that are eligible for distribution in response to the request and/or a subset of the third-party content that have certain distribution parameters or attributes.” ¶0046, “enable the user device 106 to integrate the set of winning third-party content into the 3D environment, e.g., for presentation on an eligible virtual object in the 3D environment.” ¶0048, “a virtual reality game may include 3PCE virtual objects to present third-party content (e.g., advertisements) that were generated independently of the game itself and its 3D environment.”).
As to claim 4, claim 1 is incorporated and the combination of Upadhyay and Terrano discloses receiving the request for visual content comprises: receiving the request for visual content from another client device associated with another user that is different from the user (Upadhyay, ¶0058, “developers can insert a tag, a script, or executable code to the definition file(s) for a 3D environment that, when executed, instantiates a 3PCE virtual object in the 3D environment in accordance with any parameters specified therein. For example, the inserted tag, script, or executable code, when processed by the client computing system 302, may cause the client computing system 302 to access a particular 3D model (e.g., a cube or a sphere), to request third-party content from the content distribution system 350, and to render the 3D model with third-party content returned from the content distribution system 350 (or returned from one or more other third-party content servers identified by the content distribution system 350). Developers can manually insert the tag, script, or executable code into the definition (e.g., source code or executable code) for a 3D environment, or the code may be inserted automatically by a programming or design environment used by developers to create 3D environments (e.g., a What You See Is What You Get (WYSIWYG) development environment).” A developer can be the other user.).
As to claim 5, claim 4 is incorporated and the combination of Upadhyay and Terrano discloses the other user is associated with an online system that provides one or more online items for display to the user, and generating the one or more visual objects comprises: generating the one or more visual objects based on an interaction of the user with the one or more online items (Upadhyay, ¶0049, “A user may interact with a 3PCE virtual object, for example, by selecting the object, gazing at the object, approaching the object, or a combination of these. In some implementations, a detected user interaction with a 3PCE object triggers the object to change display modes by transitioning from one active display mode to another active display mode. Thus, if a user has interest in third-party content presented on a virtual object in a first display mode, the user may select the object to view additional or different content related to the same topic of the initially presented third-party content. In some implementations, the system defers obtaining or presenting the additional content until a user indicates his or her interest in the content by interacting with the virtual object, thereby reducing transmissions of third-party content from a content distribution system that would not be rendered in the 3D environment.” ¶0058, “developers can insert a tag, a script, or executable code to the definition file(s) for a 3D environment that, when executed, instantiates a 3PCE virtual object in the 3D environment in accordance with any parameters specified therein. For example, the inserted tag, script, or executable code, when processed by the client computing system 302, may cause the client computing system 302 to access a particular 3D model (e.g., a cube or a sphere), to request third-party content from the content distribution system 350, and to render the 3D model with third-party content returned from the content distribution system 350 (or returned from one or more other third-party content servers identified by the content distribution system 350). Developers can manually insert the tag, script, or executable code into the definition (e.g., source code or executable code) for a 3D environment, or the code may be inserted automatically by a programming or design environment used by developers to create 3D environments (e.g., a What You See Is What You Get (WYSIWYG) development environment).” A developer can be the other user.).
As to claim 6, claim 5 is incorporated and the combination of Upadhyay and Terrano discloses receiving information of the interaction of the user with the one or more online items from the online system associated with the other user (Upadhyay, ¶0062, “the input handler 310 includes logic for detecting predefined user actions within a 3D environment. The predefined user actions can include interactions with a 3PCE virtual object in the 3D environment or actions that indicate a user likely is about to interact with a 3PCE virtual object.” ¶0065-0066, “the virtual object manager 312 is further configured to handle responses to detected user interactions with 3PCE virtual objects in a 3D virtual environment. For instance, upon detecting that a user's actions in the virtual environment match a predefined set of user actions that indicate a user has interacted, or likely is about to interact, with a 3PCE virtual object, the input handler 310 may call on the virtual object manager 312 to determine how to respond to the user interaction, or the anticipated user interaction, with the object. In the call to the virtual object manager 312, the input handler 310 may pass an identifier for the object that is the target of the interaction or anticipated interaction, and may further pass information that characterize the type of interaction with the object (e.g., gaze and select, extended gaze, hand-based object manipulation). The virtual object manager 312 may use the information passed from the input handler 310 to determine how to respond to a given user interaction with an object or how to respond to an anticipated user interaction with the object. In some implementations, the virtual object manager 312 maintains a log of user interactions or anticipated user interactions with 3PCE objects that have been detected in a virtual environment, which is automatically and periodically reported to the content distribution system 350 or another system that analyzes patterns of user interactions with 3PCE objects in 3D virtual environments.” ¶0074, “the third-party content is independently created by third-party content providers (e.g., advertisers) and is then submitted to the content distribution system 350 for targeted distribution to end users. In some implementations, content providers may create the third-party content using a content creation platform or service of the content distribution system 350.” ¶0058, “developers can insert a tag, a script, or executable code to the definition file(s) for a 3D environment that, when executed, instantiates a 3PCE virtual object in the 3D environment in accordance with any parameters specified therein. For example, the inserted tag, script, or executable code, when processed by the client computing system 302, may cause the client computing system 302 to access a particular 3D model (e.g., a cube or a sphere), to request third-party content from the content distribution system 350, and to render the 3D model with third-party content returned from the content distribution system 350 (or returned from one or more other third-party content servers identified by the content distribution system 350).” The developer is the other user.).
As to claim 7, claim 1 is incorporated and the combination of Upadhyay and Terrano discloses the request for visual content comprises a content item, and generating the one or more visual objects comprises:
determining a theme of the visual scene by analyzing the content item (Upadhyay, ¶0059, “The context associated with a 3PCE virtual object can include, for example, characteristics of the 3D environment in which the object is placed, characteristics of the client computing system 302, characteristics of the user or an account of the user viewing the 3D environment, characteristics or preferences of the developer of the 3D environment, or a combination of two or more of these contexts.”); and
generating the one or more visual objects based on the theme of the visual scene (Upadhyay, ¶0082, “To determine winning third-party content, the content selector 366 evaluates eligible third-party content items with respect to various selection criteria associated with a request. The selection criteria may include keywords or other context data specified in a request. In some implementations, the selection criteria include profile data that indicates interests and preferences of the end user of the client system 302, profile data of third-party content providers, and information about the 3D environment in which the virtual object is presented.” ¶0087, “a developer may specify parameter values indicating the shape, size, and location of the virtual object in the 3D environment. These and other attributes may be used by a content selector at the content distribution system, a transition-state model selector at the client system or the content distribution system, or both, to select third-party content to present with the object or to select a transition state model for the object, respectively.”).
As to claim 10, claim 1 is incorporated and the combination of Upadhyay and Terrano discloses the one or more visual objects comprises a visual object that is interactive, and the method further comprises: providing a user interface that allows the user to interact with the visual object (Upadhyay, ¶0049, “A user may interact with a 3PCE virtual object, for example, by selecting the object, gazing at the object, approaching the object, or a combination of these. In some implementations, a detected user interaction with a 3PCE object triggers the object to change display modes by transitioning from one active display mode to another active display mode. Thus, if a user has interest in third-party content presented on a virtual object in a first display mode, the user may select the object to view additional or different content related to the same topic of the initially presented third-party content.”).
As to claim 11, the combination of Upadhyay and Terrano discloses one or more non-transitory computer-readable media storing instructions executable to perform operations, the operations comprising: receiving a request for visual content, the visual content to illustrate a visual scene; determining dimensions of the visual scene, wherein the dimensions include a spatial dimension and a temporal dimension, the spatial dimension indicating a size of the visual scene within a multi-dimensional space, the temporal dimension indicating different times; determining a viewpoint in the visual scene; generating one or more visual objects based on the request, dimensions of the visual scene, and the viewpoint; generating a visual content item by building the visual scene based on the one or more visual objects, wherein a state of the visual content item is different at the different times; and transmitting the visual content item to a client device associated with a user, the client device to display the visual content item to the user (See claim 1 for detailed analysis.).
As to claim 12, claim 11 is incorporated and the combination of Upadhyay and Terrano discloses wherein the request for visual content comprises a different visual content item having two spatial dimensions, and determining the dimensions of the visual scene comprises: determining three spatial dimensions of the visual scene, the three spatial dimensions comprising the two spatial dimensions of the different visual content item and a new spatial dimension (See claim 2 for detailed analysis.).
As to claim 13, claim 11 is incorporated and the combination of Upadhyay and Terrano discloses wherein generating the one or more visual objects comprises: selecting the one or more visual objects from a plurality of candidate visual objects, wherein the plurality of candidate visual objects is generated before the request for visual content is generated (See claim 3 for detailed analysis.).
As to claim 14, claim 11 is incorporated and the combination of Upadhyay and Terrano discloses wherein receiving the request for visual content comprises: receiving the request for visual content from another client device associated with another user that is different from the user (See claim 4 for detailed analysis.).
As to claim 15, claim 14 is incorporated and the combination of Upadhyay and Terrano discloses wherein the other user is associated with an online system that provides one or more online items for display to the user, and generating the one or more visual objects comprises: generating the one or more visual objects based on an interaction of the user with the one or more online items (See claim 5 for detailed analysis.).
As to claim 16, claim 15 is incorporated and the combination of Upadhyay and Terrano discloses receiving information of the interaction of the user with the one or more online items from the online system associated with the other user (See claim 6 for detailed analysis.).
As to claim 17, claim 11 is incorporated and the combination of Upadhyay and Terrano discloses wherein the request for visual content comprises a content item, and generating the one or more visual objects comprises: determining a theme of the visual scene by analyzing the content item; and generating the one or more visual objects based on the theme of the visual scene (See claim 7 for detailed analysis.).
As to claim 19, the combination of Upadhyay and Terrano discloses an apparatus, comprising: a computer processor for executing computer program instructions; and a non-transitory computer-readable memory storing computer program instructions executable by the computer processor to perform operations comprising: receiving a request for visual content, the visual content to illustrate a visual scene, determining dimensions of the visual scene, wherein the dimensions include a spatial dimension and a temporal dimension, the spatial dimension indicating a size of the visual scene within a multi-dimensional space, the temporal dimension indicating different times, determining a viewpoint in the visual scene, generating one or more visual objects based on the request, dimensions of the visual scene, and the viewpoint, generating a visual content item by building the visual scene based on the one or more visual objects, wherein a state of the visual content item is different at the different times, and transmitting the visual content item to a client device associated with a user, the client device to display the visual content item to the user (See claim 1 for detailed analysis.).
As to claim 20, claim 19 is incorporated and the combination of Upadhyay and Terrano discloses the request for visual content comprises a different visual content item having two spatial dimensions, and determining the dimensions of the visual scene comprises: determining three spatial dimensions of the visual scene, the three spatial dimensions comprising the two spatial dimensions of the different visual content item and a new spatial dimension (See claim 2 for detailed analysis.).
Claims 8, 9, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Upadhyay et al. (US Pub 2020/0279429 A1) in view of Terrano (US Patent 10,699,488 B1) and Menon et al. (US Pub 2025/0157164 A1).
As to claim 8, claim 1 is incorporated and Upadhyay discloses the request for visual content comprises a content item that illustrates the visual scene at a first viewpoint (Upadhyay, ¶0012, “The input handling apparatus can be configured to receive user input to move a user's current viewing location within the 3D environment and to detect a relationship between the user's current viewing location and the location of the virtual object in the 3D environment.”),
determining the viewpoint comprises changing the first viewpoint to a second viewpoint (Upadhyay, ¶0057, “a user of the client computing system 302 can explore a virtual 3D environment by moving his or her head to look around the environment (e.g., in a virtual reality system), by moving around the environment, by manipulating objects in the environment, or a combination of these.”), and
Upadhyay does not explicitly disclose that generating the one or more visual objects comprises generating the one or more visual objects based on the second viewpoint.
However, this feature would have been obvious to one of ordinary skill in the art.
Menon teaches generating the one or more visual objects comprises generating the one or more visual objects based on the second viewpoint (Menon, ¶0060, “In the example of FIG. 2B, a circular table and a 3D model on top of it is surrounded by avatars of users. To provide the shared perspective mode, the model is rotated about the table's axis, enabling all users to view the model from the same angle. To provide the users with the same perspective view as shown at each of 220, 225, 226, and 227, the asset 230 is rendered at each of the views by applying a perspective transform based on the shared frame of reference, i.e., the frame of reference of the user 223. Each of the views 220, 225, 226, and 227 are associated with a different user from the four users at the table-top review mode. In can be understood that while the four users are together in the XR environment and they have a shared perspective mode, each one of them will have the view as shown at 220, 225, 227, 226 with regard to the asset 230.” ¶0063-0064. ¶0069-0070, ¶0092-0093.).
Upadhyay and Menon are considered analogous art because both pertain to viewing virtual content in a virtual environment. It would have been obvious before the effective filing date of the claimed invention to have modified Upadhyay with the feature of “generating the one or more visual objects comprises generating the one or more visual objects based on the second viewpoint” as taught by Menon. The claim would have been obvious because the technique for improving a particular class of devices was part of the ordinary capabilities of a person of ordinary skill in the art, in view of the teaching of the technique for improvement in other situations.
As to claim 9, claim 1 is incorporated and Upadhyay does not disclose that the request for visual content is received from the user, the one or more visual objects comprises a graphical representation of the user, and the viewpoint is a viewpoint of the graphical representation of the user.
Menon teaches the request for visual content is received from the user, the one or more visual objects comprises a graphical representation of the user, and the viewpoint is a viewpoint of the graphical representation of the user (Menon, ¶0004, “the users can see other users as spatially distant avatars to maintain the understanding of multi-user viewing. In some implementations, the shared perspective view can be provided by rotating an asset to match the perspective of each user based on the view that is shared. In some implementations, the shared perspective view can be provided by changing the position of the avatars of the users to match the shared perspective so that they can perceive the model from the same perspective.” ¶0004, “The other avatar(s) are rendered by repositioning according to a predefined offset from the shared position, where their gaze and pointing at the content that in the XR environment is reoriented as an adjustment that is needed in view of the offset from the viewpoint of the avatar.” ¶0057, “When users enter an XR environment as a collaborative space substantially similar to the XR environment 200 of FIG. 2A, each user has their own viewpoint that is discrete and matches their view into the XR environment (their viewpoint and view orientation). The viewpoint is a location of an avatar in the XR environment, and this location can be changed based on user input, for example, based on the user moving in the physical world, based on user joystick input, based on the user pointing at something in the XR environment and clicking a button to reposition the avatar within the XR environment, etc. The view orientation of the avatar is the 3D direction of the view into the XR environment. In some instances, such view orientation can be changed based on user actions such as when the user moves their head around while wearing a VR headset, or based on other user instructions.”).
Upadhyay and Menon are considered analogous art because both pertain to viewing virtual content in a virtual environment. It would have been obvious before the effective filing date of the claimed invention to have modified Upadhyay with the features of “the request for visual content is received from the user, the one or more visual objects comprises a graphical representation of the user, and the viewpoint is a viewpoint of the graphical representation of the user” as taught by Menon. The claim would have been obvious because the technique for improving a particular class of devices was part of the ordinary capabilities of a person of ordinary skill in the art, in view of the teaching of the technique for improvement in other situations.
As to claim 18, claim 11 is incorporated and the combination of Upadhyay, Terrano, and Menon discloses wherein: the request for visual content comprises a content item that illustrates the visual scene at a first viewpoint, determining the viewpoint comprises changing the first viewpoint to a second viewpoint, and generating the one or more visual objects comprises generating the one or more visual objects based on the second viewpoint (See claim 8 for detailed analysis.).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN, whose telephone number is [telephone number]. The examiner can normally be reached on [days and hours].
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, [supervisor name], can be reached at [telephone number]. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YU CHEN/
Primary Examiner, Art Unit 2613