Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s amendments and remarks submitted 01/22/2026 have been entered and considered. No claims have been amended. This action is made final.
Response to Arguments
Applicant’s arguments filed on 01/22/2026 have been fully considered but are not persuasive.
Regarding Claim 1 limitations “… concurrently displaying, via the one or more display generation components:
an avatar at a first location in a three-dimensional environment; and
a respective user interface that includes one or more controls for editing a visual appearance of the avatar that are incorporated into a surface that is displayed proximate to the avatar but is spaced apart from the avatar in a simulated depth dimension of the three-dimensional environment relative to a viewpoint of a user; …”.
Applicant argues “Applicant respectfully submits that the proposed combination of Valdivia and Dye is improper, as it represents both impermissible hindsight bias and would require a modification to the principle of operation of the primary reference Valdivia…. The Office Action then appears to suggest that Valdivia could be combined with Dye such that the avatar modification features of Dye could be incorporated into interactive surface 2720 of Valdivia. However, such a combination would by necessity change the principle of operation of Valdivia. The user interface in FIG. 27 of Valdivia is a "friend list" that allows a user to select different contacts to "initiate a communication with the corresponding contact." Valdivia, ¶ [0171]. Furthermore, the two avatars that are shown in FIG. 27 of Valdivia are avatars of "two other users (e.g., the users "Lucy" and "Christophe").”.
However, Valdivia, abstract, [0005], the invention describes a variety of different ways of rendering and interacting with a virtual (or augmented) reality environment. A method includes receiving a gaze input from a gaze-tracking input device associated with a user, wherein the gaze input indicates a first focal point in a region of a rendered virtual space; determining an occurrence of a trigger event; causing a hit target associated with the focal point to be selected; and sending information configured to render a response to the selection of the hit target on a display device associated with the user.
[0171] FIG. 27 illustrates an example of a user in a virtual room browsing a friend list. In particular embodiments, the virtual reality system may allow for real-time virtual communications among users. In particular embodiments, the communications may be associated or integrated with a communications application (e.g., a communications application associated with the social-networking system 160) that has information about the user's friends or contacts. Users may be able to access a friend or contact list and quickly initiate communications with other users. As an example and not by way of limitation, referencing FIG. 27, a user may activate an interactive element associated with an online social network (e.g., the element 2710 associated with the social-networking system 160), which may summon a menu of contact-items (e.g., the contact item 2730 corresponding to a contact named "Stephanie"), any of which may be activate to initiate a communication with the corresponding contact. In particular embodiments, as described elsewhere herein, these communications may occur within a virtual room (i.e., a virtual room may be a communication session among the users present in the virtual room). In these embodiments, a particular user may join or create a virtual room, and other users may join subsequently (e.g., on their own initiative if they have the requisite permissions, or upon receiving an invite). A user in the virtual room may be able to see avatars of other users. As an example and not by way of limitation, referencing FIG. 27, the user may be in a virtual room with two other users (e.g., the users "Lucy" and "Christophe"), whose avatars may be positioned around the interactive surface 2720. The avatars may move in real-time to reflect motions made by the respective users. As an example and not by way of limitation, when a user raises a right hand, the avatar of the user may correspondingly raise its hand. As another example and not by way of limitation, when a user speaks, the avatar of the user may correspondingly move its mouth to convey that the respective user is speaking.
Therefore, Valdivia teaches a user interface in which an avatar and its corresponding menu items are arranged at different depths. This gives the user a more sophisticated 3D virtual experience.
Dye, abstract, the invention describes a method of displaying visual effects in image data. In some examples, visual effects include an avatar displayed on a user's face. In some examples, visual effects include stickers applied to image data. In some examples, visual effects include screen effects. In some examples, visual effects are modified based on depth data in the image data.
[0026] A method is described. The method is performed at an electronic device having a camera and a display apparatus. The method comprises: displaying, via the display apparatus, a representation of image data captured via the one or more cameras, wherein the representation includes a representation of a subject and the image data corresponds to depth data that includes depth data for the subject; displaying, via the display apparatus, a representation of a virtual avatar that is displayed in place of at least a portion of the representation of the subject, wherein the virtual avatar is placed at simulated depth relative to the representation of the subject as determined based on the depth data for the subject, displaying the representation of the virtual avatar includes: in accordance with a determination, based on the depth data, that a first portion of the virtual avatar satisfies a set of depth-based display criteria, wherein the depth-based display criteria include a requirement that the depth data for the subject indicate that the first portion of the virtual avatar has a simulated depth that is in front of a corresponding first portion of the subject.
[0218] FIGS. 6A-6BQ illustrate exemplary user interfaces for displaying visual effects in a messaging application, in accordance with some embodiments.
[0229] In FIG. 6E, in response to detecting input 623, device 600 activates camera 602 (e.g., switches from the rear-facing camera) and updates image display region 620 to display live camera preview 620-1 from camera 602, showing a representation of subject 632 positioned in the field-of-view of camera 602 and background 636 displayed behind subject 632. As discussed herein, image data captured using camera 602 includes, in some embodiments, depth data that can be used to determine a depth of objects in the field-of-view of camera 602. In some embodiments, device 600 parses objects (e.g., in image data) based on a detected depth of those objects, and uses this determination to apply the visual effects discussed herein. For example, device 600 can categorize subject 632 as being in the foreground of the live camera preview 620-1 and objects positioned behind the user as being in the background of the live camera preview 620-1. These background objects are referred to generally herein as background 636.
Therefore, Dye teaches a user interface for modifying the visual appearance of an avatar.
Valdivia and Dye are analogous art because they both teach methods of interacting with avatars displayed in a 3D virtual environment. Dye further teaches a method of modifying the appearance of the avatar based on user input. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of VR environment interaction (taught in Valdivia) to further use the avatar editor (taught in Dye), so as to provide the user with a richer communication feature in the virtual environment.
Indeed, Valdivia’s GUI is a virtual conference meeting that shows avatars of other participants, while Dye’s GUI is a visual appearance editing interface that shows the avatar of the current user. The avatar appearance editing feature is fairly common in the art; for example, in a video game a player can choose an avatar and apply different skins to customize it. Valdivia is cited for the 3D arrangement between an avatar and its corresponding menu items. Such an arrangement can be used in an avatar editing GUI so that the look and feel of customizing the appearance of an avatar is realistic. The modified avatar can then be used in applications such as a virtual conference meeting, so that different participants in a meeting can have different appearances.
Therefore, the combination of Valdivia and Dye still teaches the above-mentioned limitations of Claim 1.
Regarding Claim 76 limitations: “…in response to detecting the movement by the user, displaying, via the one or more display generation components, movement of the avatar from the first avatar location in the three-dimensional environment to a modified avatar location in the three-dimensional environment different from the first avatar location in the three-dimensional environment while maintaining display of the respective user interface incorporated into the surface at the first user interface location in the three-dimensional environment.”.
Applicant argues “The cited portion of Valdivia does not appear to disclose displaying movement of the avatar from a first location in a three-dimensional environment to a different location in the three-dimensional environment in response to detected movement by the user, and also does not appear to disclose maintaining display of a user interface at a set location while the avatar moves in response to detected user movement.”.
However, Valdivia, [0192], In particular embodiments, when a callee-user accepts a communication from a caller-user via the virtual watch (or another similar virtual device), the communication may appear in the virtual space as a window or as an avatar attached or otherwise associated with the virtual watch. … In particular embodiments, the callee-user may be able to detach the window or avatar from the virtual watch and move it into the virtual space (e.g., by picking up the window or avatar with a gesture from the other hand and placing the window or avatar in a region of the virtual space detached from the virtual watch).
Therefore, as Valdivia describes in [0192], in the GUI for the user to accept a communication (e.g., a call) from another user, the communication is displayed as an avatar associated with a virtual watch. The current user can move the avatar (e.g., with a gesture) from the virtual watch (a first location in the virtual environment) into the virtual space (a second location in the virtual environment). Such a location change is in response to detection of the user’s movement, for example a hand gesture. During such movement, the virtual watch remains displayed in the virtual environment.
Therefore, Valdivia still teaches the above-mentioned limitations of Claim 76.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 73-76 and 85-86 are rejected under 35 U.S.C. 103 as being unpatentable over Valdivia et al. (US 20180095635) in view of Dye (US 20190342507).
Regarding Claim 73. Valdivia teaches A computer system configured to communicate with one or more display generation components, the computer system comprising:
one or more processors; and
memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for:
concurrently displaying, via the one or more display generation components (Valdivia, abstract, [0005], the invention describes a variety of different ways of rendering and interacting with a virtual (or augmented) reality environment. A method includes receiving a gaze input from a gaze-tracking input device associated with a user, wherein the gaze input indicates a first focal point in a region of a rendered virtual space; determining an occurrence of a trigger event; causing a hit target associated with the focal point to be selected; and sending information configured to render a response to the selection of the hit target on a display device associated with the user.
[0222] FIG. 56 illustrates an example computer system 5600. In particular embodiments, one or more computer systems 5600 perform one or more steps of one or more methods described or illustrated herein.
[0224] In particular embodiments, computer system 5600 includes a processor 5602, memory 5604, storage 5606, an input/output (I/O) interface 5608, a communication interface 5610, and a bus 5612.):
an avatar at a first location in a three-dimensional environment; and
a respective user interface that includes one or more controls for editing a visual appearance of the avatar that are incorporated into a surface that is displayed proximate to the avatar but is spaced apart from the avatar (Valdivia, [0171] FIG. 27 illustrates an example of a user in a virtual room browsing a friend list. In particular embodiments, the virtual reality system may allow for real-time virtual communications among users. In particular embodiments, the communications may be associated or integrated with a communications application (e.g., a communications application associated with the social-networking system 160) that has information about the user's friends or contacts. Users may be able to access a friend or contact list and quickly initiate communications with other users. As an example and not by way of limitation, referencing FIG. 27, a user may activate an interactive element associated with an online social network (e.g., the element 2710 associated with the social-networking system 160), which may summon a menu of contact-items (e.g., the contact item 2730 corresponding to a contact named "Stephanie"), any of which may be activate to initiate a communication with the corresponding contact. In particular embodiments, as described elsewhere herein, these communications may occur within a virtual room (i.e., a virtual room may be a communication session among the users present in the virtual room). In these embodiments, a particular user may join or create a virtual room, and other users may join subsequently (e.g., on their own initiative if they have the requisite permissions, or upon receiving an invite). A user in the virtual room may be able to see avatars of other users. As an example and not by way of limitation, referencing FIG. 27, the user may be in a virtual room with two other users (e.g., the users "Lucy" and "Christophe"), whose avatars may be positioned around the interactive surface 2720. The avatars may move in real-time to reflect motions made by the respective users. As an example and not by way of limitation, when a user raises a right hand, the avatar of the user may correspondingly raise its hand. As another example and not by way of limitation, when a user speaks, the avatar of the user may correspondingly move its mouth to convey that the respective user is speaking.
[media_image1.png: FIG. 27 of Valdivia (greyscale)]
As shown in the figure above, the avatar is placed behind the editing user interface.);
Valdivia fails to explicitly teach, however, Dye teaches avatar in a simulated depth dimension of the three-dimensional environment relative to a viewpoint of a user;
while concurrently displaying the avatar and the respective user interface, detecting an input directed to a control in the respective user interface (Dye, abstract, the invention describes a method of displaying visual effects in image data. In some examples, visual effects include an avatar displayed on a user's face. In some examples, visual effects include stickers applied to image data. In some examples, visual effects include screen effects. In some examples, visual effects are modified based on depth data in the image data.
[0026] A method is described. The method is performed at an electronic device having a camera and a display apparatus. The method comprises: displaying, via the display apparatus, a representation of image data captured via the one or more cameras, wherein the representation includes a representation of a subject and the image data corresponds to depth data that includes depth data for the subject; displaying, via the display apparatus, a representation of a virtual avatar that is displayed in place of at least a portion of the representation of the subject, wherein the virtual avatar is placed at simulated depth relative to the representation of the subject as determined based on the depth data for the subject, displaying the representation of the virtual avatar includes: in accordance with a determination, based on the depth data, that a first portion of the virtual avatar satisfies a set of depth-based display criteria, wherein the depth-based display criteria include a requirement that the depth data for the subject indicate that the first portion of the virtual avatar has a simulated depth that is in front of a corresponding first portion of the subject.
[0218] FIGS. 6A-6BQ illustrate exemplary user interfaces for displaying visual effects in a messaging application, in accordance with some embodiments.
[0229] In FIG. 6E, in response to detecting input 623, device 600 activates camera 602 (e.g., switches from the rear-facing camera) and updates image display region 620 to display live camera preview 620-1 from camera 602, showing a representation of subject 632 positioned in the field-of-view of camera 602 and background 636 displayed behind subject 632. As discussed herein, image data captured using camera 602 includes, in some embodiments, depth data that can be used to determine a depth of objects in the field-of-view of camera 602. In some embodiments, device 600 parses objects (e.g., in image data) based on a detected depth of those objects, and uses this determination to apply the visual effects discussed herein. For example, device 600 can categorize subject 632 as being in the foreground of the live camera preview 620-1 and objects positioned behind the user as being in the background of the live camera preview 620-1. These background objects are referred to generally herein as background 636.); and
in response to detecting the input directed to the control in the respective user interface, changing an appearance of the avatar based on the input (Dye, [0232] In FIG. 6F, device 600 detects input 626 (e.g., a tap gesture on display 601) on avatar effects affordance 624-1.
[0233] In FIG. 6G, in response to detecting input 626, device 600 displays avatar options menu 628 with a scrollable listing of avatar options 630. Avatar options menu 628 also includes selection region 629 for indicating a selected one of avatar options 630. As shown in FIG. 6G, robot avatar option 630-3 is positioned in selection region 629, which indicates robot avatar option 630-1 is selected.
[media_image2.png: figure from Dye (greyscale)]
).
Valdivia and Dye are analogous art because they both teach methods of interacting with avatars displayed in a 3D virtual environment. Dye further teaches a method of modifying the appearance of the avatar based on user input. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of VR environment interaction (taught in Valdivia) to further use the avatar editor (taught in Dye), so as to provide the user with a richer communication feature in the virtual environment.
Regarding Claim 74. The combination of Valdivia and Dye further teaches The computer system of claim 73, wherein the avatar is positioned in front of the surface relative to a viewpoint of the user (Dye, [0469] Device image data 1201 can be enlarged again (by again switching positions with participant image data 1204) in response to receiving gesture 1233 on window 1202, as shown in FIG. 12AD. Device 600 shows device image data 1201 and participant image data 1204 switched in FIG. 12AE, with the displayed visual effects and options display region 1206. Because effects mode is enabled, when options display region 1206 is displayed, visual effects option affordances 1214 are also displayed.
[0477] In FIG. 12AO, device detects input 1275 on user interface 1200 and, in response, hides options display region 1206, and shifts and resizes the position of various participant windows 1255, as shown in FIG. 12AP.
Therefore, by adjusting the simulated depth value, the avatar can be placed at different positions relative to other UI elements, including in front of other UI elements.).
The reasoning for the combination of Valdivia and Dye is the same as described for Claim 1.
Regarding Claim 75. The combination of Valdivia and Dye further teaches The computer system of claim 73, wherein the avatar is positioned behind the surface relative to a viewpoint of the user (Valdivia, as shown in Fig. 27, the avatar is placed behind the editing user interface.).
Regarding Claim 76. The combination of Valdivia and Dye further teaches The computer system of claim 73, the one or more programs further including instructions for:
concurrently displaying, via the one or more display generation components:
the avatar at a first avatar location in the three-dimensional environment; and the respective user interface incorporated into the surface at a first user interface location in the three-dimensional environment that is proximate to the avatar but is spaced apart from the avatar in the simulated depth dimension of the three-dimensional environment relative to the viewpoint of the user;
while concurrently displaying the avatar at the first avatar location in the three-dimensional environment and the respective user interface at the first user interface location in the three-dimensional environment, detecting, via one or more input devices, movement by the user; and
in response to detecting the movement by the user, displaying, via the one or more display generation components, movement of the avatar from the first avatar location in the three-dimensional environment to a modified avatar location in the three-dimensional environment different from the first avatar location in the three-dimensional environment while maintaining display of the respective user interface incorporated into the surface at the first user interface location in the three-dimensional environment (Valdivia, [0192], In particular embodiments, when a callee-user accepts a communication from a caller-user via the virtual watch (or another similar virtual device), the communication may appear in the virtual space as a window or as an avatar attached or otherwise associated with the virtual watch. … In particular embodiments, the callee-user may be able to detach the window or avatar from the virtual watch and move it into the virtual space (e.g., by picking up the window or avatar with a gesture from the other hand and placing the window or avatar in a region of the virtual space detached from the virtual watch).).
Claim 85 is similar in scope to Claim 73, and thus is rejected under the same rationale.
Claim 86 is similar in scope to Claim 73, and thus is rejected under the same rationale.
Claim 81 is rejected under 35 U.S.C. 103 as being unpatentable over Valdivia et al. (US 20180095635) in view of Dye (US 20190342507), and further in view of Poot et al. (US 20100302138) and Currivan et al. (US 20070115350).
Regarding Claim 81. The combination of Valdivia and Dye fails to explicitly teach, however, Poot teaches The computer system of claim 73, the one or more programs further including instructions for:
while concurrently displaying the avatar and the respective user interface incorporated into the surface, detecting, via one or more input devices, fifth movement by the user (Poot, abstract, the invention describes a system tracking a user's motions or gestures performed in a physical space and mapping them to a visual representation of the user. The user's gestures may be translated to a control in a system or application space, such as to open a file or to execute a punch in a punching game. Similarly, the user's gestures may be translated to a control in the system or application space for making modifications to a visual representation. A visual representation may be a display of a virtual object or a display that maps to a target in the physical space. In another example embodiment, the system may track the target in the physical space over time and apply modifications or updates to the visual representation based on the history data.) and
in response to detecting the fifth movement by the user:
in accordance with a determination that the fifth movement by the user includes movement in a third direction that exceeds a threshold distance of movement in the third direction (Poot, [0045] As shown, in FIG. 2, the computing environment 12 may include a gestures library 190 and a gestures recognition engine 192. The gestures recognition engine 192 may include a collection of gesture filters 191. Each filter 191 may comprise information defining a gesture along with parameters, or metadata, for that gesture. For instance, a throw, which comprises motion of one of the hands from behind the rear of the body to past the front of the body, may be implemented as a gesture filter 191 comprising information representing the movement of one of the hands of the user from behind the rear of the body to past the front of the body, as that movement would be captured by a depth camera. Parameters may then be set for that gesture. Where the gesture is a throw, a parameter may be a threshold velocity that the hand has to reach, a distance the hand must travel (either absolute, or relative to the size of the user as a whole), and a confidence rating by the recognizer engine that the gesture occurred. These parameters for the gesture may vary between applications, between contexts of a single application, or within one context of one application over time.
[0046] A gesture may be recognized as a request for avatar modification. In an example embodiment, the motion in the physical space may be representative of a gesture recognized as a request to modify the visual representation of a target.),
Valdivia, Dye and Poot are analogous art because they all teach methods of interacting with avatars displayed in a virtual environment. Poot further teaches a method of modifying the appearance of the avatar via user gestures. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of VR environment interaction (taught in Valdivia and Dye) to further use user gestures as modification commands for the avatar (taught in Poot), so as to provide the user with a more intuitive interaction method in the virtual environment.
The combination of Valdivia, Dye and Poot fails to explicitly teach, however, Currivan teaches cropping at least a portion of the avatar (Currivan, abstract, the invention describes a method for modifying facial video transmitted from a first videophone to a second videophone during a videophone conversation. A videophone comprises a videophone image processing system (VIPS) that stores one or more preferred images. The one or more preferred images may comprise an image of a person presented in an attractive appearance. The one or more preferred images may comprise one or more avatars. Additionally, the VIPS may be used to incorporate one or more facial features of the person into a preferred image or avatar. Furthermore, a replacement background may be incorporated into the preferred image or avatar. The VIPS transmits a preferred image of a first speaker of a first videophone to a second speaker of a second videophone by capturing an actual image of the first speaker and substituting at least a portion of said actual image with a stored image.
[0014] FIG. 1 is a block diagram illustrating a videophone image processing system (VIPS), as used in a videophone, which transmits video of a preferred image during a video telephony conversation, … The controller/graphics processor 104 may crop a facial feature of a selected avatar and subsequently insert a corresponding facial feature from the actual facial image. For example, a person's lips may be substituted or replaced using the object based video segmentation discussed. As a result, a user's lips, and associated lip movements are captured, and incorporated into the avatar or preferred facial image, for transmission to the other party, during a videophone conversation.).
Valdivia, Dye, Poot and Currivan are analogous art because they all teach methods of interacting with avatars displayed in a virtual environment. Currivan further teaches a method of modifying the appearance of the avatar by substituting partial features with features from a real facial image. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of VR environment interaction (taught in Valdivia, Dye and Poot) to further use the feature substitution on the avatar (taught in Currivan), so as to provide the user with a more customized avatar in the virtual environment.
Claims 82-84 are rejected under 35 U.S.C. 103 as being unpatentable over Valdivia et al. (US 20180095635) in view of Dye (US 20190342507), and further in view of Ohashi (US 20210090329).
Regarding Claim 82. The combination of Valdivia and Dye fails to explicitly teach, however, Ohashi teaches The computer system of claim 73, wherein displaying the avatar comprises displaying the avatar at a height relative to a floor of a three-dimensional environment that corresponds to a height of the user (Ohashi, abstract, the invention describes an information processing device including an acquiring unit that acquires positional information of a flat surface present in a first space around a first user and positional information of a flat surface present in a second space around a second user, and a transformation parameter determining unit that determines a coordinate transformation parameter for transforming position coordinates of the first space and the second space into position coordinates in a virtual space such that a position of the flat surface present in the first space and a position of the flat surface present in the second space coincide with each other. A position of an object present in the first space and a position of another object present in the second space are transformed into positions in the virtual space according to the determined coordinate transformation parameter.
[0025] … By using the position coordinates of each of the plurality of unit portions thus acquired, a position or a shape of an object having a complicated structure, such as the body of the user or a table, can be identified. Hereinafter, a coordinate system indicating positions in the actual space where the user is present with the position of the stereo camera 26 as the reference position is referred to as a local coordinate system.
[0028] Note that it is assumed that each of the plurality of client devices 20 identifies a surface whose relative position (height) to the user is close as the reference surface. For example, in a case where the client device 20a defines a floor surface in the location where the user A is present as the reference surface, the client device 20b also identifies a floor surface in the location where the user B is present as the reference surface.
[0029] … In a case where the reference surface identification unit 52 accepts specification for defining a table surface as the reference surface, the reference surface identification unit 52 may first identify a position of the head of the user and a position of the floor surface from a distribution of unit portions and then define a flat surface present in a predetermined height range between the head of the user and the floor surface as the reference surface.
[0030] Furthermore, the reference surface identification unit 52 may identify a vertical surface such as a wall surface as the reference surface in addition to the horizontal surface such as the floor surface.
[0045] The space updating unit 55 may also determine a position, a shape, and a size of another virtual object disposed in the virtual space by use of the information of the position coordinates of the unit portions received from each client device 20, in addition to the avatar of each user. For example, in a case where a cabinet is placed in the room where the user A is present, the space updating unit 55 disposes a virtual object having a shape similar to that of the cabinet also in the virtual space.
Therefore, the sizes and shapes of the users in different physical environments are scanned and matched to the avatars that are placed in the same virtual environment. The size and shape of the virtual object (avatar) are created according to the real size and shape. See further Figs. 2 & 4.).
Valdivia, Dye and Ohashi are analogous art because they all teach methods of interacting with avatars displayed in a virtual environment. Ohashi further teaches a method of creating an avatar according to the size and shape of the user in the real world. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of VR environment interaction (taught in Valdivia and Dye) to further use the avatar mapping method (taught in Ohashi), so as to provide the user with an interactive virtual environment that is closer to the real world.
Regarding Claim 83. The combination of Valdivia, Dye and Ohashi further teaches The computer system of claim 73, wherein displaying the avatar comprises displaying the avatar at a predetermined height in a three-dimensional environment (Ohashi, [0028] Note that it is assumed that each of the plurality of client devices 20 identifies a surface whose relative position (height) to the user is close as the reference surface. For example, in a case where the client device 20a defines a floor surface in the location where the user A is present as the reference surface, the client device 20b also identifies a floor surface in the location where the user B is present as the reference surface.
[0029] … In a case where the reference surface identification unit 52 accepts specification for defining a table surface as the reference surface, the reference surface identification unit 52 may first identify a position of the head of the user and a position of the floor surface from a distribution of unit portions and then define a flat surface present in a predetermined height range between the head of the user and the floor surface as the reference surface.
Therefore, avatars are displayed standing on the floor at a predetermined height, so that all avatars representing users from different rooms can be presented in one virtual room.).
The reasoning for the combination of Valdivia, Dye and Ohashi is the same as described for Claim 82.
Regarding Claim 84. The combination of Valdivia, Dye and Ohashi further teaches The computer system of claim 73, wherein displaying the avatar comprises displaying that avatar at a scale that corresponds to a size of the user (Ohashi, [0045] The space updating unit 55 may also determine a position, a shape, and a size of another virtual object disposed in the virtual space by use of the information of the position coordinates of the unit portions received from each client device 20, in addition to the avatar of each user. For example, in a case where a cabinet is placed in the room where the user A is present, the space updating unit 55 disposes a virtual object having a shape similar to that of the cabinet also in the virtual space.
An avatar is also a virtual object. Therefore, it would have been obvious to a person with ordinary skill in the art that the avatar is likewise created and scaled to the size of the virtual space according to the object it represents.).
The reasoning for the combination of Valdivia, Dye and Ohashi is the same as described for Claim 82.
Allowable Subject Matter
Claims 77-80 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding Claim 77, it recites “The computer system of claim 76, wherein:
the avatar is configured to move in a first direction, and is not configured to move in a second direction different from the first direction; and
displaying movement of the avatar from the first avatar location in the three-dimensional environment to the modified avatar location in the three-dimensional environment comprises displaying movement of the avatar in the first direction without moving the avatar in the second direction.” in the context of Claim 77.
The prior art of record, either alone or in combination, fails to teach or suggest the above-quoted limitation of Claim 77. Therefore, Claim 77 is allowable over the prior art.
Regarding Claim 78, it recites “The computer system of claim 76, the one or more programs further including instructions for:
while concurrently displaying the avatar at the first avatar location in the three-dimensional environment and the respective user interface at the first user interface location in the three-dimensional environment, detecting, via the one or more input devices, second movement by the user; and
in response to detecting the second movement by the user:
in accordance with a determination that the second movement by the user exceeds a distance threshold, displaying, via the one or more display generation components, movement of the respective user interface incorporated into the surface from the first user interface location in the three-dimensional environment to a modified user interface location in the three-dimensional environment different from the first user interface location in the three-dimensional environment.” in the context of Claim 78.
The prior art of record, either alone or in combination, fails to teach or suggest the above-quoted limitation of Claim 78. Therefore, Claim 78 is allowable over the prior art.
Regarding Claim 79, it recites “The computer system of claim 76, the one or more programs further including instructions for:
while concurrently displaying the avatar at the first avatar location in the three-dimensional environment and the respective user interface at the first user interface location in the three-dimensional environment, detecting, via the one or more input devices, third movement by the user; and
in response to detecting the third movement by the user:
in accordance with a determination that the third movement by the user exceeds a second distance threshold, ceasing display of the avatar while maintaining display of the respective user interface incorporated into the surface at the first user interface location in the three-dimensional environment.” in the context of Claim 79.
The prior art of record, either alone or in combination, fails to teach or suggest the above-quoted limitation of Claim 79. Therefore, Claim 79 is allowable over the prior art.
Regarding Claim 80, it recites “The computer system of claim 76, the one or more programs further including instructions for:
while concurrently displaying the avatar at the first avatar location in the three-dimensional environment and the respective user interface at the first user interface location in the three-dimensional environment, detecting, via the one or more input devices, fourth movement by the user; and
in response to detecting the fourth movement by the user:
in accordance with a determination that the fourth movement by the user exceeds a third distance threshold, ceasing display of the avatar while maintaining display of the respective user interface incorporated into the surface at the first user interface location in the three-dimensional environment; and
in accordance with a determination that the fourth movement by the user exceeds a fourth distance threshold different from the third distance threshold:
displaying, via the one or more display generation components, movement of the respective user interface incorporated into the surface from the first user interface location in the three-dimensional environment to a second user interface location in the three-dimensional environment different from the first user interface location in the three-dimensional environment; and
displaying, via the one or more display generation components, the avatar at a second avatar location in the three-dimensional environment, wherein:
the second user interface location in the three-dimensional environment and the second avatar location in the three-dimensional environment are centered on a viewpoint of the user after the fourth movement by the user; and
the second user interface location in the three-dimensional environment is proximate to the second avatar location in the three-dimensional environment but is spaced apart from the second avatar location in the three-dimensional environment in a simulated depth dimension of the three-dimensional environment relative to a viewpoint of the user after the fourth movement by the user.” in the context of Claim 80.
The prior art of record, either alone or in combination, fails to teach or suggest the above-quoted limitation of Claim 80. Therefore, Claim 80 is allowable over the prior art.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIN SHENG whose telephone number is (571) 272-5734. The examiner can normally be reached M-F 9:30AM-3:30PM and 6:00PM-8:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Xin Sheng/ Primary Examiner, Art Unit 2619