Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-20 are pending.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 6-10, 12-18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Inomata (Japan Patent Application Publication JP2018147086A).
Regarding Claim 1, Inomata teaches a method comprising:
at a processor of a head-mounted device (HMD) and one or more sensors (par 0047 Fig 4 a processor within control unit 121 of control device 120 of HMD 110 and sensors including at least wearing sensor 115 and face cameras 113,117, par 0095 HMD orientation sensor 114 and wearing sensor 115):
obtaining sensor data via the one or more sensors during a period of time (par 0044,0076-0079 Figs 2,4 moving images are acquired by the face cameras 113 and 117, and par 0095 HMD orientation sensor 114 and wearing sensor 115 detect HMD statuses, during a period of time spanning the user wearing the HMD through the user removing the HMD; par 0060 images are obtained repeatedly/continuously during the period);
determining a removal of the HMD during the period of time based on the sensor data (par 0095 Figs 4,14 HMD orientation sensor 114 and wearing sensor 115 may detect the user removing the HMD during the period), wherein
the removal of the HMD corresponds to a change of the HMD from a first position at which one or more displays of the HMD are positioned in front of eyes of a user (par 0060 Fig 3 the control unit 121, in normal HMD use in the wearing position, displays a left eye field image on the left eye display unit and displays a right eye field image on the right eye display unit; and par 0038 Fig 4 wearing sensor 115, in normal HMD use in the wearing position, senses a current between two pads at the bridge of the nose) to a second position at which the one or more displays are positioned elsewhere with respect to the user (paras 0038,0098 Fig 4 wearing sensor 115, as the HMD transitions from the worn to the non-worn position, senses that the current between the two pads at the bridge of the nose goes away and generates a remove event signal; par 0096 Fig 14 HMD orientation sensor 114 indicates a remove state when the HMD tilt is beyond a threshold angle; Fig 14 the HMD unworn state shown for user A indicates a second position at which the one or more displays are positioned elsewhere with respect to the user); and
generating user representation data (par 0044 Fig 10 control unit 121 generates the facial expression (state) of user U's avatar based on the moving images of user U's facial expression (state) acquired by the face cameras 113 and 117, and par 0056 generates the facial orientation of user U's avatar based on information regarding the position and tilt of the HMD 110 [of user U's HMD system during the period]; par 0088 Fig 10 avatar positions, hand locations, face orientations, and face expressions are updated (all corresponding to a three-dimensional (3D) appearance of the person during the period of time)), wherein
the user representation data is generated based on determining the removal of the HMD during the period of time (par 0101 Fig 12 when user A non-wearing is determined, non-wearing information is sent to the second user's system, which then sets the facial expression of user A's avatar/user representation to a default expression), wherein
a view of the user representation is provided based on the user representation data (par 0107 Fig 14 the avatar 4A, whose facial expression is now set to the default facial expression per the user representation data, and the speech bubble object 42A are visualized on the field of view image displayed on the HMD 110 of the user terminal 1B).
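As an illustrative aid only, and not part of the rejection, the removal-detection logic attributed to Inomata above (the nose-pad current going away per paras 0038 and 0098, and the HMD tilt exceeding a threshold angle per par 0096) might be sketched as follows. All names, the threshold value, and the sensor interface are hypothetical and are not drawn from Inomata or the claims.

    # Illustrative sketch only; all names and the threshold value are hypothetical.
    # Models the two removal signals discussed above: the wearing sensor's
    # nose-pad current going away (paras 0038, 0098) and the orientation
    # sensor's tilt exceeding a threshold angle (par 0096).

    from dataclasses import dataclass

    TILT_THRESHOLD_DEG = 60.0  # hypothetical threshold angle

    @dataclass
    class SensorSample:
        nose_pad_current_ma: float  # current between the two nose-bridge pads
        tilt_deg: float             # HMD tilt from the orientation sensor

    def hmd_removed(sample: SensorSample) -> bool:
        """Return True when either signal indicates a non-worn (removed) state."""
        current_gone = sample.nose_pad_current_ma <= 0.0
        tilted_past_limit = sample.tilt_deg > TILT_THRESHOLD_DEG
        return current_gone or tilted_past_limit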
Regarding Claim 2, Inomata teaches the method of Claim 1, wherein
determining removal of the HMD comprises determining that the HMD is being doffed (paras 0038,0098 Fig 4 wearing sensor 115, as the HMD transitions from the worn to the non-worn position, senses that the current between the two pads at the bridge of the nose goes away and generates a remove event signal; par 0096 Fig 14 HMD orientation sensor 114 indicates a remove state when the HMD tilt is beyond a threshold angle; Fig 14 the HMD unworn state shown for user A indicates a second position at which the one or more displays are positioned elsewhere with respect to the user).
Regarding Claim 3, Inomata teaches the method of Claim 1, wherein
determining removal of the HMD comprises detecting a change in position or orientation of the HMD based on motion sensor data (par 0096 Fig 14 HMD orientation/motion sensor 114 indicates a remove state when the HMD tilt changes to beyond a threshold angle).
Regarding Claim 4, Inomata teaches the method of Claim 1, wherein
determining removal of the HMD comprises determining a change of user eye position relative to an eye box region of the device (paras 0038,0098 Fig 4 wearing sensor 115, as the HMD transitions from the worn to the non-worn position, senses that the current between the two pads at the bridge of the nose goes away as the HMD [and correspondingly an eye box region of the device] is moved relative to the eyes of the user, and generates a remove event signal; Fig 14 the HMD unworn state shown for user A indicates a second position at which the one or more displays are positioned elsewhere with respect to the user).
Regarding Claim 6, Inomata teaches the method of Claim 1, wherein
generating the user representation data based on determining the removal of the HMD during the period of time comprises:
generating the user representation data using current user data prior to the removal (par 0044 Fig 10 control unit 121 generates the facial expression (state) of user U's avatar based on the moving images of user U's facial expression (state) acquired by the face cameras 113 and 117, and par 0056 generates the facial orientation of user U's avatar based on information regarding the position and tilt of the HMD 110 [of user U's HMD system during the period]; par 0088 Fig 10 avatar positions, hand locations, face orientations, and face expressions are updated (all corresponding to a three-dimensional (3D) appearance of the person during the period of time) prior to the removal); and
generating the representation data without using current user data after the removal (par 0101 Fig 12 when user A non-wearing is determined, non-wearing information is sent to the second user's system, which then sets the facial expression of user A's avatar/user representation to a default expression, i.e. generating the representation data without using at least current user face expression data after the removal).
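As an illustrative aid only, the fallback behavior mapped above (live face-camera data drives the avatar before removal; a stored default expression is used after removal per par 0101) might be sketched as follows; the function and value names are hypothetical.

    # Illustrative sketch only; names are hypothetical. Before removal, the
    # avatar is driven by current user data; after removal, a stored default
    # expression is used instead (par 0101).

    from typing import Optional

    DEFAULT_EXPRESSION = "default"  # hypothetical pre-set expression identifier

    def generate_representation(removed: bool,
                                live_expression: Optional[str]) -> dict:
        if not removed and live_expression is not None:
            # Prior to removal: use current user data from the face cameras.
            return {"expression": live_expression, "source": "live"}
        # After removal: generate without current user data.
        return {"expression": DEFAULT_EXPRESSION, "source": "fallback"}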
Regarding Claim 7, Inomata teaches the method of Claim 1, wherein
after the removal, the user representation data represents a neutral appearance of user (par 0020 Fig 14 when non-wearing information indicating that the first user is not wearing the first head-mounted device (first HMD) is received, the facial expression of the first avatar is changed to an expression pre-selected by the first user (hereinafter referred to as the selected expression) as a mode indicating that the first user is not wearing the first head-mounted device; par 0079 Fig 14 facial expressions may be selected from available stored expressions taught to include a smile, a sad expression, a neutral expression, an angry expression, a surprised expression, a troubled expression, etc.).
Regarding Claim 8, Inomata teaches the method of Claim 1, wherein
after the removal, the user representation data represents a prior appearance of user (par 0101 Fig 14 after receiving the non-wearing information from the server 2, the control unit 121 of the user terminal 1B sets the facial expression of the avatar 4A placed in the virtual space 200B to the default set facial expression, i.e. the facial expression of the avatar 4A in the initial state before updating of the facial expression of the avatar 4A starts, which is a prior appearance of the user).
Regarding Claim 9, Inomata teaches the method of Claim 1, wherein
after the removal the user representation data represents a fixed appearance of the user corresponding to a most recent appearance of the user prior to the removal (par 0016 Fig 14 when the first user is not wearing [removes] the first HMD, the facial expression of the first avatar is not updated at all [i.e. it remains fixed at the last appearance update just before the removal]).
Regarding Claim 10, Inomata teaches the method of Claim 1, wherein
after the removal the user representation data represents a fixed appearance of the user corresponding to an appearance of the user prior to the period of time (par 0101 Fig 14 after receiving the non-wearing information from the server 2, the control unit 121 of the user terminal 1B sets the facial expression of the avatar 4A placed in the virtual space 200B to the default set facial expression, i.e. the facial expression of the avatar 4A in the initial state before updating of the facial expression of the avatar 4A starts, which is an appearance of the user prior to the period of time).
Regarding Claim 12, Inomata teaches the method of Claim 1 further comprising
providing a visual treatment for the user representation indicating that the appearance of the user representation after the removal may not depict an actual current appearance of the user (par 0104 Fig 15 a visual treatment for the user representation, comprising a caption showing information indicating that user A is not wearing the HMD 110, may be superimposed on the field of view image displayed on the HMD 110).
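As an illustrative aid only, the visual treatment mapped above (a caption superimposed on the view per par 0104) might be sketched as follows; the function name and label text are hypothetical.

    # Illustrative sketch only; names and label text are hypothetical. A caption
    # indicating the user is not wearing the HMD is superimposed on the view of
    # the user representation (par 0104).

    def visual_treatment(view_label: str, removed: bool) -> str:
        """Append a caption when the representation may not reflect the
        user's actual current appearance."""
        if removed:
            return view_label + " [user is not wearing the HMD]"
        return view_label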
Regarding Claim 13, Inomata teaches the method of Claim 1, wherein
the view of the user representation is presented to another user during a live communication session (par 0107 Fig 14 the avatar 4A, whose facial expression is now set to the default facial expression per the user representation data, and the speech bubble object 42A are visualized on the field of view image displayed on the HMD 110 of the user terminal 1B; par 0090 Fig 10 the view of the user representation is presented to another user during a voice chat (VR chat) between users (avatars) that is realized in the virtual space).
Regarding Claim 14, Inomata teaches a head mounted device (HMD) (par 0032 Fig 1 HMD 110) comprising:
a non-transitory computer-readable storage medium (par 0051 Fig 4 memory comprising e.g. a ROM);
one or more sensors (par 0047 Fig 4 sensors including at least wearing sensor 115 and face cameras 113,117, par 0095 HMD orientation sensor 114 and wearing sensor 115); and
one or more processors coupled to the non-transitory computer-readable storage medium (par 0049 Fig 4 processor within control unit 121 of control device 120 of HMD 110 coupled to the non-transitory computer-readable storage medium [ROM]), wherein
the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the one or more processors to perform operations (par 0049 ROM includes program [instructions] that are executed on the processor to perform processes) comprising:
obtaining sensor data via the one or more sensors during a period of time (par 0044,0076-0079 Figs 2,4 moving images are acquired by the face cameras 113 and 117, and par 0095 HMD orientation sensor 114 and wearing sensor 115 detect HMD statuses, during a period of time spanning the user wearing the HMD through the user removing the HMD; par 0060 images are obtained repeatedly/continuously during the period);
determining a removal of the HMD during the period of time based on the sensor data (par 0095 Figs 4,14 HMD orientation sensor 114 and wearing sensor 115 may detect the user removing the HMD during the period), wherein
the removal of the HMD corresponds to a change of the HMD from a first position at which one or more displays of the HMD are positioned in front of eyes of a user (par 0060 Fig 3 the control unit 121, in normal HMD use in the wearing position, displays a left eye field image on the left eye display unit and displays a right eye field image on the right eye display unit; and par 0038 Fig 4 wearing sensor 115, in normal HMD use in the wearing position, senses a current between two pads at the bridge of the nose) to a second position at which the one or more displays are positioned elsewhere with respect to the user (paras 0038,0098 Fig 4 wearing sensor 115, as the HMD transitions from the worn to the non-worn position, senses that the current between the two pads at the bridge of the nose goes away and generates a remove event signal; par 0096 Fig 14 HMD orientation sensor 114 indicates a remove state when the HMD tilt is beyond a threshold angle; Fig 14 the HMD unworn state shown for user A indicates a second position at which the one or more displays are positioned elsewhere with respect to the user); and
generating user representation data corresponding to a three-dimensional (3D) appearance of the person during the period of time (par 0044 Fig 10 control unit 121 generates the facial expression (state) of user U's avatar based on the moving images of user U's facial expression (state) acquired by the face cameras 113 and 117, and par 0056 generates the facial orientation of user U's avatar based on information regarding the position and tilt of the HMD 110 [of user U's HMD system during the period]; par 0088 Fig 10 avatar positions, hand locations, face orientations, and face expressions are updated (all corresponding to a three-dimensional (3D) appearance of the person during the period of time)), wherein
the user representation data is generated based on determining the removal of the HMD during the period of time (par 0101 Fig 12 when user A non-wearing is determined, non-wearing information is sent to the second user's system, which then sets the facial expression of user A's avatar/user representation to a default expression), wherein
a view of the user representation is provided based on the user representation data (par 0107 Fig 14 the avatar 4A, whose facial expression is now set to the default facial expression per the user representation data, and the speech bubble object 42A are visualized on the field of view image displayed on the HMD 110 of the user terminal 1B).
Regarding Claim 15, Inomata teaches the HMD of Claim 14, wherein
determining removal of the HMD comprises determining that the HMD is being doffed (paras 0038,0098 Fig 4 wearing sensor 115, as the HMD transitions from the worn to the non-worn position, senses that the current between the two pads at the bridge of the nose goes away and generates a remove event signal; par 0096 Fig 14 HMD orientation sensor 114 indicates a remove state when the HMD tilt is beyond a threshold angle; Fig 14 the HMD unworn state shown for user A indicates a second position at which the one or more displays are positioned elsewhere with respect to the user).
Regarding Claim 16, Inomata teaches the HMD of Claim 14, wherein determining removal of the HMD comprises:
detecting a change in position or orientation of the HMD based on motion sensor data (par 0096 Fig 14 HMD orientation/motion sensor 114 indicates a remove state when the HMD tilt changes to beyond a threshold angle);
determining a change of user eye position relative to an eye box region of the device; or
detecting that an eye of the user is no longer within an eye box region of the HMD based on eye sensor data.
Because Claim 16 recites these limitations in the alternative, Inomata's teaching of the first alternative (detecting a change in position or orientation of the HMD based on motion sensor data) anticipates the claim.
Claim 17 presents the limitations of Claim 6 in a different claim category, and therefore Claim 17 is rejected with a rationale similar to Claim 6, mutatis mutandis.
Regarding Claim 18, Inomata teaches the HMD of Claim 14, wherein,
after the removal, the user representation data represents:
a neutral appearance of the user (par 0020 Fig 14 when non-wearing information indicating that the first user is not wearing the first head-mounted device (first HMD) is received, the facial expression of the first avatar is changed to an expression pre-selected by the first user (hereinafter referred to as the selected expression) as a mode indicating that the first user is not wearing the first head-mounted device; par 0079 Fig 14 facial expressions may be selected from available stored expressions taught to include a smile, a sad expression, a neutral expression, an angry expression, a surprised expression, a troubled expression, etc.);
a prior appearance of user;
a fixed appearance of the user corresponding to a most recent appearance of the user prior to the removal; or
a fixed appearance of the user corresponding to an appearance of the user prior to the period of time.
Because Claim 18 recites these limitations in the alternative, Inomata's teaching of the first alternative (a neutral appearance of the user) anticipates the claim.
Claim 20 presents the limitations of Claim 14 in a different claim category, and therefore Claim 20 is rejected with a rationale similar to Claim 14, mutatis mutandis.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Inomata (Japan Patent Application Publication JP2018147086A) in view of Dedonato et al. (U.S. Patent Application Publication 2022/0269333 A1, hereinafter "Dedonato").
Regarding Claim 5, Inomata teaches the method of Claim 1. However, Inomata does not appear to expressly teach wherein
determining removal of the HMD comprises detecting that an eye of the user is no longer within an eye box region of the HMD based on eye sensor data.
Dedonato teaches wherein
determining removal of the HMD comprises detecting that an eye of the user is no longer within an eye box region of the HMD based on eye sensor data (par 0030 Fig 1 the computer system 101 [par 0006 a head-mounted device] detects, via its one or more input devices [par 0032 input devices 125 (e.g., an eye tracking device 130)], that the computer system has been removed from the body of the first user).
Inomata and Dedonato are analogous art as they each pertain to head-mounted device methods. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Inomata to include Dedonato's detecting that an eye of the user is no longer within an eye box region of the HMD based on eye sensor data. The motivation would have been to provide security measures, such as logging the device out of any user account into which it was logged in and ceasing to display any user interface that was being displayed (Dedonato par 0121).
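As an illustrative aid only, the eye-box check attributed to Dedonato above (par 0030) might be sketched as follows; the coordinate representation and all names are hypothetical.

    # Illustrative sketch only; all names are hypothetical. When eye-tracking
    # data shows the eye is no longer within the eye box region, the HMD is
    # treated as removed (Dedonato par 0030).

    from dataclasses import dataclass

    @dataclass
    class EyeBox:
        x_min: float
        x_max: float
        y_min: float
        y_max: float

    def eye_outside_eye_box(eye_x: float, eye_y: float, box: EyeBox) -> bool:
        """True when the tracked eye position falls outside the eye box region."""
        inside = box.x_min <= eye_x <= box.x_max and box.y_min <= eye_y <= box.y_max
        return not inside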
Claims 11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Inomata (Japan Patent Application Publication JP2018147086A) in view of Steptoe et al. (U.S. Patent Application Publication 2024/0029329 A1, hereinafter "Steptoe").
Regarding Claim 11, Inomata teaches the method of Claim 1. However, Inomata does not appear to expressly teach wherein
generating the user representation data based on determining the removal of the HMD during the period of time comprises:
providing a gradual change in the user representation data from representing a current user appearance prior to the removal to representing a fixed user appearance after the removal.
Steptoe teaches
generating the user representation data based on determining the loss of user tracking during the period of time comprises:
providing a gradual change in the user representation data from representing a current user appearance prior to the loss of user tracking to representing a fixed user appearance after the loss of user tracking (par 0019 Fig 5 the system provides an animation between the current [last tracked] user appearance prior to the loss of tracking and a fixed user appearance comprising a rest state after the loss of tracking, the animation providing a gradual change in the user representation).
Inomata and Steptoe are analogous art as they each pertain to head-mounted device methods. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Inomata's generating of the user representation data based on determining the removal of the HMD during the period of time to include Steptoe's gradual user-representation change based on determining a loss of user tracking, such that generating the user representation data based on determining the removal of the HMD comprises providing a gradual change in the user representation data from representing a current user appearance prior to the removal to representing a fixed user appearance after the removal. The motivation would have been to minimize disruption of the user's experience (Steptoe par 0020).
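As an illustrative aid only, the gradual transition attributed to Steptoe above (par 0019) might be sketched as a linear blend from the last tracked appearance toward a fixed rest appearance; the parameterization and all names are hypothetical.

    # Illustrative sketch only; names and the blend schedule are hypothetical.
    # Instead of snapping to a fixed appearance, the last tracked appearance
    # is interpolated toward a rest appearance over an animation window
    # (Steptoe par 0019).

    from typing import List

    def blend_appearance(last_tracked: List[float],
                         rest_pose: List[float],
                         t: float) -> List[float]:
        """Linearly interpolate appearance parameters; t runs from 0.0 (at
        the moment tracking is lost) to 1.0 (fully at the rest appearance)."""
        t = max(0.0, min(1.0, t))
        return [(1.0 - t) * a + t * b for a, b in zip(last_tracked, rest_pose)]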
Claim 19 presents the limitations of Claim 11 in a different claim category, and therefore Claim 19 is rejected with a rationale similar to Claim 11, mutatis mutandis.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK EDWARDS whose telephone number is (571)270-7731. The examiner can normally be reached on Mon-Fri 9a-5p EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason can be reached on 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARK EDWARDS/Primary Examiner, Art Unit 2624