DETAILED ACTION
Claims 1-24, filed January 22, 2025, are pending in the current action.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 8-11, 16-19, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Buerli et al. (US 2021/0326094) in view of Velez (US 2022/0165013).
Consider claim 1, where Buerli teaches a method comprising: at a first electronic device in communication with one or more displays and one or more input devices of the first electronic device: while in a communication session with a second electronic device, (See Buerli Fig. 1 and ¶23-26 where there is a first electronic device 105 and a second electronic device 110 in communication with each other) presenting, via the one or more displays, a visual representation of an application of a user of the second electronic device in a three-dimensional environment; (See Buerli Fig. 3 and ¶39 where electronic device 105 obtains three-dimensional (3D) information about electronic device 110 to facilitate transfer of control of the application running on electronic device 110 (and/or content associated therewith) to an application running on electronic device 105… using the 3D location, orientation, and/or display location, electronic device 105 may display an additional UI 302, with display 201 of electronic device 105, that is overlying and aligned with display 208 (e.g., overlaid on and aligned with the UI 202 that is still displayed on display 208) of electronic device 110.) while presenting the visual representation of the application of the user of the second electronic device in the three-dimensional environment, detecting an indication corresponding to a request from the second electronic device to share content with the first electronic device; (See Buerli ¶48 where in one or more implementations, electronic device 110 may send the electronic device 105 state and/or context information regarding the application that is being used by electronic device 110 (e.g., that a document is open, etc.). Handoff logic between electronic device 110 and electronic device 105 may include, for example, a handoff request from electronic device 105 to electronic device 110 (e.g., responsive to a detection of electronic device 110 by electronic device 105), handoff operations performed by electronic device 110 and/or electronic device 105, and/or a handoff confirmation provided from electronic device 105 to electronic device 110.) in response to detecting the indication, in accordance with a determination that the request is accepted by the first electronic device, presenting, via the one or more displays, a first representation of a first user interface corresponding to the content in the three-dimensional environment, wherein the first user interface is configured to be displayed on the second electronic device; (See Buerli Fig. 6 and ¶49 where electronic device 105 may take a snapshot of the UI 202 displayed on the electronic device 110 and then display that snapshot again through the electronic device 105. In other examples described herein, the UI 302 generated by the electronic device 105 can also, or alternatively, be driven by UI data provided by the electronic device 110. The electronic device 105 can render a UI 302, for display in an XR environment, based on UI information (e.g., information describing the content and/or layout of the UI 202 and/or a rendered UI) sent from the electronic device 110 to the electronic device 105.) while presenting the first representation of the first user interface in the three-dimensional environment, detecting an indication of an input directed to the first representation of the first user interface; and in response to detecting the input: forgoing performing an operation directed to the first representation of the first user interface in accordance with the input.
(See Buerli ¶54-57 where after the UI 302 provided by electronic device 105 is moved away from electronic device 110 (e.g., and display 208 stops displaying UI 202, deactivates UI 202, and/or display 208 is powered off or placed in a low power mode), control of the application corresponding to UI 302, control of UI 302 itself, and control of the content therein is handled by processor 500 of electronic device 105. Thus, operations performed on the UI 302 may not be reflected on UI 202)
Buerli teaches a visual representation of an application of a user of the second electronic device. (See Buerli ¶29 where a transfer of editing control of, for example, a word processing document, an email, a text message, and/or content for another application can be provided from one device to another.) However, Buerli does not explicitly teach a visual representation of a user of the second electronic device. In an analogous field of endeavor, Velez teaches a visual representation of a user of the second electronic device. (See Velez ¶51-52 where the avatar 102 is a representation of the recipient user's sister (also not shown). The user and the user's sister have an ongoing text conversation 106. When message 104 is received from the user's contact in relation to the avatar 102, a sentiment of “excited” is determined for the message.) Therefore, it would have been obvious to one of ordinary skill in the art to modify the email/messaging application of Buerli to include avatar information as taught by Velez. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods of personalization within existing messaging services to yield predictable results.
Consider claim 2, where Buerli in view of Velez teaches the method of claim 1, wherein detecting the indication of the input directed to the first representation of the first user interface includes detecting, via the one or more input devices, an air gesture performed by a hand of the user of the first electronic device while a gaze of the user of the first electronic device is directed to the first representation of the first user interface. (See Buerli ¶25 where input modalities may include, but are not limited to, facial tracking, eye tracking (e.g., gaze direction), hand tracking, gesture tracking, biometric readings (e.g., heart rate, pulse, pupil dilation, breath, temperature, electroencephalogram, olfactory), recognizing speech or audio (e.g., particular hotwords), and activating buttons or switches, etc. See Figs. 7, 8 where gestures 700 and 800 are in midair)
Consider claim 3, where Buerli in view of Velez teaches the method of claim 1, wherein the visual representation of the user of the second electronic device corresponds to a three-dimensional virtual avatar of the user of the second electronic device. (See Velez ¶51-52 where the avatar 102 is a representation of the recipient user's sister (also not shown). The user and the user's sister have an ongoing text conversation 106. When message 104 is received from the user's contact in relation to the avatar 102, a sentiment of “excited” is determined for the message.) Therefore, it would have been obvious to one of ordinary skill in the art to modify the email/messaging application of Buerli to include avatar information as taught by Velez. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods of personalization within existing messaging services to yield predictable results.
Consider claim 8, where Buerli in view of Velez teaches the method of claim 1, wherein, prior to detecting the indication, the visual representation of the user of the second electronic device is presented at a first location in the three-dimensional environment from a viewpoint of the first electronic device, the method further comprising: in response to detecting the indication, in accordance with the determination that the request is accepted by the first electronic device: updating presentation, via the one or more displays, of the visual representation of the user of the second electronic device to be at a second location, different from the first location, in the three-dimensional environment, wherein the second location is adjacent to the viewpoint of the first electronic device in the three-dimensional environment. (See Buerli ¶39 where as shown in FIG. 3, using the 3D location, orientation, and/or display location, electronic device 105 may display an additional UI 302, with display 201 of electronic device 105, that is overlying and aligned with display 208 (e.g., overlaid on and aligned with the UI 202 that is still displayed on display 208) of electronic device 110.)
Consider claim 9, where Buerli teaches a first electronic device comprising: one or more processors; memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing a method comprising: while in a communication session with a second electronic device, (See Buerli Fig. 1 and ¶23-26 where there is a first electronic device 105 and a second electronic device 110 in communication with each other) presenting, via one or more displays, a visual representation of an application of a user of the second electronic device in a three-dimensional environment; while presenting the visual representation of the application of the user of the second electronic device in the three-dimensional environment, (See Buerli Fig. 3 and ¶39 where electronic device 105 obtains three-dimensional (3D) information about electronic device 110 to facilitate transfer of control of the application running on electronic device 110 (and/or content associated therewith) to an application running on electronic device 105… using the 3D location, orientation, and/or display location, electronic device 105 may display an additional UI 302, with display 201 of electronic device 105, that is overlying and aligned with display 208 (e.g., overlaid on and aligned with the UI 202 that is still displayed on display 208) of electronic device 110.) detecting an indication corresponding to a request from the second electronic device to share content with the first electronic device; in response to detecting the indication, in accordance with a determination that the request is accepted by the first electronic device, presenting, via the one or more displays, a first representation of a first user interface corresponding to the content in the three-dimensional environment, wherein the first user interface is configured to be displayed on the second electronic device; (See Buerli Fig. 6 and ¶49 where electronic device 105 may take a snapshot of the UI 202 displayed on the electronic device 110 and then display that snapshot again through the electronic device 105. In other examples described herein, the UI 302 generated by the electronic device 105 can also, or alternatively, be driven by UI data provided by the electronic device 110. The electronic device 105 can render a UI 302, for display in an XR environment, based on UI information (e.g., information describing the content and/or layout of the UI 202 and/or a rendered UI) sent from the electronic device 110 to the electronic device 105.) while presenting the first representation of the first user interface in the three-dimensional environment, detecting, via one or more input devices, an input directed to the first representation of the first user interface; and in response to detecting the input: forgoing performing an operation directed to the first representation of the first user interface in accordance with the input. (See Buerli ¶54-57 where after the UI 302 provided by electronic device 105 is moved away from electronic device 110 (e.g., and display 208 stops displaying UI 202, deactivates UI 202, and/or display 208 is powered off or placed in a low power mode), control of the application corresponding to UI 302, control of UI 302 itself, and control of the content therein is handled by processor 500 of electronic device 105. Thus, operations performed on the UI 302 may not be reflected on UI 202)
Buerli teaches a visual representation of an application of a user of the second electronic device. (See Buerli ¶29 where a transfer of editing control of, for example, a word processing document, an email, a text message, and/or content for another application can be provided from one device to another.) However, Buerli does not explicitly teach a visual representation of a user of the second electronic device. In an analogous field of endeavor, Velez teaches a visual representation of a user of the second electronic device. (See Velez ¶51-52 where the avatar 102 is a representation of the recipient user's sister (also not shown). The user and the user's sister have an ongoing text conversation 106. When message 104 is received from the user's contact in relation to the avatar 102, a sentiment of “excited” is determined for the message.) Therefore, it would have been obvious to one of ordinary skill in the art to modify the email/messaging application of Buerli to include avatar information as taught by Velez. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods of personalization within existing messaging services to yield predictable results.
Consider claim 10, where Buerli in view of Velez teaches the first electronic device of claim 9, wherein detecting the indication of the input directed to the first representation of the first user interface includes detecting, via the one or more input devices, an air gesture performed by a hand of the user of the first electronic device while a gaze of the user of the first electronic device is directed to the first representation of the first user interface. (See Buerli ¶25 where input modalities may include, but are not limited to, facial tracking, eye tracking (e.g., gaze direction), hand tracking, gesture tracking, biometric readings (e.g., heart rate, pulse, pupil dilation, breath, temperature, electroencephalogram, olfactory), recognizing speech or audio (e.g., particular hotwords), and activating buttons or switches, etc. See Figs. 7, 8 where gestures 700 and 800 are in midair)
Consider claim 11, where Buerli in view of Velez teaches the first electronic device of claim 9, wherein the visual representation of the user of the second electronic device corresponds to a three-dimensional virtual avatar of the user of the second electronic device. (See Velez ¶51-52 where the avatar 102 is a representation of the recipient user's sister (also not shown). The user and the user's sister have an ongoing text conversation 106. When message 104 is received from the user's contact in relation to the avatar 102, a sentiment of “excited” is determined for the message.) Therefore, it would have been obvious to one of ordinary skill in the art to modify the email/messaging application of Buerli to include avatar information as taught by Velez. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods of personalization within existing messaging services to yield predictable results.
Consider claim 16, where Buerli in view of Velez teaches the first electronic device of claim 9, wherein, prior to detecting the indication, the visual representation of the user of the second electronic device is presented at a first location in the three-dimensional environment from a viewpoint of the first electronic device, the method further comprising: in response to detecting the indication, in accordance with the determination that the request is accepted by the first electronic device: updating presentation, via the one or more displays, of the visual representation of the user of the second electronic device to be at a second location, different from the first location, in the three-dimensional environment, wherein the second location is adjacent to the viewpoint of the first electronic device in the three-dimensional environment. (See Buerli ¶39 where as shown in FIG. 3, using the 3D location, orientation, and/or display location, electronic device 105 may display an additional UI 302, with display 201 of electronic device 105, that is overlying and aligned with display 208 (e.g., overlaid on and aligned with the UI 202 that is still displayed on display 208) of electronic device 110.)
Consider claim 17, where Buerli teaches a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a first electronic device, cause the first electronic device to perform a method comprising: while in a communication session with a second electronic device, (See Buerli Fig. 1 and ¶23-26 where there is a first electronic device 105 and a second electronic device 110 in communication with each other) presenting, via the one or more displays, a visual representation of an application of a user of the second electronic device in a three-dimensional environment; (See Buerli Fig. 3 and ¶39 where electronic device 105 obtains three-dimensional (3D) information about electronic device 110 to facilitate transfer of control of the application running on electronic device 110 (and/or content associated therewith) to an application running on electronic device 105… using the 3D location, orientation, and/or display location, electronic device 105 may display an additional UI 302, with display 201 of electronic device 105, that is overlying and aligned with display 208 (e.g., overlaid on and aligned with the UI 202 that is still displayed on display 208) of electronic device 110.) while presenting the visual representation of the application of the user of the second electronic device in the three-dimensional environment, detecting an indication corresponding to a request from the second electronic device to share content with the first electronic device; (See Buerli ¶48 where in one or more implementations, electronic device 110 may send the electronic device 105 state and/or context information regarding the application that is being used by electronic device 110 (e.g., that a document is open, etc.). Handoff logic between electronic device 110 and electronic device 105 may include, for example, a handoff request from electronic device 105 to electronic device 110 (e.g., responsive to a detection of electronic device 110 by electronic device 105), handoff operations performed by electronic device 110 and/or electronic device 105, and/or a handoff confirmation provided from electronic device 105 to electronic device 110.) in response to detecting the indication, in accordance with a determination that the request is accepted by the first electronic device, presenting, via the one or more displays, a first representation of a first user interface corresponding to the content in the three-dimensional environment, wherein the first user interface is configured to be displayed on the second electronic device; (See Buerli Fig. 6 and ¶49 where electronic device 105 may take a snapshot of the UI 202 displayed on the electronic device 110 and then display that snapshot again through the electronic device 105. In other examples described herein, the UI 302 generated by the electronic device 105 can also, or alternatively, be driven by UI data provided by the electronic device 110. The electronic device 105 can render a UI 302, for display in an XR environment, based on UI information (e.g., information describing the content and/or layout of the UI 202 and/or a rendered UI) sent from the electronic device 110 to the electronic device 105.)
while presenting the first representation of the first user interface in the three-dimensional environment, detecting an indication of an input directed to the first representation of the first user interface; and in response to detecting the input: forgoing performing an operation directed to the first representation of the first user interface in accordance with the input. (See Buerli ¶54-57 where after the UI 302 provided by electronic device 105 is moved away from electronic device 110 (e.g., and display 208 stops displaying UI 202, deactivates UI 202, and/or display 208 is powered off or placed in a low power mode), control of the application corresponding to UI 302, control of UI 302 itself, and control of the content therein is handled by processor 500 of electronic device 105. Thus, operations performed on the UI 302 may not be reflected on UI 202)
Buerli teaches a visual representation of an application of a user of the second electronic device. (See Buerli ¶29 where a transfer of editing control of, for example, a word processing document, an email, a text message, and/or content for another application can be provided from one device to another.) However, Buerli does not explicitly teach a visual representation of a user of the second electronic device. In an analogous field of endeavor, Velez teaches a visual representation of a user of the second electronic device. (See Velez ¶51-52 where the avatar 102 is a representation of the recipient user's sister (also not shown). The user and the user's sister have an ongoing text conversation 106. When message 104 is received from the user's contact in relation to the avatar 102, a sentiment of “excited” is determined for the message.) Therefore, it would have been obvious to one of ordinary skill in the art to modify the email/messaging application of Buerli to include avatar information as taught by Velez. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods of personalization within existing messaging services to yield predictable results.
Consider claim 18, where Buerli in view of Velez teaches the non-transitory computer readable storage medium of claim 17, wherein detecting the indication of the input directed to the first representation of the first user interface includes detecting, via the one or more input devices, an air gesture performed by a hand of the user of the first electronic device while a gaze of the user of the first electronic device is directed to the first representation of the first user interface. (See Buerli ¶25 where input modalities may include, but are not limited to, facial tracking, eye tracking (e.g., gaze direction), hand tracking, gesture tracking, biometric readings (e.g., heart rate, pulse, pupil dilation, breath, temperature, electroencephalogram, olfactory), recognizing speech or audio (e.g., particular hotwords), and activating buttons or switches, etc. See Figs. 7, 8 where gestures 700 and 800 are in midair)
Consider claim 19, where Buerli in view of Velez teaches the non-transitory computer readable storage medium of claim 17, wherein the visual representation of the user of the second electronic device corresponds to a three-dimensional virtual avatar of the user of the second electronic device. (See Velez ¶51-52 where the avatar 102 is a representation of the recipient user's sister (also not shown). The user and the user's sister have an ongoing text conversation 106. When message 104 is received from the user's contact in relation to the avatar 102, a sentiment of “excited” is determined for the message.) Therefore, it would have been obvious to one of ordinary skill in the art to modify the email/messaging application of Buerli to include avatar information as taught by Velez. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of using known methods of personalization within existing messaging services to yield predictable results.
Consider claim 24, where Buerli in view of Velez teaches the non-transitory computer readable storage medium of claim 17, wherein, prior to detecting the indication, the visual representation of the user of the second electronic device is presented at a first location in the three-dimensional environment from a viewpoint of the first electronic device, the method further comprising: in response to detecting the indication, in accordance with the determination that the request is accepted by the first electronic device: updating presentation, via the one or more displays, of the visual representation of the user of the second electronic device to be at a second location, different from the first location, in the three-dimensional environment, wherein the second location is adjacent to the viewpoint of the first electronic device in the three-dimensional environment. (See Buerli ¶39 where as shown in FIG. 3, using the 3D location, orientation, and/or display location, electronic device 105 may display an additional UI 302, with display 201 of electronic device 105, that is overlying and aligned with display 208 (e.g., overlaid on and aligned with the UI 202 that is still displayed on display 208) of electronic device 110.)
Allowable Subject Matter
Claims 4-7, 12-15, and 20-23 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: Claim 4 recites: “The method of claim 1, further comprising: capturing, via the one or more input devices, at least a portion of a real-world environment including a third electronic device with a display configured to display a second representation of a second user interface of the third electronic device; and presenting, via the one or more displays, a representation of the portion of the real-world environment captured via the one or more input devices and a first affordance associated with a representation of the third electronic device in the three-dimensional environment.” While Wallen et al. (US 2022/0253125) teaches a representation of the portion of the real-world environment (See Wallen Figs. 8A-B), it would be non-obvious to combine it with the affordances of UI 302 of Buerli. The real-world, unpredictable movement of the third electronic device presents a challenging integration with floating UI affordances. Claims 12 and 20 recite limitations similar to claim 4 and are objected to as allowable for similar reasons. Claims 5-7, 13-15, and 21-23 are objected to as allowable as being dependent from an objected-to claim.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM LU whose telephone number is (571)270-1809. The examiner can normally be reached 10am-6:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
WILLIAM LU
Primary Examiner
Art Unit 2624
/WILLIAM LU/Primary Examiner, Art Unit 2624