Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Amendment
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4, 7-9, 11, 14-16 and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Vandyke et al. (US 11,812,194).
Consider claims 1, 8 and 15, Vandyke et al. teach a processor-implemented method, a system, and a computer program product comprising: one or more computer-readable tangible storage medium and program instructions stored on at least one of the one or more tangible storage medium, the program instructions executable by a processor capable of performing a method (par. 0018 of the specification positively excludes transitory signals), the method comprising: establishing a virtual conference room and connections thereto by at least an initiating participant, a recipient participant, and a non-recipient participant (i.e., participants of a virtual conference) (col. 1 lines 49-51; “As a result, several users may conduct a virtual conference through their respective avatars within a same CGR setting (e.g., a virtual conference room), regardless of the user's physical location”; col. 5 lines 35-54; “Through the use the electronic devices, both users are participating in a virtual meeting in the CGR setting 115 (e.g., a virtual conference room) that includes avatars 135-150, each corresponding to a user (or participant) in the meeting. In particular, electronic devices 130 and 131 both present the CGR setting 115 to each user 120 and 125 through the perspective of their respective avatars 135 and 140 (e.g., in a first-person perspective). While electronic devices of other users (not shown) present the CGR setting 115 through the perspective of avatars 145 and 150”); detecting a gesture by the initiating participant toward the recipient participant indicating an intention for a private communication channel (col. 6 lines 10-03; “To initiate the private conversation, user 1 120 leans towards user 2 (in either the physical or CGR setting) or brings electronic device 130 towards electronic device 131, as if to whisper something in user 2's ear”; col. 15 lines 21-32; “the electronic device 130 may perform this determination according to the physical characteristic represented by the sensor data. For instance, the electronic device 130 may compare a movement and/or command to predefined movements and/or commands that are associated with initiating a private conversation. These movements may include a physical gesture of the user, such as a hand movement, a head movement, an eye movement, or a combination thereof”); modifying a user interface of the virtual conference room, of the initiating participant and the recipient participant, to display an on-screen notification and an audio cue of the intention (col. 6 lines 35-41; “In response to determining that user 1 120 wishes to engage in the private conversation, the electronic device 130 is configured to cause a privacy cloak 160 to activate in the CGR setting 115. The privacy cloak 160 is a visual indication presented in the CGR setting 115 to represent that users associated with avatars within the confines of the cloak 160 are having a private conversation”; col. 20 line 54 – col. 21 line 2; “In one aspect, the request message may indicate that the first user is requesting to initiate the private conversation. For instance, the request may be a textual message that is displayed on the display screen 340 and/or may be an audible message that is outputted through one or more speakers. In another aspect, the electronic device 131 may display a visual representation of the request within the CGR setting. For example, the avatar 135 of the first user 120 may exhibit the visual representation as an aura that surrounds the avatar 135 or a token that is displayed atop the avatar 135”); identifying an acceptance gesture from the recipient participant to initiate the private communication channel (col. 21 lines 2-14; “If the second user 125 wishes to accept the request, the second user may cause the avatar 140 to enter (or touch) the visual representation (e.g., causing avatar 140 to enter and/or touch the aura of the avatar 135). In one aspect, the second user may accept the request by activating (e.g., through a selection of a UI item) an aura for avatar 140 and having the aura of avatar 135 touch the aura of avatar 135. In another aspect, the second user 125 may accept by performing a movement and/or issuing an explicit command. In one aspect, the movement and/or explicit command performed by the second user may be the same or similar to the movement and/or command performed by the first user to initiate the private conversation”); and generating the private communication channel (col. 6 lines 51-56; “In this example, electronic device 130 may exchange audio data (e.g., through a private two-way audio channel) with electronic device 131, and vice a versa. More about how electronic devices conduct the private conversation is disclosed herein”; col. 21 lines 15-19; “Once accepted, the second user's electronic device may transmit an acceptance message to the electronic device 130, and the process 400 may proceed. In one aspect, upon deactivation of the privacy cloak, auras of avatars may split or “pop” to form each avatar's individual aura”).
Consider claims 2, 9 and 16, Vandyke et al. teach wherein determining the recipient participant is based on the gesture (col. 10 lines 49-53; “For instance, the physical characteristic may be a movement of the user, such as a physical gesture, a physical position of the user, and an eye gaze of the user. More about physical characteristics is described herein”).
Consider claims 4, 11 and 18, Vandyke et al. teach further comprising: detecting a gaze of the initiating participant contemporaneously with the gesture, and wherein determining the recipient is based on the gaze (col. 15 lines 26-43; “For instance, the electronic device 130 may compare a movement and/or command to predefined movements and/or commands that are associated with initiating a private conversation. These movements may include a physical gesture of the user, such as a hand movement, a head movement, an eye movement, or a combination thereof. The controller 305 may perform a table lookup into a data structure that associates movements (or physical characteristics) and/or explicit commands with a request from the user of the electronic device to initiate a private conversation. If the movements and/or commands match those stored within the data structure, it is determined that a private conversation is to be initiated”).
Consider claims 7 and 14, Vandyke et al. teach wherein the on-screen notification displayed on the user interface of the recipient participant is an icon overlaid on a static avatar or on a video feed of the initiating participant (col. 18 lines 38-54; “the privacy cloak is a visual indication displayed on electronic devices participating within the CGR setting that users within the cloak are having a private conversation. As illustrated in FIG. 1, the cloak may be a barrier (e.g., a cylinder) that surrounds the avatars. In one aspect, the barrier may be one of transparent (e.g., not shown in the CGR setting), translucent, and opaque; and may be any color (e.g., blue). In another aspect, the cloak may be a tunnel that stretches (or projects) between the avatars that are engaged in the private conversation. In another aspect, the privacy cloak may be any type of visual indication, such as a token that floats above an avatar's head”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Vandyke et al. (US 11,812,194) in view of Chun et al. (US 2023/0282224).
Consider claims 3, 10 and 17, Vandyke et al. suggest predefined movements and/or commands that are associated with initiating a private conversation. Vandyke et al. do not explicitly suggest wherein the gesture comprises the initiating participant covering their mouth with their hand. In the same field of endeavor, Chun et al. suggest this feature (par. 0037; “For example, if a secondary conversation participant turns his face or body to be oriented in a direction facing away from the primary conversation, or covers his mouth with his hand, then the secondary conversation can be identified as private”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this known gesture into the predefined movements/commands of Vandyke et al.; the results would have been predictable and would enable users to initiate non-verbal commands during the meeting, thereby enhancing and improving the user experience.
Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Vandyke et al. (US 11,812,194) in view of Lian et al. (US 2013/0120522).
Consider claims 5, 12 and 19, Vandyke et al. do not explicitly suggest further comprising: determining that a speaking participant is directing a speech segment to an addressee participant based on detecting a gaze of the speaking participant to the addressee participant. In the same field of endeavor, Lian et al. teach a method that identifies a target participant with whom the active speaker seeks to interact based on gaze and/or a speech analyzer (par. 0009; 0041; “identifying an active speaker of a video session; analyzing a signal from an originating endpoint associated with the active speaker; and identifying a target participant with whom the active speaker seeks to interact (e.g., communicate, share information, solicit information from, etc.). The method also includes providing a notification to the target participant that alerts the target participant that the active speaker is seeking to interact with the target participant”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply this known concept of Lian et al. to the system of Vandyke et al.; the results would have been predictable and would provide awareness to the participant, thereby enhancing the meeting experience.
Response to Arguments
Applicant's arguments filed 1/ have been fully considered but they are not persuasive.
In response to applicant's assertion that the Vandyke reference fails to disclose, suggest, or render obvious "detecting a gesture by the initiating participant toward the recipient participant indicating an on-screen notification and an audio cue for a private communication channel", "receiving an acceptance gesture from the recipient participant to initiate the private communication channel", and "generating the private communication channel", the examiner, upon careful review of the cited reference, respectfully disagrees, as follows.
The Vandyke et al. disclosure relates to engaging in a private conversation between several users within a virtual setting (e.g., a virtual conference room). A user can request a private conversation with other participants in the virtual conference using gesture commands and can receive acceptance from the requested participant through a gesture command.
Vandyke et al. disclose the amended feature of "detecting a gesture by the initiating participant toward the recipient participant indicating an on-screen notification and an audio cue for a private communication channel". Vandyke et al. recite: “In response to determining that user 1 120 wishes to engage in the private conversation, the electronic device 130 is configured to cause a privacy cloak 160 to activate in the CGR setting 115. The privacy cloak 160 is a visual indication presented in the CGR setting 115 to represent that users associated with avatars within the confines of the cloak 160 are having a private conversation” (col. 6 lines 35-41); “the electronic device 130 may perform this determination according to the physical characteristic represented by the sensor data. For instance, the electronic device 130 may compare a movement and/or command to predefined movements and/or commands that are associated with initiating a private conversation. These movements may include a physical gesture of the user, such as a hand movement, a head movement, an eye movement, or a combination thereof” (col. 15 lines 21-32); “In one aspect, the request message may indicate that the first user is requesting to initiate the private conversation. For instance, the request may be a textual message that is displayed on the display screen 340 and/or may be an audible message that is outputted through one or more speakers. In another aspect, the electronic device 131 may display a visual representation of the request within the CGR setting. For example, the avatar 135 of the first user 120 may exhibit the visual representation as an aura that surrounds the avatar 135 or a token that is displayed atop the avatar 135” (col. 20 line 54 – col. 21 line 2).
"receiving an acceptance gesture from the recipient participant to initiate the private communication channel". Vandyke et al recite: “If the second user 125 wishes to accept the request, the second user may cause the avatar 140 to enter (or touch) the visual representation (e.g., causing avatar 140 to enter and/or touch the aura of the avatar 135). In one aspect, the second user may accept the request by activating (e.g., through a selection of a UI item) an aura for avatar 140 and having the aura of avatar 135 touch the aura of avatar 135. In another aspect, the second user 125 may accept by performing a movement and/or issuing an explicit command. In one aspect, the movement and/or explicit command performed by the second user may be the same or similar to the movement and/or command performed by the first user to initiate the private conversation” (col. 21 lines 2-14).
"generating the private communication channel". Vandyke e al recite: “In this example, electronic device 130 may exchange audio data (e.g., through a private two-way audio channel) with electronic device 131, and vice a versa. More about how electronic devices conduct the private conversation is disclosed herein” (col. 6 lines 51-56); “Once accepted, the second user's electronic device may transmit an acceptance message to the electronic device 130, and the process 400 may proceed. In one aspect, upon deactivation of the privacy cloak, auras of avatars may split or “pop” to form each avatar's individual aura” (col. 21 lines 15-19).
Thus, Vandyke et al. clearly teach the claimed features as addressed above. For this reason, the rejection is maintained.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any response to this action should be mailed to:
Mail Stop ____ (e.g., Amendment or After-final)
Commissioner for Patents
P.O. Box 1450
Alexandria, VA 22313-1450
Facsimile responses should be faxed to:
(571) 273-8300
Hand-delivered responses should be brought to:
Customer Service Window
Randolph Building
401 Dulany Street
Alexandria, VA 22314
Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC DUC TRAN whose telephone number is (571) 272-7511. The examiner can normally be reached Monday-Friday, 8:30am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached at (571) 272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Quoc D Tran/
Primary Examiner, Art Unit 2691
March 12, 2026