DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on April 1, 2025 has been entered.
Claims 2 and 28 have been amended. Claim 32 has been added. Claims 2-7, 10-25, and 27-32 are pending.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-7, 10-25, and 27-32 are rejected under 35 U.S.C. 103 as being unpatentable over Smus (US Patent Application Publication No. 2017/0171261), hereinafter Smus, in view of Kapur et al. (US Patent Application Publication No. 2019/0074012), hereinafter Kapur, and further in view of Florencio et al. (US Patent Application Publication No. 2010/0195812), hereinafter Florencio.
Smus teaches systems and methods for directing communications using gaze interactions. Regarding claim 2, Smus teaches a method [Figure 6] for enabling communications between a user and one or more recipients, the method comprising: (a) using one or more sensors to track at least one position or motion characteristic of the user thereby generating sensor data, wherein the at least one position or motion characteristic is associated with an eye or a head of the user [gaze interaction module with a camera designed to determine eye and head movement of the user -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087] and indicative of the user’s attention towards the one or more recipients [para 0091 -- The process 600 can include, based on evaluating the gaze direction of the first user, transmitting different audio to the at least two remote users (660). For instance, based on evaluating the gaze direction of the user 102a, the device 104 can transmit different audio to the device 160 and the device 170 during the communication session]; (b) processing the sensor data to identify the one or more recipients with whom the user intends to communicate [gaze interaction module; gaze directed to user 102b and not the entire group; private conversations -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087]; and (c) processing data associated with the user to generate one or more utterances, wherein the one or more utterances comprise audio or text intended by the user for the one or more recipients [gaze interaction module; gaze directed to user 102b and not the entire group; private conversations -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087]. Smus teaches the system is useful in providing private communications [para 0050]. Smus fails to teach that the data is associated with non-audible speech of the user.
In a similar field of endeavor, Kapur teaches methods and apparatus for a silent speech interface that detects silent, internal articulation of words by a human user and sends a message regarding the detected content to a device associated with another person, thereby allowing the user to communicate silently with that person [Fig 1; para 0008; 0045; 0047; 0107; 0138-0139]. One having ordinary skill in the art before the effective filing date of the claimed invention would have recognized the advantages of implementing the silent speech interaction communications taught by Kapur in the communication system of Smus, for the purpose of facilitating private communications amongst speakers in public or non-private environments. Smus fails to teach that the one or more persons are in physical and visual proximity to the user within the same physical environment. In a similar field of endeavor, Florencio provides communications of both public and private speech in a multiparty environment comprising both local and remote participants [para 0026 -- local parties and remote parties]; provides for selection of target parties [target audience 124] to be the recipients of private communications [para 0031]; allows local members to transmit private messages to other local members without having to lean over to convey private vocalizations to a neighbor, and without fear of offending others or violating some manner of etiquette or protocol [Fig 3A; para 0034-0038]; and also allows for transmission of messages between remote and local parties [para 0038]. One having ordinary skill in the art before the effective filing date of the claimed invention would have recognized the advantages of implementing private messaging in a multiparty environment of local and remote participants, as suggested by Florencio, for the purpose of providing meeting participants the ability to communicate without offending others, violating etiquette or protocol, or disturbing other meeting participants, as taught by Florencio.
Regarding claim 3, the combination of Smus, Florencio and Kapur teaches transmitting the one or more utterances to the one or more target recipients, based at least in part on a gaze direction of the user towards the one or more target recipients [gaze interaction module; gaze directed to user 102b and not the entire group; private conversations -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087].
Regarding claim 4, the combination of Smus, Florencio and Kapur teaches wherein the one or more target recipients comprise a first target recipient and a second target recipient, wherein the one or more utterances comprise a first utterance intended by the user for the first recipient and a second utterance intended by the user for the second recipient [gaze interaction module; gaze directed to user 102b and not the entire group; private conversations; different audio presented to different users -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087].
Regarding claim 5, the combination of Smus, Florencio and Kapur teaches transmitting the first utterance to the first target recipient without the second target recipient having access to or information about the first utterance, or transmitting the second utterance to the second target recipient without the first target recipient having access to or information about the second utterance [gaze interaction module; gaze directed to user 102b and not the entire group; private conversations; different audio presented to different users -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087].
Regarding claim 6, the combination of Smus, Florencio and Kapur teaches transmitting the first utterance to the first target recipient and transmitting the second utterance to the second target recipient, with the first recipient having access to or information about the second utterance, and the second recipient having access to or information about the first utterance [audio provided to groups of users -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087].
Regarding claim 7, the combination of Smus, Florencio and Kapur teaches transmitting the first utterance to the first target recipient and transmitting the second utterance to the second target recipient, with the first target recipient having access to or information about the second utterance, and without the second target recipient having access to or information about the first utterance [gaze interaction module; gaze directed to user 102b and not the entire group; private conversations; different audio presented to different users -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087].
Regarding claim 10, the combination of Smus, Florencio and Kapur teaches the one or more utterances by the user comprises at least 10 words [private conversations; different audio presented to different users -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087].
Regarding claim 11, the combination of Smus, Florencio and Kapur teaches transmitting the one or more utterances comprising the audio or the text to the one or more recipients with a time delay of no more than 5 seconds [private conversations; different audio presented to different users -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087].
Regarding claim 12, the combination of Smus, Florencio and Kapur teaches generating the one or more utterances comprising the audio or the text in one or more languages, based at least in part on a preferred language of each recipient of the one or more recipients [Smus para 0060].
Regarding claim 13, the combination of Smus, Florencio and Kapur teaches the one or more utterances comprising the audio or the text in one or more communication styles or formats, based at least in part on a preferred communication style or format of each recipient of the one or more recipients [Smus para 0060].
Regarding claim 14, the combination of Smus, Florencio and Kapur teaches using a display or a speaker to communicate the text or the audio to the one or more recipients [Smus at Figure 5A-5C; para 0065].
Regarding claim 15, the combination of Smus, Florencio and Kapur teaches the one or more sensors comprise at least one of a radio beacon, a camera, or a radar sensor [Smus’ gaze interaction module with a camera designed to determine eye and head movement of the user -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087].
Regarding claim 16, the combination of Smus, Florencio and Kapur teaches the non-audible speech comprises silent speech [Kapur para 0008; 0107; 0138].
Regarding claim 17, the combination of Smus, Florencio and Kapur teaches the non-audible speech comprises non-audible murmur [Kapur para 0008; 0107; 0138].
Regarding claim 18, the combination of Smus, Florencio and Kapur teaches using a radio frequency (RF) sensing device coupled to a head of the user to collect the data associated with the non-audible speech of the user [Smus at para 0056-0057; 0105; Kapur Fig 1; para 0008; 0025; 0045; 0047; 0068; 0070-0079; 0105-0107; 0129; 0131-0139; 0197].
Regarding claim 19, the combination of Smus, Florencio and Kapur teaches data associated with the non-audible speech of the user comprises RF signal data associated with movement of one or more speech articulators of the user [Kapur Fig 1; para 0008; 0025; 0045; 0047; 0068; 0070-0079; 0105-0107; 0129; 0131-0139; 0197].
Regarding claim 20, the combination of Smus, Florencio and Kapur teaches the RF sensing device comprises one or more antennas [Kapur Fig 1; para 0008; 0025; 0045; 0047; 0068; 0070-0079; 0105-0107; 0129; 0131-0139; 0197].
Regarding claim 21, the combination of Smus, Florencio and Kapur teaches the RF sensing device has a headphone form factor [Smus at para 0056-0057; 0105; Kapur Fig 1; para 0008; 0025; 0045; 0047; 0068; 0070-0079; 0105-0107; 0129; 0131-0139; 0197].
Regarding claim 22, the combination of Smus, Florencio and Kapur teaches the RF sensing device is coupled to the head of the user absent of contact with a face of the user, which face comprises a mouth, lip, chin, jaw or cheek of the user [Kapur Fig 1; Fig 4-6; Fig 10; para 0008; 0025; 0045; 0047; 0068; 0070-0079; 0105-0107; 0129; 0131-0139; 0197].
Regarding claim 23, the combination of Smus, Florencio and Kapur teaches the RF sensing device is coupled to the head of the user by being supported on ears of the user [Smus at para 0056-0057; 0105; Kapur Fig 1; para 0008; 0025; 0045; 0047; 0068; 0070-0079; 0105-0107; 0129; 0131-0139; 0197].
Regarding claim 24, the combination of Smus, Florencio and Kapur teaches one or more speech articulators include a lip, tongue, jaw, larynx or vocal tract of the user [Kapur Fig 1; Fig 4-6; Fig 10; para 0008; 0025; 0045; 0047; 0068; 0070-0079; 0105-0107; 0129; 0131-0139; 0197].
Regarding claim 25, the combination of Smus, Florencio and Kapur teaches the non-audible speech comprises continuous speech [Smus gaze interaction module; gaze directed to user 102b and not the entire group; private conversations; different audio presented to different users -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087].
Regarding claim 27, the combination of Smus, Florencio and Kapur teaches in (a), the at least one position or motion characteristic is associated with the eye or the head of the user and is indicative of the user's attention towards (i) the one or more recipients or (ii) one or more devices associated with or worn by the one or more recipients [Smus’ gaze interaction module with a camera designed to determine eye and head movement of the user -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087; para 0091 -- The process 600 can include, based on evaluating the gaze direction of the first user, transmitting different audio to the at least two remote users (660). For instance, based on evaluating the gaze direction of the user 102a, the device 104 can transmit different audio to the device 160 and the device 170 during the communication session].
Regarding claim 28, the combination of Smus, Florencio and Kapur teaches in (b) the sensor data is processed to identify the one or more recipients whom the user intends to silently communicate with in the physical environment [Smus’ gaze interaction module with a camera designed to determine eye and head movement of the user -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087; para 0091 -- The process 600 can include, based on evaluating the gaze direction of the first user, transmitting different audio to the at least two remote users (660). For instance, based on evaluating the gaze direction of the user 102a, the device 104 can transmit different audio to the device 160 and the device 170 during the communication session].
Regarding claim 29, the combination of Smus, Florencio and Kapur teaches processing of the data associated with the non-audible speech of the user is performed on a remote computing system [Florencio para 0083-0090].
Regarding claim 30, the combination of Smus, Florencio and Kapur teaches processing of the data associated with the non-audible speech of the user is performed on a cloud computing system [Florencio para 0083-0090].
Regarding claim 31, the combination of Smus, Florencio and Kapur teaches transmitting the one or more utterances to the one or more recipients with a time delay of no more than 30 seconds [private conversations; different audio presented to different users -- para 0032; 0035-0041; 0050; 0055-0057; 0080-0083; 0087].
Regarding claim 32, the combination of Smus, Florencio and Kapur teaches the sensors are on a device worn by the user [Kapur para 0047-0050].
Response to Arguments
Applicant’s arguments with respect to claims 2-7, 10-25, and 27-31 have been considered but they are not persuasive.
Applicant argues Smus is silent about "using one or more sensors to track at least one position or motion characteristic of the user thereby generating sensor data, wherein the at least one position or motion characteristic is associated with an eye or a head of the user and indicative of the user's attention towards the one or more persons, wherein the one or more persons are in physical and visual proximity to the user within the same physical environment". Applicant argues Florencio does not teach or suggest "using one or more sensors to track at least one position or motion characteristic of the user thereby generating sensor data, wherein the at least one position or motion characteristic is associated with an eye or a head of the user and indicative of the user's attention towards the one or more persons, wherein the one or more persons are in physical and visual proximity to the user within the same physical environment". In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Applicant argues modifying Smus with Florencio would lead to frustration of purpose and render the system in Smus inoperable for its intended purpose. The Examiner respectfully disagrees. Florencio provides communications of both public and private speech in a multiparty environment comprising both local and remote participants [para 0026 -- local parties and remote parties]; provides for selection of target parties [target audience 124] to be the recipients of private communications [para 0031]; allows local members to transmit private messages to other local members without having to lean over to convey private vocalizations to a neighbor, and without fear of offending others or violating some manner of etiquette or protocol [Fig 3A; para 0034-0038]; and also allows for transmission of messages between remote and local parties [para 0038]. The teachings of Florencio would expand the capabilities of Smus: they would not only allow direct communications with remote communication participants but would also provide for direct private conversations between any and all participants, local or remote. Thus, one having ordinary skill in the art before the effective filing date of the claimed invention would have recognized the advantages of implementing private messaging in a multiparty environment of local and remote participants, as suggested by Florencio, for the purpose of providing meeting participants the ability to communicate without offending others, violating etiquette or protocol, or disturbing other meeting participants, as specifically suggested and taught by Florencio.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANGELA A ARMSTRONG whose telephone number is (571)272-7598. The examiner can normally be reached M,T,TH,F 11:30-8:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ANGELA A. ARMSTRONG
Primary Examiner
Art Unit 2659
/ANGELA A ARMSTRONG/Primary Examiner, Art Unit 2659