DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on 03/02/2026 has been entered.
Response to Arguments/Amendments
3. With respect to the rejection under 35 U.S.C. § 101, Applicant indicates on page 1 of the Remarks that “Applicant respectfully submits that claims 1-23 as amended, are directed to eligible subject matter and therefore respectfully requests withdrawal of the present rejection under 35 U.S.C. § 101.” The Applicant does not indicate why the amended claims are directed to eligible subject matter.
In response, Examiner respectfully notes that the amended claims do not qualify as eligible subject matter under 35 U.S.C. 101. Please see the “Claim Rejections - 35 USC § 101” section below for further detail.
With respect to Allowable Subject Matter, the amendments to independent claims 1, 10, and 17 do not incorporate the limitation(s) previously indicated as allowable over the prior art. Thus, the indication of Allowable Subject Matter has been withdrawn.
Claim Rejections - 35 USC § 101
4. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
5. Claims 1-2, 6-15 and 17-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim 1 recites
“1. (Currently Amended) A computer-implemented method, comprising:
obtaining, by a participant computing device comprising one or more processor devices, sensor data from one or more sensors of the participant computing device, wherein the one or more sensors comprise an inertial measurement unit (IMU), and wherein the participant computing device and one or more other participant computing devices are connected to a teleconference orchestrated by a teleconference computing system;
receiving, by the participant computing device, IMU data comprising one or more measurements of one or more movements of another participant computing device captured by at least one of the one or more other participant computing devices;
processing, by the participant computing device, the IMU data with a machine-learned speaking intent model to obtain a speaking intent output indicating whether pre-configured speaking intent gesture has been detected by the IMU;
based at least in part on the sensor data and the speaking intent output, determining, by the participant computing device, that a participant associated with the participant computing device has performed the pre-configured speaking intent gesture; and
providing, by the participant computing device, information indicating that the participant has performed the pre-configured speaking intent gesture that is associated with an intent to speak to one or more of:
the teleconference computing system; or
at least one of the one or more other participant computing devices.”
Claim 10 recites
“10. (Currently Amended) A participant computing device, comprising:
one or more processors;
one or more sensors;
one or more non-transitory computer-readable media that store instructions that, when executed by the one or more processors, cause the participant computing device to perform operations, the operations comprising:
connecting to a teleconference orchestrated by a teleconference computing system, wherein the participant computing device is associated with a participant of the teleconference;
receiving information indicating that a second participant of the teleconference has performed a pre-configured speaking intent gesture that is associated with an intent to speak, wherein the second participant is associated with a second participant computing device that is connected to the teleconference, wherein the second participant computing device comprises an inertial measurement unit (IMU) that is configured to generate IMU data comprising one or more measurements of one or more movements of the second participant computing device, and wherein the information indicating that the second participant of the teleconference has performed the pre-configured speaking intent gesture that is associated with an intent to speak is determined based at least in part on:
a machine-learned speaking intent output indicating that the pre-configured speaking intent gesture has been detected by the IMU; and
a machine-learned speaking intent output indicating that the second participant of the teleconference has performed the pre-configured speaking intent gesture that is associated with an intent to speak, wherein the machine-learned speaking intent output is derived from sensor data captured at the second participant computing device, wherein the sensor data comprises the IMU data; and
responsive to the information indicating that the second participant has performed the pre-configured speaking intent gesture that is associated with an intent to speak, performing one or more actions to indicate, to the participant associated with the participant computing device, that some other participant of the teleconference intends to speak.”
Claim 17 recites
“17. (Currently Amended) One or more non-transitory computer-readable media that store instructions that, when executed by one or more processors of a teleconference computing system, cause the teleconference computing system to perform operations, the operations comprising:
receiving speaking intent information from a participant computing device of a plurality of participant computing devices connected to a teleconference orchestrated by the teleconference computing system, wherein the participant computing device comprises one or more sensors comprising an inertial measurement unit (IMU) that is configured to generate IMU data comprising one or more measurements of one or more movements of the participant computing device, and wherein the speaking intent information indicates that a participant associated with the participant computing device has performed a pre-configured speaking intent gesture that is associated with an intent to speak;
making an evaluation of one or more indication criteria based on the speaking intent information, wherein making the evaluation of the one or more indication criteria comprises:
receiving he IMU data; and
processing the IMU data with a machine-learned speaking intent model to obtain a speaking intent output indicating whether the pre-configured speaking intent gesture has been detected by the IMU; and
based on the evaluation, instructing a second participant computing device of the plurality of participant computing devices connected to the teleconference to perform one or more actions to indicate, to a second participant associated with the second participant computing device, that some other participant of the teleconference has performed the pre-configured speaking intent gesture associated with an intent intends.”
The limitations recited in the claims, as drafted, cover a mental process. More specifically, the underlying abstract idea revolves around what happens once a human determines that a participant of the meeting intends to talk: the human could then send a message to another participant to let that participant know that the first participant intends to talk.
The judicial exception is not integrated into a practical application. In particular, the claims recite the additional limitations of a computing device, a teleconference computing system, a processor, a sensor, an IMU, non-transitory computer-readable media, and a machine-learned speaking intent model. The additional element(s) or combination of elements in the claim(s) other than the abstract idea per se amount(s) to no more than (i) mere instructions to implement the idea on a computer, and/or (ii) recitation of generic computer structure that serves to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the pertinent industry. Viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim(s) amount to significantly more than the abstract idea itself. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. There is further no improvement to the computing device other than letting one participant in the conference know that another participant intends to talk. The mere recitation of a computing device, a teleconference computing system, a processor, a sensor, non-transitory computer-readable media, and/or the like is akin to adding the words “apply it” and/or “use it” with a computer in conjunction with the abstract idea.
The claims recite “a machine-learned speaking intent model,” which provides nothing more than an instruction to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Here, the judicial exception of indicating whether a participant intends to speak is performed using the “machine-learned speaking intent model.” The machine-learned speaking intent model is used to generally apply the abstract idea without placing any limits on how the machine-learned speaking intent model functions. Rather, these limitations only recite the outcome of indicating “an intent to speak” and do not include any details about how that outcome is accomplished. See MPEP 2106.05(f).
The courts have recognized the following computer functions to be well-understood, routine, and conventional: receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362; use of a computer or other machinery in its ordinary capacity for economic or other tasks, Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit); and selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis, and display, Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016). Moreover, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983.
Paragraph [0004] of the specification discloses: “[0004] One example aspect of the present disclosure is directed to a participant computing device. The participant computing device includes one or more processors, one or more sensors, and one or more non-transitory computer-readable media that store instructions that, when executed by the one or more processors, cause the participant computing device to perform operations. The operations include obtaining sensor data from the one or more sensors of the participant computing device, wherein the participant computing device and one or more other participant computing devices are connected to a teleconference orchestrated by a teleconference computing system. The operations include, based at least in part on the sensor data, determining that a participant associated with the participant computing device intends to speak to other participants of the teleconference. The operations include providing information indicating that the participant intends to speak to one or more of the teleconference computing system or at least one of the one or more other participant computing devices.” As filed, the specification describes the computer as a general-purpose computer that is merely used to apply the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The dependent claims do not remedy the issues noted above. More specifically, Claim 2 recites a list of elements. As stated previously, the mere recitation of a computing device, a teleconference computing system, a processor, a sensor, non-transitory computer-readable media, and/or the like is akin to adding the words “apply it” and/or “use it” with a computer in conjunction with the abstract idea. There is no additional limitation presented. Claim 6 recites receiving the information of the remote participant and determining whether the participant intends to speak. This reads on a human seeing the remote participant (i.e., via a camera) or listening to the remote participant (i.e., via a loudspeaker) and determining that the remote participant intends to speak. There is no additional limitation presented. Claims 7-9 recite how to alert that the participant intends to speak. Sending a haptic alert or giving somebody a call to alert them of something is a mental process. There is no additional limitation presented. Claims 11-13 recite features similar to those of Claims 7-9, respectively. Claim 14 recites features similar to those of Claim 1. Claim 15 recites features similar to those of Claim 2. Claim 18 recites indication criteria for determining whether the participant intends to speak. Relying on past actions or on a degree of certainty is a mental process. There is no additional limitation presented. Claim 19 recites a degree of speaking priority. Comparing degrees of speaking priority is a mental process. There is no additional limitation presented. Claim 20 recites comparing a priority metric between the participant devices. Comparing a priority metric between the participant devices is a mental process. There is no additional limitation presented. Claim 21 recites features similar to those of Claims 8 and 12. Claim 22 recites features similar to those of Claims 9 and 13. Claim 23 recites a pattern of movement of the device toward the participant's face in order to detect that the user wants to talk. There is no additional limitation presented. Claim 24 recites features similar to those of Claim 23.
For at least the reasons provided supra, claims 1-2, 6-15, and 17-24 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 112
6. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
7. Claims 1-2, 6-9 and 24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites
“obtaining, by a participant computing device comprising one or more processor devices, sensor data from one or more sensors of the participant computing device, wherein the one or more sensors comprise an inertial measurement unit (IMU), and wherein the participant computing device and one or more other participant computing devices are connected to a teleconference orchestrated by a teleconference computing system;” In this limitation, the sensor data is obtained from one or more sensors of the participant computing device, and the sensor data includes IMU data.
Claim 1 recites
“receiving, by the participant computing device, IMU data comprising one or more measurements of one or more movements of another participant computing device captured by at least one of the one or more other participant computing devices;” In this limitation, IMU data is received from another participant computing device.
Claim 1 recites
“processing, by the participant computing device, the IMU data with a machine-learned speaking intent model to obtain a speaking intent output indicating whether pre-configured speaking intent gesture has been detected by the IMU;” In this limitation, the IMU data is processed.
Thus, it is unclear which IMU data is processed (e.g., the IMU data obtained from the participant computing device or the IMU data received from the other participant computing devices).
For compact prosecution, the Examiner interprets the processed IMU data as being the IMU data obtained from the participant computing device, because in this claim the method detects whether the participant intends to speak and sends an alert to the other participant device(s).
Claims 2, 6-9, and 24 depend directly or indirectly on Claim 1. Thus, Claims 2, 6-9, and 24 are rejected on the same ground as Claim 1.
Claim Rejections - 35 USC § 103
8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9. Claims 1-2, 6-15, 17, and 21-24 are rejected under 35 U.S.C. 103 as being unpatentable over Desai et al. (US 2022/0398064 A1) in view of Eubank et al. (US 2021/0397407 A1).
With respect to Claim 1, Desai et al. disclose
A computer-implemented method, comprising:
obtaining, by a participant computing device comprising one or more processor devices, sensor data from one or more sensors of the participant computing device (Desai [0010] While the microphone is muted, the controller monitors an image stream received from the image capturing device for movements by the local participant in the communication session, [0019] Communication device 100 is managed by controller 101, which is communicatively coupled to image capturing device 102 and to at least one user interface device 103 that includes at least one microphone 104), wherein the one or more sensors comprise an inertial measurement unit (IMU), and wherein the participant computing device and one or more other participant computing devices are connected to a teleconference orchestrated by a teleconference computing system (Desai et al. Fig. 2 Communication device 100a and Communication device 100c are connected to a teleconference computing system);
receiving, by the participant computing device, IMU data comprising one or more measurements of one or more movements of another participant computing device captured by at least one of the one or more other participant computing devices (Desai et al. [0034] User interface 108a receives image streams 221-223 from respective image capturing devices 102 of each communication device 100a-100c. Each device also has its own camera.);
processing, by the participant computing device, the IMU data with a machine-learned speaking intent model to obtain a speaking intent output indicating whether pre-configured speaking intent gesture has been detected by the IMU (Desai et al. [0010] The controller autonomously generates a prompt to unmute the microphone in response to determining that the microphone is muted while identifying at least one of a speaking movement of a mouth of the local participant or a gesture by the local participant associated with unmuting the microphone, [0019] Controller 101 can use image recognition engine 109 to characterize the movement. As an example, image recognition engine 109 can be neural net that is trained to recognize anatomical features including facial features and hand movements. As another example, image recognition engine 109 can have a library objects 111 of objects that are used to compare to images. As an additional example, image recognition engine 109 can perform a two-dimensional correlation with library objects 111);
based at least in part on the sensor data and the speaking intent output, determining, by the participant computing device, that a participant associated with the participant computing device has performed the pre-configured speaking intent gesture (Desai et al. [0011] the electronic device can determine, based on visually monitoring movements of the local participant during the communication session, that the local participant is attempting to speak to other remote participants who are using respective second communication devices, [0032] Controller 101 autonomously generates a prompt to unmute the at least one microphone 104 in response to determining that the at least one microphone 104 is muted while identifying at least one of a speaking movement of a mouth of the local participant to speak or a gesture by the local participant that correlates with the participant wanting to speak, [0036-0040] using predefined movement data 122 stored in device memory 106 of non-presenter communication device 100a to compare to the image live stream in order to determine whether the non-presenter attempt to speak); and
providing, by the participant computing device, information indicating that the participant has performed the pre-configured speaking intent gesture that is associated with an intent to speak to one or more of:
the teleconference computing system; or
at least one of the one or more other participant computing devices (Desai et al. [0014] Having a visually triggered indication, a participant can more intuitively speak or gesture to trigger an automatic alert to the presenting participant. One or more types of alerts can be triggered, Fig. 2 element 234 Participant ABC is raising hand, [0050] method 400 further includes transmitting a raised hand indication to at least one second electronic device to alert the associated second participant that the local participant is desirous of speaking or has started speaking.)
Desai et al. teach all the limitations of Claim 1 except the inertial measurement unit. Desai et al. use a camera to capture images of a participant in the teleconference, monitor the image stream from the camera, compare pre-defined movement data with the live image stream, detect that the participant wants to speak, and send an alert to the other device corresponding to the other participant. However, Eubank et al. teach using an inertial measurement unit (IMU) to detect movement of the participant and determining whether the participant intends to engage in the conversation based on the user's movement.
Eubank et al. teach
wherein the one or more sensors comprise an inertial measurement unit (IMU) (Eubank et al. [0005] In one aspect, the system may determine that the user intends to engage in the conversation based a gesture that is performed by the user. For instance, the system may determine, using several microphones (e.g., of a microphone array), a direction of arrival (DoA) of the speech. The system may determine that the user has performed a gesture that indicates that the user's attention is directed towards the DoA. For example, the user may gesture by moving towards the DoA or may gesture by turning towards the DoA. This determination may be based on motion data that indicates movement of the user, which is received from an inertial measurement unit (IMR) sensor. In some aspects, the system may determine that the user intends to engage in the conversation based on whether the user is looking towards the DoA.)
receiving, by the participant computing device, IMU data comprising one or more measurements of one or more movements of another participant computing device (Eubank et al. [0005] In one aspect, the system may determine that the user intends to engage in the conversation based a gesture that is performed by the user. For instance, the system may determine, using several microphones (e.g., of a microphone array), a direction of arrival (DoA) of the speech. The system may determine that the user has performed a gesture that indicates that the user's attention is directed towards the DoA. For example, the user may gesture by moving towards the DoA or may gesture by turning towards the DoA. This determination may be based on motion data that indicates movement of the user, which is received from an inertial measurement unit (IMR) sensor. In some aspects, the system may determine that the user intends to engage in the conversation based on whether the user is looking towards the DoA, [0095] at least some of the operations of described herein (e.g., in processes 60, 70, and/or 80 of FIGS. 7-9, respectively, may be performed by a machine learning algorithm that is configured to detect speech, determine whether the user intends to engage in a conversation based on sensor data.)
Desai et al. and Eubank et al. are analogous art because they are from a similar field of endeavor in signal processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the steps of determining whether a local participant in a teleconference desires to speak, as taught by Desai et al., with the teaching of the inertial measurement unit of Eubank et al., for the benefit of determining whether the user intends to engage in the conversation based on the IMU data (Eubank et al. [0005] In one aspect, the system may determine that the user intends to engage in the conversation based a gesture that is performed by the user. For instance, the system may determine, using several microphones (e.g., of a microphone array), a direction of arrival (DoA) of the speech. The system may determine that the user has performed a gesture that indicates that the user's attention is directed towards the DoA. For example, the user may gesture by moving towards the DoA or may gesture by turning towards the DoA. This determination may be based on motion data that indicates movement of the user, which is received from an inertial measurement unit (IMR) sensor. In some aspects, the system may determine that the user intends to engage in the conversation based on whether the user is looking towards the DoA.)
With respect to Claim 2, Desai et al. in view of Eubank et al. teach
wherein the one or more sensors of the participant computing device comprise at least one of:
a camera (Desai et al. [0044] Non-presenting user interface 108a includes camera);
a microphone;
a button;
a touch surface;
a gyroscope; or
an accelerometer.
With respect to Claim 6, Desai et al. disclose
wherein the method further comprises:
receiving, by the participant computing device, information indicating that a second participant associated with one of the one or more other participant computing devices intends to speak (Desai et al. Fig. 2 element 234 Participant ABC is raising hand, [0035] User interface 108c includes screen sharing window 225 that could be controlled by any of communication device 100a, communication device 100b or communication device 100c. Alert window 234 can provide indications of participants that are attempting to speak, allowing communication device 100c to assist communication device 100b by having host presenter 203 inform participants who are inadvertently muted. Desai et al. disclose detecting that a participant wants to speak and sending an alert to another device corresponding to another participant.); and
responsive to the information indicating that the second participant intends to speak, performing, by the participant computing device, one or more actions to indicate, to the participant associated with the participant computing device, that some other participant of the teleconference intends to speak (Desai et al. Fig. 2 element 234 Participant ABC is raising hand, [0035] User interface 108c includes screen sharing window 225 that could be controlled by any of communication device 100a, communication device 100b or communication device 100c. Alert window 234 can provide indications of participants that are attempting to speak, allowing communication device 100c to assist communication device 100b by having host presenter 203 inform participants who are inadvertently muted).
With respect to Claim 7, Desai et al. in view of Eubank et al. teach
wherein performing the one or more actions comprises causing, by the participant computing device, playback of audio with an audio output device associated with the participant computing device, wherein the audio indicates to the participant that some other participant intends to speak (Desai et al. [0050] method 400 further includes transmitting a raised hand indication to at least one second electronic device to alert the associated second participant that the local participant is desirous of speaking or has started speaking (block 427). In one or more embodiments, method 400 includes presenting the alert that the at least one microphone is muted. In different embodiments, the alert is presented as one or more of an audible output, a visual output, and a haptic output via an integrated output device of the electronic device or on the external output device.)
With respect to Claim 8, Desai et al. in view of Eubank et al. teach
wherein performing the one or more actions comprises generating, by the participant computing device, a haptic feedback signal for one or more haptic feedback devices associated with the participant computing device, wherein the haptic feedback signal indicates that some other participant intends to speak (Desai et al. [0050] method 400 further includes transmitting a raised hand indication to at least one second electronic device to alert the associated second participant that the local participant is desirous of speaking or has started speaking (block 427). In one or more embodiments, method 400 includes presenting the alert that the at least one microphone is muted. In different embodiments, the alert is presented as one or more of an audible output, a visual output, and a haptic output via an integrated output device of the electronic device or on the external output device.)
With respect to Claim 9, Desai et al. in view of Eubank et al. teach
wherein performing the one or more actions comprises making, by the participant computing device, a modification to an interface of an application that facilitates participation in the teleconference, wherein the interface of the application is displayed within a display device associated with the participant computing device, and wherein the modification indicates that some other participant intends to speak (Desai et al. [0043] Detection of an attempt to speak by non-presenting participant 201 can automatically resize screen share box 309 to present an alert or can position an alert within alert box 312 on top of screen share box 309.)
With respect to Claim 10, Desai et al. disclose
A participant computing device, comprising:
one or more processors (Desai et al. [0022] processor, Fig. 2 Second Communication device 100b; the second communication device is a device of a presenter, the presenter device is the claimed participant computing device, and each communication device in the system of Desai et al. has the same elements as the other devices);
one or more sensors (Desai [0010] While the microphone is muted, the controller monitors an image stream received from the image capturing device for movements by the local participant in the communication session, [0019] Communication device 100 is managed by controller 101, which is communicatively coupled to image capturing device 102 and to at least one user interface device 103 that includes at least one microphone 104);
one or more non-transitory computer-readable media that store instructions that, when executed by the one or more processors (Desai et al. [0022] processor, [0028] non-transitory computer program), cause the participant computing device to perform operations, the operations comprising:
connecting to a teleconference orchestrated by a teleconference computing system, wherein the participant computing device is associated with a participant of the teleconference (Desai et al. Fig. 2 Communication device 100a and Communication device 100c are connected to a teleconference computing system, [0033] First, second, and third communication devices 100a-100c are communicatively coupled via network 204 during a video communication session);
receiving information indicating that a second participant of the teleconference has performed a pre-configured speaking intent gesture that is associated with an intent to speak (Desai et al. [0050] method 400 further includes transmitting a raised hand indication to at least one second electronic device to alert the associated second participant that the local participant is desirous of speaking or has started speaking), wherein the second participant is associated with a second participant computing device that is connected to the teleconference, wherein the second participant computing device comprises an inertial measurement unit (IMU) that is configured to generate IMU data comprising one or more measurements of one or more movements of the second participant computing device, and wherein the information indicating that the second participant of the teleconference has performed the pre-configured speaking intent gesture that is associated with an intent to speak is determined based at least in part on (Desai et al. [0011] the electronic device can determine, based on visually monitoring movements of the local participant during the communication session, that the local participant is attempting to speak to other remote participants who are using respective second communication devices, [0032] Controller 101 autonomously generates a prompt to unmute the at least one microphone 104 in response to determining that the at least one microphone 104 is muted while identifying at least one of a speaking movement of a mouth of the local participant to speak or a gesture by the local participant that correlates with the participant wanting to speak, [0036-0040] using predefined movement data 122 stored in device memory 106 of non-presenter communication device 100a to compare to the image live stream in order to determine whether the non-presenter attempt to speak):
a machine-learned speaking intent output indicating that the pre-configured speaking intent gesture has been detected by the IMU (Desai et al. [0010] The controller autonomously generates a prompt to unmute the microphone in response to determining that the microphone is muted while identifying at least one of a speaking movement of a mouth of the local participant or a gesture by the local participant associated with unmuting the microphone, [0019] Controller 101 can use image recognition engine 109 to characterize the movement. As an example, image recognition engine 109 can be neural net that is trained to recognize anatomical features including facial features and hand movements. As another example, image recognition engine 109 can have a library objects 111 of objects that are used to compare to images. As an additional example, image recognition engine 109 can perform a two-dimensional correlation with library objects 111); and
a machine-learned speaking intent output indicating that the second participant of the teleconference has performed the pre-configured speaking intent gesture that is associated with an intent to speak, wherein the machine-learned speaking intent output is derived from sensor data captured at the second participant computing device (Desai et al. [0034] User interface 108a receives image streams 221-223 from respective image capturing devices 102 of each communication device 100a-100c, Fig. 2B, [0019] using trained neural network to recognize and characterize facial features and hand movements); and
responsive to the information indicating that the second participant has performed the pre-configured speaking intent gesture that is associated with an intent to speak, performing one or more actions to indicate, to the participant associated with the participant computing device, that some other participant of the teleconference intends to speak (Fig. 2 element 234 Participant ABC is raising hand, [0050] method 400 further includes transmitting a raised hand indication to at least one second electronic device to alert the associated second participant that the local participant is desirous of speaking or has started speaking.)
Desai et al. teach all the limitations of Claim 10 except the inertial measurement unit. Desai et al. use a camera to capture images of a participant in the teleconference, monitor the image stream from the camera, compare pre-defined movement data with the live image stream, detect that the participant wants to speak, and send an alert to the other device corresponding to the other participant. However, Eubank et al. teach using an inertial measurement unit (IMU) to detect movement of the participant and determining whether the participant intends to engage in the conversation based on the user's movement.
Eubank et al. teach
wherein the sensor data comprises the IMU data (Eubank et al. [0005] In one aspect, the system may determine that the user intends to engage in the conversation based a gesture that is performed by the user. For instance, the system may determine, using several microphones (e.g., of a microphone array), a direction of arrival (DoA) of the speech. The system may determine that the user has performed a gesture that indicates that the user's attention is directed towards the DoA. For example, the user may gesture by moving towards the DoA or may gesture by turning towards the DoA. This determination may be based on motion data that indicates movement of the user, which is received from an inertial measurement unit (IMR) sensor. In some aspects, the system may determine that the user intends to engage in the conversation based on whether the user is looking towards the DoA, [0095] at least some of the operations of described herein (e.g., in processes 60, 70, and/or 80 of FIGS. 7-9, respectively, may be performed by a machine learning algorithm that is configured to detect speech, determine whether the user intends to engage in a conversation based on sensor data.)
Desai et al. and Eubank et al. are analogous art because they are from a similar field of endeavor in signal processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the steps of determining whether a local participant in a teleconference desires to speak, as taught by Desai et al., with the teaching of the inertial measurement unit of Eubank et al., for the benefit of determining whether the user intends to engage in the conversation based on the IMU data (Eubank et al. [0005] In one aspect, the system may determine that the user intends to engage in the conversation based a gesture that is performed by the user. For instance, the system may determine, using several microphones (e.g., of a microphone array), a direction of arrival (DoA) of the speech. The system may determine that the user has performed a gesture that indicates that the user's attention is directed towards the DoA. For example, the user may gesture by moving towards the DoA or may gesture by turning towards the DoA. This determination may be based on motion data that indicates movement of the user, which is received from an inertial measurement unit (IMR) sensor. In some aspects, the system may determine that the user intends to engage in the conversation based on whether the user is looking towards the DoA.)
With respect to Claim 11, Desai et al. in view of Eubank et al. teach
wherein performing the one or more actions comprises causing playback of audio with an audio output device associated with the participant computing device, wherein the audio indicates to the participant that some other participant intends to speak (Desai et al. [0050] method 400 further includes transmitting a raised hand indication to at least one second electronic device to alert the associated second participant that the local participant is desirous of speaking or has started speaking (block 427). In one or more embodiments, method 400 includes presenting the alert that the at least one microphone is muted. In different embodiments, the alert is presented as one or more of an audible output, a visual output, and a haptic output via an integrated output device of the electronic device or on the external output device.)
With respect to Claim 12, Desai et al. in view of Eubank et al. teach
wherein performing the one or more actions comprises generating a haptic feedback signal for one or more haptic feedback devices associated with the participant computing device, wherein the haptic feedback signal indicates that some other participant intends to speak (Desai et al. [0050] method 400 further includes transmitting a raised hand indication to at least one second electronic device to alert the associated second participant that the local participant is desirous of speaking or has started speaking (block 427). In one or more embodiments, method 400 includes presenting the alert that the at least one microphone is muted. In different embodiments, the alert is presented as one or more of an audible output, a visual output, and a haptic output via an integrated output device of the electronic device or on the external output device.)
With respect to Claim 13, Desai et al. in view of Eubank et al. teach
wherein performing the one or more actions comprises making a modification to an interface of an application that facilitates participation in the teleconference, wherein the interface of the application is displayed within a display device associated with the participant computing device, and wherein the modification indicates that some other participant intends to speak (Desai et al. [0043] Detection of an attempt to speak by non-presenting participant 201 can automatically resize screen share box 309 to present an alert or can position an alert within alert box 312 on top of screen share box 309.)
With respect to Claim 14, Desai et al. in view of Eubank et al. teach
wherein the operations further comprise:
obtaining second sensor data from the one or more sensors of the participant computing device (Desai [0010] While the microphone is muted, the controller monitors an image stream received from the image capturing device for movements by the local participant in the communication session, [0019] Communication device 100 is managed by controller 101, which is communicatively coupled to image capturing device 102 and to at least one user interface device 103 that includes at least one microphone 104);
based at least in part on the second sensor data, determining that the participant associated with the participant computing device intends to speak to the other participants of the teleconference (Desai et al. [0011] the electronic device can determine, based on visually monitoring movements of the local participant during the communication session, that the local participant is attempting to speak to other remote participants who are using respective second communication devices, [0032] Controller 101 autonomously generates a prompt to unmute the at least one microphone 104 in response to determining that the at least one microphone 104 is muted while identifying at least one of a speaking movement of a mouth of the local participant to speak or a gesture by the local participant that correlates with the participant wanting to speak, [0036-0040] using predefined movement data 122 stored in device memory 106 of non-presenter communication device 100a to compare to the image live stream in order to determine whether the non-presenter attempt to speak); and
providing information indicating that the participant intends to speak to one or more of:
the teleconference computing system; or
one or more other participant computing devices associated with the other participants of the teleconference (Desai et al. [0014] Having a visually triggered indication, a participant can more intuitively speak or gesture to trigger an automatic alert to the presenting participant. One or more types of alerts can be triggered, Fig. 2 element 234 Participant ABC is raising hand, [0050] method 400 further includes transmitting a raised hand indication to at least one second electronic device to alert the associated second participant that the local participant is desirous of speaking or has started speaking.)
With respect to Claim 15, Desai et al. in view of Eubank et al. teach
wherein the one or more sensors of the participant computing device comprise at least one of:
a camera (Desai et al. [0044] Non-presenting user interface 108a includes camera);
a microphone;
a button;
a touch surface;
a gyroscope; or
an accelerometer.
With respect to Claim 17, Desai et al. disclose
One or more non-transitory computer-readable media that store instructions that, when executed by one or more processors of a teleconference computing system, cause the teleconference computing system to perform operations (Desai et al. [0022] processor, [0028] non-transitory computer program), the operations comprising:
receiving speaking intent information from a participant computing device of a plurality of participant computing devices connected to a teleconference orchestrated by the teleconference computing system, wherein the participant computing device comprises one or more sensors comprising an inertial measurement unit (IMU) that is configured to generate IMU data comprising one or more measurements of one or more movements of the participant computing device, and wherein the speaking intent information indicates that a participant associated with the participant computing device has performed a pre-configured speaking intent gesture that is associated with an intent to speak (Desai [0010] While the microphone is muted, the controller monitors an image stream received from the image capturing device for movements by the local participant in the communication session, [0019] Communication device 100 is managed by controller 101, which is communicatively coupled to image capturing device 102 and to at least one user interface device 103 that includes at least one microphone 104, [0050] method 400 further includes transmitting a raised hand indication to at least one second electronic device to alert the associated second participant that the local participant is desirous of speaking or has started speaking);
making an evaluation of one or more indication criteria based on the speaking intent information, wherein making the evaluation of the one or more indication criteria comprises:
receiving the IMU data; and
processing the IMU data with a machine-learned speaking intent model to obtain a speaking intent output indicating whether the pre-configured speaking intent gesture has been detected by the IMU (Desai et al. [0010] The controller autonomously generates a prompt to unmute the microphone in response to determining that the microphone is muted while identifying at least one of a speaking movement of a mouth of the local participant or a gesture by the local participant associated with unmuting the microphone, [0019] Controller 101 can use image recognition engine 109 to characterize the movement. As an example, image recognition engine 109 can be neural net that is trained to recognize anatomical features including facial features and hand movements. As another example, image recognition engine 109 can have a library objects 111 of objects that are used to compare to images. As an additional example, image recognition engine 109 can perform a two-dimensional correlation with library objects 111, Fig. 2B, detecting that one participant intends to speak based on the pre-defined movement data); and
based on the evaluation, instructing a second participant computing device of the plurality of participant computing devices connected to the teleconference to perform one or more actions to indicate, to a second participant associated with the second participant computing device, that some other participant of the teleconference has performed the pre-configured speaking intent gesture associated with an intent intends (Desai et al. [0014] Having a visually triggered indication, a participant can more intuitively speak or gesture to trigger an automatic alert to the presenting participant. One or more types of alerts can be triggered, Fig. 2 element 234 Participant ABC is raising hand, [0050] method 400 further includes transmitting a raised hand indication to at least one second electronic device to alert the associated second participant that the local participant is desirous of speaking or has started speaking.)
Desai et al. teach all the limitations of Claim 17 except the inertial measurement unit. Desai et al. use a camera to capture images of a participant in the teleconference, monitor the image stream from the camera, compare pre-defined movement data with the live image stream, detect that the participant wants to speak, and send an alert to the other device corresponding to the other participant. However, Eubank et al. teach using an inertial measurement unit (IMU) to detect movement of the participant and determining whether the participant intends to engage in the conversation based on the user's movement.
Eubank et al. teach
receiving the IMU data (Eubank et al. [0005] In one aspect, the system may determine that the user intends to engage in the conversation based a gesture that is performed by the user. For instance, the system may determine, using several microphones (e.g., of a microphone array), a direction of arrival (DoA) of the speech. The system may determine that the user has performed a gesture that indicates that the user's attention is directed towards the DoA. For example, the user may gesture by moving towards the DoA or may gesture by turning towards the DoA. This determination may be based on motion data that indicates movement of the user, which is received from an inertial measurement unit (IMR) sensor. In some aspects, the system may determine that the user intends to engage in the conversation based on whether the user is looking towards the DoA); and
processing the IMU data with a machine-learned speaking intent model to obtain a speaking intent output indicating whether the pre-configured speaking intent gesture has been detected by the IMU (Eubank et al. [0005] In one aspect, the system may determine that the user intends to engage in the conversation based a gesture that is performed by the user. For instance, the system may determine, using several microphones (e.g., of a microphone array), a direction of arrival (DoA) of the speech. The system may determine that the user has performed a gesture that indicates that the user's attention is directed towards the DoA. For example, the user may gesture by moving towards the DoA or may gesture by turning towards the DoA. This determination may be based on motion data that indicates movement of the user, which is received from an inertial measurement unit (IMR) sensor. In some aspects, the system may determine that the user intends to engage in the conversation based on whether the user is looking towards the DoA, [0095] at least some of the operations of described herein (e.g., in processes 60, 70, and/or 80 of FIGS. 7-9, respectively, may be performed by a machine learning algorithm that is configured to detect speech, determine whether the user intends to engage in a conversation based on sensor data.)
Desai et al. and Eubank et al. are analogous art because they are from a similar field of endeavor in signal processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the steps of determining whether a local participant in a teleconference desires to speak, as taught by Desai et al., with the teaching of the inertial measurement unit of Eubank et al., for the benefit of determining whether the user intends to engage in the conversation based on the IMU data (Eubank et al. [0005] In one aspect, the system may determine that the user intends to engage in the conversation based a gesture that is performed by the user. For instance, the system may determine, using several microphones (e.g., of a microphone array), a direction of arrival (DoA) of the speech. The system may determine that the user has performed a gesture that indicates that the user's attention is directed towards the DoA. For example, the user may gesture by moving towards the DoA or may gesture by turning towards the DoA. This determination may be based on motion data that indicates movement of the user, which is received from an inertial measurement unit (IMR) sensor. In some aspects, the system may determine that the user intends to engage in the conversation based on whether the user is looking towards the DoA.)
With respect to Claim 21, Desai et al. in view of Eubank et al. teach
wherein performing the one or more actions comprises generating a haptic feedback signal for one or more haptic feedback devices associated with the participant computing device, wherein the haptic feedback signal indicates that some other participant intends to speak (Desai et al. [0050] method 400 further includes transmitting a raised hand indication to at least one second electronic device to alert the associated second participant that the local participant is desirous of speaking or has started speaking (block 427). In one or more embodiments, method 400 includes presenting the alert that the at least one microphone is muted. In different embodiments, the alert is presented as one or more of an audible output, a visual output, and a haptic output via an integrated output device of the electronic device or on the external output device.)
With respect to Claim 22, Desai et al. in view of Eubank et al. teach
wherein performing the one or more actions comprises making a modification to an interface of an application that facilitates participation in the teleconference, wherein the interface of the application is displayed within a display device associated with the participant computing device, and wherein the modification indicates that some other participant intends to speak (Desai et al. [0043] Detection of an attempt to speak by non-presenting participant 201 can automatically resize screen share box 309 to present an alert or can position an alert within alert box 312 on top of screen share box 309.)
With respect to Claim 23, Desai et al. in view of Eubank et al. teach
wherein the pre-configured speaking intent gesture comprises a pattern of movement corresponding to movement of the participant computing device to a face of the participant (Desai et al. Fig. 2B.)
With respect to Claim 24, Desai et al. in view of Eubank et al. teach
wherein the pre-configured speaking intent gesture comprises a pattern of movement corresponding to movement of the participant computing device to a face of the participant (Desai et al. Fig. 2B.)
10. Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Desai et al. (US 2022/0398064 A1) in view of Eubank et al. (US 2021/0397407 A1) and Deng et al. (US 2023/0208898 A1).
With respect to Claim 18, Desai et al. in view of Eubank et al. teach all the limitations of claim 17 upon which Claim 18 depends. Desai et al. in view of Eubank et al. fail to explicitly teach
wherein the one or more indication criteria comprise at least one of:
a number of times that a speaking intent has been previously indicated for the participant associated with the participant computing device;
a degree of certainty associated with the speaking intent information; or
a connection quality associated with a connection of the participant computing device to the teleconference; or
a number of other participant computing devices of the plurality of participant computing devices that have also provided speaking intent information to the teleconference computing system.
However, Deng et al. teach
wherein the one or more indication criteria comprise at least one of:
a number of times that a speaking intent has been previously indicated for the participant associated with the participant computing device;
a degree of certainty associated with the speaking intent information (Deng et al. [0005] The user interface format can also include an ordered queue based on a time stamp and a system priority recommendation based on respective a participant's familiarity to the topic, historical or potential impact on the effectiveness of a meeting having a particular topic, or individual participation score. The time stamp can be based on the timing of any type of input indicating an intent to speak, e.g., a user input on an input device or a gesture captured by video camera or an audio device. The input can include, but is not limited to, video data showing a person raising a hand raise, video data showing a movement indicating a person's intent to speak, audio data containing spoken words or a vocal request to speak, etc. See paragraphs [0034, 0037-0039]); or
a connection quality associated with a connection of the participant computing device to the teleconference; or
a number of other participant computing devices of the plurality of participant computing devices that have also provided speaking intent information to the teleconference computing system.
Desai et al., Eubank et al. and Deng et al. are analogous art because they are from a similar field of endeavor, namely signal processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the steps of determining whether a local participant in a teleconference desires to speak, as taught by Desai et al., with the teaching of the inertial measurement unit as taught by Eubank et al., for the benefit of determining whether the user intends to engage in the conversation based on the IMU data, and with the teaching of the confidence level as taught by Deng et al., for the benefit of determining whether the participant intends to speak (Deng et al. [0005] The user interface format can also include an ordered queue based on a time stamp and a system priority recommendation based on respective a participant's familiarity to the topic, historical or potential impact on the effectiveness of a meeting having a particular topic, or individual participation score. The time stamp can be based on the timing of any type of input indicating an intent to speak, e.g., a user input on an input device or a gesture captured by video camera or an audio device. The input can include, but is not limited to, video data showing a person raising a hand raise, video data showing a movement indicating a person's intent to speak, audio data containing spoken words or a vocal request to speak, etc.)
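For illustration only, the sketch below models Deng et al.'s time-ordered queue with a system priority recommendation: each speaking-intent indication carries a timestamp and a recommendation score, and entries are ordered by a combined key. The field names and the weighting are assumptions of this sketch, not limitations of the reference.

```python
import heapq
from dataclasses import dataclass, field

# Illustrative only: a time-ordered queue of speaking-intent
# indications with a priority recommendation, loosely modeled on
# Deng et al. [0005]. Names and weights are assumptions.

@dataclass(order=True)
class IntentEntry:
    sort_key: float = field(init=False)       # derived ordering key
    timestamp: float = field(compare=False)   # when intent was indicated
    score: float = field(compare=False)       # recommendation score, 0-1
    device_id: str = field(compare=False)

    def __post_init__(self):
        # Earlier requests and higher recommendation scores sort first;
        # this particular weighting is illustrative only.
        self.sort_key = self.timestamp - self.score


queue = []
heapq.heappush(queue, IntentEntry(timestamp=10.0, score=0.9, device_id="A"))
heapq.heappush(queue, IntentEntry(timestamp=8.0, score=0.1, device_id="B"))
assert heapq.heappop(queue).device_id == "B"  # earlier request wins here
```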
With respect to Claim 19, Desai et al. in view of Eubank et al. teach all the limitations of Claim 17 upon which Claim 19 depends. Desai et al. in view of Eubank et al. fail to explicitly teach
wherein receiving the speaking intent information from the participant computing device further comprises receiving additional speaking intent information from a third participant computing device of the plurality of participant computing devices, wherein the speaking intent information indicates that a third participant associated with the third participant computing device intends to speak; and
wherein the one or more indication criteria comprises a priority criteria indicative of a degree of speaking priority for the participant computing device and the third participant computing device.
However, Deng et al. teach
wherein receiving the speaking intent information from the participant computing device further comprises receiving additional speaking intent information from a third participant computing device of the plurality of participant computing devices, wherein the speaking intent information indicates that a third participant associated with the third participant computing device intends to speak (Deng et al. [0005] In some configurations a system can generate a user interface format that includes a time ordered queue with identifications of system recommendations of a priority for each recommended speaker. The user interface format can also include an ordered queue based on a time stamp and a system priority recommendation based on respective a participant's familiarity to the topic, historical or potential impact on the effectiveness of a meeting having a particular topic, or individual participation score. The time stamp can be based on the timing of any type of input indicating an intent to speak, e.g., a user input on an input device or a gesture captured by video camera or an audio device. The input can include, but is not limited to, video data showing a person raising a hand raise, video data showing a movement indicating a person's intent to speak, audio data containing spoken words or a vocal request to speak, etc.); and
wherein the one or more indication criteria comprises a priority criteria indicative of a degree of speaking priority for the participant computing device and the third participant computing device (Deng et al. [0086] The top recommendation 203D can include the top system recommended speaker. It can be one of the three candidates in the time ordered queue, or a candidate who raised his/her hand later than the first three candidates, or someone who didn't raise his/her hand. The top recommendation speaker is chosen from the waiting queue with the highest recommendation score. The icon for the top recommendation can also be labeled in a similar fashion explained before. With a up arrow from bottom to top of the icon to illustrate recommendation strength is the highest.)
Desai et al., Eubank et al. and Deng et al. are analogous art because they are from a similar field of endeavor, namely signal processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the steps of determining whether a local participant in a teleconference desires to speak, as taught by Desai et al., with the teaching of the inertial measurement unit as taught by Eubank et al., for the benefit of determining whether the user intends to engage in the conversation based on the IMU data, and with the teaching of the confidence level as taught by Deng et al., for the benefit of determining the priority for each recommended speaker (Deng et al. [0005] In some configurations a system can generate a user interface format that includes a time ordered queue with identifications of system recommendations of a priority for each recommended speaker. The user interface format can also include an ordered queue based on a time stamp and a system priority recommendation based on respective a participant's familiarity to the topic, historical or potential impact on the effectiveness of a meeting having a particular topic, or individual participation score. The time stamp can be based on the timing of any type of input indicating an intent to speak, e.g., a user input on an input device or a gesture captured by video camera or an audio device. The input can include, but is not limited to, video data showing a person raising a hand raise, video data showing a movement indicating a person's intent to speak, audio data containing spoken words or a vocal request to speak, etc.)
With respect to Claim 20, Desai et al. in view of Eubank et al. and Deng et al. teach
wherein making the evaluation of one or more indication criteria based on the speaking intent information comprises:
determining a priority metric for the participant computing device based on the evaluation of the one or more indication criteria for the participant computing device (Deng et al. [0086] The top recommendation 203D can include the top system recommended speaker. It can be one of the three candidates in the time ordered queue, or a candidate who raised his/her hand later than the first three candidates, or someone who didn't raise his/her hand. The top recommendation speaker is chosen from the waiting queue with the highest recommendation score. The icon for the top recommendation can also be labeled in a similar fashion explained before. With a up arrow from bottom to top of the icon to illustrate recommendation strength is the highest. See paragraphs [0085 and 0092]);
determining a priority metric for the third participant computing device based on an evaluation of the one or more indication criteria for the third participant computing device (Deng et al. [0086] The top recommendation 203D can include the top system recommended speaker. It can be one of the three candidates in the time ordered queue, or a candidate who raised his/her hand later than the first three candidates, or someone who didn't raise his/her hand. The top recommendation speaker is chosen from the waiting queue with the highest recommendation score. The icon for the top recommendation can also be labeled in a similar fashion explained before. With a up arrow from bottom to top of the icon to illustrate recommendation strength is the highest. See paragraphs [0085 and 0092]); and
based on the priority metric for the participant computing device and the priority metric for the third participant computing device, selecting the participant computing device for indication of speaking intent (Deng et al. [0036] A dynamic meeting moderation system uses these determined values and recommendations to assist a meeting organizer, which can be an active speaker, in moderating the meeting. This enables a system to dynamically encourage remote participation. For example, even if a moderator can't see every aspect of remote user, the techniques disclosed herein can raise awareness of remote speaker's intention to speak, e.g., flash the intended speaker's image, increase his/her volume settings automatically to an appropriate level etc. The system can also send a notification to the meeting organizer to actively moderate the conversation flow to allow remote speaker to chime in. The system can also provide dynamic interactive enhancement. For example, the system can present an ordered list of names to assist the meeting organizer to find the right online speaker. In large online meetings, when a question is raised to the audience, the system can display an ordered list of names, e.g., a queue, to the people who asked the question. The person who raised the question can use the list to seek answers, [0086] The top recommendation 203D can include the top system recommended speaker. It can be one of the three candidates in the time ordered queue, or a candidate who raised his/her hand later than the first three candidates, or someone who didn't raise his/her hand. The top recommendation speaker is chosen from the waiting queue with the highest recommendation score. The icon for the top recommendation can also be labeled in a similar fashion explained before. With a up arrow from bottom to top of the icon to illustrate recommendation strength is the highest. See paragraphs [0085 and 0092].)
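For illustration only, the sketch below follows the claim 20 flow as mapped to Deng et al.: a priority metric is computed for each requesting device from the indication criteria, and the device with the highest metric is selected for indication of speaking intent. The criteria keys and the weights are assumptions of this sketch, not taken from the reference.

```python
# Illustrative only: compute a priority metric per requesting device
# from the indication criteria, then select the highest-scoring device
# for indication of speaking intent. Keys and weights are assumptions.

def priority_metric(criteria: dict) -> float:
    """Combine indication criteria into one score. Assumed keys:
    'certainty', 'connection_quality', 'prior_requests'."""
    return (0.5 * criteria.get("certainty", 0.0)
            + 0.3 * criteria.get("connection_quality", 0.0)
            + 0.2 * min(criteria.get("prior_requests", 0) / 5.0, 1.0))


def select_device(candidates: dict) -> str:
    """candidates maps device_id -> criteria dict; returns the device
    selected for indication of speaking intent."""
    return max(candidates, key=lambda d: priority_metric(candidates[d]))


# Example: the second device wins on certainty and connection quality.
chosen = select_device({
    "device-2": {"certainty": 0.6, "connection_quality": 0.5},
    "device-3": {"certainty": 0.9, "connection_quality": 0.8},
})
assert chosen == "device-3"
```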
Conclusion
11. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See PTO-892.
a. Jorasch et al. (US 2021/0399911 A1.) In this reference, Jorasch et al. disclose a method and a system for meeting management.
b. Sanaullah et al. (US 2015/0085064 A1.) In this reference, Sanaullah et al. disclose a method and a system for managing teleconference participant mute state.
c. Kanevsky et al. (US 2008/0167868 A1.) In this reference, Kanevsky et al. disclose a method and a system for controlling of microphones for speech recognition applications.
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to THUYKHANH LE whose telephone number is (571)272-6429. The examiner can normally be reached Mon-Fri: 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew C. Flanders can be reached on 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THUYKHANH LE/Primary Examiner, Art Unit 2655