Prosecution Insights
Last updated: April 19, 2026
Application No. 18/690,557

WEB CONFERENCE SYSTEM, TERMINAL APPARATUS, AND WEB CONFERENCE METHOD

Final Rejection: §102, §103, §112

Filed: Mar 08, 2024
Examiner: BOOK, PHYLLIS A
Art Unit: 2454
Tech Center: 2400 — Computer Networks
Assignee: Maxell, Ltd.
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 2y 3m
Grant Probability with Interview: 97%

Examiner Intelligence

Career Allow Rate: 82% (above average; 390 granted / 473 resolved; +24.5% vs TC avg)
Interview Lift: +14.3% (moderate lift, measured over resolved cases with vs. without interview)
Typical Timeline: 2y 3m avg prosecution; 10 applications currently pending
Career History: 483 total applications across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 22.7% (-17.3% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 473 resolved cases.
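As a quick arithmetic sanity check on the figures above, the minimal sketch below recomputes the derived numbers, assuming the "vs TC avg" deltas are simple differences against the Tech Center baseline (variable names are illustrative, not taken from any analytics API). Notably, every statute-specific delta implies the same 40.0% Tech Center baseline, which suggests the deltas were computed consistently.

```python
# Consistency check of the dashboard figures (plain arithmetic; the
# variable names are illustrative, not from any real analytics API).
granted, resolved = 390, 473

allow_rate = 100 * granted / resolved        # displayed as "82%"
tc_avg_overall = allow_rate - 24.5           # implied overall Tech Center average

# Statute-specific rejection rates and their displayed deltas vs the TC average.
statute_rate = {"101": 10.5, "103": 47.5, "102": 10.1, "112": 22.7}
delta_vs_tc = {"101": -29.5, "103": 7.5, "102": -29.9, "112": -17.3}

# Rate minus delta recovers the TC baseline for each statute.
implied_tc = {s: round(statute_rate[s] - delta_vs_tc[s], 1) for s in statute_rate}

print(f"allow rate: {allow_rate:.1f}%")      # -> 82.5%
print(f"implied per-statute TC averages: {implied_tc}")
```

Running this confirms the 82% headline rounds from 82.5%, and that all four statute deltas point at the same 40.0% baseline.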

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed on December 29, 2025 has been entered. Claims 1-6 and 8-21 are pending. Claim 7 has been canceled. Claims 1, 5-6, 8-9, 11-12, 14-19, and 21 have been amended. Claims 8 and 10 are objected to. Claims 1-6, 9, and 11-21 are rejected.

Response to Arguments

Applicant's arguments filed December 29, 2025 have been fully considered.

Regarding the Allowable Subject Matter, Applicant argues as follows: Applicant thanks the Examiner for the indication that claims 8 and 10 contain allowable subject matter, but are objected to as being dependent upon a rejected base claim, and would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Applicant has not rewritten these claims in independent form at this time, as all of the claims in the application are believed to be in condition for allowance, as discussed herein.

Applicant's contention that the claims are in condition for allowance is not persuasive. The claims remain as rejected in the current Office action.

Regarding the Claim Objections, Applicant argues as follows: Claims 5, 7-9, 11-12, 15, and 19-21 are objected to because of informalities. First, for Claims 5, 7-9, 11-12, 15, and 21, the Office requests that the claim limitations be separated by semicolons. Although the MPEP does not require that the claim limitations be separated by semicolons, the claims are amended to separate the claim limitations by semicolons in order to expedite prosecution. Second, the Office suggests amending all method claims (claims 19 and 20) such that method steps start with a verb describing what is being performed, and not with labeling using "step," as is done with the current claims. The claims are amended as suggested.
The amendments to the claims alleviate the above-noted objections. Accordingly, Applicant respectfully requests withdrawal of the claim objections.

Examiner appreciates Applicant's agreement to follow standard USPTO customs in claim recitation. The objections have been withdrawn.

Regarding the Claim Rejections under 35 U.S.C. 112(a), Applicant argues as follows: Claim 14 is rejected under 35 U.S.C. § 112(a) as failing to comply with the written description requirement, because the limitation "wherein the headphone type device includes a headphone, an earphone, a headset, an earset, a head-mounted display, a bone-conduction speaker, and a phone call speaker of a smartphone" (emphasis added) is recited in a manner that states that the "headphone type device" includes all of the listed types of audio devices, but that is not supported by the specification, which discloses as follows: The limitation is amended to recite "wherein the headphone type device includes a headphone, an earphone, a headset, an earset, a head-mounted display, a bone-conduction speaker, or a phone call speaker of a smartphone" (emphasis added). The amendments to the claim are believed to obviate the rejections. Reconsideration and withdrawal of the rejection is respectfully requested.

The argument is persuasive and the rejection has been withdrawn.

Regarding the Claim Rejections under 35 U.S.C. 102 and 103, Applicant argues as follows:

1. Claims 1-3, 6-7, 9, and 13-20 are rejected under 35 U.S.C. § 102 as being anticipated by Japanese Publication No. 7132478 ("Oda Ryo").
2. Claims 4-5, 11-12, and 21 are rejected under 35 U.S.C. § 103 as being unpatentable over Oda Ryo, in view of U.S. Patent Application Publication No. 2020/0274905 ("Nishino").

Independent claim 1 is amended to incorporate the subject matter of claim 7 (cancelled herein).
Applicant respectfully submits that the cited references, taken individually or in combination, fail to disclose or suggest the features of amended independent claim 1, particularly with respect to the features of: wherein the server includes: ...; and a transmission device configured to transmit the device type information for each terminal apparatus to the plurality of terminal apparatuses, and wherein the terminal apparatus includes: a display device; and a display control device configured to control the display device to display the received device type information for each terminal apparatus.

In the rejection of now-cancelled claim 7, the Office asserts that Oda Ryo discloses the features of (i) "a transmission device configured to transmit the device type information for each terminal apparatus to the plurality of terminal apparatuses" and (ii) "a display control device configured to control the display device to display the received device type information for each terminal apparatus." The cited portions of Oda Ryo are directed to transmitting voice data (and, in the broader description, images and sounds) between participants in a web conference, not transmitting device type information for each terminal apparatus. Also, when Oda Ryo describes the server transmitting information from the host to the guest, it describes transmitting a host screen image for screen sharing and transmitting images in a queue, rather than transmitting device type information for each terminal apparatus. Relatedly, Oda Ryo describes that the CPU of the information processing apparatus (terminal side) displays a device setting screen, receives the settings of devices to be used (camera, microphone, speaker), and saves the usage status of each device in a device status table.
In other words, in the cited disclosure, the device status information including device related information is stored on the terminal side after local device selection, and Oda Ryo does not appear to describe the server transmitting device type information for each terminal apparatus to the plurality of terminal apparatuses as now required by amended claim 1.

Amended claim 1 also describes that each terminal apparatus includes a display control device that controls the display device to display the received device type information for each terminal apparatus. The Office relies on a general statement about a CPU comprehensively controlling devices and on a description of screen sharing. But Oda Ryo's screen sharing description is that the host's screen is transmitted to the server and then transmitted to the guest so the guest can share the host screen, not that the terminal receives device type information for each terminal apparatus from the server and displays that received device type information.

Accordingly, Oda Ryo fails to disclose or suggest the features of: wherein the server includes: ...; and a transmission device configured to transmit the device type information for each terminal apparatus to the plurality of terminal apparatuses, and wherein the terminal apparatus includes: a display device; and a display control device configured to control the display device to display the received device type information for each terminal apparatus.

Nishino, cited in combination with Oda Ryo in the other rejections, does not remedy the deficiencies of Oda Ryo, and the Office does not suggest otherwise.

Based on the foregoing, the applied combination of the cited references, alone or in combination, fails to disclose or suggest each and every feature of amended independent claim 1, which is believed to be in condition for allowance.
Amended independent claims 15 and 19, although differing in scope, recite subject matter similar to that discussed above with respect to amended independent claim 1. The dependent claims depend from their respective base claims and add further limitations thereto. Reconsideration and withdrawal of the rejections under 35 U.S.C. § 102 and 35 U.S.C. § 103 are therefore respectfully requested.

The argument is moot, because independent claims 1, 15, and 19 are now rejected under 35 U.S.C. 103 over ODA RYO and ESAKA, as are the dependent claims.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on December 23, 2025 complies with the provisions of 37 CFR 1.97, and has been considered by the Examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 21 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding Claim 21, the claim has been amended to recite "wherein, in determining whether or not the audio output from the audio output device is being sued in each terminal or controlling the volume based on the retained device type information for each terminal apparatus, the server determines the terminal apparatus whose type of the audio output device being used is the headphone type as a transmission destination of audio data input from the terminal apparatus whose secure transmission request has been accepted." It is not clear what is meant by "whether or not the audio output from the audio output device is being sued in each terminal."

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1-3, 6-7, and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over ODA RYO (JP 7132478 B2, hereinafter referred to as ODA RYO) in view of ESAKA et al. (US 2025/0141945 A1, hereinafter referred to as ESAKA).

Regarding Claims 1 and 19, ODA RYO teaches:

"A WEB conference system comprising: a server; and a plurality of terminal apparatuses connected to the server" as recited in Claim 1 and "A WEB conference method in a WEB conference system including a server and a plurality of terminal apparatuses connected to the server" as recited in Claim 19 (page 2, paragraph 4). [In a web conference system using an Internet line or the like, voice data and image data are transmitted and received between a server device and a plurality of client terminals (page 2, paragraph 4).]

"wherein the server includes: a retention device configured to acquire device type information indicating types of audio output devices being used in the terminal apparatuses from the plurality of terminal apparatuses, and retain the device type information" as recited in Claim 1 and "a retention step in which the server acquires device type information indicating types of audio output devices being used in the terminal apparatuses from the plurality of terminal apparatuses, and retaining the device type information" as recited in Claim 19 (page 3, paragraph 6; page 10, paragraph 7; page 10, paragraph 8). [The web conference system includes a web conference server and information processing devices (page 3, paragraph 6). The CPU of the information processing apparatus receives settings of devices to be used, such as the camera, microphone, and speaker (page 10, paragraph 7).
The device status table holds whether or not the camera, microphone, and speaker are used, and indicates the type of device and whether the device is being used (page 10, paragraph 8).]

"a control device configured to determine whether or not audio is output from the audio output device being used in each terminal apparatus or control a volume output from the terminal apparatus based on the device type information for each terminal apparatus retained in the retention device" as recited in Claim 1 and "a control step in which the server determines whether or not audio is output from the audio output device being used in each terminal apparatus or controls a volume based on the retained device type information for each terminal apparatus" as recited in Claim 19 (page 2, paragraph 3; page 5, paragraph 5; page 7, last paragraph - page 8, paragraph 1; page 18, paragraph 2). [A server device transmits voice data received from a client terminal to a different client terminal, synthesizes all of the audio data received from the client terminals, and distributes the synthesized audio data to each client terminal (page 2, paragraph 3). The CPU 201 of the web conference server 100 comprehensively controls each device and controller connected to the system bus (page 5, paragraph 5). An audio controller controls audio acquisition by a microphone (input unit), a speaker (output unit), and controls output to the speaker (page 7, last paragraph - page 8, paragraph 1). The CPU of the information processing apparatus synthesizes the acquired microphone input sound and the acquired PC playback sound as one sound data, and preferably adjusts the volume of the audio data after synthesizing the microphone input sound and the PC reproduced sound in order to prevent sound distortion (page 18, paragraph 2).]
"a transmission device configured to transmit the device type information for each terminal apparatus to the plurality of terminal apparatuses" (abstract; page 2, paragraph 2; page 10, paragraph 7; page 10, paragraph 8; fig 7, elements 701, 702). [A web conferencing system including a synthesis means for synthesizing acquired sound data and voice data, and a transmission means for transmitting the sound data (abstract). In a web conference system using the Internet, voice data and image data are transmitted and received between a server device and a plurality of client terminals (page 2, paragraph 2). The CPU of the information processing apparatus receives settings of devices to be used, and the usage status of each device is saved in the device status table 700 (page 10, paragraph 7). The device status table 700 includes a device 701 indicating the type of device and a use 702 indicating whether the device is being used (page 10, paragraph 8).]

(NOTE: The transmission means is equivalent to the "transmission device configured to transmit," the client terminals to the "terminal apparatus," and the contents of the device status table to the "device type information for each terminal apparatus.")

ODA RYO does not teach: "wherein the terminal apparatus includes: a display device; and a display control device configured to control the display device to display the received device type information for each terminal apparatus."

ESAKA teaches: "wherein the terminal apparatus includes: a display device; and a display control device configured to control the display device to display the received device type information for each terminal apparatus" (paragraphs [0106], [0117]; fig 3, element 3k; fig 8, element 33; fig 11, element D12) [The "audio and video input and output device type" is an item indicating whether the audio or video input device is built-in or externally connected, or the headphone type or the speaker type ([0106]).
The display control device 33 controls the display unit 3k to display a text indicating a warning, a caution, a situation, and the like on a screen based on a control signal or data acquired from the WEB conference server ([0117]). In a display screen D12, "audio is distributed only to the terminal apparatus whose audio output device is headphone type device" is displayed ([0118]).]

(NOTE: The display control device is equivalent to the "display control device configured to control the display device," the display unit to the "display device," and the message on screen D12 to "the received device type information for each terminal apparatus.")

Both ODA RYO and ESAKA teach web conferencing systems, and those systems are comparable to that of the instant application. Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, to include in the ODA RYO disclosure the ability to collect and display device type information, as taught by ESAKA. Such inclusion would have increased the information available to the user, and would have been consistent with the rationale of using known techniques to improve similar devices (methods, or products) in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).

Regarding Claim 15, ODA RYO teaches:

"A terminal apparatus connected to a server used in a WEB conference" (page 2, paragraph 4). [In a web conference system using an Internet line or the like, voice data and image data are transmitted and received between a server device and a plurality of client terminals (page 2, paragraph 4).]
"the terminal apparatus comprising: one or more audio output devices; a control unit configured to control output from the one or more audio output devices" (page 7, last paragraph - page 8, paragraph 1). [An audio controller controls audio acquisition by a microphone (input unit), a speaker (output unit), and controls output to the speaker; the destination to which the audio controller outputs audio is an output terminal for outputting audio (page 7, last paragraph - page 8, paragraph 1).]

"control the one or more audio output devices to output audio of the WEB conference based on audio data transmitted from the server in a case where the audio output device being used is a headphone type" (page 3, paragraph 6; page 10, paragraph 7; page 8, paragraph 1). [The web conference system includes a web conference server and information processing devices (page 3, paragraph 6). The CPU of the information processing apparatus receives settings of devices to be used, such as the camera, microphone, and speaker (page 10, paragraph 7). The destination to which the audio controller 313 outputs audio can be an output terminal for outputting audio to headphones (page 8, paragraph 1).]
ODA RYO does not teach: "a display device; and a display control device configured to control the display device; wherein the control unit is configured to: receive, from the server, device type information indicating types of audio output devices being used by a plurality of terminal apparatuses including the terminal apparatus; control the display control device to control the display device to display the received device type information for each terminal apparatus; and control the one or more audio output devices to output audio of the WEB conference based on audio data transmitted from the server in a case where the audio output device being used is a headphone type."

ESAKA teaches: "a display device; and a display control device configured to control the display device; wherein the control unit is configured to: receive, from the server, device type information indicating types of audio output devices being used by a plurality of terminal apparatuses including the terminal apparatus; control the display control device to control the display device to display the received device type information for each terminal apparatus" (paragraphs [0071], [0106], [0117]; fig 1, elements 2, 3; fig 3, element 3k; fig 8, element 33; fig 11, element D12) [The participant management device acquires various kinds of information from the terminal apparatuses 3 used by the participants, that is, the terminal apparatuses 3 connected to the WEB conference server 2 ([0071]). The "audio and video input and output device type" is an item indicating whether the audio or video input device is built-in or externally connected, or the headphone type or the speaker type ([0106]). The display control device 33 controls the display unit 3k to display a text indicating a warning, a caution, a situation, and the like on a screen based on a control signal or data acquired from the WEB conference server ([0117]).
In a display screen D12, "audio is distributed only to the terminal apparatus whose audio output device is headphone type device" is displayed ([0118]).]

(NOTE: The display control device is equivalent to the "display control device configured to control the display device," the display unit to the "display device," acquiring various kinds of information from the terminal apparatuses from the server to "receive, from the server, device type information," and the message on screen D12 to "display the received device type information for each terminal apparatus.")

Both ODA RYO and ESAKA teach web conferencing systems, and those systems are comparable to that of the instant application. Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, to include in the ODA RYO disclosure the ability to collect and display device type information, as taught by ESAKA. Such inclusion would have increased the information available to the user, and would have been consistent with the rationale of using known techniques to improve similar devices (methods, or products) in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).

Regarding Claim 2, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 1. ODA RYO teaches:

"wherein the control device determines the terminal apparatus to be a transmission destination of audio data in a WEB conference based on the device type information for each terminal apparatus" (page 2, paragraph 3; page 10, paragraph 7; page 10, paragraph 8).
[A server device transmits voice data received from a client terminal to a different client terminal, synthesizes all of the audio data received from the client terminals, and distributes the synthesized audio data to each client terminal (page 2, paragraph 3). The usage status of each device, including the camera, microphone, and speaker, is saved in the device status table (page 10, paragraph 7). The device status table holds whether or not the camera, microphone, and speaker are used, and indicates the type of device and whether the device is being used (page 10, paragraph 8).]

Regarding Claim 3, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 2. ODA RYO teaches:

"wherein the control device determines the terminal apparatus to be a transmission destination of audio data in a WEB conference based on the device type information for each terminal apparatus" (page 8, paragraph 1; page 10, paragraph 8). [The destination to which the audio controller outputs audio is an output terminal for outputting audio to headphones (page 8, paragraph 1). The device status table holds whether or not the camera, microphone, and speaker are used, and indicates the type of device and whether the device is being used (page 10, paragraph 8).]

Regarding Claim 6, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 1. ODA RYO teaches:

"wherein the control device determines one of the plurality of terminal apparatuses controlling a volume of audio output from the audio output device and controls the volume of the determined one of the plurality of terminal apparatuses such that the volume is smaller than a first volume level based on the device type information of the determined one of the plurality of terminal apparatuses" (page 10, paragraph 8; page 18, paragraph 2). [The device status table includes a device indicating the type of device and a use indicating whether the device is being used (page 10, paragraph 8).
The CPU of the information processing apparatus synthesizes the acquired microphone input sound and the acquired PC playback sound as one sound data, and by this synthesizing process, it becomes possible to reduce the capacity of the audio data of the PC reproduced sound and share it; it is preferable to adjust the volume of the audio data after synthesizing the microphone input sound and the PC reproduced sound in order to prevent sound distortion (page 18, paragraph 2).]

(NOTE: The reduction in the capacity of the audio data of the PC reproduced sound is equivalent to "controls the volume of the determined one of the plurality of terminal apparatuses such that the volume is smaller than a first volume level" and the type of device in the device status table to the "device type information for each terminal apparatus.")

Regarding Claim 9, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 1. ODA RYO teaches:

"wherein the transmission device transmits participant information indicating a participant who uses the terminal apparatus in each terminal apparatus to the plurality of terminal apparatuses" (page 2, paragraph 3). [A control method for a web conference system obtaining first sound data output from software and second sound data input from a microphone, to generate third sound data; and transmitting the third sound data (page 2, last paragraph - page 3, paragraph 1). In a web conference system, it is possible to share sound data output from software with conference participants (page 3, paragraph 2). The video acquired from a web camera and the audio acquired from the microphone are each compressed and transmitted to the information processing devices of other participants (page 4, paragraph 5).]

"wherein the display control device controls the display device to display the participant information and the device type information in association with each terminal apparatus" (page 16, paragraph 3; page 10, paragraph 5; page 10, paragraph 8).
[The audio data written to the speaker 316 by the web conference application includes audio data received by the web conference server from other information processing apparatuses of the participants in the web conference; the PC playback sound synthesized by the information processing device includes voice data of the information processing device 102 transmitted by other participants (page 16, paragraph 3). The CPU of the information processing apparatus controls the screen of the information processing apparatus to display a device setting screen (page 10, paragraph 5). The device status table holds whether or not the camera, microphone, and speaker are used, and indicates the type of device and whether the device is being used (page 10, paragraph 8).]

Regarding Claim 13, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 1. ODA RYO teaches:

"wherein the control device controls the terminal apparatus such that a volume of the audio output from the audio output device becomes smaller than a first volume level in a case where the type of the audio output device being used indicated by the device type information of the terminal apparatus is different from the headphone type" (page 5, paragraph 5; page 8, paragraph 1; page 10, paragraph 8; page 18, paragraph 2). [The CPU of the web conference server comprehensively controls each device and controller connected to the system bus (page 5, paragraph 5). The destination to which the audio controller outputs audio is an output terminal for outputting audio to headphones (page 8, paragraph 1). The device status table includes a device indicating the type of device and a use indicating whether the device is being used (page 10, paragraph 8).
The CPU of the information processing apparatus synthesizes the acquired microphone input sound and the acquired PC playback sound as one sound data, and by this synthesizing process, it becomes possible to reduce the capacity of the audio data of the PC reproduced sound and share it; it is preferable to adjust the volume of the audio data after synthesizing the microphone input sound and the PC reproduced sound in order to prevent sound distortion (page 18, paragraph 2).]

(NOTE: The reduction in the capacity of the audio data of the PC reproduced sound is equivalent to "the audio output from the audio output device becomes smaller than a first volume level" and the type of device in the device status table to the "device type information of the terminal apparatus.")

Regarding Claim 14, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 3. ODA RYO teaches:

"wherein the headphone type device includes a headphone, an earphone, a headset, an earset, a head-mounted display, a bone-conduction speaker, and a phone call speaker of a smartphone" (page 5, paragraph 5; page 10, paragraph 8; page 18, paragraph 2). [The destination to which the audio controller outputs audio is an output terminal for outputting audio to headphones (page 8, paragraph 1).]

(NOTE: This limitation is not supported by the specification.)

Regarding Claim 16, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 15. ODA RYO teaches:

"wherein the control unit controls the display device to display participant information indicating a participant of the WEB conference and device type information indicating a type of the audio output device being used in the terminal apparatus of the participant in association with each participant based on data output from the server" (page 16, paragraph 3; page 10, paragraph 5; page 10, paragraph 8).
[The audio data written to the speaker 316 by the web conference application includes audio data received by the web conference server from other information processing apparatuses of the participants in the web conference; the PC playback sound synthesized by the information processing device includes voice data of the information processing device 102 transmitted by other participants (page 16, paragraph 3). The CPU of the information processing apparatus controls the screen of the information processing apparatus to display a device setting screen (page 10, paragraph 5). The device status table holds whether or not the camera, microphone, and speaker are used, and indicates the type of device and whether the device is being used (page 10, paragraph 8).] Regarding Claim 17, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 15. ODA RYO teaches: “wherein the control unit controls the display device to display information indicating that audio is output only to a terminal apparatus whose type of the audio output device being used is the headphone type based on a control signal or data output from the server” (page 16, paragraph 3; page 10, paragraph 5; page 10, paragraph 8). [The audio data written to the speaker 316 by the web conference application includes audio data received by the web conference server from other information processing apparatuses of the participants in the web conference; the PC playback sound synthesized by the information processing device includes voice data of the information processing device 102 transmitted by other participants (page 16, paragraph 3). The CPU of the information processing apparatus controls the screen of the information processing apparatus to display a device setting screen (page 10, paragraph 5). The device status table holds whether or not the camera, microphone, and speaker are used, and indicates the type of device and whether the device is being used (page 10, paragraph 8). 
The destination to which the audio controller outputs audio is an output terminal for outputting audio to headphones (page 8, paragraph 1). The transmission control unit 412 is a functional unit that transmits various image and audio data; transmission control to the server is performed (page 8, paragraph 10).] Regarding Claim 18, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 15. ODA RYO teaches: “wherein the display control device controls the display device to display information prompting to switch the type of the audio output device being used to the headphone type in a case where the type of the audio output device being used in the own terminal apparatus is different from the headphone type in the WEB conference” (page 5, paragraph 10; page 14, paragraph 4; page 14, paragraph 6; page 8, paragraph 1). [A video controller controls display on a display device (page 5, paragraph 10). The CPU of the information processing apparatus refers to the device state table and determines whether the speaker used in the web conference is the default OS speaker (page 14, paragraph 4). If the speaker used in the web conference is not the default OS speaker, the information processing apparatus switches the PC playback sound sharing flag in the device state table and ends the process (page 14, paragraph 6). The destination to which the audio controller outputs audio can be an output terminal for outputting audio to headphones (page 8, paragraph 1).] Regarding Claim 20, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 19. ODA RYO teaches: “wherein, in the control step, the terminal apparatus whose type of the audio output device being used is a headphone type is determined as a transmission destination of audio data in the WEB conference” (page 8, paragraph 1; page 10, paragraph 8). [The destination to which the audio controller outputs audio is an output terminal for outputting audio to headphones (page 8, paragraph 1). 
The device status table holds whether or not the camera, microphone, and speaker are used, and indicates the type of device and whether the device is being used (page 10, paragraph 8).]

Claims 4-5, 11-12, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over ODA RYO (JP 7132478 B2, hereinafter referred to as ODA RYO) in view of ESAKA et al. (US 2025/0141945 A1, hereinafter referred to as ESAKA), and further in view of Nishino et al. (US 2020/0274905 A1, hereinafter referred to as Nishino).

Regarding Claim 4, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 3. ODA RYO teaches: “wherein the control device determines the terminal apparatus whose type of the audio output device being used is the headphone type as the transmission destination of the audio data in the WEB conference” (page 8, paragraph 1; page 10, paragraph 8).

[The destination to which the audio controller outputs audio is an output terminal for outputting audio to headphones (page 8, paragraph 1). The device status table holds whether or not the camera, microphone, and speaker are used, and indicates the type of device and whether the device is being used (page 10, paragraph 8).]

ODA RYO does not teach: “wherein the server includes a setting acceptance device configured to accept setting of a secure mode for a conference to be held … in the conference in which the secure mode is set.”

Nishino teaches: “wherein the server includes a setting acceptance device configured to accept setting of a secure mode for a conference to be held … in the conference in which the secure mode is set” (paragraphs [0150], [0152], [0151]).

[The conference apparatus changes the PIN when the conference apparatus is started or rebooted and when a new conference is started ([0150]). The conference apparatus acquires the PIN from the management server ([0152]). In the conference system, the PIN is changed each time, which enhances security ([0151]).]

Both ODA RYO and Nishino teach conferencing systems, and those systems are comparable to that of the instant application. Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to include in the ODA RYO disclosure the ability to provide security via a PIN, as taught by Nishino. Such inclusion would have prevented system disruptions due to unacceptable apparatuses entering the conference, and would have been consistent with the rationale of using known techniques to improve similar devices (methods, or products) in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).

Regarding Claim 5, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 3. ODA RYO teaches: “wherein the control device determines the terminal apparatus whose type of the audio output device being used is the headphone type as a transmission destination of audio data input from the terminal apparatus” (page 8, paragraph 1; page 10, paragraph 8).

[The destination to which the audio controller outputs audio is an output terminal for outputting audio to headphones (page 8, paragraph 1). The device status table holds whether or not the camera, microphone, and speaker are used, and indicates the type of device and whether the device is being used (page 10, paragraph 8).]

ODA RYO does not teach: “wherein the server includes a request acceptance device configured to accept a secure transaction request from the terminal apparatus ... whose secure transmission request has been accepted”

Nishino teaches: “wherein the server includes a request acceptance device configured to accept a secure transaction request from the terminal apparatus ... whose secure transmission request has been accepted” (paragraphs [0066], [0150], [0152], [0151]).

[In response to a PIN generation request from the conference apparatus via the NW communication unit, the PIN generation unit generates the PIN corresponding to the conference apparatus, and stores the generated PIN in the server storage unit; the PIN generation unit also transmits, as a meeting ID, the generated PIN to the conference apparatus that has transmitted the PIN generation request ([0066]). The conference apparatus changes the PIN when the conference apparatus is started or rebooted and when a new conference is started ([0150]). The conference apparatus acquires the PIN from the management server ([0152]). In the conference system, the PIN is changed each time, which enhances security ([0151]).]

Both ODA RYO and Nishino teach conferencing systems, and those systems are comparable to that of the instant application. Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to include in the ODA RYO disclosure the ability to provide security via a PIN, as taught by Nishino. Such inclusion would have prevented system disruptions due to unacceptable apparatuses entering the conference, and would have been consistent with the rationale of using known techniques to improve similar devices (methods, or products) in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).

Regarding Claim 11, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 4. ODA RYO teaches: “wherein the terminal apparatus includes: a display device; and a display control device” (page 5, paragraph 10).
[A video controller controls display on a display device such as the display, which refers to a display device such as a CRT or a liquid crystal display (page 5, paragraph 10).]

“wherein the display control device controls the display device to display information indicating that audio is output only to the terminal apparatus whose type of the audio output device being used is the headphone type in the WEB conference in which the secure mode is set” (page 5, paragraph 10; page 14, paragraph 4; page 10, paragraph 8).

[A video controller controls display on a display device (page 5, paragraph 10). The CPU of the information processing apparatus refers to the device state table and determines whether the speaker used in the web conference is the default OS speaker (page 14, paragraph 4). The destination to which the audio controller 313 outputs audio can be an output terminal for outputting audio to headphones (page 8, paragraph 1).]

ODA RYO does not teach: “conference in which the secure mode is set.”

Nishino teaches: “conference in which the secure mode is set” (paragraphs [0152], [0151]).

[The conference apparatus acquires the PIN from the management server ([0152]). In the conference system, the PIN is changed each time, which enhances security ([0151]).]

Both ODA RYO and Nishino teach conferencing systems, and those systems are comparable to that of the instant application. Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to include in the ODA RYO disclosure the ability to provide security via a PIN, as taught by Nishino. Such inclusion would have prevented system disruptions due to unacceptable apparatuses entering the conference, and would have been consistent with the rationale of using known techniques to improve similar devices (methods, or products) in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).

Regarding Claim 12, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 4. ODA RYO teaches: “wherein the terminal apparatus includes: a display device; and a display control device” (page 5, paragraph 10).

[A video controller controls display on a display device such as the display, which refers to a display device such as a CRT or a liquid crystal display (page 5, paragraph 10).]

“wherein the display control device controls the display device to display information prompting to switch the type of the audio output device being used to the headphone type in a case where the type of the audio output device being used in the own terminal apparatus is different from the headphone type in the WEB conference” (page 5, paragraph 10; page 14, paragraph 4; page 14, paragraph 6; page 8, paragraph 1).

[A video controller controls display on a display device (page 5, paragraph 10). The CPU of the information processing apparatus refers to the device state table and determines whether the speaker used in the web conference is the default OS speaker (page 14, paragraph 4). If the speaker used in the web conference is not the default OS speaker, the information processing apparatus switches the PC playback sound sharing flag in the device state table and ends the process (page 14, paragraph 6). The destination to which the audio controller outputs audio can be an output terminal for outputting audio to headphones (page 8, paragraph 1).]

ODA RYO does not teach: “conference in which the secure mode is set.”

Nishino teaches: “conference in which the secure mode is set” (paragraphs [0152], [0151]).

[The conference apparatus acquires the PIN from the management server ([0152]). In the conference system, the PIN is changed each time, which enhances security ([0151]).]

Both ODA RYO and Nishino teach conferencing systems, and those systems are comparable to that of the instant application. Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to include in the ODA RYO disclosure the ability to provide security via a PIN, as taught by Nishino. Such inclusion would have prevented system disruptions due to unacceptable apparatuses entering the conference, and would have been consistent with the rationale of using known techniques to improve similar devices (methods, or products) in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).

Regarding Claim 21, ODA RYO in view of ESAKA teaches all the limitations of parent Claim 19. ODA RYO does not teach: “wherein, in determining whether or not the audio output from the audio output device is being sued in each terminal or controlling the volume based on the retained device type information for each terminal apparatus”

This portion of the claim is rejected under 35 U.S.C. 112(b) and will not be assessed as to prior art.

“a request acceptance step in which the server accepts a secure transmission request from the terminal apparatus … whose secure transmission request has been accepted”

Nishino teaches: “a request acceptance step in which the server accepts a secure transmission request from the terminal apparatus … whose secure transmission request has been accepted” (paragraph [0060]).

[The management server performs various management for the conference apparatus, generates a PIN, and provides the PIN to the conference apparatus in response to a request from each conference apparatus; the PIN generated by the management server is apparatus identification information unique to the conference apparatus ([0060]).]

Both ODA RYO and Nishino teach conferencing systems, and those systems are comparable to that of the instant application. Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to include in the ODA RYO disclosure the ability to provide security via a PIN for conference security, as taught by Nishino. Such inclusion would have prevented system disruptions due to unacceptable apparatuses entering the conference, and would have been consistent with the rationale of using known techniques to improve similar devices (methods, or products) in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).

Allowable Subject Matter

Claims 8 and 10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, in this case intervening Claims 1 and 7.
The following subject matter of Claim 8, which is associated with intervening Claims 1 and 7, was not found in the prior art:

wherein the retention device detects that the type of the audio output device being used in any of the terminal apparatuses is changed, and updates the device type information for each terminal apparatus, wherein the transmission device transmits the updated device type information for each terminal apparatus, and wherein the display control device controls the display device to display the updated device type information for each terminal apparatus on a screen.

The following subject matter of Claim 10, which is associated with intervening Claims 1, 7, and 9, was not found in the prior art:

wherein the display control device controls the display device to display an icon corresponding to a headphone in a case where the type indicated by the device type information is a headphone type, and to display an icon corresponding to a speaker in a case where the type indicated by the device type information is a speaker type.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHYLLIS A BOOK, whose telephone number is (571) 272-0698. The examiner can normally be reached M-F 10:00 am - 7:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, GLENTON BURGESS, can be reached at 571-272-3949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHYLLIS A BOOK/
Primary Examiner, Art Unit 2454

Prosecution Timeline

Mar 08, 2024: Application Filed
Jun 28, 2025: Non-Final Rejection (§102, §103, §112)
Dec 29, 2025: Response Filed
Feb 18, 2026: Final Rejection (§102, §103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592905: METHOD FOR DETERMINING NAT TRAVERSAL POLICY AND DEVICE (2y 5m to grant; granted Mar 31, 2026)
Patent 12587467: SYSTEM AND METHOD FOR PATH COMPUTATION SERVICE FOR A SERVICE AWARE VIRTUAL TOPOLOGY OVER A WIDE AREA NETWORK (2y 5m to grant; granted Mar 24, 2026)
Patent 12581361: Optimizing Traffic Redirection Operations (2y 5m to grant; granted Mar 17, 2026)
Patent 12580785: ENHANCED TECHNIQUES FOR REDUCING AUDIO FEEDBACK DURING COMMUNICATION SESSIONS (2y 5m to grant; granted Mar 17, 2026)
Patent 12563446: WEIGHTED LOAD BALANCING FOR MULTI-LINK OPERATION (2y 5m to grant; granted Feb 24, 2026)
Based on 5 most recent grants.


