DETAILED ACTION
This action is in response to the remarks filed 12/15/2025. Claims 1, 3-5, and 8-11 are pending and have been examined. Claims 2, 6, and 7 have been cancelled.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 12/15/2025 have been fully considered but they are not persuasive. In Remarks filed 12/15/2025, Applicant states:
“According to paras. [0044]-[0045] of the published specification of the present application, the display 160 may provide options for user selection, wherein one of the options may be associated with an external data source. For example, the sources of video data (e.g., first video data S1) of the electronic device 100 may include one or more electronic device 200, or second video data S2 provided by the image capturing device 140. The display 160 displays these electronic devices 200 as external data sources, and displays the image capturing device 140 as the data source of the second video data S2 for user selection.
Referring to paras. [0103] and [0111] and FIGs. 8a-8b of Kim, Kim discloses that the selection screen 815 for selecting one of a plurality of audio inputs together with an image 80 may be displayed by the electronic device, and Kim discloses that the electronic device 101 may include at least two or more of the plurality of camera elements of the front camera 701 and the plurality of camera elements of the rear camera 703. The electronic device 101 obtains the image via its own cameras, not from the external electronic device. That is, Kim fails to disclose "a display, configured to display at least one a first option associated with at least one external data source and a second option associated with at least one video capturing device for selection"”
Kim discloses in Figures 8A and 8B a screen for selecting a data input such as audio, and Paragraph [0103] discloses “the electronic device 101 may display a selection screen 815 for selecting one of a plurality of audio inputs together with an image 805. In an embodiment, the selection screen 815 may guide a user input for selecting at least one of at least one microphone included in the electronic device 101 or at least one external electronic device having a microphone”. Additionally, Kim discloses in Paragraph [0111], “in the multi-camera shooting mode, the electronic device 101 includes at least two or more of the plurality of camera elements of the front camera 701 and the plurality of camera elements of the rear camera 703 (for example, at least two cameras located on the front side or at least two cameras located on the rear side), the at least two or more camera elements may be selected according to a user input”. Therefore, Kim does teach "a display, configured to display at least one a first option associated with at least one external data source and a second option associated with at least one video capturing device for selection".
“and "the receiving terminal application is configured to receive at least one first video data provided by the at least one external data source through the transceiver in response to the at least one external data source of the first option being selected, and the receiving terminal application is configured to receive at least one second video data provided by the at least one video capturing device in response to the at least one video capturing device of the second option being selected" recited in amended claim 1 of the present application.”
Kim does not expressly teach "the receiving terminal application is configured to receive at least one first video data provided by the at least one external data source through the transceiver in response to the at least one external data source of the first option being selected, and the receiving terminal application is configured to receive at least one second video data provided by the at least one video capturing device in response to the at least one video capturing device of the second option being selected". However, Turbell teaches “wherein the receiving terminal application is configured to receive the at least one first video data provided by the at least one external data source through the transceiver in response to the at least one external data source of the first option being selected” (see Turbell Abstract, first camera 116, second camera 124, Paragraph [0022], at 312, the method includes obtaining a first video stream captured via a first camera associated with a first communication device engaged in the multi-party video conference and Paragraph [0033], an order of layering of imagery components within a composite video stream may be defined by a Z-order value that represents a depth of an imagery component (e.g., a subset of pixels corresponding to a human subject, a video stream, or background imagery) within the composite video stream or individual image frames thereof, Z-order value may be assignable or selectable by users, such as by directing a user input to a selector of a graphical user interface or other user input modality and Paragraph [0058], In this example or any other example disclosed herein, the method further comprises, responsive to a user selection of the image capture selector, capturing an image of the composite video), and Zhang teaches “the receiving terminal application is configured to receive at least one second video data provided by the at least one video capturing device” (see Zhang, Paragraph [0004], capturing a 
first image data frame using an image capture device of the teleconferencing endpoint; determining a first region of interest within the first image data frame; rendering the first image data frame as a key frame; capturing, using the camera, a second image data frame; updating data in the key frame corresponding to the second region of interest, to produce a subsequent frame; and transmitting the subsequent frame to a remote endpoint through a network interface of the teleconferencing endpoint). Turbell further teaches a user selecting the image capturing device and then capturing video.
“Moreover, after studying throughout the disclosures of Turbell and Rajamani, Applicant respectfully submits that Turbell and Rajamani also fail to disclose "a display, configured to display at least one a first option associated with at least one external data source and a second option associated with at least one video capturing device for selection" and "the receiving terminal application is configured to receive at least one first video data provided by the at least one external data source through the transceiver in response to the at least one external data source of the first option being selected, and the receiving terminal application is configured to receive at least one second video data provided by the at least one video capturing device in response to the at least one video capturing device of the second option being selected" recited in amended claim 1 of the present application.”
As stated above, Kim, Turbell and Zhang in combination teach these limitations.
“For at least the evidences and reasons submitted above, amended claim 1 should stand non-obvious and patentable over the cited reference and it is respectfully requested to withdraw the rejection of the claim 1 on merits.”
Regarding amended claim 1, it is rejected under 35 U.S.C. 103 as being unpatentable over Turbell in view of Rajamani, Kim, and Zhang. Please see the rejection below.
“With regard to independent claim 11, for the same reasons set forth above for traversal of claim 1, amended claim 11 should stand non-obvious and patentable over the cited reference and it is respectfully requested to withdraw the rejection of the claim 11 on merits.”
Regarding amended claim 11, it is rejected similarly to claim 1. Please see the rejection below.
“With regard to the dependent claims 3-5 and 8-10, these dependent claims should overcome the rejections of the Office as a matter of law (In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988)), for at least the reason that these dependent claims contain all features of the independent claim 1.”
Regarding dependent claims 3-5 and 8-10, they are rejected under 35 U.S.C. 103 as being unpatentable over Turbell in view of Rajamani, Kim, and Zhang. Please see the rejection below.
Response to Amendment
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, and 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Turbell et al. (WO Publication No. 2020/131177, hereinafter "Turbell") in view of Rajamani et al. (U.S. Pub. No. 2021/0368134, hereinafter "Rajamani"), Kim et al. (KR Pub. No. KR 20230054158 A, hereinafter “Kim”), and Zhang (U.S. Pub. No. 2023/0100130).
Regarding Claim 1, Turbell teaches
An electronic device for a video conference (see Turbell Abstract, communication device (110) engaged in a multi-party video conference) comprising:
a storage medium storing a plurality of modules (see Turbell Paragraph [0047], storage machine 512 includes one or more physical devices configured to hold instructions executable by the logic machine); and
wherein the plurality of modules comprises a receiving terminal application and a virtual device (see Turbell Figure 5, computing system 500, Paragraph [0045], Logic machine 510 includes one or more physical devices configured to execute instructions. For example, the logic machine may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs, Paragraph [0041], Graphical selector 416 may correspond to a mode selector that enables a user to switch between displaying a composite video stream (e.g., as depicted at 130 in FIG. 1) or displaying one or more discrete video streams of local or remote participants (e.g., as depicted at display device 114 in FIG. 1) and Figure 2, first communication device 210),
wherein the receiving terminal application is configured to receive the at least one first video data provided by the at least one external data source through the transceiver in response to the at least one external data source of the first option being selected (see Turbell Abstract, first camera 116, second camera 124, Paragraph [0022], at 312, the method includes obtaining a first video stream captured via a first camera associated with a first communication device engaged in the multi-party video conference and Paragraph [0033], an order of layering of imagery components within a composite video stream may be defined by a Z-order value that represents a depth of an imagery component (e.g., a subset of pixels corresponding to a human subject, a video stream, or background imagery) within the composite video stream or individual image frames thereof, Z-order value may be assignable or selectable by users, such as by directing a user input to a selector of a graphical user interface or other user input modality and Paragraph [0058], In this example or any other example disclosed herein, the method further comprises, responsive to a user selection of the image capture selector, capturing an image of the composite video, in which Turbell teaches a user selection of the image capturing device, then capturing video),
and wherein
the virtual device is configured to convert the synthesized video data into formatted video data (see Turbell Paragraph [0035], process 328 may be performed at one or more of the communication devices, in one example, an individual communication device renders the composite video stream, and transmits an instance of the composite video stream to some or all of the other communication devices engaged in the video conference. In another example, each communication device renders its own instance of the composite video stream, thereby enabling two or more communication devices to render and present different composite video streams), and to output the formatted video data (see Turbell Paragraph [0036], at 330, the method includes outputting the composite video stream for presentation by one or more communication devices engaged in the video conference).
Turbell does not expressly teach
a transceiver;
a display, configured to display a first option associated with at least one external data source and a second option associated with at least one video capturing device for selection;
a processor coupled to the at least one video capturing device, the display, the storage medium and the transceiver, and being configured to access and execute the plurality of modules
and the receiving terminal application is configured to receive at least one second video data provided by the at least one video capturing device
wherein the receiving terminal application is configured to synthesize the at least one first video data and the at least one second video data into synthesized video data
However, Rajamani teaches
a transceiver (see Rajamani Paragraph [0032], a transceiver 206);
a processor coupled to the at least one video capturing device, the display, the storage medium and the transceiver, and being configured to access and execute the plurality of modules (see Rajamani Paragraph [0032], The processor 202 may be communicatively coupled to the non-transitory computer readable medium 203, the memory 204, the transceiver 206, the input/output unit 208, the adaptive video layout unit 210, and the trigger event determination unit 212 and may operate in conjunction with each other to update the area allocated for display of video feed, Paragraph [0037], The input/output unit 208 comprises suitable logic, circuitry, interfaces, and/or code that may be configured to display the video feed associated with each of the plurality of participants during the video conference meeting, and Figure 2, transceiver is coupled to input/output unit, and Paragraph [0037], The input/output unit 208 comprises of various input and output devices that are configured to communicate with the processor 202. Examples of the input devices include, but are not limited to, a keyboard, a mouse, a joystick, a touch screen, a microphone, a camera, and/or a docking station)
It would have been obvious to one of ordinary skill in the art before the effective filing date of
the claimed invention to combine the teaching of a video conference device that receives video data, synthesizes said video data, converts into formatted video data, and outputs the formatted video data (as taught in Turbell), with a processor coupled to a camera, display, storage medium and transceiver, and the processor configured to access and execute modules (as taught in Rajamani), the motivation being to address the high bandwidth demands of video conferencing by ensuring strong network connections, and reducing power consumption when inputting, synthesizing, formatting, and outputting imagery by combining a processor with a storage medium and transceiver (see Rajamani Paragraphs [0032]-[0033]).
Turbell in view of Rajamani does not expressly teach
a display, configured to display a first option associated with at least one external data source and a second option associated with at least one video capturing device for selection;
and the receiving terminal application is configured to receive at least one second video data provided by the at least one video capturing device
wherein the receiving terminal application is configured to synthesize the at least one first video data and the at least one second video data into synthesized video data
However, Kim teaches
a display, configured to display a first option associated with at least one external data source and a second option associated with at least one video capturing device for selection (see Kim Figures 8A and 8B, a screen for selecting a data input (audio), Paragraph [0103], the electronic device 101 may display a selection screen 815 for selecting one of a plurality of audio inputs together with an image 805. In an embodiment, the selection screen 815 may guide a user input for selecting at least one of at least one microphone included in the electronic device 101 or at least one external electronic device having a microphone and Paragraph [0111], in the multi-camera shooting mode, the electronic device 101 includes at least two or more of the plurality of camera elements of the front camera 701 and the plurality of camera elements of the rear camera 703 (for example, at least two cameras located on the front side or at least two cameras located on the rear side), the at least two or more camera elements may be selected according to a user input);
It would have been further obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of a video conference device, with a processor coupled to a camera, display, storage medium and transceiver, that receives video data, synthesizes said video data, converts into formatted video data, and outputs the formatted video data (as taught in Turbell in view of Rajamani), with displaying at least one option associated with at least one external data source for selection and a second option associated with at least one video capturing device for selection on a video conference screen (as taught in Kim), the motivation being to provide a video conference user the ability to customize and manually choose the inputs being used on their device to then synchronize (such as audio/video), rather than allowing a system to automatically pick and cause issues (see Kim Figures 8A and 8B, Paragraph [0103], and Paragraph [0111]).
Turbell in view of Rajamani and Kim does not expressly teach
and the receiving terminal application is configured to receive at least one second video data provided by the at least one video capturing device
wherein the receiving terminal application is configured to synthesize the at least one first video data and the at least one second video data into synthesized video data
However, Zhang teaches
and the receiving terminal application is configured to receive at least one second video data provided by the at least one video capturing device (see Zhang, Paragraph [0004], capturing a first image data frame using an image capture device of the teleconferencing endpoint; determining a first region of interest within the first image data frame; rendering the first image data frame as a key frame; capturing, using the camera, a second image data frame; updating data in the key frame corresponding to the second region of interest, to produce a subsequent frame; and transmitting the subsequent frame to a remote endpoint through a network interface of the teleconferencing endpoint),
wherein the receiving terminal application is configured to synthesize the at least one first video data and the at least one second video data into synthesized video data (see Zhang, Paragraph [0004], capturing a first image data frame using an image capture device of the teleconferencing endpoint; determining a first region of interest within the first image data frame; rendering the first image data frame as a key frame; capturing, using the camera, a second image data frame; updating data in the key frame corresponding to the second region of interest, to produce a subsequent frame; and transmitting the subsequent frame to a remote endpoint through a network interface of the teleconferencing endpoint);
It would have been further obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of a video conference device, with a processor coupled to a camera, display, storage medium and transceiver, that receives video data, synthesizes said video data, converts into formatted video data, and outputs the formatted video data that provides at least one option associated with at least one external data source for selection and a second option associated with at least one video capturing device for selection on a video conference screen (as taught in Turbell in view of Rajamani and Kim), with receiving second video data provided by a camera and synthesizing a first video data and a second video data into synthesized video data in a teleconference (as taught in Zhang), the motivation being to address the computational expense of capturing high-resolution data by determining which (sub)regions of an existing frame should be updated with higher resolution data (e.g., regions of interest) while other regions can be updated with lower resolution data or not updated at all (see Zhang Paragraph [0002]).
Regarding Claim 3, Turbell in view of Rajamani, Kim and Zhang teaches
The electronic device according to claim 1, wherein the plurality of modules further comprises:
a transmitting terminal application configured to output the at least one first video data (see Turbell Paragraph [0041], Graphical selector 416 may correspond to a mode selector that enables a user to switch between displaying a composite video stream (e.g., as depicted at 130 in FIG. 1) or displaying one or more discrete video streams of local or remote participants (e.g., as depicted at display device 114 in FIG. 1)) through the transceiver (see Rajamani Paragraph [0036], the transceiver 206 may be further configured to transmit the updated area allocated for display of video feed to each of the plurality of electronic devices, such as electronic device 102, 104 and 106, via the communication network 108).
Regarding Claim 4, Turbell in view of Rajamani, Kim and Zhang teaches
The electronic device according to claim 3, wherein:
the electronic device is configured to receive a user command (see Turbell Paragraph [0041], While FIG. 4 depicts a variety of graphical selectors for performing or initiating a variety of functions, it will be understood that a GUI may support touch interaction and touch-based gestures that may be used by the user to perform or initiate these various functions) through the transceiver (see Rajamani Paragraph [0036], The transceiver 206 may be further configured to transmit the updated area allocated for display of video feed to each of the plurality of electronic devices, such as electronic device 102, 104 and 106, via the communication network 108), and to enable one of the receiving terminal application and the transmitting terminal application according to the user command (see Turbell Paragraph [0041], Graphical selector 416 may correspond to a mode selector that enables a user to switch between displaying a composite video stream (e.g., as depicted at 130 in FIG. 1) or displaying one or more discrete video streams of local or remote participants (e.g., as depicted at display device 114 in FIG. 1)).
Regarding Claim 5, Turbell in view of Rajamani, Kim and Zhang teaches
The electronic device according to claim 3, wherein:
the electronic device is configured to receive a user command through the transceiver, and to disable the transmitting terminal application and to enable the receiving terminal application according to the user command (see Turbell Paragraph [0041], While FIG. 4 depicts a variety of graphical selectors for performing or initiating a variety of functions, it will be understood that a GUI may support touch interaction and touch-based gestures that may be used by the user to perform or initiate these various functions, and Paragraph [0041], Graphical selector 416 may correspond to a mode selector that enables a user to switch between displaying a composite video stream (e.g., as depicted at 130 in FIG. 1) or displaying one or more discrete video streams of local or remote participants (e.g., as depicted at display device 114 in FIG. 1), and see Rajamani Paragraph [0036], The transceiver 206 may be further configured to transmit the updated area allocated for display of video feed to each of the plurality of electronic devices, such as electronic device 102, 104 and 106, via the communication network 108).
Regarding Claim 8, Turbell in view of Rajamani, Kim and Zhang teaches
The electronic device according to claim 1, wherein the plurality of modules further comprises:
a videoconferencing application communicatively connected to a remote communication device through the transceiver (see Turbell Figure 3, step 310 initiate a video conference with two or more communication devices, and Figure 2, in which communication devices are connected to a network), wherein the videoconferencing application is configured to receive the formatted video data from the virtual device (see Turbell Paragraph [0032], At 328, the method includes rendering a composite video stream formed by at least a portion of the second video stream and the subset of pixels of the first video stream, Paragraph [0035], Process 328 may be performed at one or more of the communication devices, an individual communication device renders the composite video stream, and transmits an instance of the composite video stream to some or all of the other communication devices engaged in the video conference), and to perform a video conference between the electronic device and the remote communication device according to the formatted video data (see Turbell Paragraph [0036], At 330, the method includes outputting the composite video stream for presentation by one or more communication devices engaged in the video conference).
Regarding Claim 9, Turbell in view of Rajamani, Kim and Zhang teaches
The electronic device according to claim 1, wherein the receiving terminal application is configured to access a wireless local area network through the transceiver to detect the at least one data source in the wireless local area network (see Rajamani Paragraph [0032], the transceiver 206 may be communicatively coupled to the communication network 108 and Paragraph [0036], transceiver may communicate via wireless communication with networks, such as the Internet, an Intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and Paragraph [0042], the transceiver 206 may be configured to receive the network information and the meeting data associated with each of the plurality of participants, such as Participant A 102a, Participant B 104a, and Participant C 106a), and
the receiving terminal application is configured to receive the at least one first video data from the at least one external electronic device in response to the at least one data source being detected (see Kim Figures 8A and 8B, a screen for selecting a data input (audio), Paragraph [0103], the electronic device 101 may display a selection screen 815 for selecting one of a plurality of audio inputs together with an image 805. In an embodiment, the selection screen 815 may guide a user input for selecting at least one of at least one microphone included in the electronic device 101 or at least one external electronic device having a microphone and Paragraph [0111], in the multi-camera shooting mode, the electronic device 101 includes at least two or more of the plurality of camera elements of the front camera 701 and the plurality of camera elements of the rear camera 703 (for example, at least two cameras located on the front side or at least two cameras located on the rear side), the at least two or more camera elements may be selected according to a user input).
Regarding Claim 10, Turbell in view of Rajamani, Kim and Zhang teaches
The electronic device according to claim 3, wherein the transmitting terminal application is configured to enable an access right to the at least one first video data to the at least one external electronic device in response to the transmitting terminal application being enabled (see Kim Paragraph [0032], The communication module 190 is a direct (eg, wired) communication channel or a wireless communication channel between the electronic device 101 and an external electronic device (eg, the electronic device 102, the electronic device 104, or the server 108). It is possible to support the establishment of and communication through the established communication channel).
Regarding Claim 11, it is rejected similarly as Claim 1. The method can be found in Turbell (Paragraph [0020], method).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to PTO-892, Notice of References Cited for a listing of analogous art.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARISSA A JONES whose telephone number is (703) 756-1677. The examiner can normally be reached via telework, M-F 6:30 AM - 4:00 PM CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen, can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CARISSA A JONES/Examiner, Art Unit 2691
/DUC NGUYEN/Supervisory Patent Examiner, Art Unit 2691