Prosecution Insights
Last updated: April 19, 2026
Application No. 18/631,811

WIRELESS MULTI-STREAM BIDIRECTIONAL VIDEO PROCESSING DEVICE

Status: Non-Final OA (§103)
Filed: Apr 10, 2024
Examiner: TRAN, QUOC DUC
Art Unit: 2691
Tech Center: 2600 — Communications
Assignee: Magic Control Technology Corporation
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 86% (720 granted / 841 resolved; +23.6% vs TC avg, above average)
Interview Lift: +4.8% (minimal, roughly +5%; based on resolved cases with interview)
Avg Prosecution: 2y 7m (typical timeline)
Currently Pending: 17
Total Applications: 858 (career history, across all art units)

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§103: 43.3% (+3.3% vs TC avg)
§102: 30.5% (-9.5% vs TC avg)
§112: 5.3% (-34.7% vs TC avg)
Tech Center averages are estimates; based on career data from 841 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-11 are rejected under 35 U.S.C. 103 as being unpatentable over Spattini (2018/0013981) in view of Murase et al (2012/0047538).

Consider claim 1, Spattini teaches a wireless multi-stream bidirectional video processing device for connecting with a plurality of wireless devices and a host device, the wireless multi-stream bidirectional video processing device comprising: a wireless communication unit for wirelessly connecting with the wireless devices to receive at least one video from the wireless devices (par. 0022; "the apparatus is wirelessly connected with the local mobile devices via an IEEE 802.11 interface for receiving the video signals generated by the video cameras of the plurality of local mobile devices and for sharing the output signal representing the content displayable by the display of the processing device"); and an image processing unit for transmitting a connection request signal to the wireless devices through the wireless communication unit, so that the wireless devices returns a connection permission signal (i.e., standard IEEE 802.11 protocol) and the at least one video according to the connection request signal, and performing processing on the at least one video to become a plurality of output videos (par. 0022-0023; "the apparatus is wirelessly connected with the local processing device executing the video conferencing software via an IEEE 802.11 interface for transmitting the output video communication signal to the local processing device and for receiving the output signal representing the content displayable by the display of the processing device. According to a further aspect, the apparatus is further configured to receive from the local processing device a video signal generated by a video camera associated to the local processing device, the generating means being configured to generate said output video communication stream also based on the video signal received from the local processing device"; par. 0065-0069; "the apparatus 1 captures the video images from all the devices connected to it, crops the images around the face of the participant…Once it has collected the video images, the apparatus 1, by means of the mixing module, is capable of composing the video communication stream 6 toward the personal computer 4…the mixing of the video signals 9 relating to all participants with equal emphasis (all the video images are conveyed in the same video communication stream without any difference between speakers and non-speakers)"); and a mode control unit for transmitting the at least one display information to the wireless devices through the wireless communication unit (par. 0085; "the apparatus 1 comprises sharing means 18 for sharing the desktop of the personal computer 4 connected in the video conference. In particular, the sharing means 18 is operatively connected to the management and control means 14 of each mobile device 10, for the sharing, on the mobile devices, of an output signal 15 originating from the personal computer 4 and relating to the desktop of the personal computer 4 adapted to manage the video conference"), wherein, the wireless multi-stream bidirectional video processing device provides a plurality of endpoints for the host device according to a quantity of the output videos and the at least one display video, so as to transmit the output videos to the host device and receive the at least one display video from the host device (par. 0069; 0085-0088; "the output signal 15 originating directly from the video output of the personal computer 4 is transmitted to the video input of the apparatus 1 and, from there, is transmitted via the sharing means 18 to each of the mobile devices").

Spattini suggest of formatting of video images for transmission thereof (par. 0074; "different configurations with different formats can be adopted as regards both the resolution and disposition of each video image inside the screen 5"; par. 0090; "The video images thus captured are then encapsulated in a stream format that is sent from the personal computer 4 to the apparatus 1, and from the latter to all the connected mobile devices 10"). Spattini does not explicitly suggest of transmitting a format request signal to the wireless devices through the wireless communication unit according to at least one display video received from the host device, so that the wireless devices return a format reply signal according to the format request signal, adjusting the at least one display video according to the format reply signal to become at least one display information.

Murase et al teach an adaptor device and a method for controlling the adaptor device for the source device in a wireless communications system that wirelessly transmits and receives a transmission signal including a video signal. Murase et al teach a mode control unit for transmitting a format request signal to the wireless devices through the wireless communication unit according to at least one display video received from the host device, so that the wireless devices return a format reply signal according to the format request signal, adjusting the at least one display video according to the format reply signal to become at least one display information (par. 0054; "The control signal is a signal needed for an authentication processing and a synchronization processing of the video signal, for example, performed among the source device, the adaptor device for the source device, the adaptor device for the sink device, and the sink device in order to transmit the AV signal from the source device to the sink device in the format that can be displayed in the sink device. More specifically, the control signal includes an authentication requesting signal, an authentication responding signal, an EDID requesting signal, an EDID responding signal, a connection completion notifying signal, and a connection completion responding signal"; par. 0125; "When the format converting unit 609 obtains the format information from the control unit 603, the format converting unit 609 converts the format of the AV signal obtained from the wired transmitting and receiving unit 201 to the format specified by the format information obtained from the control unit 603. Then, the format converting unit 609 outputs the converted format to the wireless transmitting and receiving unit 202").

Therefore, it would have been obvious to one of the ordinary skills in the art before the effective filing date to incorporate the concept of Murase et al into Spattini and the result would have been predictable and resulted in providing display video according to the format supported by the destination devices thereby prevent communications interruption and improved communications between devices.

Consider claim 2, Spattini teaches wherein the host device generates the at least one display video according to the output videos (par. 0085; "the sharing means 18 is operatively connected to the management and control means 14 of each mobile device 10, for the sharing, on the mobile devices, of an output signal 15 originating from the personal computer 4 and relating to the desktop of the personal computer 4 adapted to manage the video conference").

Consider claim 3, Spattini teaches wherein the wireless communication unit and the wireless devices use a Wi-Fi protocol for wireless connection (par. 0089; "With reference to a possible alternative embodiment, the personal computer 4 on which the program used for the video conference is being run (for example Skype) is connected via USB, Ethernet or Wi-Fi to the apparatus 1").

Consider claim 4, Spattini teaches wherein, the image processing unit includes a video processor, a neural network processor, a memory, and a storage unit, and wherein, the video processor and the neural network processor use a program stored in the storage unit to perform the processing, and the memory is used to store the at least one video and the output videos (par. 0109-0110; "The computer system 300 may include one or more processors 310 and one or more non-transitory computer-readable storage media (e.g., memory 320 and/or one or more non-volatile storage media 330). The processor 310 may control writing data to and reading data from the memory 320 and/or the nonvolatile storage device 330 in any suitable known manner. Processor 310, for example, may form the processing means 7, the generating means 17 and the recognition means 13 provided as part of the apparatus 1; the processor 310 may perform the functionality above described for the processing means 7, the generating means 17 and the recognition means 13. To perform the functionality above described of the apparatus 1, the processor 310 may execute instructions stored in one or more computer-readable storage media (e.g., the memory 320, storage media, etc.), which may serve as non-transitory computer-readable storage media storing instructions for execution by processor 310. The computer system 300 includes an input/output functionality 340 to receive data and to provide data, and may include a control apparatus to perform I/O functionality. In particular, the computer system 300, when implementing the apparatus 1, includes one or more antennas for receiving/transmitting video signals from/to the mobile devices 10 and the local processing device 4. The function of the above described sharing means 18 may be performed by the I/O 340 under the control of the processor 310").

Consider claim 5, Spattini teaches further comprising: a USB control unit for receiving the output videos (par. 0082; "the video communication stream 6 is processed by the generating means 17 inside the apparatus 1 and can be transmitted via a physical output such as a USB port, an antenna, or the like to the personal computer 4 and, therefore, to the management software of the video conference. However, different embodiments in which the generating means 17 are implemented in a device separate from the apparatus 1, for example on a personal computer 4, are not ruled out"), reporting an endpoint information to the host device according to the output videos, and receiving the at least one display video, so as to provide the endpoints for the host device according to the quantity of the output videos and the at least one display video (par. 0087; "the apparatus 1 comprises a video input (for example, of the HDMI, DVI or VGA type) and the personal computer 4, on which the program used for the video conference (for example Skype) is being run, is connected via a video output (HDMI/DVI/VGA) to said video input. Therefore, the output signal 15 originating directly from the video output of the personal computer 4 is transmitted to the video input of the apparatus 1 and, from there, is transmitted via the sharing means 18 to each of the mobile devices 10"); and a connection port unit for receiving the output videos from the USB control unit to transmit the output videos to the host device, and for receiving the at least one display video from the host device to transmit the at least one display video to the USB control unit (par. 0089; "the personal computer 4 on which the program used for the video conference is being run (for example Skype) is connected via USB, Ethernet or Wi-Fi to the apparatus 1. In such a case, on the personal computer 4 there is an acquisition module 19 operatively connected to the sharing means 18 on the apparatus 1 and adapted to perform a screen-capture of the desktop of the personal computer 4. For example, the acquisition module 19 can consist of specific client software installable in the personal computer 4").

Consider claim 6, Murase et al teach wherein, the mode control unit includes: a video conversion unit for adjusting the at least one display video according to the format reply signal, wherein, the mode control unit receives the at least one display video from the USB control unit, and wherein, the adjustment includes adjusting a format of the at least one display video and compressing the adjusted format of the at least one display video into the at least one display information, so as to facilitate wireless transmission (par. 0125; "When the format converting unit 609 obtains the format information from the control unit 603, the format converting unit 609 converts the format of the AV signal obtained from the wired transmitting and receiving unit 201 to the format specified by the format information obtained from the control unit 603. Then, the format converting unit 609 outputs the converted format to the wireless transmitting and receiving unit 202"; par. 0150; "With reference to the EDID information obtained from the EDID table, of formats of the AV signal that can be displayed in the currently connected sink device, only the format information for specifying the format allowing wireless transmission with desired transmission quality and corresponding to the video and audio with the highest quality is stored").

Consider claim 7, Spattini teaches wherein, the processing of the at least one video includes at least one video display mode, the image processing unit processes the at least one video according to the video display mode that is selected, and wherein, the video display mode includes picture-in-picture, side-by-side picture, picture cropping, picture overlapping, picture zoom-in and zoom-out, and original picture (par. 0060-0065; "the apparatus 1 captures the video images from all the devices connected to it, crops the images around the face of the participant and is capable of identifying which one of them is actually speaking").

Consider claim 8, Spattini teaches wherein the processing further includes a format processing to convert a format of the received at least one video into a format that complies with a USB video class, so as to allow the connection port unit to transmit the output videos (par. 0082; "the video communication stream 6 is processed by the generating means 17 inside the apparatus 1 and can be transmitted via a physical output such as a USB port, an antenna, or the like to the personal computer 4 and, therefore, to the management software of the video conference. However, different embodiments in which the generating means 17 are implemented in a device separate from the apparatus 1, for example on a personal computer 4, are not ruled out"; par. 0090; "The video images thus captured are then encapsulated in a stream format that is sent from the personal computer 4 to the apparatus 1, and from the latter to all the connected mobile devices 10").

Consider claim 9, Spattini teaches wherein the processing further includes a resolution processing to convert a resolution of the received at least one video into a resolution consistent with the host device (par. 0074; "different configurations with different formats can be adopted as regards both the resolution and disposition of each video image inside the screen 5").

Consider claim 10, Spattini teaches wherein, the image processing unit processes the at least one display video before the mode control unit adjust the at least one display video (par. 0082; "the video communication stream 6 is processed by the generating means 17 inside the apparatus 1 and can be transmitted via a physical output such as a USB port"; i.e., video is process prior to sending the video to be display).

Consider claim 11, Spattini teaches wherein, the processing of the at least one display video includes the video display mode, the image processing unit processes the at least one display video according to the video display mode that is selected (par. 0060-0069; "the generating means 17 can be configured to change a mode of generating the output video communication stream depending on a recognition of a speaker based on the received framing command"; "the apparatus 1, by means of the mixing module, is capable of composing the video communication stream 6 toward the personal computer 4. Depending on the specific layout selected by the user").

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Any response to this action should be mailed to: Mail Stop ____(explanation, e.g., Amendment or After-final, etc.), Commissioner for Patents, P.O. Box 1450, Alexandria, VA 22313-1450. Facsimile responses should be faxed to: (571) 273-8300. Hand-delivered responses should be brought to: Customer Service Window, Randolph Building, 401 Dulany Street, Alexandria, VA 22314.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC DUC TRAN whose telephone number is (571) 272-7511. The examiner can normally be reached Monday-Friday 8:30am - 5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen can be reached on (571) 272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Quoc D Tran/
Primary Examiner, Art Unit 2691
November 19, 2025

Prosecution Timeline

Apr 10, 2024
Application Filed
Nov 19, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598268
STAGE USER REPLACEMENT TECHNIQUES FOR ONLINE VIDEO CONFERENCES
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12598251
PREVENTING DEEP FAKE VOICEMAIL SCAMS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592989
DETECTING A SPOOFED CALL
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12593011
APPARATUS AND METHODS FOR VISUAL SUMMARIZATION OF VIDEOS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12581033
ENFORCING A LIVENESS REQUIREMENT ON AN ENCRYPTED VIDEOCONFERENCE
Granted Mar 17, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants; studying what changed in those prosecutions can suggest how to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 90% (+4.8%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 841 resolved cases by this examiner. Grant probability derived from career allow rate.
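The page states that grant probability is derived from the examiner's career allow rate, with the interview lift applied on top. A minimal sketch of that arithmetic, assuming simple rounding to whole percentage points (the tool's exact formula is not disclosed):

```python
# Illustrative reconstruction of the headline projections from the
# career data shown on this page. Variable names are assumptions.
granted = 720            # from "720 granted / 841 resolved"
resolved = 841
interview_lift = 0.048   # "+4.8% Interview Lift"

allow_rate = granted / resolved                      # career allow rate
grant_probability = round(allow_rate * 100)          # headline figure, %
with_interview = round((allow_rate + interview_lift) * 100)

print(f"Grant probability: {grant_probability}%")    # 86%
print(f"With interview: {with_interview}%")          # 90%
```

720/841 is about 85.6%, which rounds to the displayed 86%; adding the 4.8-point interview lift gives roughly 90.4%, matching the displayed 90%.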
