Prosecution Insights
Last updated: April 19, 2026
Application No. 18/119,071

INPUT AND OUTPUT DEVICE FOR MULTIPLE FORMATS OF VIDEO CLASS

Status: Non-Final OA (§103)
Filed: Mar 08, 2023
Examiner: MA, MICHELLE HAU
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Magic Control Technology Corporation
OA Round: 3 (Non-Final)

Grant Probability: 81% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (above average; 17 granted / 21 resolved; +19.0% vs TC avg)
Interview Lift: strong, +36.4% in resolved cases with interview
Typical Timeline: 2y 7m avg prosecution; 35 currently pending
Career History: 56 total applications across all art units

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 84.2% (+44.2% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 21 resolved cases
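The headline percentages above are simple ratios over the examiner's 21 resolved cases. A minimal sketch of the arithmetic, assuming only the 17 granted / 21 resolved counts and the "+19.0% vs TC avg" delta shown above (the function name is illustrative, not part of any tool):

```python
# Reduce the dashboard's headline figures from the underlying counts.
# The counts (17 granted of 21 resolved) and the +19.0% delta come from
# the figures above; everything else is arithmetic.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

rate = allow_rate(17, 21)    # 17/21 rounds to the displayed 81%
tc_average = rate - 19.0     # the "+19.0% vs TC avg" delta implies a ~62% baseline

print(f"Career allow rate: {rate:.0f}%")
print(f"Implied TC average: {tc_average:.1f}%")
```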

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed April 24, 2025 has been entered. Claims 1-12 are pending in the application. Applicant’s amendments to the Claims have overcome the rejections under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Applicant’s amendments to the Claims have overcome the rejections under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Furthermore, the amendments to the Claims no longer invoke claim interpretation under 35 U.S.C. 112(f).

Response to Arguments

Applicant’s arguments, see page, filed August 14, with respect to the rejection(s) of claim(s) 1-12 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Nakanishi. See the 35 U.S.C. 103 rejection below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 3-5, 7, and 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Soffer et al. (US 20160050375 A1) in view of Hsueh (CN 110162284 A) and Nakanishi et al. (US 20160179722 A1), hereinafter Soffer, Hsueh, and Nakanishi respectively. Regarding claim 1, Soffer teaches an input and output device for multiple formats of video class (Fig. 2 201, Paragraph 0118 – “Device 201 is having multiple format video input ports 10x to enable connection of wide variety of video devices”; Note: Device 201 is the equivalent of the input and output device), which receives images from at least two image sources connected thereto and outputs the images to a device connected thereto (Fig. 
2 201, 2; Paragraph 0049, 0056-0057, 0118, 0131 – “at least one external device other than the meeting room power and video center device is selected from a group consisting of: video display, video projector, audio speakers, audio microphone, video camera, and video camera panning actuator…when at least two valid video signals are present in two video input ports, said video processor integrates video signals from at least a first video input port and a second video input port from said plurality of video input ports…Device 201 is having multiple format video input ports 10x to enable connection of wide variety of video devices…Video rescaler 77 output 21 is coupled to external wired display port 27 that used to connected a display or projector 2 using proper cable”; Note: Device 201 receives video input, which can be from multiple sources, and the video input is displayed as output on display or projector 2), the input and output device for multiple formats of video class comprising: a USB host controller receiving the images (Fig. 2 10x, 20; Paragraph 0049, 0118-0122, 0149 – “at least one external device other than the meeting room power and video center device is selected from a group consisting of: video display, video projector, audio speakers, audio microphone, video camera, and video camera panning actuator…Device 201 is having multiple format video input ports 10x to enable connection of wide variety of video devices. All video inputs are connected to the main video switch 14. Video input ports 10x are either directly connected to video switch 14 (for example video input port 10a) or indirectly through various electronic circuitry…Video input port 10c may support connectivity protocols such as USB 3.0, Thunderbolt, Lightning, or DockPort by connecting input port 10c via a Docking Controller (DCO) 6…The results of this analysis are communicated to the SC 20 via serial lines 19. 
SC 20 uses these results to provide clear user indications for each input and to enable automatic switching of an active video input”; Note: System controller function 20, which is equivalent to the USB host controller, can receive video input from a camera through USB ports 10x, as shown in Fig. 2 below) from the at least two image sources (Paragraph 0056-0057 – “when at least two valid video signals are present in two video input ports, said video processor integrates video signals from at least a first video input port and a second video input port from said plurality of video input ports”; Note: the device can receive videos from more than two input sources); an image processor connected to the USB host controller and receiving the images and performing an image merge processing on the images to merge the images into a merged image (Fig. 5 60, 20, 2; Paragraph 0164-0167 – “The use of video processor function 60 enables integration of multiple video input sources into one display or projector 2. As an example, in this FIG. 5, the user had selected to see video source from computer 4 at display 2 background 66 and video source 3 at display 2 window 67 Picture in Picture display (PIP)”; Note: The video processor function 60 is equivalent to the image processor. The video processor function 60 is connected to system controller 20, as shown in Fig. 5 below, receives video input, and can integrate multiple video input into one display); and a wireless transmitter connected to the USB host controller and wirelessly connected to the computing device, and receiving the merged image from the image processor to wirelessly transmit the merged image to the computing device (Fig. 5 22, 20, 21a, 60, 2; Paragraph 0131 – “Another function coupled to the video output lines 21 is the wireless video transmitter (VL) 22 that is coupled to antenna 24. 
This video transmitter (VL) 22 transmits the selected video output into a matching video receiver, for example video receiver dongle 25 that is coupled through video connector 26 into display or projector 2. Video connector 26 is preferably HDMI type. This arrangement enables wireless support to nearby display or projector thus eliminating the need to wire the meeting room with video cables”; Note: Wireless video transmitter 22, which is equivalent to the wireless module, is connected to the system controller 20 through output 21a and video processor 60, as shown in Fig. 5 below. It is wirelessly connected to the display 2 and receives video input from the video processor to transmit to the display 2), wherein the image merge processing includes at least one merged image display mode, the image processor transmits the merged image to the device according to a selected merged image display mode (Fig. 5 60, 2; Paragraph 0165 – “As an example, in this FIG. 5, the user had selected to see video source from computer 4 at display 2 background 66 and video source 3 at display 2 window 67 Picture in Picture display (PIP)”; Note: The video processor function 60 can merge the video input for a selected Picture in Picture display, which is displayed on display 2. The Picture in Picture display is a merged image display mode).

[Modified screenshot of Fig. 2 (taken from Soffer)]

[Modified screenshot of Fig. 5 (taken from Soffer)]

Soffer does not teach the “computing device” in the following claim limitations: “an input and output device for multiple formats of video class, which…outputs the images to a computing device connected thereto” and “the image processor transmits the merged image to the computing device”; Soffer only teaches outputting and transmitting to a display device.
However, Hsueh teaches outputting and transmitting to a computing device (Paragraph 0051 – “in order to allow the user to further see the image output by the controlled device 30, the control interface device 2b is further connected to a portable device 31 through the first USB interface 21, which can be a device with display function such as a smart phone, a smart tablet computer or a laptop computer, or a display that simply supports the UVC image format. In this embodiment, the portable device 31 is a first handheld smart device 310, which is electrically connected to the control interface device 2 b via the USB interface 21 to receive and display USB video type image signals”; Note: the portable device 31 is a computing device, such as a smart phone, and it receives image signals). A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the computing device of Hsueh could have been substituted for the display device of Soffer because both the display device and computing device serve the purpose of providing a way to view, present, and/or interact with the output image. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of displaying the output image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the display device of Soffer for the computing device of Hsueh according to known methods to yield the predictable result of displaying the output image. Furthermore, Soffer does not teach wherein the merged image outputted to the computing device includes any one of multiple formats of USB video class. 
However, Hsueh teaches wherein output to the computing device includes any one of multiple formats of USB video class (Paragraph 0040 – “The image conversion module 20 is electrically connected to an image source via a signal transmission line having an image input interface 24 to convert an image signal output by the image source into a USB video class (UVC) signal having an image format different from that of the image signal”; Note: the output image signal is a USB video class). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soffer to incorporate the teachings of Hsueh to output a format of USB video class for the benefit of increasing compatibility with computing devices so that many different computing devices can be used with the input and output device for displaying images (Hsueh: Paragraph 0008). Moreover, Soffer does not teach the “wireless transceiver” from the claim limitation: “a wireless transceiver connected to the USB host controller and wirelessly connected to the computing device, and receiving the merged image from the image processor to wirelessly transmit the merged image to the computing device”. However, Nakanishi teaches a wireless transceiver connected to the USB host controller and wirelessly connected to the computing device (Paragraph 0024, 0028-0029 – “When the computer 10 and the USB device 20 are located close to each other within the communication range, the wireless communications can be established between the computer 10 and the USB device 20…The computer 10 communicates with USB device 20 using the USB host controller and wireless transceiver. The USB host controller is a host controller conforming to the USB 3.0 standard. The wireless transceiver is connected to the USB host controller through a USB bus”; Note: the wireless transceiver is connected to the USB host controller and wirelessly connects to the computer.
Because it is a transceiver, it has transmitting and receiving capabilities). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soffer to incorporate the teachings of Nakanishi to have a wireless transceiver because while Soffer has a wireless transmitter (Fig. 5 22) and wireless link receiver (Fig. 5 15), having a single device that can both transmit and receive image data is beneficial for reducing the amount of hardware required for communication. Additionally, a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the wireless transmitter and receiver of Soffer could have been substituted for the wireless transceiver of Nakanishi because both the wireless transmitter and receiver and the wireless transceiver serve the purpose of wirelessly obtaining and sending data from a source device to a destination device. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of transmitting and receiving data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the wireless transmitter and receiver of Soffer for the wireless transceiver of Nakanishi according to known methods to yield the predictable result of obtaining and sending data. Regarding claim 3, Soffer in view of Hsueh and Nakanishi teaches the input and output device for multiple formats of video class from claim 1. Soffer further teaches at least one USB port connected to the at least two image sources, wherein the USB host controller is connected to the at least one USB port to receive the images (Fig. 
2 10x, 20, 14, 17; Paragraph 0049, 0118-0122, 0149 – “at least one external device other than the meeting room power and video center device is selected from a group consisting of: video display, video projector, audio speakers, audio microphone, video camera, and video camera panning actuator…Device 201 is having multiple format video input ports 10x to enable connection of wide variety of video devices… Video input port 10c may support connectivity protocols such as USB 3.0…the results of this analysis are communicated to the SC 20 via serial lines 19. SC 20 uses these results to provide clear user indications for each input and to enable automatic switching of an active video input”; Note: Video input can come from a camera through USB ports 10x, and these USB ports are connected to the system controller function 20, which is equivalent to the USB host controller, by the video switch 14 and line 17, as shown in Fig. 2 above) from the at least two image sources (Paragraph 0056-0057 – “when at least two valid video signals are present in two video input ports, said video processor integrates video signals from at least a first video input port and a second video input port from said plurality of video input ports”; Note: the device can receive videos from more than two input sources). Regarding claim 4, Soffer in view of Hsueh and Nakanishi teaches the input and output device for multiple formats of video class from claim 1. Soffer further teaches wherein the image processor comprises a processor, a memory, and a storage unit, wherein the processor uses a program stored in the storage unit to execute the at least one merged image display mode, and the memory is used for storing the merged image (Fig. 
5 60, 64; Paragraph 0166, 0205 – “Frame Buffer memory (FB) 64 coupled to video processor function 60 is used to temporarily store video frames data while in processing… the term ‘microcontroller function’ or other references to ‘function’ or ‘functions’ may refer to hardware capable of performing the logical function. The hardware may comprise one or a plurality of electronic circuitries. The hardware may be based on an ASIC (Application Specific Integrated Circuit), a processor accompanied with the necessary memory”; Note: Video processor function 60, which is equivalent to the image processor, has memory 64 for storing video frame data. It can have hardware based on a processor and memory for storing the logical function). Regarding claim 5, Soffer in view of Hsueh and Nakanishi teaches the input and output device for multiple formats of video class from claim 1. Soffer further teaches an image input controller connected to the image processor, receiving the images, and converting an image format into another image format supported by the image processor to allow the image processor to receive the image and perform an image merge processing on the images (Fig. 2 7b, 14; Paragraph 0021-0024 – “device further comprises at least one video converter, said at least one video converter is for: receiving video signal from one of said plurality of video input ports in a first video standard; converting said video signal in a first video standard to video signal in a different second video standard; and transmitting said video signal in a different second video standard to said video processor”; Note: Video converter 7b, which is equivalent to the image input controller is connected to the video switch/processor 14, which is equivalent to the image processor. It can receive video signals, convert them into a different standard, and send them to the video switch/processor 14). 
Soffer does not directly teach that the image input controller receives images from the at least two image sources. However, Soffer separately teaches an image input controller (Fig. 2 7b, 14; Paragraph 0021-0024 – “device further comprises at least one video converter, said at least one video converter is for: receiving video signal from one of said plurality of video input ports in a first video standard; converting said video signal in a first video standard to video signal in a different second video standard; and transmitting said video signal in a different second video standard to said video processor”) and receiving images from the at least two image sources (Paragraph 0056-0057 – “when at least two valid video signals are present in two video input ports, said video processor integrates video signals from at least a first video input port and a second video input port from said plurality of video input ports”). Soffer has multiple image input controllers, each receiving from one image source (Modified screenshot of Fig. 2 above shows that there is a video converter for USB ports 10b and 10e). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined those features of Soffer to have one image input controller receive images from at least two image sources, because if there are a couple of image sources, then having one image input controller instead of multiple ones would make the device more compact and require less hardware. Furthermore, when receiving from multiple image sources, there is a finite number of realistic ways to arrange the image input controller; either there can be multiple image input controllers, such as one for each image source, or there can be a single image input controller.
One of ordinary skill in the art could have one image input controller receive images from two or more image sources with a reasonable expectation of success and would have done so for the benefit of having a compact device. Therefore, it would have been obvious to try the solution of having one image input controller receive images from two or more image sources. Regarding claim 7, Soffer teaches a method of inputting and outputting multiple formats of video class (Fig. 2 201, Paragraph 0118, 0131 – “Device 201 is having multiple format video input ports 10x to enable connection of wide variety of video devices…video transmitter (VL) 22 transmits the selected video output into a matching video receiver, for example video receiver dongle 25 that is coupled through video connector 26 into display or projector 2”; Note: Provides a method for inputting and outputting videos by using device 201) for receiving images from at least two image sources (Paragraph 0056-0057 – “when at least two valid video signals are present in two video input ports, said video processor integrates video signals from at least a first video input port and a second video input port from said plurality of video input ports”; Note: the device can receive videos from more than two input sources) and outputting the images to a device (Fig. 
2 201, 2; Paragraph 0049, 0118, 0131 – “at least one external device other than the meeting room power and video center device is selected from a group consisting of: video display, video projector, audio speakers, audio microphone, video camera, and video camera panning actuator…Device 201 is having multiple format video input ports 10x to enable connection of wide variety of video devices…Video rescaler 77 output 21 is coupled to external wired display port 27 that used to connected a display or projector 2 using proper cable”; Note: Device 201 receives video input, and the video input is displayed as output on display or projector 2), the method comprising: receiving the images from at least two image sources using a USB host controller (Fig. 2 10x, 20; Paragraph 0049, 0118-0122, 0149 – “at least one external device other than the meeting room power and video center device is selected from a group consisting of: video display, video projector, audio speakers, audio microphone, video camera, and video camera panning actuator…Device 201 is having multiple format video input ports 10x to enable connection of wide variety of video devices. All video inputs are connected to the main video switch 14. Video input ports 10x are either directly connected to video switch 14 (for example video input port 10a) or indirectly through various electronic circuitry…Video input port 10c may support connectivity protocols such as USB 3.0, Thunderbolt, Lightning, or DockPort by connecting input port 10c via a Docking Controller (DCO) 6…The results of this analysis are communicated to the SC 20 via serial lines 19. SC 20 uses these results to provide clear user indications for each input and to enable automatic switching of an active video input”; Note: System controller function 20, which is equivalent to the USB host controller, can receive video input from a camera through USB ports 10x, as shown in Fig. 
2 above), connecting an image processor to the USB host controller to receive the images and perform an image merge processing on the images to merge the images into a merged image (Fig. 5 60, 20, 2; Paragraph 0164-0167 – “The use of video processor function 60 enables integration of multiple video input sources into one display or projector 2. As an example, in this FIG. 5, the user had selected to see video source from computer 4 at display 2 background 66 and video source 3 at display 2 window 67 Picture in Picture display (PIP)”; Note: The video processor function 60 is equivalent to the image processor. The video processor function 60 is connected to system controller 20, as shown in Fig. 5 above, receives video input, and can integrate multiple video input into one display); and connecting a wireless transmitter to the USB host controller and wirelessly connecting the wireless transceiver to the computing device, and receiving the merged image from the image processor to wirelessly transmit the merged image to the computing device (Fig. 5 22, 20, 21a, 60, 2; Paragraph 0131 – “Another function coupled to the video output lines 21 is the wireless video transmitter (VL) 22 that is coupled to antenna 24. This video transmitter (VL) 22 transmits the selected video output into a matching video receiver, for example video receiver dongle 25 that is coupled through video connector 26 into display or projector 2. Video connector 26 is preferably HDMI type. This arrangement enables wireless support to nearby display or projector thus eliminating the need to wire the meeting room with video cables”; Note: Wireless video transmitter 22, which is equivalent to the wireless module, is connected to the system controller 20 through output 21a and video processor 60, as shown in Fig. 5 above. 
It is wirelessly connected to the display 2 and receives video input from the video processor to transmit to the display 2), wherein the image merge processing includes at least one merged image display mode, the image processor transmits the merged image to the device according to a selected merged image display mode (Fig. 5 60, 2; Paragraph 0165 – “As an example, in this FIG. 5, the user had selected to see video source from computer 4 at display 2 background 66 and video source 3 at display 2 window 67 Picture in Picture display (PIP)”; Note: The video processor function 60 can merge the video input for a selected Picture in Picture display, which is displayed on display 2. The Picture in Picture display is a merged image display mode). Soffer does not teach the “computing device” in the following claim limitations: “an input and output device for multiple formats of video class, which…outputs the images to a computing device connected thereto” and “the image processor transmits the merged image to the computing device”; Soffer only teaches outputting and transmitting to a display device. However, Hsueh teaches outputting and transmitting to a computing device (Paragraph 0051 – “in order to allow the user to further see the image output by the controlled device 30, the control interface device 2b is further connected to a portable device 31 through the first USB interface 21, which can be a device with display function such as a smart phone, a smart tablet computer or a laptop computer, or a display that simply supports the UVC image format. In this embodiment, the portable device 31 is a first handheld smart device 310, which is electrically connected to the control interface device 2 b via the USB interface 21 to receive and display USB video type image signals”; Note: the portable device 31 is a computing device, such as a smart phone, and it receives image signals). 
A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the computing device of Hsueh could have been substituted for the display device of Soffer because both the display device and computing device serve the purpose of providing a way to view, present, and/or interact with the output image. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of displaying the output image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the display device of Soffer for the computing device of Hsueh according to known methods to yield the predictable result of displaying the output image. Furthermore, Soffer does not teach wherein the merged image outputted to the computing device includes any one of multiple formats of USB video class. However, Hsueh teaches wherein output to the computing device includes any one of multiple formats of USB video class (Paragraph 0040 – “The image conversion module 20 is electrically connected to an image source via a signal transmission line having an image input interface 24 to convert an image signal output by the image source into a USB video class (UVC) signal having an image format different from that of the image signal”; Note: the output image signal is a USB video class). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soffer to incorporate the teachings of Hsueh to output a format of USB video class for the benefit of increasing compatibility with computing devices so that many different computing devices can be used with the input and output device for displaying images (Hsueh: Paragraph 0008). 
Moreover, Soffer does not teach the “wireless transceiver” from the claim limitation: “connecting a wireless transceiver to the USB host controller and wirelessly connecting the wireless transceiver to the computing device, and receiving the merged image from the image processor to wirelessly transmit the merged image to the computing device”. However, Nakanishi teaches a wireless transceiver connected to the USB host controller and wirelessly connected to the computing device (Paragraph 0024, 0028-0029 – “When the computer 10 and the USB device 20 are located close to each other within the communication range, the wireless communications can be established between the computer 10 and the USB device 20…The computer 10 communicates with USB device 20 using the USB host controller and wireless transceiver. The USB host controller is a host controller conforming to the USB 3.0 standard. The wireless transceiver is connected to the USB host controller through a USB bus”; Note: the wireless transceiver is connected to the USB host controller and wirelessly connects to the computer. Because it is a transceiver, it has transmitting and receiving capabilities). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soffer to incorporate the teachings of Nakanishi to have a wireless transceiver because while Soffer has a wireless transmitter (Fig. 5 22) and wireless link receiver (Fig. 5 15), having a single device that can both transmit and receive image data is beneficial for reducing the amount of hardware required for communication.
Additionally, a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the wireless transmitter and receiver of Soffer could have been substituted for the wireless transceiver of Nakanishi because both the wireless transmitter and receiver and the wireless transceiver serve the purpose of wirelessly obtaining and sending data from a source device to a destination device. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of transmitting and receiving data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the wireless transmitter and receiver of Soffer for the wireless transceiver of Nakanishi according to known methods to yield the predictable result of obtaining and sending data. Regarding claim 9, Soffer in view of Hsueh and Nakanishi teaches the method of claim 7. Soffer further teaches connecting at least one USB port to the at least two image sources, wherein the USB host controller is connected to the at least one USB port to receive the images (Fig. 2 10x, 20, 14, 17; Paragraph 0049, 0118-0122, 0149 – “at least one external device other than the meeting room power and video center device is selected from a group consisting of: video display, video projector, audio speakers, audio microphone, video camera, and video camera panning actuator…Device 201 is having multiple format video input ports 10x to enable connection of wide variety of video devices… Video input port 10c may support connectivity protocols such as USB 3.0…the results of this analysis are communicated to the SC 20 via serial lines 19. 
SC 20 uses these results to provide clear user indications for each input and to enable automatic switching of an active video input”; Note: Video input can come from a camera through USB ports 10x, and these USB ports are connected to the system controller function 20, which is equivalent to the USB host controller, by the video switch 14 and line 17, as shown in Fig. 2 above) from the at least two image sources (Paragraph 0056-0057 – “when at least two valid video signals are present in two video input ports, said video processor integrates video signals from at least a first video input port and a second video input port from said plurality of video input ports”; Note: the device can receive videos from two or more input sources).

Regarding claim 10, Soffer in view of Hsueh and Nakanishi teaches the method of claim 7. Soffer further teaches wherein the image processor comprises a processor, a memory, and a storage unit, wherein the processor uses a program stored in the storage unit to execute the at least one merged image display mode, and the memory is used for storing the merged image (Fig. 5 60, 64; Paragraph 0166, 0205 – “Frame Buffer memory (FB) 64 coupled to video processor function 60 is used to temporarily store video frames data while in processing… the term ‘microcontroller function’ or other references to ‘function’ or ‘functions’ may refer to hardware capable of performing the logical function. The hardware may comprise one or a plurality of electronic circuitries. The hardware may be based on an ASIC (Application Specific Integrated Circuit), a processor accompanied with the necessary memory”; Note: Video processor function 60, which is equivalent to the image processor, has memory 64 for storing video frame data. It can have hardware based on a processor and memory for storing the logical function).

Regarding claim 11, Soffer in view of Hsueh and Nakanishi teaches the method of claim 7.
Soffer further teaches connecting an image input controller to the image processor, receiving the image, and converting an image format into another image format supported by the image processor to allow the image processor to receive the image and perform an image merge processing on the images (Fig. 2 7b, 14; Paragraph 0021-0024 – “device further comprises at least one video converter, said at least one video converter is for: receiving video signal from one of said plurality of video input ports in a first video standard; converting said video signal in a first video standard to video signal in a different second video standard; and transmitting said video signal in a different second video standard to said video processor”; Note: Video converter 7b, which is equivalent to the image input controller is connected to the video switch/processor 14, which is equivalent to the image processor. It can receive video signals, convert them into a different standard, and send them to the video switch/processor 14). Soffer does not directly teach that the image input controller receives images from the at least two image sources. However, Soffer separately teaches an image input controller (Fig. 2 7b, 14; Paragraph 0021-0024 – “device further comprises at least one video converter, said at least one video converter is for: receiving video signal from one of said plurality of video input ports in a first video standard; converting said video signal in a first video standard to video signal in a different second video standard; and transmitting said video signal in a different second video standard to said video processor”) and receiving images from the at least two image sources (Paragraph 0056-0057 – “when at least two valid video signals are present in two video input ports, said video processor integrates video signals from at least a first video input port and a second video input port from said plurality of video input ports”). 
Soffer has multiple image input controllers, each receiving from one image source (the modified screenshot of Fig. 2 above shows that there is a video converter for USB ports 10b and 10e). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined those features of Soffer to have one image input controller receive images from at least two image sources, because if there are only a couple of image sources, then having one image input controller instead of multiple ones would make the device more compact and require less hardware. Furthermore, when receiving from multiple image sources, there is a finite number of realistic ways to arrange the image input controllers: either there can be multiple image input controllers, such as one for each image source, or there can be a single image input controller. One of ordinary skill in the art could have one image input controller receive images from two or more image sources with a reasonable expectation of success and would have done so for the benefit of having a compact device. Therefore, it would have been obvious to try the solution of having one image input controller receive images from two or more image sources.

Claims 2 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Soffer in view of Hsueh, Nakanishi, and Chen et al. (US 8156267 B2), hereinafter Chen.

Regarding claim 2, Soffer in view of Hsueh and Nakanishi teaches the input and output device for multiple formats of video class from claim 1.
Soffer does not teach at least two USB ports connected to the at least two image sources and the computing device, respectively, wherein the USB host controller is connected to at least one USB port of the at least two USB ports connected to the at least two image sources to receive the images from the at least two image sources; and a USB device controller connected to the USB host controller and at least one USB port of the at least two USB ports connected to the computing device, and receiving the merged image from the image processor to transmit the merged image to the computing device. However, Chen teaches at least two USB ports connected to the at least two image sources and a device, respectively, wherein the USB host controller is connected to at least one USB port of the at least two USB ports (Fig. 3 104a, 100, 100a, 20a, 102a, 30; Col. 5 lines 5-11, 18-21, 29-37 – “first controller 104a selectively connects one of the plurality of the first USB hubs 100 via a switch form …When the first controller 104a receives the control signal from the processor 105, the first controller 104a connects with the target image input apparatus 20a corresponding to the first port 100a. By doing so, a signal channel is established, which makes the target image input apparatus 20a connect with the target peripheral apparatus 30a via the first port 100a, the first controller 104a, the second controller 104b, the second USB hub 102 and the second port 102a to communicate with the target peripheral apparatus 30a”; Note: The first controller 104a, which is equivalent to the USB host controller, is connected to the first USB hub 100, as shown in Fig. 3 below. First USB hub 100 has USB ports 100a that connect to image input device 20. There are also USB ports 102a for peripheral devices 30) connected to the at least two image sources to receive the images from the at least two image sources (Col. 
3 lines 37-46 – “In addition to the image input apparatus 20 or the said peripheral apparatus, the first USB hubs 100 and the second USB hubs 102 may also connect with other peripheral apparatus or other data processing equipments. For example, the first ports 100a of the first hubs 100 and the second ports 102a of the second hubs 102 may connect with the input apparatus such as a keyboard, a mouse, and so on or the storage apparatus such as a hard disk, a CD-ROM drive, a flash drive and so on or the peripheral apparatus such as a multi-media player”; Note: the USB ports can be connected to multiple image input sources); and a USB device controller connected to the USB host controller and at least one USB port of the at least two USB ports connected to the device, and receiving the merged image from the image processor to transmit the merged image to the device (Fig. 3 104b, 104a, 102, 102a, 30, 105; Col. 5 lines 7-11, 22-37 – “the first controller 104a selectively connects one of the plurality of the first USB hubs 100 via a switch form and the second controller 104b also selectively connects one of the plurality of the second USB hubs 102 via a switch form; the first controller 104a and the second controller 104b connect with the processor 105 respectively…the second controller 104b, according to an assignment signal, selects a second USB hub 102 and its connecting peripheral apparatus 30 as the target peripheral apparatus 30a… When the first controller 104a receives the control signal from the processor 105, the first controller 104a connects with the target image input apparatus 20a corresponding to the first port 100a. 
By doing so, a signal channel is established, which makes the target image input apparatus 20a connect with the target peripheral apparatus 30a via the first port 100a, the first controller 104a, the second controller 104b, the second USB hub 102 and the second port 102a to communicate with the target peripheral apparatus 30a”; Note: Second controller 104b is equivalent to the USB device controller. It is connected to the first controller 104a, which is equivalent to the USB host controller, and connected to the second USB hub 102, which has USB ports 102a connected to peripheral device 30, as shown in Fig. 3 below. Second controller 104b can receive the input from the processor 105 to transmit to the peripheral device 30). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soffer to incorporate the teachings of Chen to have a USB device controller for the benefit of allowing “all peripheral apparatuses to communicate with the data processing equipment via the unified interface... the USB port at present is mostly regarded as the standard connection interfaces for the peripheral apparatus as well as the data processing equipment in the market” (Chen: Col. 1 lines 39-44). The USB connections between the input and output devices would allow for effective resource sharing (Chen: Col. 1 lines 52-57).

[Modified screenshot of Fig. 3 (taken from Chen)]

Regarding claim 8, Soffer in view of Hsueh and Nakanishi teaches the method from claim 7.
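For readers tracing Chen's Fig. 3 mapping, the switched signal channel described above can be sketched as follows. Only the reference numerals (104a, 104b, 100a, 102a, 20a, 30a, 105) come from Chen; the code and all identifiers are hypothetical illustrations.

```python
# Illustrative sketch (hypothetical names) of Chen's Fig. 3 channel: a
# host-side controller (104a) selects an image-input port and a
# device-side controller (104b) selects a peripheral port, under control
# of processor 105, forming a source-to-peripheral signal path.

class SwitchController:
    """Switch-style controller that selects one attached port at a time."""
    def __init__(self, ports):
        self.ports = ports        # port id -> attached device
        self.selected = None

    def select(self, port_id):
        if port_id not in self.ports:
            raise KeyError(f"no such port: {port_id}")
        self.selected = port_id
        return self.ports[port_id]

def establish_channel(host_ctrl, device_ctrl, image_port, peripheral_port):
    """Processor-driven selection, as with processor 105 in Chen."""
    source = host_ctrl.select(image_port)              # 104a -> port 100a
    destination = device_ctrl.select(peripheral_port)  # 104b -> port 102a
    return (source, destination)

host_ctrl = SwitchController({"100a": "image input 20a", "100b": "keyboard"})
device_ctrl = SwitchController({"102a": "peripheral 30a", "102b": "storage"})

# Establish the 20a -> 100a -> 104a -> 104b -> 102a -> 30a path.
channel = establish_channel(host_ctrl, device_ctrl, "100a", "102a")
```

The returned pair models the established channel between the selected image source and the selected peripheral.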
Soffer does not teach connecting at least two USB ports to the at least two image sources and the computing device, respectively, wherein the USB host controller is connected to at least one USB port of the at least two USB ports connected to the at least two image sources to receive the images from the at least two image sources; and connecting a USB device controller to the USB host controller and at least one USB port of the at least two USB ports connected to the computing device, and receiving the merged image from the image processor to transmit the merged image to the computing device. However, Chen teaches connecting at least two USB ports to the at least two image sources and the computing device, respectively, wherein the USB host controller is connected to at least one USB port of the at least two USB ports (Fig. 3 104a, 100, 100a, 20a, 102a, 30; Col. 5 lines 5-11, 18-21, 29-37 – “first controller 104a selectively connects one of the plurality of the first USB hubs 100 via a switch form …When the first controller 104a receives the control signal from the processor 105, the first controller 104a connects with the target image input apparatus 20a corresponding to the first port 100a. By doing so, a signal channel is established, which makes the target image input apparatus 20a connect with the target peripheral apparatus 30a via the first port 100a, the first controller 104a, the second controller 104b, the second USB hub 102 and the second port 102a to communicate with the target peripheral apparatus 30a”; Note: The first controller 104a, which is equivalent to the USB host controller, is connected to the first USB hub 100, as shown in Fig. 3 above. First USB hub 100 has USB ports 100a that connect to image input device 20. There are also USB ports 102a for peripheral devices 30) connected to the at least two image sources to receive the images from the at least two image sources (Col. 
3 lines 37-46 – “In addition to the image input apparatus 20 or the said peripheral apparatus, the first USB hubs 100 and the second USB hubs 102 may also connect with other peripheral apparatus or other data processing equipments. For example, the first ports 100a of the first hubs 100 and the second ports 102a of the second hubs 102 may connect with the input apparatus such as a keyboard, a mouse, and so on or the storage apparatus such as a hard disk, a CD-ROM drive, a flash drive and so on or the peripheral apparatus such as a multi-media player”; Note: the USB ports can be connected to multiple image input sources); and connecting a USB device controller to the USB host controller and at least one USB port of the at least two USB ports connected to the computing device, and receiving the merged image from the image processor to transmit the merged image to the computing device (Fig. 3 104b, 104a, 102, 102a, 30, 105; Col. 5 lines 7-11, 22-37 – “the first controller 104a selectively connects one of the plurality of the first USB hubs 100 via a switch form and the second controller 104b also selectively connects one of the plurality of the second USB hubs 102 via a switch form; the first controller 104a and the second controller 104b connect with the processor 105 respectively…the second controller 104b, according to an assignment signal, selects a second USB hub 102 and its connecting peripheral apparatus 30 as the target peripheral apparatus 30a… When the first controller 104a receives the control signal from the processor 105, the first controller 104a connects with the target image input apparatus 20a corresponding to the first port 100a. 
By doing so, a signal channel is established, which makes the target image input apparatus 20a connect with the target peripheral apparatus 30a via the first port 100a, the first controller 104a, the second controller 104b, the second USB hub 102 and the second port 102a to communicate with the target peripheral apparatus 30a”; Note: Second controller 104b is equivalent to the USB device controller. It is connected to the first controller 104a, which is equivalent to the USB host controller, and connected to the second USB hub 102, which has USB ports 102a connected to peripheral device 30, as shown in Fig. 3 above. Second controller 104b can receive the input from the processor 105 to transmit to the peripheral device 30). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soffer to incorporate the teachings of Chen to have a USB device controller for the benefit of allowing “all peripheral apparatuses to communicate with the data processing equipment via the unified interface... the USB port at present is mostly regarded as the standard connection interfaces for the peripheral apparatus as well as the data processing equipment in the market” (Chen: Col. 1 lines 39-44). The USB connections between the input and output devices would allow for effective resource sharing (Chen: Col. 1 lines 52-57).

Claims 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Soffer in view of Hsueh, Nakanishi, Koo (US 20150054838 A1), Hun (KR 101847626 B1), Texas Instruments (“Applications of Low-Voltage Differential Signaling (LVDS) in LED Walls”), and Veinbergs et al. (“Video Surveillance Systems”), hereinafter Koo, Hun, Texas Instruments, and Veinbergs, respectively.

Regarding claim 6, Soffer in view of Hsueh and Nakanishi teaches the input and output device for multiple formats of video class from claim 5.
Soffer further teaches wherein the multiple formats of video class comprise HDMI and DP (Paragraph 0119-0121 – “Video input port 10a may support HDMI (High-Definition Multimedia Interface) video. It can connect directly to HDMI main video switch 14; Video input port 10b my support DisplayPort video and connected via Video Converter (VC) 7b”). Soffer does not teach wherein the multiple formats of video class comprise LVDS and AHD. However, Koo teaches wherein the multiple formats of video class comprise LVDS (Paragraph 0074 – “the image signals may be HDMI, DVI, DP, LVDS, VGA, and eDP, but are not limited thereto”; Note: Supports LVDS). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soffer to incorporate the teachings of Koo to provide LVDS as a format of video class for the benefit of LVDS being less susceptible to noise, which permits a smaller signal voltage amplitude and thus would “increase the data transmission rate and reduce power consumption” (Texas Instruments: Page 2, Section 1, Paragraph 4). Soffer modified by Koo still does not teach wherein the multiple formats of video class comprise AHD, but Hun teaches wherein the multiple formats of video class comprise AHD (Paragraph 0038 – “the camera image (CAM) may be an image having an analog format. For example, the analog method may refer to, but is not limited to, methods such as AHDTM (analog high definition)”; Note: Supports AHD). It also would have been obvious

Prosecution Timeline

Mar 08, 2023: Application Filed
Jan 21, 2025: Non-Final Rejection — §103
Apr 24, 2025: Response Filed
May 14, 2025: Final Rejection — §103
Aug 14, 2025: Response after Non-Final Action
Sep 22, 2025: Request for Continued Examination
Oct 01, 2025: Response after Non-Final Action
Oct 08, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12602750: DIFFERENTIABLE EMULATION OF NON-DIFFERENTIABLE IMAGE PROCESSING FOR ADJUSTABLE AND EXPLAINABLE NON-DESTRUCTIVE IMAGE AND VIDEO EDITING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597208: BUILDING INFORMATION MODELING SYSTEMS AND METHODS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12573217: SERVER, METHOD AND COMPUTER PROGRAM FOR GENERATING SPATIAL MODEL FROM PANORAMIC IMAGE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561851: HIGH-RESOLUTION IMAGE GENERATION USING DIFFUSION MODELS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12536734: Dynamic Foveated Point Cloud Rendering System (granted Jan 27, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview (+36.4%): 99%
Median Time to Grant: 2y 7m
PTA Risk: High

Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
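The headline grant probability is a simple ratio of the examiner's record shown above (17 granted of 21 resolved). A quick check, assuming the tool rounds to the nearest whole percent, reproduces the 81% figure:

```python
# Reproducing the panel's career allow rate from the stated record.
# The rounding convention (nearest whole percent) is an assumption.
granted, resolved = 17, 21
career_allow_rate_pct = round(100 * granted / resolved)
# 17 / 21 = 0.8095..., i.e. 81% to the nearest percent
```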
