DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3 are rejected under 35 U.S.C. 103 as being unpatentable over ALEXANDER (From IDS: US 2003/0206172) in view of HAN et al. (From IDS: KR 20210037916; see translation provided).
Regarding claim 1: ALEXANDER teaches a device for encoding a multi-channel image, comprising: a collecting processor configured to collect data of a plurality of capture cards in units of frames [¶0022 teaches: collector formats the video capture board 104 and begins a frame collection process]; an encoding processor configured to encode the data collected in the collecting section using a plurality of encoders [¶0028 teaches: processing application 116 can encode all video data], wherein the encoding section encodes the data according to a configuration parameter defined irrespective of interface types of the capture cards [¶0031 teaches: If the image capture device parameters have not been refreshed, the routine 300 returns to block 306 to process additional frame data in the shared memory 118. If the image capture device parameters have been refreshed, the routine 300 returns to block 302. Accordingly, the routine 300 will continue to independently process image data until terminated.].
However, it does not appear that ALEXANDER explicitly teaches an output section configured to output the encoded data.
In a related field of endeavor, HAN teaches an output section configured to output the encoded data [Page 4, ¶7 teaches: The compressed image output unit 158 outputs a synthesized image synthesized for each multi-channel.].
Given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate HAN’s teaching of an output section into ALEXANDER’s device for encoding a multi-channel image for the benefit, as taught by HAN, of an ability to review the synthesized image. [HAN, Page 3, ¶5]
Regarding claim 2: the essence of the claim is taught above in the rejection of claim 1.
In addition, ALEXANDER teaches wherein the configuration parameter is configured to be defined in the order of a video acquisition support flag [¶0020 teaches: the video capture board parameters], a unique number of each interface channel [¶0020 teaches: collection application 114 can retrieve the parameter data from a database including parameter information for each attached video capture device 104], a scaling type [¶0020 teaches: manufacturer-specific communication protocol information], a unique number of each video interface and channel [Claim 5 teaches: collection process includes: allocating a location in the shared memory corresponding to a video capture device providing the video data], a pixel representation definition [¶0025 teaches: processor performs the actual manipulations of bits in RAM that correspond to pixels on a display], an image size [¶0020 teaches: manufacturer-specific communication protocol information], a number of frames per second [¶0004 teaches: The rate and speed at which video is collected is measured in the number of frames per second ("FPS")], a reserved space for configuration addition [¶0022 teaches: collection application 114 will instruct the video capture board's DSP to collect video data from the appropriate input channel on the video capture board], an audio acquisition support flag [¶0020 teaches: manufacturer-specific communication protocol information], and an audio-related parameter [¶0020 teaches: manufacturer-specific communication protocol information].
Regarding claim 3: the essence of the claim is taught above in the rejection of claim 1.
In addition, ALEXANDER teaches wherein the collecting processor includes: a plurality of single channel collectors configured to collect data of each capture card [¶0022 teaches: collection application 114 will instruct the video capture board's DSP to collect video data from the appropriate input channel on the video capture board.]; and a synchronizer configured to synchronize data in units of frames of the single channel collectors [¶0023 teaches: collection application 114 transfers the collected frame data into the appropriate shared memory segment].
Claims 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over ALEXANDER in view of HAN, and further in view of TODA et al. (US 2021/0144393).
Regarding claim 4: the essence of the claim is taught above in the rejection of claim 3.
In addition, ALEXANDER teaches wherein each of the single channel collectors includes: a video frame buffer configured to store image data in units of frames [¶0008 teaches: a collection process obtains video data and stores the video data in a shared memory].
However, it does not appear that ALEXANDER modified by HAN explicitly teaches an audio frame buffer configured to store audio data in units of frames; a data frame collector configured to collect and output data stored in the video frame buffer and the audio frame buffer in units of frames; and a configuration parameter convertor configured to identify a configuration parameter of each capture card and convert it into the configuration parameter for encoding in the encoding section.
In a related field of endeavor, TODA teaches: an audio frame buffer configured to store audio data in units of frames [¶0067 teaches: When the image data is moving image data (movie), the encoding parameter further includes information on the sound quality of the audio data,]; a data frame collector configured to collect and output data stored in the video frame buffer and the audio frame buffer in units of frames [¶0093 teaches: the parameter setting unit 16 determines that the encoding parameter is changed, when a parameter change request from the communication terminal 50 is received at the communication unit 11 or the input/output unit 13, or when a parameter change request is received]; and a configuration parameter convertor configured to identify a configuration parameter of each capture card and convert it into the configuration parameter for encoding in the encoding section [¶0093 teaches: the parameter setting unit 16 changes a value of the encoding parameter stored in the encoding parameter management DB 1001 according to the parameter change request. In an example case illustrated in FIG. 11, the parameter setting unit 16 changes the frame rate of the stream 2 from “10 fps” to “30 fps”, from among encoding parameters having different frame rates for each frame.].
Given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate TODA’s teachings of an audio frame buffer, a data frame collector, and a configuration parameter convertor into the device of ALEXANDER modified by HAN for encoding a multi-channel image for the benefit, as taught by TODA, of an ability to distribute image data having different image qualities in a plurality of streams substantially at a same time. [TODA, ¶0036]
Regarding claim 5: the essence of the claim is taught above in the rejection of claim 3.
In addition, TODA teaches wherein the encoding processor is configured such that the plurality of encoders are arranged in parallel [¶0034 teaches: the frame X, which is one frame of the captured image data, is input to three encoders in parallel].
The motivation to combine is the same as for claim 4. [See teaching above]
Regarding claim 6: the essence of the claim is taught above in the rejection of claim 5.
In addition, TODA teaches wherein each of the encoders is configured to selectively encode only image data [¶0032 teaches: encoders of the image capturing device 10, which are respectively allocated to the plurality of streams, process the plurality of streams in parallel.], and audio data is encoded in a central processor of a computing device in which the device for encoding is installed [¶0047 teaches: The audio processor 109 acquires the audio data output from the microphone 108 via an I/F bus and performs predetermined processing on the audio data.].
The motivation to combine is the same as for claim 4. [See teaching above]
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over ALEXANDER in view of HAN, and further in view of WELLS et al. (From IDS: US 2004/0179600).
Regarding claim 7: the essence of the claim is taught above in the rejection of claim 3.
However, it does not appear that ALEXANDER modified by HAN explicitly teaches wherein the output section includes: multiplexers configured to select and output the data encoded in the encoding section; file storage modules configured to store the data selected from each of the multiplexers as a file in a storage means; and a transmission module configured to transmit the data selected from each of the multiplexers.
In a related field of endeavor, WELLS teaches wherein the output section includes: multiplexers configured to select and output the data encoded in the encoding section [¶0028]; file storage modules configured to store the data selected from each of the multiplexers as a file in a storage means [¶0028 teaches: The multiplexer 120 may also multiplex the one output signal ENC to a second output video signal OUTPUT1-OUTPUTj for transmission to a storage system (see FIG. 7 150) for archiving and subsequent retrieval.]; and a transmission module configured to transmit the data selected from each of the multiplexers [¶0044 teaches: The multiplexer 120 may then multiplex the encoded digest video signal OUT to one or more of the output video signals OUTPUT1-OUTPUTj; and ¶0054 teaches: Therefore, the static background may only be transmitted once].
Given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate WELLS’ teaching of an output section into the device of ALEXANDER modified by HAN for encoding a multi-channel image for the benefit, as taught by WELLS, of enabling a further gain in video processing efficiency. [WELLS, Background/Summary]
Conclusion
Prior art not relied upon: Please refer to the references listed on the attached PTO-892 that are not relied upon for the claim rejections detailed above. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
In particular, ASHBACHER et al., (US 2021/0289240) teaches multiple devices and/or complex infrastructure can be used to provide a content feed with the contextually-relevant material;
WU et al., (US 2017/0366819) teaches a video encoder is configured to receive first and second sets of pixels and to encode the multi-channel image based on received first and second set of pixels for first and second color channel;
BOROCZY et al. (US Patent No. 6,859,496) teaches adaptively encoding multiple streams of video data in parallel for multiplexing onto a constant bit rate channel; and
CHEN et al. (US Patent No. 6,275,536) teaches implementation architectures of a multi-channel MPEG video transcoder using multiple programmable processors.
In the event the claimed invention is amended, Applicant is respectfully requested to indicate the portion(s) of the specification that dictate(s) the structure relied upon for proper interpretation, and to verify and ascertain the metes and bounds of the claimed invention.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Marnie Matt whose telephone number is (303)297-4255. The examiner can normally be reached Monday - Friday, 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARNIE A MATT/Primary Examiner, Art Unit 2485