DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Terminal Disclaimer
The terminal disclaimer filed on 10/20/2025, disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration dates of U.S. Patent Nos. 9,288,513 and 10,945,017, has been reviewed and is accepted. The terminal disclaimer has been recorded.
Response to Arguments
Applicant's arguments filed on 10/20/2025 have been fully considered but they are not persuasive.
Regarding pages 6-8, Applicant argues that “Watabe, however, does not teach or suggest that the camera on the aircraft is configured to capture still images and at the same time capture video signals. Only a still image mark is provided so that the system extracts a still image from the underlined video signal after the still image mark and video signal are transmitted to the still picture extracting part of the system for extraction. Thus, Watabe does not teach or suggest that "a system for transmitting still images and a video feed to a remote location, the system comprising: an aircraft including a digital video camera to capture still images and video frames of an object" as recited in claim 1. Therefore, Watabe, alone or in combination with Takahashi, would not have disclosed all features recited in claim 1.
Moreover, in Watabe, the information transmitted to the receiving station from the aircraft camera does not include still images (i.e., still image marks are not standalone still images). Watabe's system and method is designed for the specific intended use: verifying a disastrous site using the video signals and specific still images extracted thereafter from the video signals at the specific still image marks. See col. 2, lines 11-20 of Watabe.”
In response, the examiner respectfully disagrees. Watabe et al. discloses, in claim 10, lines 3-18, that “By reference to altitude data of the three-dimensional map data 104 read out of the map data part 22 and the position/attitude information 105 on the helicopter 3 detected in the position/attitude detecting part 23, the shooting planning part 24 computes, based on the shooting instructions 103 received in the instruction receiving part 21, the flight path, flight altitude and flying speed of the helicopter 3, the position of a shooting target (latitude and longitude of the center of a still picture), the altitude of the shooting target (the altitude of the center of the still picture), a still picture marking position where to extract a still picture from video images being shot, and various other camera conditions including the camera angle of view corresponding to the still picture size.” Col. 17, lines 39-47, further teaches: “When the shooting instruction 103 sent from the shooting instruction input part 13 contains instructions for still picture processing such as joining of the still pictures or the estimation of a burning area, the still picture extracting part 42 processes the extracted still pictures accordingly. FIG. 22 is a diagram for explaining the still picture processing for the estimation of a burning area. To make this estimation, use is made of still pictures from the infrared camera as well as from the visible-light camera.” Here, Watabe et al. teaches a shooting instruction for shooting both video and still pictures, where the shooting instruction may include the position of a shooting target (latitude and longitude of the center of a still picture), the altitude of the shooting target (the altitude of the center of the still picture), and a still picture marking position indicating where to extract a still picture from the video images being shot. Based on the shooting instruction, the camera shoots video and still pictures, and the still pictures are extracted from the video at the marking positions. Thus, Watabe et al. meets the claimed limitation.
Therefore, in view of the above, the examiner maintains that the features of the claims are taught by the applied art. See also the rejections set forth below.
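For illustration only, the following minimal sketch models the still-picture-mark extraction described in the passages of Watabe et al. quoted above (col. 17, lines 33-38); the class, function, and field names are hypothetical and do not appear in the reference.

```python
# Illustrative sketch only: frames carrying a still-picture mark are pulled out
# of the video signal and stored under a shooting-instruction ID, loosely
# mirroring Watabe's still picture extracting part. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class VideoFrame:
    timestamp: float          # seconds from the start of the video signal
    pixels: bytes             # placeholder for the frame data
    still_picture_mark: bool  # True where the shooting plan marked a still picture

def extract_still_pictures(video_signal, instruction_id):
    """Scan the video signal, detect marked frames, and key them by instruction ID."""
    stills = {}
    for index, frame in enumerate(video_signal):
        if frame.still_picture_mark:
            stills[(instruction_id, index)] = frame
    return stills

signal = [
    VideoFrame(0.0, b"...", False),
    VideoFrame(0.5, b"...", True),   # marked: becomes a still picture
    VideoFrame(1.0, b"...", False),
]
print(len(extract_still_pictures(signal, instruction_id=1)))  # -> 1
```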
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4, 9-10, 13, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over US 6,731,331 by Watabe et al. in view of US 2003/0099457 by Takahashi et al.
Regarding claim 1, Watabe et al. discloses a system for transmitting still images and a video feed to a remote location (fig. 2), the system comprising:
an aircraft including a camera to capture still images (claim 10, lines 3-18, teaches “By reference to altitude data of the three-dimensional map data 104 read out of the map data part 22 and the position/attitude information 105 on the helicopter 3 detected in the position/attitude detecting part 23, the shooting planning part 24 computes, based on the shooting instructions 103 received in the instruction receiving part 21, the flight path, flight altitude and flying speed of the helicopter 3, the position of a shooting target (latitude and longitude of the center of a still picture), the altitude of the shooting target (the altitude of the center of the still picture), a still picture marking position where to extract a still picture from video images being shot, and various other camera conditions including the camera angle of view corresponding to the still picture size.”; col. 17, lines 39-47, teaches “When the shooting instruction 103 sent from the shooting instruction input part 13 contains instructions for still picture processing such as joining of the still pictures or the estimation of a burning area, the still picture extracting part 42 processes the extracted still pictures accordingly. FIG. 22 is a diagram for explaining the still picture processing for the estimation of a burning area. To make this estimation, use is made of still pictures from the infrared camera as well as from the visible-light camera.”; col. 17, lines 33-38, teaches “In step ST32 the still picture extracting part 42 extracts still pictures 115 from the video signals 113 by detecting therein the still picture marks, then outputs the extracted still pictures together with the associated data 114, and stored them in the database storage part 43, using the shooting instruction ID number in the associated data 114 as a memory address.”) and video frames of an object (col. 17, lines 25-30, teaches “FIG. 21 is a flowchart depicting the operation of the information display device 4. In step ST31 the shooting result receiving part 41 receives the video signals 113 with still picture marks sent over the analog radio channel from the shooting result transmitting part 30 and the associated data 114 sent over the digital radio channel.”);
a transmitter to send the data transmission to the remote location (col 1 lines 25-35 teaches “Reference character L1 denotes a first radio channel which is used to transmit speech signals between the ground base station 7 and the helicopter 9 or shooting instruction information to the video camera apparatus 8 and its response information; and L2 denotes a second radio channel which is used to transmit video signals from the video camera apparatus to the ground base station together with shooting conditions.”).
Watabe et al. fails to disclose
a video encoder coupled to the camera to provide a video output including video packets;
a file server coupled to the camera to provide a still image output including image data packets;
a multiplexer coupled to the video output and the still image output, the multiplexer producing a data transmission including the video packets and the image data packets;
Takahashi et al. discloses
a video encoder coupled to the camera to provide a video output including video packets (paragraph 0052 teaches “FIG. 1 is a block diagram showing a structure of a broadcasting station side transmission system according to the embodiment. A video/audio generating device 101 outputs video data (image data) and audio data (sound data) which are sent out from a video camera and a video server which are not shown in the figure to a video/audio data encoder (hereinafter, referred to as " video/audio data encoder") 103. In the specification, it is assumed that the video data and the audio data are handled as one data as long as it does not stick to it particularly, and this is called as video/audio data. A data broadcasting generating device 102 outputs contents data for use in data broadcasting (hereinafter referred to as "data broadcasting data") to a data broadcasting data encoder 104. The data broadcasting data comprises text data, image data (still picture/motion picture data), audio data, script (control program) and display object data and so on. In addition, the motion picture data has the same meaning as the video data but is used as such a meaning for distinguishing it from video data in television broadcasting. The data broadcasting data is produced by use of a not shown authoring terminal system based upon the video/audio data and other digital data and stored in a file server and so on.”, paragraph 0083 teaches “As for the catch-up reproduction processing, in addition to processing for thinning out a particular program content itself, so-called trick plays including a fast play reproduction, a frame drop reproduction and so on are exemplified. These are realized by thinning out sequentially I, P and B picture frames in the MPEG2 standard from the B picture frame.”);
a file server coupled to the camera to provide a still image output including image data packets (as discussed above);
a multiplexer coupled to the video output and the still image output, the multiplexer producing a data transmission including the video packets and the image data packets (in addition to discussion above, fig. 1 shows multiplexer 108 to produce data for transmission through transmitting device 110);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the encoder, file server, and multiplexer taught by Takahashi et al. into the system of Watabe et al., because such incorporation would allow a user to watch program content without interruption, thereby increasing the accessibility of the system (Takahashi et al., paragraph 0009).
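For illustration only, the following minimal sketch models the combined arrangement relied on for claim 1: a video encoder producing video packets, a file server producing image data packets, and a multiplexer merging both into one data transmission. All names, packet fields, and the interleaving scheme are hypothetical and are not taken from Watabe et al., Takahashi et al., or the claims as filed.

```python
# Illustrative sketch only; hypothetical names and packet layout.

def encode_video(frames):
    """Video encoder stand-in: wrap each frame as a video packet."""
    return [{"type": "video", "seq": i, "payload": f} for i, f in enumerate(frames)]

def serve_still_image(image_bytes, image_id, chunk_size=4):
    """File server stand-in: split a still image into image data packets."""
    return [{"type": "image", "image_id": image_id, "offset": o,
             "payload": image_bytes[o:o + chunk_size]}
            for o in range(0, len(image_bytes), chunk_size)]

def multiplex(video_packets, image_packets):
    """Multiplexer stand-in: interleave both packet streams into one transmission."""
    transmission, v, s = [], list(video_packets), list(image_packets)
    while v or s:
        if v:
            transmission.append(v.pop(0))
        if s:
            transmission.append(s.pop(0))
    return transmission

tx = multiplex(encode_video(["frame0", "frame1"]),
               serve_still_image(b"JPEGDATA", image_id=1))
print([p["type"] for p in tx])  # -> ['video', 'image', 'video', 'image']
```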
Regarding claim 3, the system wherein the multiplexer is controlled to combine a predetermined ratio of video packets to image data packets in the data transmission (in addition to the discussion above, Takahashi et al., paragraph 0054, teaches “The video/audio data encoder 104 compresses and encodes the video/audio data which is sent out from the video/audio generating device 101 by control of the broadcasting management device 104 in accordance with MPEG2 video and MPEG2 audio to generate a video/audio stream. The video/audio data encoder 104 sends out the generated video/audio stream to a multiplexing device 108 in PES format. The data broadcasting data encoder 105 compresses and encodes the data broadcasting data which is sent out from the data broadcasting generating device 102 by the control of the broadcasting management device 103 to generate a data broadcasting stream. The data broadcasting data encoder 105 sends out the generated data broadcasting data to the multiplexing device 108 in a section format.”; Fig. 1 shows the multiplexer as element 108, wherein element 104 carries the video/audio ratio).
The motivation for combining the references has been discussed with respect to the independent claim above.
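For illustration only, a minimal sketch of a multiplexer controlled to a predetermined ratio of video packets to image data packets, as recited in claim 3; the 3:1 ratio and all names are hypothetical.

```python
# Illustrative sketch only: ratio-controlled multiplexing (hypothetical names).

def multiplex_with_ratio(video_packets, image_packets, video_per_image=3):
    """Emit up to `video_per_image` video packets for every image data packet."""
    out, v, s = [], list(video_packets), list(image_packets)
    while v or s:
        out.extend(v[:video_per_image])   # a burst of video packets...
        del v[:video_per_image]
        if s:
            out.append(s.pop(0))          # ...followed by one image data packet
    return out

tx = multiplex_with_ratio([f"v{i}" for i in range(6)], ["img0", "img1"])
print(tx)  # -> ['v0', 'v1', 'v2', 'img0', 'v3', 'v4', 'v5', 'img1']
```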
Regarding claim 4, the system, wherein the still image is one of a plurality of still images captured by the camera, the image data packets including an identifier field identifying the still image associated with the image data packet (in addition to discussion above, Watabe et al., col 17 lines 33-38 teaches “In step ST32 the still picture extracting part 42 extracts still pictures 115 from the video signals 113 by detecting therein the still picture marks, then outputs the extracted still pictures together with the associated data 114, and stored them in the database storage part 43, using the shooting instruction ID number in the associated data 114 as a memory address.”; Takahashi et al., paragraph 0052 teaches “FIG. 1 is a block diagram showing a structure of a broadcasting station side transmission system according to the embodiment. A video/audio generating device 101 outputs video data (image data) and audio data (sound data) which are sent out from a video camera and a video server which are not shown in the figure to a video/audio data encoder (hereinafter, referred to as " video/audio data encoder") 103. In the specification, it is assumed that the video data and the audio data are handled as one data as long as it does not stick to it particularly, and this is called as video/audio data. A data broadcasting generating device 102 outputs contents data for use in data broadcasting (hereinafter referred to as "data broadcasting data") to a data broadcasting data encoder 104. The data broadcasting data comprises text data, image data (still picture/motion picture data), audio data, script (control program) and display object data and so on. In addition, the motion picture data has the same meaning as the video data but is used as such a meaning for distinguishing it from video data in television broadcasting. The data broadcasting data is produced by use of a not shown authoring terminal system based upon the video/audio data and other digital data and stored in a file server and so on.”).
The motivation for combining the references has been discussed with respect to the independent claim above.
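For illustration only, a minimal sketch of image data packets carrying an identifier field so that each packet can be associated with one of several still images, as recited in claim 4; the field names are hypothetical and do not come from the cited references.

```python
# Illustrative sketch only: every image data packet carries an identifier field
# ("image_id") naming the still image it belongs to. Hypothetical names.

def packetize_stills(still_images, chunk_size=4):
    """Split each still image into packets tagged with that image's identifier."""
    packets = []
    for image_id, data in still_images.items():
        for offset in range(0, len(data), chunk_size):
            packets.append({"image_id": image_id,   # identifier field
                            "offset": offset,
                            "payload": data[offset:offset + chunk_size]})
    return packets

pkts = packetize_stills({"still-A": b"AAAABBBB", "still-B": b"CCCC"})
print(sorted({p["image_id"] for p in pkts}))  # -> ['still-A', 'still-B']
```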
Claim 9 is rejected for the same reason as discussed in the corresponding claim 3 above.
Claim 10 is rejected for the same reason as discussed in the corresponding claim 4 above.
Claim 13 is rejected for the same reason as discussed in the corresponding claim 3 above.
Claim 16 is rejected for the same reason as discussed in the corresponding claim 4 above.
Claims 2, 6-8, 12, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over US 6,731,331 by Watabe et al. in view of US 2003/0099457 by Takahashi et al., US 2014/0192155 by Choi et al., and US 5,537,446 by Yamamoto et al.
Regarding claim 2, Watabe et al. discloses the system further comprising:
a ground station receiver at the remote location to receive the data transmission (col 1 lines 25-35 teaches “Reference character L1 denotes a first radio channel which is used to transmit speech signals between the ground base station 7 and the helicopter 9 or shooting instruction information to the video camera apparatus 8 and its response information; and L2 denotes a second radio channel which is used to transmit video signals from the video camera apparatus to the ground base station together with shooting conditions.”);
Watabe et al. fails to disclose
a demultiplexer coupled to the receiver to demultiplex the video packets and the image data packets from the data transmission;
a video decoder coupled to the demultiplexer to output the video stream; and
a combiner coupled to the demultiplexer to combine the image data packets in the still image.
Takahashi et al. discloses
a demultiplexer coupled to the receiver to demultiplex the video packets and the image data packets from the data transmission (paragraph 0061 teaches “The demultiplexer 203 separates the multiplexed stream to be sent out and selects a particular video/audio stream as the need arises and sends out it to a stream controlling unit 204. The demultiplexer 203, as to the separated data broadcasting stream, sends out it to a data broadcasting decoder 205b. The demultiplexer 203 obtains PID (Packet Identifier) corresponding to a program to be selected and separates a stream in accordance with this PID. The demultiplexer 203 selects a video/audio stream, based upon a program selection operation of a viewer and program switching control by the event message. The demultiplexer 203, when it executes program switching processing, informs the stream controlling unit 206 of it.”);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a demultiplexer coupled to the receiver to demultiplex the video packets and the image data packets from the data transmission, as taught by Takahashi et al., into the system of Watabe et al., because such incorporation would allow a user to watch program content without interruption, thereby increasing the accessibility of the system (Takahashi et al., paragraph 0009).
Watabe et al. and Takahashi et al. fail to disclose
a video decoder coupled to the demultiplexer to output the video stream; and
a combiner coupled to the demultiplexer to combine the image data packets in the still image.
Choi et al. discloses
a video decoder coupled to the demultiplexer to output the video stream (paragraph 0241 teaches “FIG. 28 illustrates an internal structure of the mobile phone 1250, according to an exemplary embodiment. To systemically control parts of the mobile phone 1250 including the display screen 1252 and the operation panel 1254, a power supply circuit 1270, an operation input controller 1264, an image encoding unit 1272, a camera interface 1263, an LCD controller 1262, an image decoding unit 1269, a multiplexer/demultiplexer 1268, a recording/reading unit 1267, a modulation/demodulation unit 1266, and a sound processor 1265 are connected to a central controller 1271 via a synchronization bus 1273.”);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a video decoder coupled to the demultiplexer to output the video stream, as taught by Choi et al., into the system of Watabe et al. and Takahashi et al., because such incorporation would allow a user to watch the video stream seamlessly, thereby increasing the accessibility of the system.
Watabe et al., Takahashi et al. and Choi et al. fail to disclose
a combiner coupled to the demultiplexer to combine the image data packets in the still image.
Yamamoto et al. discloses
a combiner coupled to the demultiplexer to combine the image data packets in the still image (fig. 1 shows combiner 112 coupled to the decoding circuit 106).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a combiner coupled to the demultiplexer to combine the image data packets in the still image, as taught by Yamamoto et al., into the system of Watabe et al., Takahashi et al., and Choi et al., because such incorporation would allow a user to view the still image reassembled from the image data packets, thereby increasing the accessibility of the system.
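For illustration only, a minimal sketch of the receiver side recited in claim 2: a demultiplexer separating the video packets from the image data packets, a decoder outputting the video stream, and a combiner reassembling the image data packets into the still image. All names and the packet layout are hypothetical; they match the transmit-side sketch given for claim 1, not any cited reference.

```python
# Illustrative sketch only: receiver-side demultiplexer, decoder, and combiner.
# Hypothetical names; packet layout matches the transmit-side sketch above.

def demultiplex(transmission):
    """Separate the received transmission into video packets and image data packets."""
    video = [p for p in transmission if p["type"] == "video"]
    image = [p for p in transmission if p["type"] == "image"]
    return video, image

def decode_video(video_packets):
    """Decoder stand-in: return the frames in sequence order as the video stream."""
    return [p["payload"] for p in sorted(video_packets, key=lambda p: p["seq"])]

def combine_still_image(image_packets, image_id):
    """Combiner: reorder the packets of one still image by offset and join the payloads."""
    chunks = sorted((p for p in image_packets if p["image_id"] == image_id),
                    key=lambda p: p["offset"])
    return b"".join(p["payload"] for p in chunks)

rx = [
    {"type": "video", "seq": 0, "payload": "frame0"},
    {"type": "image", "image_id": 1, "offset": 4, "payload": b"DATA"},
    {"type": "image", "image_id": 1, "offset": 0, "payload": b"JPEG"},
]
video, image = demultiplex(rx)
print(decode_video(video))            # -> ['frame0']
print(combine_still_image(image, 1))  # -> b'JPEGDATA'
```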
Claim 6 is rejected for the same reason as discussed in the corresponding claims 1 and 2 above.
Regarding claim 7, the system further comprising a display coupled to the video decoder and the combiner to display the at least one video frame or the at least one still image (in addition to the discussion above, Watabe et al., fig. 2, element 4 shows a display device; Takahashi et al., fig. 2, element 211 shows a display device).
Claim 8 is rejected for the same reason as discussed in the corresponding claim 1 above.
Claim 12 is rejected for the same reason as discussed in the corresponding claims 1 and 2 above.
Claim 14 is rejected for the same reason as discussed in the corresponding claim 7 above.
Claim 15 is rejected for the same reason as discussed in the corresponding claim 1 above.
Claims 5 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over US 6,731,331 by Watabe et al., US 2003/0099457 by Takahashi et al., US 2014/0192155 by Choi et al., and US 5,537,446 by Yamamoto et al. in view of US 5,424,854 by Hashimoto.
Regarding claim 5, Watabe et al. discloses an aircraft including a digital video camera to capture still images (as discussed above), Takahashi et al. discloses a video encoder coupled to the camera to provide a video output including video packets (as discussed above), Choi et al. discloses a video decoder coupled to the demultiplexer to output the video stream (as discussed above), and Yamamoto et al. discloses a combiner coupled to the demultiplexer to combine the image data packets in the still image (as discussed above), but the references fail to disclose that the multiplexer includes an image only mode which sets the rate of video images to a minimum number of frames per second and a minimum resolution.
Hashimoto discloses the multiplexer includes an image only mode which sets the rate of video images to a minimum number of frames per second and a minimum resolution (col. 2, lines 53-68, teaches “FIG. 1 is a block diagram of a facsimile apparatus according to an embodiment of the invention. In the diagram, reference numeral 1 denotes an original read unit for reading an original by scanning the original by a CCD or the like; 2 a resolution conversion processing unit for converting a resolution of the image data which was read by the original read unit 1; 3 a simple binary processing unit for binarizing the image data sent from the resolution conversion processing unit 2 on the basis of a fixed threshold value; 4 a low-resolution dither processing unit for binarizing the image data by using a dither matrix for a low resolution; 5 a high-resolution dither processing unit for binarizing the image data by using a dither matrix for a high resolution; and 6 an operation panel to instruct the input of a facsimile number, the selection of a resolution, the selection of either a character mode or a photograph mode, the reading of the original, and the start of a transmitting operation. Reference numeral 7 denotes a multiplexer for selecting either one of output signals of the processing units 2, 3, 4, and 5 in accordance with the image read mode which was instructed by the operation panel 6 and for outputting the selected signal. Reference numeral 8 denotes an encode processing unit for encoding white and black image data which is output from the multiplexer 7; 9 a CCU (communication control unit) to transmit the encoded data which is output from the encode processing unit 8 to a line; and 10 a CPU to control the whole apparatus in accordance with a flowchart, which will be explained hereinlater. FIG. 2 is a diagram showing the details of the operation panel 6. Reference numeral 201 denotes a key to select an original reading resolution. Each time the key 201 is depressed, the resolution is switched between 200 ppi.times.200 ppi and 400 ppi.times.400 ppi. Reference numeral 202 denotes a key to select a read mode of an original. Each time the key 202 is depressed, the read mode is switched between the character mode and the photograph mode. Reference numeral 203 denotes a start key to instruct the start of the transmitting operation. Reference numerals 204 to 213 indicate dial keys to input a facsimile number.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a multiplexer that includes an image only mode which sets the rate of video images to a minimum number of frames per second and a minimum resolution, as taught by Hashimoto, into the system of Watabe et al., Takahashi et al., Choi et al., and Yamamoto et al., because such incorporation would allow a user more options for viewing video images, thereby increasing the flexibility of the system.
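For illustration only, a minimal sketch of an image only mode in which the multiplexer holds the video contribution to a minimum frame rate and a minimum resolution, as recited in claim 5; the specific numbers and names are hypothetical and do not come from Hashimoto or the other cited references.

```python
# Illustrative sketch only: mode selection for the multiplexer (hypothetical values).

def configure_multiplexer(mode):
    """Return video settings for the selected mode."""
    if mode == "image_only":
        # video is kept at a minimum frame rate and resolution,
        # leaving the remaining capacity for still-image packets
        return {"video_fps": 1, "video_resolution": (160, 120)}
    return {"video_fps": 30, "video_resolution": (1920, 1080)}

print(configure_multiplexer("image_only"))  # -> {'video_fps': 1, 'video_resolution': (160, 120)}
```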
Claim 11 is rejected for the same reason as discussed in the corresponding claim 5 above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIGAR CHOWDHURY whose telephone number is (571)272-8890. The examiner can normally be reached Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Tran can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NIGAR CHOWDHURY/ Primary Examiner, Art Unit 2484