DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. Claims 1-20 are pending, for a total of 20 claims.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words. The form and legal phraseology often used in patent claims, such as "means" and "said," should be avoided. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, "The disclosure concerns," "The disclosure defined by this invention," "The disclosure describes," etc.
The abstract of the disclosure is objected to because it contains a phrase that can be implied (“This disclosure describes techniques to concurrently capture several redundant (e.g., overlapping) video image streams on electronic devices having multiple image capture devices”).
Appropriate correction is required. Also see MPEP 608.01(b), Paragraph C – “Language and Format”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 9, 11, 13, 15, 16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Grasmug et al. (US PGPub 2015/0262380 A1) in view of Edpalm et al. (US PGPub 2024/0333948 A1).
Regarding claim 1, Grasmug et al. teach a method of compressed streaming of digital video images ([0021]; Fig. 1 shows the compression of video streaming), comprising:
encoding a first video image stream captured by a first image capture device of an electronic device ([0021]; It teaches that a first image source provides a first video image stream HT), wherein:
(a) the first video image stream is captured at a first frame rate and has a first field of view (FOV) (Fig. 3, reference numeral 305 shows that the first stream of images is captured at a first frame rate),
(b) at least a first portion of images in the first video image stream are captured at a first resolution (Fig. 3, reference numeral 305 shows that a portion of the first stream of images is captured at a first resolution), and
(c) at least a second portion of images in the first video image stream are captured at a second resolution that is lower than the first resolution (Fig. 3, reference numeral 310 shows that a second portion of the first stream of images is captured at a second resolution lower than the first resolution); and
encoding a second video image stream captured by a second image capture device of the electronic device ([0021]; It teaches that a second image source provides a second video image stream LT), wherein:
(d) the second video image stream is captured concurrently with the first video image stream ([0029], L15-18; it teaches that some mobile platforms may provide for a low-resolution video stream or feed, while concurrently allowing for a high-resolution still image to be captured at the maximum sensor resolution, which means the two streams are captured concurrently),
(e) the second video image stream is captured at a second frame rate and has a second FOV that at least partially contains the first FOV (Fig. 3, reference numeral 310 shows that the second stream of images is captured at a second frame rate. It is to be noted that the Fig. 3 flow diagram applies independently to both video streams, as evident from [0040], which states that the first plurality of images and the second plurality of images are received from a same camera sensor), and
(f) the second frame rate is greater than the first frame rate (Fig. 3, reference numeral 310 shows that the second stream of images is captured at a second frame rate that is greater than the first frame rate).
Although Grasmug et al. teach two cameras providing two video streams, they do not explicitly teach that the two cameras have FOVs that partially overlap each other.
However, Edpalm et al., in the same field of endeavor (Abstract), teach a video streaming system in which two video image frames have partially overlapping FOVs (Edpalm et al.; [0006], L3-10. NOTE: Edpalm et al. in [0013] teach that the images captured by the cameras after the FOV change has begun are encoded, which means there could be a plurality of cameras having different but partially overlapping FOVs).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Grasmug et al.'s invention of determining optical flow from a plurality of image sources with Edpalm et al.'s use of partially overlapping camera FOVs, because it improves the way of encoding a video stream with image frames captured during a changing FOV of the camera (Edpalm et al.; [0003]-[0004]).
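For illustration only, and not as a representation of any cited reference's actual implementation, the following Python sketch models the capture arrangement of the claim 1 limitations (a)-(f): two concurrently captured streams in which the second stream has a higher frame rate and a FOV that at least partially contains the first. All names, resolutions, frame rates, and FOV values below are hypothetical assumptions chosen for the example.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class StreamConfig:
    frame_rate: int                    # frames per second
    high_resolution: Tuple[int, int]   # (width, height) of the higher-resolution images
    low_resolution: Tuple[int, int]    # (width, height) of the lower-resolution images
    fov_deg: float                     # horizontal field of view, in degrees

# Limitations (a)-(c): first stream at a lower frame rate, containing images at
# both a first (higher) resolution and a second (lower) resolution.
first_stream = StreamConfig(frame_rate=30,
                            high_resolution=(3840, 2160),
                            low_resolution=(1920, 1080),
                            fov_deg=70.0)

# Limitations (d)-(f): second stream captured concurrently, at a higher frame
# rate, with a FOV that (here, simplistically) contains the first FOV.
second_stream = StreamConfig(frame_rate=120,
                             high_resolution=(1920, 1080),
                             low_resolution=(1920, 1080),
                             fov_deg=100.0)

assert second_stream.frame_rate > first_stream.frame_rate   # limitation (f)
assert second_stream.fov_deg >= first_stream.fov_deg        # limitation (e), simplified containment check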
Regarding claim 2, Grasmug et al. and Edpalm et al. teach the method of claim 1, further comprising:
decoding the first video image stream (Grasmug et al.; Fig. 3, reference numeral 320 shows outputting of a third image stream based on first and second image streams, meaning the first image stream is being decoded. Edpalm et al. also teach in [0041], L10-12, Fig. 2, that the first video frame 220a contains all the necessary image data required to decode the first image 120a again at the decoder side);
decoding the second video image stream (Grasmug et al.; Fig. 3, reference numeral 320 shows outputting of a third image stream based on the first and second image streams, meaning the second image stream is being decoded. Edpalm et al. also teach in [0045], L5-9, Fig. 2, that the second video frame 220b includes motion vectors indicating which parts of the (decoded) first additional video frame 230a are to be used to decode which parts of the second video frame 220b); and
reconstructing an enhanced version of the first video image stream based, at least in part, on information obtained from the decoded second video image stream (Grasmug et al.; [0042]; Fig. 3, reference numeral 320; it recites generating, based at least in part on the first optical flow from the first image frame to the second image frame, a third image frame as part of an output stream, which is the enhanced version of the first video stream),
wherein the enhanced version of the first video image stream has at least one of: an increased frame rate as compared to the first frame rate, or an increased resolution as compared to the second resolution (Grasmug et al.; [0042]; Fig. 3, reference numeral 320; it recites generating, based at least in part on the first optical flow from the first image frame to the second image frame, a third image frame as part of an output stream, the output stream having a frame rate greater than or equal to the first frame rate, and the third image frame having a resolution greater than or equal to the second resolution).
Regarding claim 3, Grasmug et al. and Edpalm et al. teach the method of claim 2, wherein reconstructing the enhanced version of the first video image stream further comprises:
using optical flow (OF) information computed for the second video image stream to derive OF information for the first video image stream (Grasmug et al.; [0041]; Fig. 3, reference numeral 315).
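As a generic illustration of deriving OF information for one stream from OF information computed on another stream, and not of Grasmug et al.'s specific implementation, the following hypothetical Python sketch rescales a per-pixel flow field from the second stream's resolution onto the first stream's grid; the function name and the nearest-neighbour resampling are assumptions.

import numpy as np

def derive_flow_for_first_stream(flow_second, size_first):
    # flow_second: (H2, W2, 2) array of per-pixel (dx, dy) displacements computed
    # on the second video image stream.
    # size_first: (H1, W1) frame size of the first video image stream.
    h2, w2, _ = flow_second.shape
    h1, w1 = size_first
    sy, sx = h1 / h2, w1 / w2

    # Nearest-neighbour resampling of the flow field onto the first stream's grid.
    ys = (np.arange(h1) / sy).astype(int).clip(0, h2 - 1)
    xs = (np.arange(w1) / sx).astype(int).clip(0, w2 - 1)
    flow_first = flow_second[ys][:, xs].astype(np.float64)

    # The displacement vectors must also be scaled to the new pixel pitch.
    flow_first[..., 0] *= sx   # dx components
    flow_first[..., 1] *= sy   # dy components
    return flow_first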
Regarding claim 4, Grasmug et al. and Edpalm et al. teach the method of claim 3, wherein reconstructing the enhanced version of the first video image stream further comprises:
reconstructing based, at least in part, on the derived OF information, a plurality of additional video images for the enhanced version of the first video image stream (Grasmug et al.; [0041]-[0042]; Fig. 3, reference numeral 315, 320),
wherein, after the reconstruction of the plurality of additional video images, the enhanced version of the first video image stream has the second frame rate (Grasmug et al.; [0042]; Fig. 3, reference numeral 320; it recites generating, based at least in part on the first optical flow from the first image frame to the second image frame, a third image frame as part of an output stream, the output stream having a frame rate greater than or equal to the first frame rate, and the third image frame having a resolution greater than or equal to the second resolution).
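By way of a hedged, generic illustration of optical-flow-based frame reconstruction (not the specific method recited by Grasmug et al.), the sketch below synthesizes additional intermediate frames by warping a decoded frame along fractions of a derived flow field, which is one way an output stream could reach the higher second frame rate; the function names and the simple backward-warping scheme are assumptions.

import numpy as np

def warp_by_flow(frame, flow, t=0.5):
    # Move each pixel of `frame` along a fraction `t` of its flow vector
    # (backward warping with nearest-neighbour sampling).
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip((xs - t * flow[..., 0]).round().astype(int), 0, w - 1)
    src_y = np.clip((ys - t * flow[..., 1]).round().astype(int), 0, h - 1)
    return frame[src_y, src_x]

def interpolate_frames(frame, flow, n_extra):
    # Generate `n_extra` intermediate frames between two decoded frames, e.g. to
    # raise a 30 fps stream toward the frame rate of the second stream.
    return [warp_by_flow(frame, flow, t=(i + 1) / (n_extra + 1)) for i in range(n_extra)]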
Regarding claim 5, Grasmug et al. and Edpalm et al. teach the method of claim 1, wherein reconstructing the enhanced version of the first video image stream further comprises:
computing an amount of disparity between the first image capture device and the second image capture device (Grasmug et al.; [0031]; It teaches that after the optical flow is computed, each pixel of the high-resolution image frame may be moved according to the displacement vectors of the flow field, where the displacement vector represents the disparity).
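The following minimal sketch is offered only as a generic illustration of summarizing an inter-camera displacement field into a single disparity estimate, not as the cited references' method; the median-based summary and the function name are assumptions.

import numpy as np

def estimate_disparity(flow_between_cameras):
    # flow_between_cameras: (H, W, 2) displacement field computed between
    # temporally aligned frames from the first and second image capture devices.
    # The horizontal component dominates for side-by-side cameras, so a robust
    # summary of it serves as a coarse disparity estimate.
    return float(np.median(flow_between_cameras[..., 0]))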
Regarding claim 6, Grasmug et al. and Edpalm et al. teach the method of claim 1, wherein reconstructing the enhanced version of the first video image stream further comprises:
upscaling at least one image in the first video image stream having the second resolution to have the first resolution in the enhanced version of the first video image stream (Grasmug et al.; [0036], L4-7; [0037], L10-11; It teaches that MROF (Multi-Resolution Optical Flow) may initiate or perform blending of a morphed current (high-resolution) image frame with an upsampled version of the previous image frame).
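As a generic illustration of upscaling a second-resolution image to the first resolution (not the MROF blending described by Grasmug et al.), the hypothetical sketch below uses nearest-neighbour resampling; the function name and interpolation choice are assumptions.

import numpy as np

def upscale_nearest(image, target_size):
    # Upscale a lower-resolution image to (H_t, W_t) by nearest-neighbour
    # resampling; a real pipeline might instead blend the upsampled image with
    # a warped high-resolution frame.
    h_t, w_t = target_size
    h, w = image.shape[:2]
    ys = (np.arange(h_t) * h // h_t).clip(0, h - 1)
    xs = (np.arange(w_t) * w // w_t).clip(0, w - 1)
    return image[ys][:, xs]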
Regarding claim 9, Grasmug et al. and Edpalm et al. teach the method of claim 1, wherein the second FOV fully contains the first FOV (Edpalm et al.; Fig. 2 shows that the FOV of second image frame 120b fully contains the FOV of the first image frame 120a).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Grasmug et al.'s invention of determining optical flow from a plurality of image sources with Edpalm et al.'s use of partially overlapping camera FOVs, because it improves the way of encoding a video stream with image frames captured during a changing FOV of the camera (Edpalm et al.; [0003]-[0004]).
Regarding claim 11, Grasmug et al. and Edpalm et al. teach the method of claim 2, further comprising:
storing the enhanced version of the first video image stream and the second video image stream together in a single video file object (Grasmug et al.; [0042], L14-15; It teaches that MROF may keep the latest "N" input image frames in memory or some equivalent storage).
Regarding claim 13, Grasmug et al. and Edpalm et al. teach the method of claim 1, wherein the first resolution comprises a 4K resolution or an 8K resolution, and wherein the second resolution comprises, at most, a 1080p resolution (Edpalm et al.; [0010]; It teaches that the base layer may be encoded to provide a lower resolution (e.g. full HD resolution or similar), while the enhancement layer may be encoded to provide a higher resolution (such as e.g. 4K resolution or similar), where full HD resolution corresponds to 1080p).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Grasmug et al.'s invention of determining optical flow from a plurality of image sources with Edpalm et al.'s use of multiple resolutions and bitrates for streaming, because it improves the way of encoding a video stream with image frames captured during a changing FOV of the camera (Edpalm et al.; [0003]-[0004]).
Regarding claim 15, Grasmug et al. and Edpalm et al. teach the method of claim 1, wherein at least one of: the first resolution, the second resolution, the first frame rate, or the second frame rate are determined dynamically when the second image capture device begins to capture the second video image stream (Grasmug et al.; [0045]; It teaches that camera 502, which may constitute multiple cameras, is capable of switching between high-resolution images and high frame rate captures, which means the resolution and frame rates can change dynamically).
Regarding claim 16, Grasmug et al. and Edpalm et al. teach the method of claim 11, further comprising:
playing back the single video file object, wherein, during the playback, at least one video image is played back from each of: the enhanced version of the first video image stream and the second video image stream (Grasmug et al.; [0043]; it teaches that processing unit 400 is shown as generating a high-resolution, high frame rate output to be displayed to a user, where the high-resolution, high frame rate output is the enhanced version of the first video stream).
Regarding claim 18, Grasmug et al. and Edpalm et al. teach the method of claim 1, wherein: (g) at least some images in the second video image stream are captured at a resolution higher than the second resolution (Grasmug et al.; Fig. 2, reference numeral 208 recites computing OF between low-resolution frames until the next high-resolution frame arrives, and reference numeral 220 recites computing OF from the last low-resolution frame to the next high-resolution frame. This means the next high-resolution frame in the second video stream has a higher resolution than the previous low-resolution frames).
Regarding claim 19, Grasmug et al. and Edpalm et al. teach an electronic device, comprising:
a memory (Grasmug et al.; Fig. 5, reference numeral 514);
a first image capture device (Grasmug et al.; Fig. 5, reference numeral 502);
a second image capture device (Grasmug et al.; Fig. 5, reference numeral 502); and
one or more processors operatively coupled to the memory (Grasmug et al.; Fig. 5, reference numeral 508), wherein the one or more processors are configured to execute instructions (Grasmug et al.; Fig. 5, reference numeral 515) causing the one or more processors to:
perform the method of claim 4 (See the rejection citation of claim 4 above).
Regarding claim 20, Grasmug et al. and Edpalm et al. teach a non-transitory computer readable medium comprising computer readable instructions executable by one or more processors (Grasmug et al.; [0051]) to:
perform the method of claim 4 (See the rejection citation of claim 4 above).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Grasmug et al. (US PGPub 2015/0262380 A1) in view of Edpalm et al. (US PGPub 2024/0333948 A1) and further in view of Woodman et al. (US PGPub 2022/0086369 A1).
Regarding claim 14, Grasmug et al. and Edpalm et al. teach the method of claim 1, wherein the first frame rate is 30 frames per second (fps), and wherein the second frame rate is 60 fps, 90 fps, 120 fps, or 240 fps (Grasmug et al.; [0045]; It teaches that camera 502 may capture high-resolution still images while also capturing 30 or higher frames per second video having a lower resolution).
Although Grasmug et al. teach the low-resolution first frame rate to be 30 fps or higher, neither Grasmug et al. nor Edpalm et al. explicitly teaches the second frame rate to be 60 fps, 90 fps, 120 fps, or 240 fps.
However, Woodman et al., in the same field of endeavor (Abstract), teach a frame rate of 30 fps and a frame rate of 60 fps (Woodman et al.; [0023], L25-27).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Grasmug et al.'s invention of determining optical flow from a plurality of image sources and Edpalm et al.'s use of partially overlapping camera FOVs with Woodman et al.'s wide range of frame rates, because it provides a wide range of frame rates so that different applications can use different frame rates as needed.
Allowable Subject Matter
Claims 7-8, 10, 12, 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
“METHODS AND SYSTEMS FOR AUTO-ZOOM BASED ADAPTIVE VIDEO STREAMING” – Chen et al., US PGPub 2017/0302719 A1.
“METHODS AND APPARATUS FOR GENERATING A LIVE STREAM FOR A CONNECTED DEVICE” - Mallegowda et al., US PGPub 2024/0373076 A1.
“PROCESSING MULTI-VIEW DIGITAL IMAGES” - Ma et al., US PGPub 2010/0271511 A1.
“Robust Distributed Multiview Video Compression for Wireless Camera Networks” – Yeo et al., IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 4, APRIL 2010.
“STREAMING SPHERICAL VIDEO” – Adams et al., US PGPub 2016/0352791 A1.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAINUL HASAN whose telephone number is (571)272-0422. The examiner can normally be reached on MON-FRI: 10AM-6PM, Alternate FRIDAYS, EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JAY PATEL can be reached on (571)272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Mainul Hasan/
Primary Examiner, Art Unit 2485