DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 3, 6, 9, 11, and 13 are objected to because of the following informalities:
Regarding claims 3, 6, 9, and 11, the phrase “in response to that…” is grammatically unclear and requires correction.
Regarding claim 13, the phrase “wherein in in…” contains a duplicated word and requires correction.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 4, 7-9, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Iyer in view of Sundaresan (US 2017/0236288).
Regarding claim 1, Iyer meets the claim limitations, as follows:
A video processing method, comprising:
obtaining a video captured by a photographing device (i.e., videos are commonly captured using a photographic device) [7, 23];
dividing the video into a plurality of regions based on information associated with a global motion state of the video (i.e., video frame 220 has ROI 214 and non-ROI 216; motion is detected in the video, and the ROI is determined based on the motion) [34-35, 48; fig. 2b],
wherein the plurality of regions includes a region of interest (ROI) and a non-region of interest (non-ROI) (i.e. frame 210 and ROI 214) [126-127; fig. 13]; and
performing different image processing on the ROI and the non-ROI to achieve different levels of clarity for the ROI and the non-ROI (i.e., the ROI is displayed in HD while the non-ROI is displayed in SD) [48].
Iyer does not explicitly disclose the following claim limitation:
dividing the video into a plurality of regions based on information associated with a global motion state between frames of the video
However, in the same field of endeavor Sundaresan discloses the deficient claim limitations, as follows:
dividing the video into a plurality of regions based on information associated with a global motion state between frames of the video (i.e., global motion is determined between the previous and current frames, and the regions are separated if one or more criteria are met) [7, 65, 67; fig. 5]
It would have been obvious to one of ordinary skill in the art at the time of filing to modify the teachings of Iyer with Sundaresan to divide the video into a plurality of regions based on information associated with a global motion state between frames of the video.
It would be advantageous because "The framework may take one or more object characteristics (e.g., color, structure, etc.) and motion into account to accurately segment and select the object. Some configurations of the systems and methods described herein may relax some constraints in a segmentation algorithm (e.g., scribble-based segmentation algorithm), which may achieve a significant increase in speed. Increases in segmentation speed may enable real-time performance.” [31].
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to modify the teachings of Iyer with Sundaresan to obtain the invention as specified in claim 1.
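For illustration only, the claimed division of a frame into an ROI and a non-ROI driven by a global motion state between frames can be sketched as follows. This is a hypothetical toy sketch; the function names (`estimate_global_motion`, `divide_frame`), the mean-difference motion stub, and the thresholding formula are assumptions of this sketch and are not drawn from Iyer, Sundaresan, or the claims as filed.

```python
def estimate_global_motion(prev_frame, curr_frame):
    """Stub estimator: derive a (dx, dy) proxy from the mean intensity
    difference between two grayscale frames (lists of lists).
    A real estimator would use block matching or optical flow."""
    h, w = len(curr_frame), len(curr_frame[0])
    diff = sum(curr_frame[y][x] - prev_frame[y][x]
               for y in range(h) for x in range(w)) / (h * w)
    return (diff, diff)

def divide_frame(width, height, motion, threshold=1.0):
    """If the global motion change exceeds the preset threshold, return a
    centered ROI rectangle (x0, y0, w, h); the remainder of the frame is
    the non-ROI. Otherwise the entire frame is treated as the ROI."""
    magnitude = (motion[0] ** 2 + motion[1] ** 2) ** 0.5
    if magnitude < threshold:
        return (0, 0, width, height)  # no division: whole frame is ROI
    # Shrink the ROI as global motion grows (cf. the correlations of
    # claims 8-9), with a floor so the ROI never vanishes.
    scale = max(0.25, 1.0 / (1.0 + magnitude))
    roi_w, roi_h = int(width * scale), int(height * scale)
    x0 = (width - roi_w) // 2
    y0 = (height - roi_h) // 2
    return (x0, y0, roi_w, roi_h)
```

For example, a motion vector of (3, 4) has magnitude 5, so the sketch shrinks the ROI to the floor fraction of the frame, while zero motion leaves the whole frame as the ROI.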
Regarding claim 3, Sundaresan meets the claim limitations, as follows:
wherein the dividing of the video into the plurality of regions based on the information associated with the global motion state between the frames of the video includes: in response to that a global motion change characterized by the information associated with the global motion state between frames of the video satisfies a preset change condition, dividing the video into the plurality of regions (i.e., global motion is determined, and the regions are separated if one or more criteria are met) [7, 65, 67; fig. 5].
Regarding claim 4, Sundaresan meets the claim limitations, as follows:
The method according to claim 1, wherein the information associated with the global motion state between the frames of the video includes at least one of: global motion information between the frames of the video; information associated with the motion state of a target object, wherein the target object includes at least one of the photographing device or a device carrying the photographing device (i.e., global motion is determined, and the regions are separated if one or more criteria are met) [7, 65, 67].
Regarding claim 7, Sundaresan meets the claim limitations, as follows:
The method according to claim 1, further comprising: determining, based on the information associated with the global motion state between the frames of the video, at least one of an area of the ROI or an area of the non-ROI (i.e., global motion is determined, and the regions are separated if one or more criteria are met) [7, 65, 67; fig. 5].
Regarding claim 8, Sundaresan meets the claim limitations, as follows:
The method according to claim 7, wherein the ROI and non-ROI satisfy at least one of: the area of the ROI is negatively correlated with the global motion change characterized by the information associated with the global motion state between the frames of the video; or the area of the non-ROI is positively correlated with the global motion change characterized by the information associated with the global motion state between the frames of the video (i.e., fig. 6 shows local motion vectors that match the global motion being rejected from use, while motion outside the ROI that correlates with the global motion vectors is treated as indicative of global motion) [108, 120-121; fig. 6].
Regarding claim 9, Sundaresan meets the claim limitations, as follows:
The method according to claim 7, wherein the method is implemented according to one of the following strategies:
in response to that the global motion information between the frames of the video includes the global motion vector between the frames of the video, at least the area of the ROI is negatively correlated with the absolute value of the global motion vector between the frames of the video, or the area of the non-ROI is positively correlated with the absolute value of the global motion vector between the frames of the video (i.e., fig. 6 shows local motion vectors that match the global motion being rejected from use, while motion outside the ROI that correlates with the global motion vectors is treated as indicative of global motion) [108, 120-121; fig. 6];
in response to that the information associated with the motion state of the target object includes the motion speed of the target object and the relative distance or relative height between the target object and the photographed object, and the relative distance/relative height between the target object and the photographed object remains constant, at least the area of the ROI is negatively correlated with the absolute value of the motion speed of the target object, or the area of the non-ROI is positively correlated with the absolute value of the motion speed of the target object;
in response to that the information associated with the motion state of the target object includes the motion speed of the target object and the relative distance or relative height between the target object and the photographed object, and the motion speed of the target object remains constant, at least the area of the ROI is positively correlated with the relative distance or relative height between the target object and the photographed object, or the area of the non-ROI is negatively correlated with the relative distance or relative height between the target object and the photographed object;
in response to that the information associated with the global motion state between frames of the video remains constant, at least one of the area of the ROI or the area of the non-ROI is positively correlated with a field of view angle of the photographing device.
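The area-correlation strategies recited above (ROI area falling with target speed, rising with relative distance or height) can be exhibited with a minimal sketch. The function name `roi_area` and the particular fraction formula are assumptions chosen only so the claimed monotonic correlations hold; they are not taken from any cited reference.

```python
def roi_area(frame_area, speed, distance, k=1.0):
    """Hypothetical ROI-area rule: the ROI's share of the frame is
    negatively correlated with |speed| and positively correlated with
    the relative distance (or height) to the photographed object.
    k is an assumed tuning constant."""
    fraction = distance / (distance + k * abs(speed))
    return frame_area * fraction
```

With `frame_area = 10000`, doubling the speed shrinks the ROI, while doubling the distance at constant speed enlarges it, matching the recited correlations.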
Claim 18 is rejected using a similar rationale as claim 1; further, Iyer teaches a processor and memory [11].
Claim 19 is rejected using a similar rationale as claim 1; further, Sundaresan teaches a camera mounted on a device [35].
Regarding claim 20, Sundaresan meets the claim limitations, as follows:
The apparatus according to claim 19, wherein the apparatus comprises any one of a mobile phone, a tablet computer, a smart wearable device, a handheld gimbal, and a movable device [2].
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Iyer and Sundaresan in view of Lu (US 2022/0303555).
Regarding claim 2, Iyer and Sundaresan do not explicitly disclose the following claim limitation:
wherein the dividing of the video into the plurality of regions based on the information associated with the global motion state between the frames of the video includes: in a case where a transmission condition of a transmission device corresponding to the photographing device does not meet a preset transmission condition, dividing the video into the plurality of regions based on the information associated with the global motion state between the frames of the video
However, in the same field of endeavor Lu discloses the deficient claim limitations, as follows:
wherein the dividing of the video into the plurality of regions based on the information associated with the global motion state between the frames of the video includes: in a case where a transmission condition of a transmission device corresponding to the photographing device does not meet a preset transmission condition, dividing the video into the plurality of regions based on the information associated with the global motion state between the frames of the video (i.e. Video module B (650) transmits the encoded low-quality background to one or more remote endpoints (610) at a bitrate of 3045.4 kilobits per second and transmits the encoded high-quality ROI at a bitrate of 1637.5 kilobits per second. Thus, the bitrate used by video module B (650) represents an approximately 14.43% reduction relative to the bitrate used by video module A (600). The bitrate reduction achieved by video module B (650) relative to video module A (600) depends on the size of the ROI (504).) [67].
It would have been obvious to one of ordinary skill in the art at the time of filing to modify the teachings of Iyer and Sundaresan with Lu such that the dividing of the video into the plurality of regions based on the information associated with the global motion state between the frames of the video includes: in a case where a transmission condition of a transmission device corresponding to the photographing device does not meet a preset transmission condition, dividing the video into the plurality of regions based on the information associated with the global motion state between the frames of the video.
It would be advantageous because "these methods may reduce the size of an encoded video frame without incurring a noticeable loss of quality when the frame is decoded and/or displayed.” [22].
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to modify the teachings of Iyer and Sundaresan with Lu to obtain the invention as specified in claim 2.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Iyer and Sundaresan in view of Barbour (US 2022/0405874).
Regarding claim 5, Iyer and Sundaresan do not explicitly disclose the following claim limitation:
wherein the information associated with the motion state of the target object includes: a motion speed of the target object; and a relative distance or relative height between the target object and the photographed object.
However, in the same field of endeavor Barbour discloses the deficient claim limitations, as follows:
wherein the information associated with the motion state of the target object includes: a motion speed of the target object; and a relative distance or relative height between the target object and the photographed object (i.e. real-time or near-time identification and tracking of mobile features include identification and tracking of vehicles, people, animals and other objects in motion—this includes speed, direction and distance information for each allowing for trajectories to be created and used for motion prediction.) [98].
It would have been obvious to one of ordinary skill in the art at the time of filing to modify the teachings of Iyer and Sundaresan with Barbour such that the information associated with the motion state of the target object includes: a motion speed of the target object; and a relative distance or relative height between the target object and the photographed object.
It would be advantageous because "This provides the advantage of upward compatibility with any currently available imaging modality.” [60].
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to modify the teachings of Iyer and Sundaresan with Barbour to obtain the invention as specified in claim 5.
Claims 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Iyer and Sundaresan in view of Leung (US 2018/0082428).
Regarding claim 10, Iyer and Sundaresan do not explicitly disclose the following claim limitation:
determining, based on information associated with an attitude change of the target object, a position change of the ROI, wherein the target object includes at least one of the photographing device or a device carrying the photographing device.
However, in the same field of endeavor Leung discloses the deficient claim limitations, as follows:
determining, based on information associated with an attitude change of the target object, a position change of the ROI, wherein the target object includes at least one of the photographing device or a device carrying the photographing device (i.e. ROI position changes based on downward shifted perspective of the image sensor) [67-68; fig. 5a].
It would have been obvious to one with ordinary skill in the art at the time of filing to modify the teachings of Iyer and Sundaresan with Leung to determine, based on information associated with an attitude change of the target object, a position change of the ROI, wherein the target object includes at least one of the photographing device or a device carrying the photographing device.
It would be advantageous because " A video processing system implementing a CAMShift algorithm that is enhanced with such motion information may more effectively track fast-moving objects.” [3].
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to modify the teachings of Iyer and Sundaresan with Leung to obtain the invention as specified in claim 10.
Regarding claim 11, Leung meets the claim limitations, as follows:
The method according to claim 10, wherein the determining, based on information associated with an attitude change of the target object, a position change of the ROI includes at least one of: in response to that the information associated with the attitude change of the target object satisfies a preset condition, determining the position change of the ROI; or the position change of the ROI includes at least one of a horizontal displacement or a vertical displacement of the ROI (i.e. ROI position changes based on downward shifted perspective of the image sensor) [67-68; fig. 5a].
Regarding claim 12, Iyer meets the claim limitations, as follows:
The method according to claim 1, wherein the clarity of the non-ROI is lower than the clarity of the ROI (i.e., the ROI is displayed in HD while the non-ROI is displayed in SD) [48].
Regarding claim 13, Iyer meets the claim limitations, as follows:
The method according to claim 12, wherein in in response to that the information associated with the global motion state between the frames of the video remains constant, at least one of the clarity of the ROI or the clarity of the non-ROI is related to a transmission condition corresponding to a transmission device associated with the photographing device (i.e., the ROI is displayed in HD while the non-ROI is displayed in SD; HD and SD are related to the transmission condition of the transmission device, as HD requires more resources to transmit) [48].
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Iyer, Sundaresan, and Leung in view of Patel (US 2023/0298276).
Regarding claim 14, Iyer, Sundaresan, and Leung do not explicitly disclose the following claim limitation:
wherein the transmission condition includes a transmission bitrate, and the clarity of at least one of the ROI or the non- ROI decreases as the transmission bitrate decreases.
However, in the same field of endeavor Patel discloses the deficient claim limitations, as follows:
wherein the transmission condition includes a transmission bitrate, and the clarity of at least one of the ROI or the non-ROI decreases as the transmission bitrate decreases (i.e., the bitrate is increased for the ROI and decreased for the non-ROI) [53, 55].
It would have been obvious to one of ordinary skill in the art at the time of filing to modify the teachings of Iyer, Sundaresan, and Leung with Patel such that the transmission condition includes a transmission bitrate, and the clarity of at least one of the ROI or the non-ROI decreases as the transmission bitrate decreases.
It would be advantageous because "Some configurations of the systems and methods disclosed herein may reliably segment (e.g., improve the quality of segmentation) and/or select an object of interest” [33].
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to modify the teachings of Iyer, Sundaresan, and Leung with Patel to obtain the invention as specified in claim 14.
Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Iyer, Sundaresan, and Leung in view of Bhagavathy (US 2012/0224629).
Regarding claim 16, Iyer, Sundaresan, and Leung do not explicitly disclose the following claim limitation:
wherein the performing of different image processing on the ROI and the non-ROI to achieve different levels of clarity for the ROI and the non-ROI includes: performing blurring on the non-ROI; or performing different image processing on the ROI and the non-ROI such that a quantization parameter of the ROI is smaller than a quantization parameter of the non-ROI.
However, in the same field of endeavor Bhagavathy discloses the deficient claim limitations, as follows:
wherein the performing of different image processing on the ROI and the non-ROI to achieve different levels of clarity for the ROI and the non-ROI includes: performing blurring on the non-ROI; or performing different image processing on the ROI and the non-ROI such that a quantization parameter of the ROI is smaller than a quantization parameter of the non-ROI (i.e. QP parameters are different for ROI and non ROI) [7,19].
It would have been obvious to one of ordinary skill in the art at the time of filing to modify the teachings of Iyer, Sundaresan, and Leung with Bhagavathy to perform different image processing on the ROI and the non-ROI to achieve different levels of clarity, including: performing blurring on the non-ROI; or performing different image processing on the ROI and the non-ROI such that a quantization parameter of the ROI is smaller than a quantization parameter of the non-ROI.
It would be advantageous because "objects or regions of interest are detected and their coded quality is improved by pre-processing and/or using an object-aware encoder to better preserve important objects. This done because it is important to viewers be able to clearly see objects of interest in a video such as the ball or players in soccer videos.” [4].
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to modify the teachings of Iyer, Sundaresan, and Leung with Bhagavathy to obtain the invention as specified in claim 16.
Regarding claim 17, Bhagavathy meets the claim limitations, as follows:
The method according to claim 16, characterized in that the performing of blurring on the non-ROI includes: performing sharpness enhancement on the ROI and performing blurring on the non-ROI [32].
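For illustration only, the differential processing discussed for claims 16-17 (blurring the non-ROI, and assigning a smaller quantization parameter to the ROI) can be sketched as below. The frame representation (a grayscale list of lists), the function names, the 3x3 box filter, and the specific QP values are assumptions of this sketch, not the implementation of Bhagavathy or any other cited reference.

```python
def box_blur_pixel(frame, x, y):
    """Average a pixel with its available 3x3 neighborhood (clamped at
    frame borders) using integer division."""
    h, w = len(frame), len(frame[0])
    vals = [frame[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]
    return sum(vals) // len(vals)

def process(frame, roi):
    """Blur every pixel outside the ROI rectangle (x0, y0, w, h);
    leave ROI pixels untouched, so the ROI retains higher clarity."""
    x0, y0, rw, rh = roi
    out = [row[:] for row in frame]
    for y in range(len(frame)):
        for x in range(len(frame[0])):
            inside = x0 <= x < x0 + rw and y0 <= y < y0 + rh
            if not inside:
                out[y][x] = box_blur_pixel(frame, x, y)
    return out

# Assumed QP assignment: a smaller QP (finer quantization, less loss)
# for the ROI than for the non-ROI.
QP_ROI, QP_NON_ROI = 22, 38
```

Applied to a 3x3 frame with the center pixel as the ROI, the center value survives unchanged while the surrounding non-ROI pixels are averaged down.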
Allowable Subject Matter
Claims 6 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the informalities noted above are corrected.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JARED T WALKER whose telephone number is (571)272-1839. The examiner can normally be reached M-F: 8:00 - 4:30 Mountain.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nasser Goodarzi can be reached on 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Jared Walker/Primary Examiner, Art Unit 2426