Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 11, and 19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-8 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tandon et al. (US 20190130585 A1) in view of Tico et al. (US 20240020807 A1).
Re claim 1, Tandon discloses a system comprising:
one or more processing units (Tandon: paragraph [0019]) to:
identify the frame of the video stream as a new blurred frame for the video stream based in part on the motion data being equal to or less than an adaptive threshold determined based at least in part on an average of the motion data and additional motion data from a predefined blurred frame (Tandon: paragraphs [0013]-[0015]; Fig. 4).
Tandon discloses that a traditional blur detection algorithm may be used to detect completely blurry frames (Tandon: paragraph [0035]), but Tandon does not specifically disclose that the processing units are configured to determine that motion data corresponding to a frame of a video stream is above a motion threshold. However, Tico discloses that a blurred frame elimination process may be executed on the set of images 118 selected for fusion into the synthetic long exposure image 120, wherein any EV0 frames that have greater than a threshold amount of blur (wherein blur amount may be estimated based on one or more criteria, e.g., information output by gyroscopes or other motion sensors, autofocus score metadata, or other metadata) may be discarded from use in the creation of the synthetic long exposure intermediate asset image (Tico: paragraph [0060]). In some embodiments, the permissible threshold amount of blur may be determined based on a comparison to the amount of blur in the selected reference image (i.e., EV0.sub.3 112 in the case of FIG. 1A) (Tico: paragraph [0060]). Since Tandon and Tico relate to evaluation of groups of frames, one of ordinary skill in the art before the effective filing date would have found it obvious to combine the motion evaluation of Tico with the system of Tandon in order to perform high-resolution and low latency image fusion and noise reduction for images captured in a wide variety of capturing conditions (Tico: paragraph [0002]).
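The adaptive-threshold logic attributed to the claimed combination can be illustrated with a minimal sketch. All names, the fixed motion-threshold value, and the simple two-term average are illustrative assumptions for exposition only, not taken from Tandon or Tico:

```python
# Hypothetical sketch of the claimed check: a frame is flagged as a new
# blurred frame when its motion data exceeds a fixed motion threshold but
# falls at or below an adaptive threshold derived from an average that
# includes motion data from a predefined blurred frame.

MOTION_THRESHOLD = 0.10  # assumed fixed floor; below this, blur is not motion blur

def adaptive_threshold(frame_motion, predefined_blur_motion):
    """Simple stand-in for the claimed 'average of the motion data and
    additional motion data from a predefined blurred frame'."""
    return (frame_motion + predefined_blur_motion) / 2.0

def is_new_blurred_frame(frame_motion, predefined_blur_motion):
    if frame_motion <= MOTION_THRESHOLD:
        return False  # not enough motion to attribute the blur to motion
    return frame_motion <= adaptive_threshold(frame_motion, predefined_blur_motion)

# Moderate motion well below the blurred-frame reference is flagged;
# motion above the adaptive threshold is treated as a non-blurred frame.
print(is_new_blurred_frame(0.4, 0.9))  # True: 0.4 <= (0.4 + 0.9) / 2
print(is_new_blurred_frame(0.8, 0.5))  # False: 0.8 > (0.8 + 0.5) / 2
```

The same comparison, read in the other direction, mirrors the claim 5 scenario in which a frame whose motion data is above the adaptive threshold is identified as a non-blurred frame.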
Re claim 2, Tandon discloses
wherein the average comprises a moving average (Tandon: paragraph [0030]), and
one or more processing units are further to:
compute the additional motion data and an initial moving average based at least in part on content of the video stream (Tandon: paragraphs [0030]-[0032]), and
wherein the initial moving average is subject to change using at least the additional motion data from the predefined blurred frame to provide the moving average of the motion data (Tandon: paragraphs [0030]-[0032]).
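The moving-average behavior recited in claim 2 (an initial moving average computed from stream content, later updated with motion data from a predefined blurred frame) can be sketched as follows. The exponential update rule and the alpha weight are assumptions chosen for illustration; Tandon's cited paragraphs do not dictate this particular formula:

```python
# Illustrative running (moving) average of per-frame motion data: seed an
# initial moving average from early video-stream content, then fold in
# additional motion data from a predefined blurred frame.

def update_moving_average(current_avg, new_motion, alpha=0.25):
    """Exponential moving average: blend the new motion sample into the
    running average with weight alpha (an assumed parameter)."""
    return (1.0 - alpha) * current_avg + alpha * new_motion

motion_samples = [0.30, 0.35, 0.25]   # motion data from stream content
avg = motion_samples[0]
for m in motion_samples[1:]:
    avg = update_moving_average(avg, m)

# The initial moving average is then subject to change using motion data
# from the predefined blurred frame, yielding the claimed moving average.
avg = update_moving_average(avg, 0.90)
print(round(avg, 4))
```

Claim 7's variant, in which an initial moving average derived from historical frames is averaged with the current frame's motion data, differs only in which samples feed the same update.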
Re claim 3, Tandon discloses
wherein the one or more processing units are further to:
maintain the adaptive threshold when a motion level for one or more individual frames of the video stream is equal to or greater than the motion threshold or when different motion data corresponding to at least one of the individual frames is equal to or less than the motion threshold (Tandon: paragraph [0016]).
Re claim 4, Tandon discloses that the one or more processing units are further to: identify the one of the individual frames as a second new blurred frame in the video stream (Tandon: paragraph [0035], a traditional blur detection algorithm could be used to filter out completely blurred video frames and an algorithm according to the embodiments described herein could be used solely to find partially motion-blurred video frames).
Re claim 5, Tandon discloses that the one or more processing units are further to: identify a second individual frame of the video stream as a non-blurred frame in the video stream when the different motion data is above the adaptive threshold (Tandon: Fig. 4, frame 426 selected; paragraph [0035], Video frame 426 is largely free of any motion blur and also exhibits a face, which was detected through facial recognition).
Re claim 6, Tandon does not specifically disclose that the motion data corresponding to the frame of the video stream comprises at least one of: a Laplacian of the frame or a Variance of the Laplacian of the frame. However, Tico discloses, in some embodiments, a high-resolution SR image 316 may be generated via a detail transferring process, wherein the additional detail provided in the high-resolution asset(s) 310 may be transferred over to the SR image 304 according to a motion mask that indicates only those portions of the high-resolution image 310 exhibiting less than a threshold level of estimated motion to corresponding portions of the SR image 304 (since transferring over higher resolution details in portions of the captured scene that do not match well to the reference image would result in unwanted artifacts) (Tico: paragraph [0079]). In some implementations, the motion mask may be computed via a Gaussian and/or Laplacian pyramid decomposition process that is refined at each level of the pyramid, in order to maximize the amount of detail transfer that can take place (Tico: paragraph [0079]). Since Tandon and Tico relate to evaluation of groups of frames, one of ordinary skill in the art before the effective filing date would have found it obvious to combine the motion evaluation of Tico with the system of Tandon in order to perform high-resolution and low latency image fusion and noise reduction for images captured in a wide variety of capturing conditions (Tico: paragraph [0002]).
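The variance-of-Laplacian measure named in claim 6 is a standard sharpness heuristic, and a pure-Python sketch makes the idea concrete. The 4-neighbor kernel and the toy 4x4 patches below are illustrative choices, not drawn from either reference:

```python
# Variance-of-Laplacian sharpness sketch: convolve a grayscale grid with
# the 3x3 Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]] and take the
# variance of the response. Low variance implies few edges, i.e. a
# blurrier frame.

def laplacian(img):
    """Apply the 4-neighbor Laplacian to interior pixels of a 2D grid."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            row.append(img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                       + img[y][x + 1] - 4 * img[y][x])
        out.append(row)
    return out

def variance_of_laplacian(img):
    vals = [v for row in laplacian(img) for v in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

sharp = [[0, 0, 255, 255]] * 4   # hard vertical edge: strong response
flat = [[128] * 4] * 4           # uniform patch: zero response
print(variance_of_laplacian(sharp) > variance_of_laplacian(flat))  # True
```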
Re claim 7, Tandon discloses
wherein the average comprises a moving average (Tandon: paragraphs [0030]-[0032]), and
the one or more processing units are further to:
determine at least one initial moving average based in part on one or more historical frames, the one or more historical frames including the predefined blurred frame (Tandon: paragraphs [0030]-[0032]); and
average the at least one initial moving average with the motion data corresponding to the frame of the video stream to determine the moving average of the motion data (Tandon: paragraphs [0030]-[0032]).
Re claim 8, Tandon discloses
wherein the one or more processing units are further to:
determine the one or more historical frames from a reference video stream having at least one similar aspect to the video stream (Tandon: paragraph [0017]),
the at least one similar aspect including at least one of a similar feature, similar object, or similar background to the video stream (Tandon: paragraph [0017]).
Claim 11 recites the corresponding method for implementation by the system of claim 1. Therefore, arguments analogous to those presented for claim 1 are applicable to claim 11. Accordingly, claim 11 has been analyzed and rejected with respect to claim 1 above.
Claim 12 has been analyzed and rejected with respect to claim 2 above.
Claim 13 has been analyzed and rejected with respect to claim 3 above.
Claim 14 has been analyzed and rejected with respect to claim 4 above.
Claim 15 has been analyzed and rejected with respect to claim 5 above.
Claim 16 has been analyzed and rejected with respect to claim 6 above.
Claim 17 has been analyzed and rejected with respect to claim 7 above.
Claim 18 has been analyzed and rejected with respect to claim 8 above.
Re claim 19, Tandon discloses a system comprising: one or more processing units to identify a frame of a video stream as a new blurred frame for the video stream based in part on motion data determined for the frame being above a motion threshold and being equal to or less than an adaptive threshold which is determined based at least in part on the motion data and additional motion data from a predefined blurred frame (Tandon: paragraph [0019]; paragraphs [0013]-[0015]; Fig. 4).
Re claim 20, Tandon discloses that the system is comprised in at least one of: a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center (Tandon: paragraph [0022], remote client-computing device); or a system implemented at least partially using cloud computing resources (Tandon: paragraph [0022], Data network 114 can also connect computing device 101 to a network storage device 124, which can be used as a repository for stored video clips for use with the video editing application 110, as well as updated or archived versions of the video editing software for distribution).
Claim(s) 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Tandon et al. (US 20190130585 A1) in view of Tico et al. (US 20240020807 A1), and further in view of Sentinelli et al. (US 20130336590 A1).
Re claim 9, neither Tandon nor Tico specifically discloses the one or more processing units are further to: determine the predefined blurred frame using an arbitrary value applied to the motion data corresponding to the frame of the video stream. However, Sentinelli discloses the similarity matching produces at least one numerical value representing the degree of similarity of the semantic descriptions of the received image frame and the at least one of the plurality of image frames; the result of the comparison includes a logical value representing whether the corresponding image frames possess at least a pre-determined degree of similarity; and the comparison determines the logical value by comparing the at least one numerical value with at least one pre-determined threshold representing the pre-determined degree of similarity, wherein, in particular, the at least one pre-determined threshold is adapted during the recording of the video (Sentinelli: paragraphs [0036]-[0038]). Since Tandon, Tico, and Sentinelli relate to evaluation of groups of frames, one of ordinary skill in the art before the effective filing date would have found it obvious to combine the filtering of Sentinelli with the system of Tandon and Tico in order to facilitate improved readiness of storyboard scene summaries (Sentinelli: paragraphs [0008]-[0009]).
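Sentinelli's thresholded similarity comparison (a numerical similarity value reduced to a logical value against a threshold that may be adapted during recording) can be loosely sketched as follows; the function names and the particular adaptation rule are illustrative assumptions, not Sentinelli's disclosed mechanism:

```python
# Loose sketch: a numerical similarity score is compared against a
# pre-determined threshold to produce a logical value, and the threshold
# may be adapted while the video is being recorded.

def is_similar(similarity_score, threshold):
    """Logical value: do two frames possess at least the pre-determined
    degree of similarity?"""
    return similarity_score >= threshold

def adapt_threshold(threshold, recent_scores):
    """One possible in-recording adaptation (assumed): nudge the
    threshold toward the mean of recently observed similarity scores."""
    mean = sum(recent_scores) / len(recent_scores)
    return 0.9 * threshold + 0.1 * mean

threshold = 0.8
print(is_similar(0.85, threshold))              # True: 0.85 >= 0.80
threshold = adapt_threshold(threshold, [0.5, 0.6, 0.7])
print(is_similar(0.75, threshold))              # False: 0.75 < 0.78
```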
Re claim 10, neither Tandon nor Tico specifically discloses wherein the one or more processing units are further to determine luma and color difference (YUV) values corresponding to a plurality of frames including the frame of the video stream, wherein the motion data corresponding to the frame of the video stream is determined from the YUV values of the frame. However, Sentinelli discloses, in an embodiment, the semantic description may include information about the spatial distribution of at least one color or color component within the image frame (Sentinelli: paragraph [0024]). To extract the GLACE histogram description from the image data of an image frame, the image frame is first divided into a grid of equally sized segments, for instance in the form of rectangular pixel blocks (Sentinelli: paragraph [0025]). For each segment, the mean values of the basic colors of the color space, e.g., red, green, and blue for RGB, or Y, U, and V for YUV, and the number of saturated pixels per basic color are evaluated and stored for each basic color in a vector representing the GLACE histogram description (Sentinelli: paragraph [0025]). A pixel is considered as saturated in a basic color if the corresponding color channel of the photo sensor responds at or above a predefined value (for example, a maximum value or a value close to a maximum value) (Sentinelli: paragraph [0025]). Since Tandon, Tico, and Sentinelli relate to evaluation of groups of frames, one of ordinary skill in the art before the effective filing date would have found it obvious to combine the filtering of Sentinelli with the system of Tandon and Tico in order to facilitate improved readiness of storyboard scene summaries (Sentinelli: paragraphs [0008]-[0009]).
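The luma and color difference (YUV) values discussed for claim 10 follow a standard color-space conversion. The sketch below uses the BT.601 full-range coefficients (an assumption; neither Sentinelli nor the claim fixes a particular standard), and the per-segment mean statistic mirrors Sentinelli's GLACE description only loosely:

```python
# Illustrative BT.601-style RGB -> YUV conversion and a per-segment mean
# luma statistic, as one simple value from which motion data could be
# derived. Coefficients are the standard BT.601 full-range ones.

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luma
    u = -0.14713 * r - 0.28886 * g + 0.436 * b     # blue color difference
    v = 0.615 * r - 0.51499 * g - 0.10001 * b      # red color difference
    return y, u, v

def mean_luma(pixels):
    """Mean Y over a list of (r, g, b) pixels, analogous to the
    per-segment mean values Sentinelli stores per basic color."""
    return sum(rgb_to_yuv(*p)[0] for p in pixels) / len(pixels)

print(rgb_to_yuv(255, 255, 255)[0])   # luma of white, approximately 255
print(mean_luma([(0, 0, 0), (255, 255, 255)]))
```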
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER G FINDLEY whose telephone number is (571)270-1199. The examiner can normally be reached Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley, can be reached at (571)272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER G FINDLEY/Primary Examiner, Art Unit 2482