DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1, 14 and 20, and their dependencies, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 7-11, 14-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Qiu et al. (CN 116112707 A, "Video processing method and device, electronic device and storage medium"), hereinafter "Qiu", in view of Khalatian (U.S. Patent Application Publication No. 2022/0284324).
Regarding claim 1, Qiu discloses generating a first image frame embedding from a first image frame in an image stream (Please note page 3, 4th paragraph, as indicated: based on the first frame, the second frame and the inserting frame time); predicting a synthetic image frame in the image stream using the first image frame embedding of the first image frame in the image stream (Please note page 3, 4th paragraph, as indicated: determining the target synthetic frame); and displaying the synthetic image frame after the first image frame in the image stream (Please note page 3, 4th paragraph, as indicated: inserting the target synthetic frame between the first frame and the second frame based on the frame insertion time.).
Qiu does not expressly teach predicting a synthetic image frame before generating a second image frame embedding from a second image in the image stream, and, after displaying the synthetic image frame, displaying the second image frame in the image stream.
Khalatian teaches predicting a synthetic image frame before generating a second image frame embedding from a second image in the image stream (Please note paragraph 0018, as indicated: embeddings of images of multiple respective faces are applied to the artificial intelligence engine and to the models to predict the relative attractiveness of those images to the individuals.), and, after displaying the synthetic image frame, displaying the second image frame in the image stream (Please note figure 2, block 36).
Qiu and Khalatian are combinable because they are from the same field of endeavor.
At the time before the effective filing date, it would have been obvious to a person of ordinary skill in the art to incorporate Khalatian's teaching of predicting a synthetic image frame before generating a second image frame embedding from a second image in the image stream, and displaying the second image frame after displaying the synthetic image frame, into Qiu's invention.
The suggestion/motivation for doing so would have been, as indicated in paragraph 0018, "to predict which image was more attractive."
Therefore, it would have been obvious to combine Khalatian with Qiu to obtain the invention as specified in claim 1.
Regarding claim 2, Qiu discloses generating the first image frame embedding with user input information (Please note page 21, last paragraph, as indicated: with a keyboard and a pointing device (e.g., mouse or trackball), the user can provide input to the computer through the keyboard and the pointing device.).
Regarding claim 3, Qiu discloses one or more button presses of an input device (Please note page 21, last paragraph, as indicated: a keyboard and a pointing device (e.g., mouse or trackball).).
Regarding claim 4, Qiu discloses wherein the user input information includes one or more movements of an input device (Please note page 21, last paragraph, as indicated: with a keyboard and a pointing device (e.g., mouse or trackball), the user can provide input to the computer through the keyboard and the pointing device.).
Regarding claim 5, Qiu discloses wherein the input device is a mouse, trackball, or joystick (Please note page 21, last paragraph, as indicated: a keyboard and a pointing device (e.g., mouse or trackball), through which the user can provide input to the computer.).
Regarding claim 7, Qiu discloses wherein generating the first image frame embedding further comprises generating the first image frame embedding with motion vectors from the image stream (Please note page 11, last paragraph, as indicated in FIGS. 2 and 3: the motion vector may indicate the magnitude of the motion intensity of a region in the video frame relative to the reference frame. Therefore, based on the motion vector of each region in the video frame relative to the corresponding region of the reference frame, the global motion parameter of the video frame can be determined. The global motion parameter can reflect whether there is large motion between the first frame and the second frame, and the inserting frame time can be determined based on the global motion parameter.).
Regarding claim 8, Qiu discloses generating the first image frame embedding in conjunction with a previous image frame in the image stream (Please note page 13, last paragraph, as indicated: in some embodiments, the inserting frame model can be used, based on the frame time determined in step S106, to process the images of the first frame and the second frame to obtain the target synthetic frame.).
Regarding claim 9, Qiu discloses using a neural network trained with a machine learning algorithm (Please note page 13, last paragraph, as indicated: the inserting frame model can be based on a light deep learning model.).
Regarding claim 10, Qiu discloses using a neural network trained with a machine learning algorithm (Please note page 6, as indicated: relative to frame-insertion modes that obtain the synthesized intermediate frame by copying the picture of the previous frame or by blurring the front and back frames, a frame-insertion mode based on a deep learning flow model can effectively model the mapping relation between the target intermediate frame and the front and back frames, so as to generate the intermediate frame in a more reasonable way.).
Regarding claims 14-18, analyses similar to those presented for claims 1-5, respectively, are applicable.
Regarding claim 20, an analysis similar to that presented for claim 1 is applicable.
Claim Rejections - 35 USC § 103
Claims 6, 12-13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Qiu et al. (CN 116112707 A, "Video processing method and device, electronic device and storage medium"), hereinafter "Qiu", in view of Holzer et al. (U.S. Patent No. 10,210,662), hereinafter "Holzer".
Regarding claim 6, Qiu discloses generating a first image frame embedding from a first image frame in an image stream (Please note page 3, 4th paragraph, as indicated: based on the first frame, the second frame and the inserting frame time); predicting a synthetic image frame in the image stream using the first image frame embedding of the first image frame in the image stream (Please note page 3, 4th paragraph, as indicated: determining the target synthetic frame); and displaying the synthetic image frame after the first image frame in the image stream (Please note page 3, 4th paragraph, as indicated: inserting the target synthetic frame between the first frame and the second frame based on the frame insertion time.).
Qiu does not expressly teach utilizing an inertial measurement unit.
Holzer teaches utilizing an inertial measurement unit (Please note column 3, line 1, as indicated: an Inertial Measurement Unit.).
Qiu and Holzer are combinable because they are from the same field of endeavor.
At the time before the effective filing date, it would have been obvious to a person of ordinary skill in the art to incorporate Holzer's inertial measurement unit operation into Qiu's invention.
The suggestion/motivation for doing so would have been, as indicated in column 2, lines 63-65, "to help the user guide the mobile device along a desirable path useful for creating the surround view."
Therefore, it would have been obvious to combine Holzer with Qiu to obtain the invention as specified in claim 6.
Regarding claim 12, Holzer teaches displaying the other synthetic image frame after displaying the synthetic image frame in the image stream (Please note claim 1, as indicated: generating third synthetic images including the second virtual object rendered into the second 2-D pixel data at third pixel locations positioned relative to the second pixel locations of the first tracking point.).
Regarding claim 13, Holzer teaches displaying a third image frame after displaying the other synthetic image frame (Please note claim 1, as indicated: outputting the third synthetic images to the display, wherein each of the third synthetic images shows one of the different views of the real object as currently being captured by the camera and the second virtual object.).
Regarding claim 19, an analysis similar to that presented for claim 6 is applicable.
Examiner’s Note
The examiner cites particular figures, paragraphs, columns and line numbers in the references as applied to the claims for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well.
It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR ALAVI whose telephone number is (571)272-7386. The examiner can normally be reached on M-F from 8:00-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571)272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMIR ALAVI/Primary Examiner, Art Unit 2668 Thursday, March 5, 2026