Prosecution Insights
Last updated: April 19, 2026
Application No. 18/468,162

APPARATUS AND METHOD FOR GENERATING MOVING VIEWPOINT MOTION PICTURE

Status: Non-Final OA (§103)
Filed: Sep 15, 2023
Examiner: NGUYEN, PHU K
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 2 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 2-3
Time to Grant: 2y 10m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 86% (above average; 1019 granted / 1184 resolved; +24.1% vs TC avg)
Interview Lift: +7.3% (moderate), measured over resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 40 applications currently pending
Career History: 1224 total applications across all art units

Statute-Specific Performance

§101: 7.1% (-32.9% vs TC avg)
§102: 3.8% (-36.2% vs TC avg)
§103: 66.6% (+26.6% vs TC avg)
§112: 4.6% (-35.4% vs TC avg)
Tech Center averages are estimates, based on career data from 1184 resolved cases.

Office Action

§103
Response to Applicant’s Arguments

Applicant’s arguments filed 09/19/2025 have been fully considered but are not persuasive in view of the newly cited references BEN PLAY VR (Travel to Beaches in VR + more Relaxing Experiences) and GUERRA et al. (FlightGoggles: Photorealistic Sensor Simulation for Perception-driven Robotics using Photogrammetry and Virtual Reality). Specifically, Ben teaches the claimed “generating a camera trajectory by assuming a movement of a virtual camera, wherein the generating of the moving viewpoint motion picture comprises generating the moving viewpoint motion picture using the foreground mesh/texture map model and a background mesh/texture map model at a moving viewpoint generated based on the camera trajectory” in the video (https://www.youtube.com/watch?v=0BCbSNDIzRU; 00:16-01:32 – the creation of a motion picture based on a moving viewpoint on a trajectory and the computer-generated map model of a scene; 05:49-06:43 – the motion picture of a cabin scene based on the virtual path of a camera) (see also Guerra, Abstract – FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s) in flight). Accordingly, the claimed invention, as recited in the claims, is not patentably distinct over the art of record.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over ZHAO et al. (Automatic Matting Using Depth and Adaptive Trimap) in view of LAI et al. (High-Resolution Texture Mapping Technique for 3D Textured Model), and further in view of BEN PLAY VR (Travel to Beaches in VR + more Relaxing Experiences) and GUERRA et al. (FlightGoggles: Photorealistic Sensor Simulation for Perception-driven Robotics using Photogrammetry and Virtual Reality).
As per claim 1, Zhao teaches the claimed “apparatus comprising: a memory; and a processor configured to execute at least one instruction stored in the memory” (Zhao, Abstract – computer-performed image matting technique), wherein, by executing the at least one instruction, the processor is configured to: “obtain an input image; generate a trimap from the input image” (Zhao, 3.1 Trimap Initializing – the goal of trimap generation is to mark out the foreground boundary in which the alpha values are to be calculated. Therefore, we first use depth information to find the target foreground. We use the mean shift clustering method to segment the depth map into several blobs. We take the blob with the nearest depth and largest size as the target foreground of image matting. After that, we use the foreground depth as a threshold and generate a mask in which the foreground is labeled as 1 while the others are labeled as 0. Since the depth map contains noise and missing data, the found foreground is fairly rough); and “generate a foreground mesh/texture map model based on a foreground alpha map obtained based on the trimap and foreground depth information obtained based on the trimap” (Zhao, 3.2 Depth Assisted Sampling and 3.3 Alpha Calculation – The trimap segments the input image into three non-overlapping regions: known foreground F, known background B, and unknown U. Next, the algorithm selects F and B samples for U by optimizing an energy function based on depth and color features, followed by estimating foreground and background colors for U. Finally, the alpha values of U are calculated using the estimated colors according to their confidences). It is noted that Zhao does not explicitly teach “generate a moving viewpoint motion picture based on the foreground mesh/texture map model”.
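Zhao’s trimap-initialization step, as characterized above (segment the depth map, take the nearest/largest blob, threshold it into a rough foreground mask), can be illustrated with a short sketch. This is not code from the record: the function name is hypothetical, a plain depth threshold stands in for Zhao’s mean-shift blob selection, and the neighborhood-based unknown band is an assumption about how the boundary region could be marked.

```python
import numpy as np

def make_trimap(depth, fg_thresh, band=1):
    """Label pixels foreground (1.0), background (0.0), or unknown (0.5).

    depth: 2-D array of per-pixel depth, smaller values = nearer the camera.
    fg_thresh: depth cutoff standing in for Zhao's mean-shift blob selection.
    band: half-width in pixels of the unknown boundary region.
    """
    fg = depth < fg_thresh                    # rough binary foreground mask
    trimap = np.where(fg, 1.0, 0.0)
    # A pixel is "unknown" if its (2*band+1)^2 neighborhood contains both
    # foreground and background labels, i.e. it sits on the fg/bg boundary.
    pad = np.pad(fg, band, mode="edge")
    h, w = fg.shape
    has_fg = np.zeros_like(fg)
    has_bg = np.zeros_like(fg)
    for dy in range(2 * band + 1):
        for dx in range(2 * band + 1):
            win = pad[dy:dy + h, dx:dx + w]
            has_fg |= win
            has_bg |= ~win
    trimap[has_fg & has_bg] = 0.5
    return trimap
```

On a synthetic depth map containing one near square, interior pixels come out 1.0, pixels away from the square 0.0, and the square’s border 0.5, matching the three-region trimap the rejection quotes.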
However, Zhao’s foreground and background, extracted from the matting process together with the input image’s depth information (e.g., 5.7 Experiment on RGB-D data – In this experiment, we test the proposed image matting method using depth and adaptive trimap on RGB-D data. We generate adaptive trimaps using depth and color images captured by a Kinect v2 sensor), suggest a 3D texture model of the input image (see also Lai, 3. Overview of the Proposed Method and Figure 5 – texture transferring is implemented to extract pixels from the image domain and place them on the texture domain appropriately (Figure 1d). This procedure comprises three main steps: grouping the 3D meshes, extracting pixels from the image domain, and placing pixels onto the texture domain). It is also noted that Zhao does not explicitly teach “generating a camera trajectory by assuming a movement of a virtual camera, wherein the generating of the moving viewpoint motion picture comprises generating the moving viewpoint motion picture using the foreground mesh/texture map model and a background mesh/texture map model at a moving viewpoint generated based on the camera trajectory.” However, given Zhao’s computer-generated mesh/texture model of a real scene, it would have been obvious to simulate immersive viewing of the virtual scene from a moving viewpoint generated based on a virtual camera trajectory (Ben, 00:16-01:32 – the creation of a motion picture based on a moving viewpoint on a trajectory and the computer-generated map model of a scene; 05:49-06:43 – the motion picture of a cabin scene based on the virtual path of a camera) (see also Guerra, Abstract – FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s) in flight).
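The camera-trajectory limitation amounts to sampling virtual camera poses along a path and rendering the mesh/texture models from each pose. A minimal sketch of such a trajectory generator, assuming a circular path and a standard look-at construction (the function name, path shape, and axis conventions are illustrative, not from the cited art):

```python
import numpy as np

def camera_trajectory(center, radius, height, n_frames):
    """Sample virtual-camera poses on a circular path around a scene center.

    Returns (R, t) pairs mapping world points into each camera's frame
    (x right, y up, -z forward). Assumes the view direction is never
    parallel to the world z-axis, so the look-at construction is defined.
    """
    poses = []
    for i in range(n_frames):
        theta = 2.0 * np.pi * i / n_frames
        eye = center + np.array([radius * np.cos(theta),
                                 radius * np.sin(theta), height])
        fwd = center - eye
        fwd = fwd / np.linalg.norm(fwd)                  # view direction
        right = np.cross(fwd, np.array([0.0, 0.0, 1.0]))
        right = right / np.linalg.norm(right)
        up = np.cross(right, fwd)
        R = np.stack([right, up, -fwd])                  # rows = camera axes
        t = -R @ eye                                     # world -> camera
        poses.append((R, t))
    return poses
```

Rendering the foreground and background mesh/texture models once per pose, then concatenating the frames, is the “moving viewpoint motion picture” the claim recites.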
Thus, it would have been obvious, in view of Lai, Ben, and Guerra, to configure Zhao’s system as claimed by building a 3D texture model of the input image for generating a moving viewpoint motion picture, based on the extracted foreground, background, and depth data of the input image. The motivation is to provide a simulation of a virtual camera path capturing a computer-generated scene (Guerra, Abstract – The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection).

Claim 2 adds into claim 1 “generate the trimap to include an extended foreground area including a first region and a second region, the first region being an invariant foreground region of the input image and the second region being a boundary region between a foreground and a background of the input image” (Zhao, 3.2 Depth Assisted Sampling and 3.3 Alpha Calculation – The trimap segments the input image into three non-overlapping regions: known foreground F, known background B, and unknown U. Next, the algorithm selects F and B samples for U by optimizing an energy function based on depth and color features, followed by estimating foreground and background colors for U. Finally, the alpha values of U are calculated using the estimated colors according to their confidences); and “generate the foreground mesh/texture map model to include a three-dimensional (3D) mesh model for the extended foreground area” (Zhao, 5.7 Experiment on RGB-D data – In this experiment, we test the proposed image matting method using depth and adaptive trimap on RGB-D data. We generate adaptive trimaps using depth and color images captured by a Kinect v2 sensor; Lai, 3. Overview of the Proposed Method and Figure 5 – texture transferring is implemented to extract pixels from the image domain and place them on the texture domain appropriately (Figure 1d). This procedure comprises three main steps: grouping the 3D meshes, extracting pixels from the image domain, and placing pixels onto the texture domain). Thus, it would have been obvious, in view of Lai, Ben, and Guerra, to configure Zhao’s system as claimed by building a 3D texture model of the input image including foreground and background 3D mesh models and their textures, based on the extracted foreground, background, and depth data of the input image. The motivation is to provide a 3D textured model which requires less memory and can freely be oriented in 3D space (Lai, 1. Introduction and Figure 1(a) – a 3D textured model requires less memory and can freely be oriented in 3D space).

Claim 3 adds into claim 2 “wherein the processor is further configured to apply the foreground alpha map to a texture map for the second region” (Zhao, 5.7 Experiment on RGB-D data – In this experiment, we test the proposed image matting method using depth and adaptive trimap on RGB-D data. We generate adaptive trimaps using depth and color images captured by a Kinect v2 sensor).

Claim 4 adds into claim 1 “to generate the foreground mesh/texture map model to include information of a relation between texture data generated based on the foreground alpha map and a 3D mesh for an extended foreground area including a first region being an invariant foreground region of the input image and a second region being a boundary region between a foreground and a background of the input image” (Zhao, 3.2 Depth Assisted Sampling and 3.3 Alpha Calculation – The trimap segments the input image into three non-overlapping regions: known foreground F, known background B, and unknown U. Next, the algorithm selects F and B samples for U by optimizing an energy function based on depth and color features, followed by estimating foreground and background colors for U. Finally, the alpha values of U are calculated using the estimated colors according to their confidences; 5.1 Experiment on RGB-D data – In this experiment, we test the proposed image matting method using depth and adaptive trimap on RGB-D data. We generate adaptive trimaps using depth and color images captured by a Kinect v2 sensor).

Claim 5 adds into claim 1 “generate a depth map for the input image” (Zhao, 5.1 Experiment on RGB-D data – In this experiment, we test the proposed image matting method using depth and adaptive trimap on RGB-D data. We generate adaptive trimaps using depth and color images captured by a Kinect v2 sensor); and “generate the foreground depth information using the trimap and the depth map, wherein the trimap includes an extended foreground area including a first region being an invariant foreground region of the input image and a second region being a boundary region between a foreground and a background of the input image” (Zhao, 3.2 Depth Assisted Sampling and 3.3 Alpha Calculation – The trimap segments the input image into three non-overlapping regions: known foreground F, known background B, and unknown U. Next, the algorithm selects F and B samples for U by optimizing an energy function based on depth and color features, followed by estimating foreground and background colors for U. Finally, the alpha values of U are calculated using the estimated colors according to their confidences).
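Claim 3’s application of the foreground alpha map to a texture map is conventionally the matting compositing equation I = alpha * F + (1 - alpha) * B. A minimal sketch, with the function name and array layout as illustrative assumptions:

```python
import numpy as np

def composite(fg, bg, alpha):
    """Blend foreground over background with a per-pixel alpha map,
    i.e. the standard matting equation I = alpha * F + (1 - alpha) * B."""
    a = alpha[..., None] if fg.ndim == 3 else alpha  # broadcast over channels
    return a * fg + (1.0 - a) * bg
```

In the boundary (second) region the alpha map takes fractional values, so the rendered texture blends foreground and background colors rather than switching hard between them.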
Claim 6 adds into claim 1 “perform hole painting on a background image including a third region being an invariant background region of the input image; and generate a background mesh/texture map model using a result of the hole painting on the background image”, which is obvious in view of Zhao’s foreground mesh/texture map model by treating the background image with a process similar to that applied to the foreground image (Zhao, 3.2 Depth Assisted Sampling and 3.3 Alpha Calculation – The trimap segments the input image into three non-overlapping regions: known foreground F, known background B, and unknown U. Next, the algorithm selects F and B samples for U by optimizing an energy function based on depth and color features, followed by estimating foreground and background colors for U. Finally, the alpha values of U are calculated using the estimated colors according to their confidences). Thus, it would have been obvious, in view of Lai, Ben, and Guerra, to configure Zhao’s system as claimed by building a 3D texture model of the input image including a foreground 3D model and a background 3D model for generating multiple view images, or a moving viewpoint motion picture, based on the extracted foreground, background, and depth data of the input image. The motivation is to provide a 3D textured model which requires less memory and can freely be oriented in 3D space (Lai, 1. Introduction and Figure 1(a) – a 3D textured model requires less memory and can freely be oriented in 3D space).

Claim 7 adds into claim 6 “generate a depth map for the input image” (Zhao, 5.1 Experiment on RGB-D data – In this experiment, we test the proposed image matting method using depth and adaptive trimap on RGB-D data. We generate adaptive trimaps using depth and color images captured by a Kinect v2 sensor); and “generate initialized background depth information by applying the depth map to the third region which is the invariant background region of the input image; perform hole painting on the background depth information; and generate the background mesh/texture map model using a result of hole painting on the background depth information and the result of hole painting on the background image”, which is obvious in view of Zhao’s foreground mesh/texture map model by treating the background image with a process similar to that applied to the foreground image (Zhao, 3.2 Depth Assisted Sampling and 3.3 Alpha Calculation – The trimap segments the input image into three non-overlapping regions: known foreground F, known background B, and unknown U. Next, the algorithm selects F and B samples for U by optimizing an energy function based on depth and color features, followed by estimating foreground and background colors for U. Finally, the alpha values of U are calculated using the estimated colors according to their confidences). Thus, it would have been obvious, in view of Lai, Ben, and Guerra, to configure Zhao’s system as claimed by building a 3D texture model of the input image including a foreground 3D model and a background 3D model for generating multiple view images, or a moving viewpoint motion picture, based on the extracted foreground, background, and depth data of the input image. The motivation is to provide a 3D textured model which requires less memory and can freely be oriented in 3D space (Lai, 1. Introduction and Figure 1(a) – a 3D textured model requires less memory and can freely be oriented in 3D space).

Claim 9 adds into claim 1 “generate the trimap based on a user input for the input image” (Zhao, 3.4 Discussion on Trimap – Figure 3(c) is the same color image overlaid by a trimap generated by user strokes, together with the corresponding matting result in Fig. 3(d)).
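The “hole painting” of claims 6-7 (filling background pixels that were occluded by the extracted foreground) can be approximated by diffusion-style inpainting; production systems typically use patch-based or learned inpainting instead. The function name and parameters below are illustrative, not from the record:

```python
import numpy as np

def fill_holes(image, hole_mask, iters=300):
    """Fill masked (occluded) pixels by repeatedly averaging their four
    neighbors: a crude diffusion-style inpainting. Known pixels stay fixed;
    only hole pixels are updated each iteration. Assumes holes lie away
    from the image border (np.roll wraps around edges)."""
    out = image.astype(float).copy()
    out[hole_mask] = out[~hole_mask].mean()       # coarse initialization
    for _ in range(iters):
        avg = (np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0) +
               np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 4.0
        out[hole_mask] = avg[hole_mask]
    return out
```

The same routine applies to the background depth map as well as the background color image, which is how claim 7 uses hole painting on both.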
Claim 10 adds into claim 1 “automatically generate the trimap based on the input image” (Zhao, 4 Adaptive Trimap Generation – To solve the problems caused by an improper trimap, in this section we present an approach that automatically generates an adaptive trimap in three steps: initializing, dividing, and refining).

Claims 11-17 and 19-20 claim a method based on the apparatus of claims 1-10; therefore, they are rejected under a similar rationale.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHU K NGUYEN, whose telephone number is (571) 272-7645. The examiner can normally be reached M-F, 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel F. Hajnik, can be reached at (571) 272-7642. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/PHU K NGUYEN/
Primary Examiner, Art Unit 2616

Prosecution Timeline

Sep 15, 2023 – Application Filed
Jun 22, 2025 – Non-Final Rejection (§103)
Sep 19, 2025 – Response Filed
Nov 03, 2025 – Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602147: ZOOM ACTION BASED IMAGE PRESENTATION (2y 5m to grant; granted Apr 14, 2026)
Patent 12602874: FRAGMENTATION MODEL GENERATION METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM (2y 5m to grant; granted Apr 14, 2026)
Patent 12602836: METHOD TO GENERATE DISPLACEMENT FOR SYMMETRY MESH (2y 5m to grant; granted Apr 14, 2026)
Patent 12599485: SYSTEMS AND METHODS FOR ORTHOPEDIC IMPLANTS (2y 5m to grant; granted Apr 14, 2026)
Patent 12597206: MECHANICAL WEIGHT INDEX MAPS FOR MESH RIGGING (2y 5m to grant; granted Apr 07, 2026)

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 86%
With Interview: 93% (+7.3%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 1184 resolved cases by this examiner. Grant probability derived from career allow rate.
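The headline percentages follow plausibly from the reported career counts; the tool’s exact model is not disclosed, and treating the interview lift as a simple additive adjustment is an assumption made here for illustration:

```python
# Hypothetical reconstruction of the dashboard's headline numbers from the
# career counts it reports; the tool's actual model is not disclosed.
granted, resolved = 1019, 1184          # from the examiner's career history
allow_rate = granted / resolved         # ~0.861, shown as 86%
interview_lift = 7.3                    # percentage points, from interview stats
with_interview = allow_rate * 100 + interview_lift  # ~93.4, shown as 93%
print(round(allow_rate * 100), round(with_interview))  # prints: 86 93
```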
