Prosecution Insights
Last updated: April 19, 2026
Application No. 18/592,313

APPLYING AND BLENDING NEW TEXTURES TO SURFACES ACROSS FRAMES OF A VIDEO SEQUENCE

Status: Final Rejection (§103)
Filed: Feb 29, 2024
Examiner: BARHAM, RYAN ALLEN
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Adobe Inc.
OA Round: 2 (Final)

Grant Probability: 54% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 54% of resolved cases (7 granted / 13 resolved; -8.2% vs TC avg)
Interview Lift: strong, +60.0% among resolved cases with interview
Avg Prosecution: 2y 8m typical timeline, with 19 applications currently pending
Career History: 32 total applications across all art units

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 45.4% (+5.4% vs TC avg)
§112: 2.8% (-37.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 13 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-5, 7-9, 11-12, 14-16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Thurow (EP 3111647 B1) in view of Wang (US 20220292649 A1).

Regarding claim 1, Thurow teaches a method comprising:

obtaining a new texture to apply to a selected region of a first video frame of a video sequence (par. 0024: "The functions herein may employ texture mapping techniques which enable the use of existing software and hardware methods in the field of texture mapping to reduce computational complexity. Texture mapping is a technique wherein a three-dimensional (3D) surface is defined in computer memory with reference to a predefined 3D frame of reference, and a two-dimensional (2D) bitmap image termed a 'texture' or 'texture bitmap' is projected onto the 3D surface in accordance with a predefined texture map to generate a 3D textured surface which may be termed a 'textured surface'.");

generating a mesh for the selected region of the first video frame of the video sequence, wherein the mesh includes a plurality of control points arranged within the mesh (par. 0033: "Prior to modification based on the optical properties of the camera, panel vertices may be implemented as a quad mesh 400 as shown in FIG. 4");

determining control point location data for each of the plurality of control points for additional video frames of the video sequence (par. 0024: "The texture map does not, in general, define a one-to-one relationship of each and every pixel of the 2D texture to the 3D surface, but rather defines a mapping of a subset of points in the 2D texture to the vertices of the 3D surface. Projection of the 2D texture to the 3D surface employs interpolation between the vertices based on corresponding content of the 2D texture.");

generating warped video frames by applying the new texture to the additional video frames of the video sequence using the control point location data for the additional video frames of the video sequence (par. 0028: "The 3D surface panel associated with each video feed may be modified, or warped, in order to correct lens distortion in the video frames of the video feed, caused by the optical properties of the corresponding camera, when the video frames are projected as texture bitmaps onto the modified panel.");

generating blended video frames by blending the new texture in the warped video frames (par. 0047: "Computer graphics methods may also be used to provide blending of adjacent images in the overlapping border area, a task commonly done using computer vision methods."); and

providing a modified version of the video sequence using the generated blended video frames (par. 0064: "FIG. 17 illustrates the steps in the process. Images from neighbouring cameras in an array were projected onto corresponding panels warped to correct lens distortion to produce texture panels with lens distortion correction 1710. The textured panels were then visually aligned by manipulation of the underlying panels to produce a stitched textured panels with lens distortion correction 1720. These were then alpha-composited to produce blended, stitched textured panels with lens distortion correction 1730. Finally, corresponding positive and negative difference textures were generated and projected onto the respective panels to produce stitched, blended, and colour-normalized textured panels with lens distortion correction - a panoramic image 1740.").

Thurow fails to teach wherein the blending includes gradient-domain blending of the new texture with a corresponding video frame and adding high-frequency residuals computed from the corresponding video frame.

Wang teaches a method comprising:

obtaining a new texture to apply to a selected region of a first video frame of a video sequence (par. 0025: "The video editor copies the 'fifty-yard line' object, subject to these motion constraints, and thereby modify the 'fifty-yard line' object to include the UHD blades of grass or an axial rotation. In certain cases, the video editor determines that a nearby 3D object or a sub-region of the 'fifty-yard line' object requires modification. Such a 3D 'yardage marker' object, corresponding to the 'fifty-yard line' object, requires modification to ensure the 3D 'yardage marker' object does not appear with a geometric distortion (e.g., a parallax effect, pulling effect (e.g., a stretched background image), perspective distortion, warp, axial rotation, radial distortion, barrel distortion, pincushion, asymmetry, compression, elongation, texture gradient, image gradient, etc.).");

determining control point location data for each of the plurality of control points for additional video frames of the video sequence (par. 0017: "Certain aspects involve video inpainting using motion constraints based on sparse feature points or motion values. For instance, a video editor assists with modifying a target region of a video, which includes portions of video frames depicting an object to be removed or modified, by using the computed motion of a scene depicted in the video frames to identify content to be copied into the target region.");

generating warped video frames by applying the new texture to the additional video frames of the video sequence using the control point location data for the additional video frames of the video sequence (par. 0086: "In this example, the weighting function ω_grad is used to provide a weight to gradient constraints based on their respective textures. For instance, the weighting function ω_grad is designed to provide a higher weighted value to gradient constraints that correspond to a hole region with less texture. Further, the weighting function ω_grad is also designed to provide a lower weighted value to gradient constraints that correspond to a hole region with more texture.");

generating blended video frames by blending the new texture in the warped video frames, wherein the blending includes gradient-domain blending of the new texture with a corresponding video frame and adding high-frequency residuals computed from the corresponding video frame (par. 0058: "The image mixer 114 combines the abovementioned information to arrange and generate images depicting the inpainted target regions within each video frame of the set of video frames. The image mixer 114 outputs the generated images (e.g., modified scene 124) to one or more computing devices. It should be appreciated that the image mixer 114 generates the images by blending, layering, overlaying, merging, slicing, or any other suitable audio visual integration technique."); and

providing a modified version of the video sequence using the generated blended video frames (par. 0058, as above).

Wang fails to teach generating a mesh for the selected region of the first video frame of the video sequence, wherein the mesh includes a plurality of control points arranged within the mesh.

It would have been obvious to one of ordinary skill in the art to introduce the warped frames of Wang into the image-stitching method taught by Thurow, as both are in the same field of endeavor of video frame image editing. Wang makes frequent mention of such frame-warping as an element of image stitching (par. 0047: "the motion estimation engine 136 uses sparse feature points to avoid geometric distortions such as parallax or pulling effects, perspective or radial distortions, warping, axial rotations, asymmetries, compressions, etc."), which demonstrates that such an aspect is well-known in the art.

Regarding claim 2, Thurow and Wang teach the method of claim 1. Thurow further teaches wherein determining the control point location data for each of the plurality of control points for the additional video frames of the video sequence further comprises, for each pair of consecutive video frames of the video sequence:

determining motions of each of the plurality of control points from the first video frame of the video sequence to a second video frame of the video sequence from an optical flow mapping of the video sequence (par. 0036: "Having generated a modified, warped quad mesh of the panel, correction of lens distortion in a corresponding video frame is performed automatically when the video frame is projected as a texture bitmap onto the panel wherein the texture bitmap is piecewise mapped to the quad mesh at discrete points in the map, namely the vertices of the quads."),

generating a transformation function representing deformation of the plurality of control points using the determined motions of each of the plurality of control points (par. 0034: "The translation required for each vertex may be determined according to any suitable method. For example, a function of the openCV library, for example cvInitUndistMap(in: A, in: DistortionCoefficients, Out: mapX, out: mapY), may be used. Alternative methods are possible."),

generating locations for each of the plurality of control points in the second video frame of the video sequence by warping the plurality of control points from the first video frame of the video sequence using the generated transformation function (par. 0034: "each vertex of the quad mesh is translated in the plane of the panel to a location corresponding to where the corresponding texture bitmap pixel would appear in an image but for lens distortion."), and

storing the generated locations for each of the plurality of control points in the second video frame of the video sequence as the control point location data (par. 0067: "The machine-readable medium can contain various sets of instructions, code sequences, configuration information, or other data, which, when executed, cause a processor to perform steps in a method according to an embodiment of the invention.").
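The claim 1 and claim 2 limitations, taken together, describe a concrete pipeline: lay a mesh of control points over the selected region, then advance those control points from frame to frame using an optical flow mapping. The sketch below (Python with OpenCV and NumPy) is only an illustration of that general technique as recited in the claims; the function names, grid spacing, and Farneback parameters are assumptions made for illustration, not details taken from the application or the cited references.

    import cv2
    import numpy as np

    def make_mesh(region_bbox, step=20):
        """Regular grid of control points covering the selected region (x0, y0, x1, y1)."""
        x0, y0, x1, y1 = region_bbox
        xs, ys = np.meshgrid(np.arange(x0, x1 + 1, step), np.arange(y0, y1 + 1, step))
        return np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)

    def track_control_points(frame_a, frame_b, points):
        """Advance (N, 2) control-point locations from frame_a to frame_b via dense optical flow."""
        gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
        # flow[y, x] = (dx, dy): per-pixel motion from frame_a to frame_b.
        flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                            pyr_scale=0.5, levels=3, winsize=15,
                                            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        xs = np.clip(points[:, 0].round().astype(int), 0, flow.shape[1] - 1)
        ys = np.clip(points[:, 1].round().astype(int), 0, flow.shape[0] - 1)
        # Each control point inherits the flow vector sampled at its current location.
        return points + flow[ys, xs]

Chaining this update over consecutive frame pairs yields control point location data for every frame of the sequence; a per-frame warp of the new texture can then be interpolated from the displaced grid, which is the role the rejection assigns to Thurow's warped quad mesh.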
Regarding claim 4, Thurow and Wang teach the method of claim 2. Thurow further teaches wherein generating the warped video frames by applying the new texture to the additional video frames of the video sequence using the control point location data for the additional video frames of the video sequence further comprises:

determining first control point location data for the first video frame and second control point location data for a target video frame of the video sequence from the generated control point location data (par. 0030: "an original pixel point (x,y) may be related to a corrected pixel point (x_corrected, y_corrected) as follows: ... wherein r is a measure of the distance between the original point and the distortion center.");

generating a warping function using the first control point location data and the second control point location data (par. 0032: "The intrinsic camera parameter matrix A and the distortion coefficients Distortion_coefficients may be used to modify, or warp, the panel associated with the corresponding camera in order to correct lens distortion"); and

warping the new texture from the first video frame to the target video frame using the generated warping function (par. 0034: "In order to provide lens distortion correction, the panel quad mesh is then modified, or warped, based on the intrinsic camera parameter matrix A and the distortion coefficients.").

Regarding claim 5, Thurow and Wang teach the method of claim 4. Thurow further teaches wherein generating the blended video frames by blending the new texture in the warped video frames further comprises, for each video frame of the additional video frames of the video sequence:

generating a preliminary blend by blending the new texture with the region in the video frame (par. 0049: "Blending of the two panels in the boundary area may then be performed by alpha compositing in the boundary area. In other words, for one or both of the panels, an alpha value of the panel pixels in the boundary area may be assigned so as to decrease from the boundary to the panel edge, thereby increasing the texture panel's transparency progressively from the boundary edge to the panel edge."),

computing high-frequency residuals lost during the generating of the preliminary blending by blending a transparent image with the video frame (par. 0050: "When neighbouring panels are in a common plane, they may be offset slightly in a direction orthogonal to that plane (e.g. along a z-axis) to provide for overlap as to assist blending in the boundary area. In this way, blending is performed in the overlapping boundary area of the first and second panels, with the first panel positioned above, or in front of, the second panel, or vice versa, where the alpha values of the first panel in the boundary area are lower than 1.0 thus rendering the textured panel transparent allowing the second textured panel to be visible partly in the boundary area. The lower the alpha value, the more the second texture panel becomes visible from behind the first textured panel."), and

computing a final blend of the new texture with the region in the video frame by merging the preliminary blend and the computed high-frequency residuals (par. 0051: "with reference to FIG. 8, the method 300 may be continued by modifying, in each boundary area of neighbouring panels, a transparency in a corresponding upper panel to blend the corresponding corrected calibration images in the boundary area (step 830).").
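The limitation argued to distinguish over Thurow is a blend that operates in the gradient domain and then restores high-frequency residuals computed from the underlying frame (claims 1 and 5). The following sketch shows one generic way such a blend could be realized using OpenCV's Poisson compositing plus a blur-based residual; it illustrates the claimed concept only, and the specific choices here (seamlessClone, a Gaussian high-pass residual, a hard region mask) are assumptions rather than the applicant's or the references' actual implementation.

    import cv2
    import numpy as np

    def blend_texture(frame, warped_texture, mask):
        """Gradient-domain blend of a warped texture into a frame, then add back
        high-frequency detail computed from the original frame inside the region.

        frame, warped_texture: uint8 BGR images of the same size.
        mask: uint8 single-channel mask of the selected region (255 inside, 0 outside),
              assumed not to touch the frame border.
        """
        # Keep the cloned patch aligned with the frame by centering on the mask.
        x, y, w, h = cv2.boundingRect(mask)
        center = (x + w // 2, y + h // 2)

        # Preliminary blend: Poisson (gradient-domain) compositing of the texture.
        preliminary = cv2.seamlessClone(warped_texture, frame, mask, center, cv2.NORMAL_CLONE)

        # High-frequency residuals of the original frame (detail the smooth blend loses).
        low = cv2.GaussianBlur(frame, (0, 0), sigmaX=3)
        residual = frame.astype(np.float32) - low.astype(np.float32)

        # Final blend: merge the preliminary blend with the residuals inside the region only.
        out = preliminary.astype(np.float32)
        inside = mask > 0
        out[inside] += residual[inside]
        return np.clip(out, 0, 255).astype(np.uint8)

Claim 5 recites computing the residuals by blending a transparent image with the video frame; the Gaussian high-pass above is simply the most compact stand-in for "detail lost by the preliminary blend."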
Regarding claim 7, Thurow and Wang teach the method of claim 1. Thurow further teaches wherein generating the blended video frames by blending the new texture in the warped video frames comprises: receiving a second input including a prompt, the prompt indicating a requested texture for the selected region of the first video frame (par. 0044: "using the input and display means the user may translate, rotate, or scale any individual textured panel, which thereby defines 3D coordinates of the panel in the predefined 3D frame of reference. In particular, the user may use the input and display means to position adjacent corrected video frames - that is, corrected video frames corresponding to cameras having overlapping fields of view - so as achieve visual overlap of the adjacent corrected video frames along bordering portions thereof, and in this way define the relative position of the underlying panels corresponding to the corrected video frames in the 3D frame of reference in computer memory.").

Claim 8 is functionally identical to claim 1, and differs only in that it outlines a non-transitory computer-readable medium storing executable instructions rather than a method. It is thereby rejected on the same basis as claim 1.

Claim 9 is functionally identical to claim 2, and differs only in that it depends on claim 8 rather than claim 1. It is thereby rejected on the same basis as claim 2.

Claim 11 is functionally identical to claim 4, and differs only in that it depends on claim 9 rather than claim 2. It is thereby rejected on the same basis as claim 4.

Claim 12 is functionally identical to claim 5, and differs only in that it depends on claim 11 rather than claim 4. It is thereby rejected on the same basis as claim 5.

Claim 14 is functionally identical to claim 7, and differs only in that it depends on claim 8 rather than claim 1. It is thereby rejected on the same basis as claim 7.

Claim 15 is functionally identical to claim 1, and differs only in that it outlines a system rather than a method. It is thereby rejected on the same basis as claim 1.

Claim 16 is functionally identical to claim 2, and differs only in that it depends on claim 15 rather than claim 1. It is thereby rejected on the same basis as claim 2.

Claim 18 is functionally identical to claim 4, and differs only in that it depends on claim 16 rather than claim 2. It is thereby rejected on the same basis as claim 4.

Claim 19 is functionally identical to claim 5, and differs only in that it depends on claim 18 rather than claim 4. It is thereby rejected on the same basis as claim 5.

Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Thurow (EP 3111647 B1) and Wang (US 20220292649 A1) as applied to claims 1-2, 8-9, and 15-16 above, and further in view of Geissler (GB 2609996 A).

Regarding claim 3, Thurow and Wang teach the method of claim 2. Geissler further teaches:

determining a reverse optical flow location of a control point in the first video frame of the video sequence using a reverse optical flow mapping from the second video frame of the video sequence to the first video frame of the video sequence (p. 31, lines 19-29: "Preferably, for every individual frame that is aligned, the assessment will be subject to the following constraints: (a) Applying neighbour frame constraint The assumption is that neighbouring frames should have: (i) similar transformations for alignment, and (ii) similar scores that quantify the alignment through image alignment metrics (e.g. mutual information or cross correlation) If a frame violates these constraints, the frame can be flagged for further human operator assessment during or after the entire video sequence is stitched.");

calculating a distance between an original location of the control point and a reverse optical flow location of the control point (p. 32, lines 6-8: "the alignment procedure searches for a transformation that best aligns the virtual and the camera based on a numeric metric (e.g. mutual information, cross correlation, sum squared error for corresponding points)."); and

removing the control point from the plurality of control points when the calculated distance is greater than a threshold value (p. 32, lines 10-11: "frames with numeric metrics that exceed a certain threshold can be flagged.").

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Geissler's control point calculations into Thurow's image stitching method, as both are in the same field of endeavor of image stitching. Geissler's outlier calculations are an obvious addition to Thurow, as they allow for more refined adjustments of frame data.

Claim 10 is functionally identical to claim 3, and differs only in that it depends on claim 9 rather than claim 2. It is thereby rejected on the same basis as claim 3.

Claim 17 is functionally identical to claim 3, and differs only in that it depends on claim 16 rather than claim 2. It is thereby rejected on the same basis as claim 3.
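Claim 3 recites a forward-backward consistency test: push a control point into the next frame with the forward flow, pull it back with the reverse flow, and discard it when the round-trip distance exceeds a threshold. A minimal sketch of that test is below; the helper name, nearest-pixel flow sampling, and the 1.5-pixel default threshold are illustrative assumptions, not details from the application or from Geissler.

    import numpy as np

    def filter_control_points(points_a, flow_ab, flow_ba, threshold=1.5):
        """Keep only control points that pass a forward-backward optical-flow check.

        points_a: (N, 2) control-point locations in frame A.
        flow_ab, flow_ba: dense (H, W, 2) flow fields, A->B and B->A respectively.
        """
        def sample(flow, pts):
            xs = np.clip(pts[:, 0].round().astype(int), 0, flow.shape[1] - 1)
            ys = np.clip(pts[:, 1].round().astype(int), 0, flow.shape[0] - 1)
            return flow[ys, xs]

        points_b = points_a + sample(flow_ab, points_a)        # forward into frame B
        points_back = points_b + sample(flow_ba, points_b)     # reverse back into frame A
        dist = np.linalg.norm(points_back - points_a, axis=1)  # round-trip error per point
        return points_a[dist <= threshold]

Points that fail the check are simply dropped from the mesh, which matches the claim's "removing the control point from the plurality of control points when the calculated distance is greater than a threshold value."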
Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Thurow (EP 3111647 B1) and Wang (US 20220292649 A1) as applied to claims 1-2, 8-9, and 15-16 above, and further in view of Zhou (US Patent No. 7,831,074).

Regarding claim 6, Thurow and Wang teach the method of claim 2. Zhou further teaches wherein generating the transformation function representing the deformation of the plurality of control points using the determined motions of each of the plurality of control points further comprises:

generating a first transformation function representing rigid deformation (col. 7, lines 32-34: "To reduce appearance variation, each video frame is aligned with respect to a mean shape using a rigid similarity transform.") and a second transformation function representing non-rigid deformation (col. 4, lines 19-25: "As shown in image frames 202-212, the LV endocardium presents severe appearance changes over a cardiac cycle due to nonrigid deformation, imaging artifacts like speckle noise and signal dropout, movement of papillary muscle (which is attached to the LV endocardium but not a part of the wall), respiratory interferences, unnecessary probe movement, etc."); and

generating the transformation function by applying a weighting to the first transformation function and the second transformation function (col. 4, lines 55-62: "In accordance with an embodiment of the present invention, a two class LogitBoost algorithm is outlined in FIG. 3. The crucial step on the Logitboost algorithm is step 2(b) which requires fitting a weighted LS regression of z_i to x_i with weights w_i. The LogitBoost algorithm acts as a feature selection oracle: picking up from the structural space F the weak learner (or feature function) that minimizes its weighted LS cost ∈(f).").

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the transformation functions in Zhou into Thurow's method, as both are in the same field of endeavor of image modification based on other images, and the functions of Zhou have long been known in the art.

Claim 13 is functionally identical to claim 6, and differs only in that it depends on claim 9 rather than claim 2. It is thereby rejected on the same basis as claim 6.

Claim 20 is functionally identical to claim 6, and differs only in that it depends on claim 16 rather than claim 2. It is thereby rejected on the same basis as claim 6.
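Claim 6 combines a rigid deformation model and a non-rigid deformation model through a weighting. One generic way to express that combination over control-point correspondences is sketched below; the similarity fit via estimateAffinePartial2D, the use of the raw matches as the non-rigid model, and the single scalar rigidity weight are assumptions made for illustration, not the claimed algorithm and not Zhou's LogitBoost weighting.

    import cv2
    import numpy as np

    def weighted_deformation(src_pts, dst_pts, rigidity=0.7):
        """Blend a rigid (similarity) transform with a fully non-rigid per-point map.

        src_pts, dst_pts: (N, 2) corresponding control points in two frames.
        rigidity: weight on the rigid model (1.0 = purely rigid, 0.0 = purely non-rigid).
        Returns the weighted target location of each control point.
        """
        src = np.asarray(src_pts, dtype=np.float32)
        dst = np.asarray(dst_pts, dtype=np.float32)

        # First transformation function: best-fit rotation, uniform scale, and translation.
        M, _ = cv2.estimateAffinePartial2D(src, dst)
        rigid = src @ M[:, :2].T + M[:, 2]

        # Second transformation function: non-rigid, each point moves to its matched location.
        nonrigid = dst

        # Weighted combination of the two deformation models.
        return rigidity * rigid + (1.0 - rigidity) * nonrigid

A single scalar weight is only the simplest possible reading of "applying a weighting to the first transformation function and the second transformation function."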
Response to Arguments

Applicant's arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN A BARHAM whose telephone number is (571) 272-4338. The examiner can normally be reached Mon-Fri, 8:30am-5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN ALLEN BARHAM/
Examiner, Art Unit 2613

/XIAO M WU/
Supervisory Patent Examiner, Art Unit 2613

Prosecution Timeline

Feb 29, 2024: Application Filed
Oct 31, 2025: Non-Final Rejection — §103
Feb 05, 2026: Response Filed
Feb 05, 2026: Examiner Interview Summary
Feb 05, 2026: Applicant Interview (Telephonic)
Mar 05, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12564345: MEDICAL APPARATUS, AND IMAGE GENERATION METHOD FOR VISUALIZING TEMPORAL TRENDS OF BIOMAGNETIC DATA ON AN ORGAN MODEL (granted Mar 03, 2026; 2y 5m to grant)
Patent 12548109: Preserving Tumor Volumes for Unsupervised Medical Image Registration (granted Feb 10, 2026; 2y 5m to grant)
Patent 12530836: OBJECT TRANSITION BETWEEN DEVICE-WORLD-LOCKED AND PHYSICAL-WORLD-LOCKED (granted Jan 20, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 99% (+60.0%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
