Prosecution Insights
Last updated: April 19, 2026
Application No. 18/687,764

VIDEO PROCESSING METHOD AND APPARATUS, DEVICE AND STORAGE MEDIUM

Status: Final Rejection (§103), OA Round 2 (Final)
Filed: Feb 28, 2024
Examiner: WEI, XIAOMING
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.

Grant probability: 82% (Favorable) · Expected OA rounds: 3-4 · Time to grant: 2y 5m · With interview: 99%

Examiner Intelligence

Career allow rate: 82% (28 granted / 34 resolved), above average (+20.4% vs Tech Center average)
Interview lift: strong, +26.1% (grant rate among resolved cases with an interview vs. without)
Typical timeline: 2y 5m average prosecution; 24 applications currently pending
Career history: 58 total applications across all art units
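The headline figures above are simple ratios over the examiner's resolved cases. As an illustration only, the sketch below recomputes them from hypothetical per-case records chosen to match the stated 28 granted / 34 resolved split; the interview split is invented for the example and is not the tool's actual data.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

# Hypothetical records: 28 grants out of 34 resolved cases.
# The interview breakdown (19 with, 15 without) is assumed.
cases = (
    [ResolvedCase(granted=True, had_interview=True)] * 18
    + [ResolvedCase(granted=True, had_interview=False)] * 10
    + [ResolvedCase(granted=False, had_interview=True)] * 1
    + [ResolvedCase(granted=False, had_interview=False)] * 5
)

def allow_rate(subset):
    # Fraction of cases in `subset` that ended in a grant.
    return sum(c.granted for c in subset) / len(subset)

career = allow_rate(cases)                                          # 28/34, ~82%
with_iv = allow_rate([c for c in cases if c.had_interview])
without_iv = allow_rate([c for c in cases if not c.had_interview])
lift = with_iv - without_iv                                         # interview lift
```

With these assumed records the career rate reproduces the 82% figure exactly; the lift comes out close to, but not exactly, the dashboard's +26.1%, since the underlying interview split is not published.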

Statute-Specific Performance

§101: 7.1% (-32.9% vs TC avg)
§103: 83.6% (+43.6% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§112: 2.2% (-37.8% vs TC avg)

Comparison baseline is the Tech Center average estimate. Based on career data from 34 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Amendment

The office action is in response to Applicant's amendment filed 03/17/2026, which has been entered and made of record. Claims 1-2 and 15-16 have been amended. Claims 8-14 have been canceled. Claims 18-28 have been newly added. Claims 1-7, 15-16 and 18-28 are pending in the application. The claim interpretation under 35 U.S.C. 112(f) has been withdrawn based on the cancellation of claims 8-14.

Response to Arguments

Applicant's arguments, filed 03/17/2026, with respect to the rejection(s) under 35 U.S.C. 103 have been fully considered; however, they are not persuasive.

Applicant argues: The MPEP makes clear: "Examiners may not dissect a claimed invention into discrete elements and then evaluate the elements in isolation. Instead, the claim as a whole must be considered." MPEP 2103 (emphasis added). Here, the Office improperly dissects the claim into discrete elements and evaluates them in isolation.

Examiner respectfully disagrees. First, Chuang teaches using the optical flow to advect textures in paragraphs [0243-0244]: "the optical flow can indicate the directions of motion of image data regions from the projected source texture to the projected target texture …… an intermediate source texture can be determined based at least in part on the projected source texture and the optical flow, e.g., by advecting the projected source texture according to the optical flow."
Second, Horn teaches exactly how to compute optical flow based on the motion of brightness in an image by using the brightness of each pixel. On page 6, third paragraph, Horn teaches "Let the measured brightness be Ei,j,k at the intersection of the ith row and jth column in the kth image frame," and on page 24, sixth paragraph, "A method was developed for computing optical flow from a sequence of images. It is based on the observation that the flow velocity has two components and that the basic equation for the rate of change of image brightness provides only one constraint. Smoothness of the flow was introduced as a second constraint. An iterative method for solving the resulting equation was then developed."

Finally, Chuang and Horn are in the closely related field of optical flow. Chuang teaches using optical flow; Horn teaches how to define optical flow. It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Horn's computation of optical flow with the method of Chuang.

Applicant argues: Horn explicitly discloses that the brightness of a particular point is constant and does not change. Horn does not disclose the claimed target pixel position that is a pixel position where a brightness value changes over time at a preset brightness change rate. Horn also does not disclose the target brightness of the target pixel position comprising a brightness value at the current moment and being changed from initial brightness at an initial moment. Furthermore, Horn does not disclose a mapping relationship in which initial brightness and a brightness change rate of each pixel position are preset as required by claim 1.

Examiner respectfully disagrees. Horn does teach "the brightness of a particular point in the pattern is constant, so that dE/dt = 0" on page 3, last paragraph. This is the first constraint, the brightness constancy assumption, taught by Horn. However, this constraint indicates that as the particular points in the pattern move (flow), the brightness of those points remains the same. This is different from constant brightness at a fixed pixel location.

First, Horn does teach the optical flow velocity u and v at image pixel locations. The optical flow is the preset brightness change rate. Chuang teaches the advect procedure on textures based on optical flow in Table 4, line 9, and paragraph [0244]: "an intermediate source texture can be determined based at least in part on the projected source texture and the optical flow, e.g., by advecting the projected source texture according to the optical flow. Examples are discussed herein, e.g., with reference to the texture-synthesis module 208; Table 4, line 9; procedure Advect; and Eqs. (29)-(34)." Horn teaches optical flow based on pixel locations and time steps; it defines the rate of change at pixel locations and time steps. Chuang's teaching of using optical flow on texture via an advection step discloses the target brightness of the target pixel position comprising a brightness value at the current moment and being changed from initial brightness at an initial moment. Furthermore, Horn does teach the optical flow as the brightness change velocity u and v at a pixel location, and in order to compute the optical flow, the brightness values of the initial textures are used.

Applicant argues for dependent claims 2, 3, 6 and the newly added dependent claims, which all depend on independent claims 1, 15 and 16. Examiner respectfully disagrees; please refer above for the detailed rationale.

Conclusions: The rejections set forth in the previous Office Action are shown to have been proper, and the claims are rejected below. New citations and parenthetical remarks can be considered new grounds of rejection, and such new grounds of rejection are necessitated by the Applicant's amendments to the claims. Therefore, the present Office Action is made final.
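For readers unfamiliar with the advection step discussed above: advecting a texture along an optical flow is commonly implemented as a backward warp, where each output pixel samples the source texture at its upstream location. The sketch below is a generic nearest-neighbor version for illustration only; it is not Chuang's actual procedure Advect or Eqs. (29)-(34), which are not reproduced in the record.

```python
import numpy as np

def advect(texture, flow):
    """Backward-warp `texture` (H x W) along `flow` (H x W x 2, pixels).

    Each output pixel samples the source at its upstream location
    (x - u, y - v). Nearest-neighbor sampling is used for brevity;
    a real implementation would interpolate, e.g. bilinearly.
    """
    h, w = texture.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    return texture[src_y, src_x]

# A uniform flow of one pixel to the right shifts the texture right.
tex = np.zeros((4, 4)); tex[1, 1] = 1.0
flow = np.zeros((4, 4, 2)); flow[..., 0] = 1.0   # u = 1, v = 0
out = advect(tex, flow)
assert out[1, 2] == 1.0
```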
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 15, 16, 18 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Chuang et al. (US 20180012407 A1), hereinafter Chuang, in view of NPL Horn et al. ("Determining Optical Flow"), hereinafter Horn.

Regarding claim 1, Chuang teaches a video processing method (Chuang paragraph [0003] "This disclosure describes systems, methods, and computer-readable media for synthesizing processor-generated characters, e.g., for use in rendering computer-generated videos."), comprising:

acquiring three-dimensional reconstruction data of a video image (Chuang paragraph [0045] "Capture system 112 is configured to provide mesh data 120. Mesh data 120 can include vertices and edges, e.g., as described above, for a surface and/or volume of an actor captured during performance 118. Mesh data 120 is depicted as a random polygonal tessellation but is not restricted thereto. Mesh data 120 can include, e.g., triangles, quads, or other polys, in any combination. Mesh data 120 can include, e.g., at least one mesh per captured frame.") and a first texture image corresponding to a current moment (Chuang paragraph [0046] "Capture system 112 can also be configured to provide texture data 122……Texture data 122 can include, e.g., at least one texture per captured frame.");

determining …… of a target pixel position on the first texture image corresponding to the current moment based on a mapping relationship …… wherein …… comprises …… at the current moment and is changed from …… at an initial moment (Chuang teaches the source texture as the first texture image at the initial moment, and further teaches using an optical flow as a mapping relationship to generate an intermediate texture at a current moment, paragraph [0244] "an intermediate source texture can be determined based at least in part on the projected source texture and the optical flow, e.g., by advecting the projected source texture according to the optical flow.", Table 4, Line 9) ……;

adjusting …… of the target pixel position on the first texture image to …… so as to obtain a second texture image (Chuang teaches generating a synthetic texture as the second texture image based on the source texture and optical flow, paragraphs [0243-0244] "an optical flow can be determined based at least in part on the projected source texture and the projected target texture. For example, the optical flow can indicate the directions of motion of image data regions from the projected source texture to the projected target texture…….an intermediate source texture can be determined based at least in part on the projected source texture and the optical flow, e.g., by advecting the projected source texture according to the optical flow.", paragraph [0246] "the intermediate source texture and the intermediate target texture can be blended to determine the synthetic texture.");

and mapping the second texture image onto the video image based on the three-dimensional reconstruction data so as to obtain a target video image (Chuang paragraph [0252] "at block 1306, the synthetic meshes can be presented on a display, e.g., as video or still frame(s) of a production. The synthetic meshes can be textured with the respective synthetic textures for presentation. The synthetic meshes (and textures) can be presented sequentially.").

Chuang is not relied on for the below claim language: …… target brightness …… in which initial brightness and a brightness change rate of each pixel position are preset, …… the target brightness…… a brightness value…… initial brightness…… and the target pixel position is a pixel position where a brightness value changes over time at a preset brightness change rate …… brightness…… the target brightness……

Horn teaches …… target brightness …… in which initial brightness and a brightness change rate of each pixel position are preset, …… the target brightness…… a brightness value…… initial brightness…… and the target pixel position is a pixel position where a brightness value changes over time at a preset brightness change rate …… brightness…… the target brightness…… (Horn teaches a method of computing optical flow based on the rate of change of brightness, pixel location and time, Page 3, fourth paragraph, "We will derive an equation that relates the change in image brightness at a point to the motion of the brightness pattern. Let the image brightness at the point (x, y) in the image plane at time t be denoted by E(x, y, t).", Page 19 and Page 24, sixth paragraph, "A method was developed for computing optical flow from a sequence of images. It is based on the observation that the flow velocity has two components and that the basic equation for the rate of change of image brightness provides only one constraint. Smoothness of the flow was introduced as a second constraint. An iterative method for solving the resulting equation was then developed.", and page 6, third paragraph, "Let the measured brightness be Ei,j,k at the intersection of the ith row and jth column in the kth image frame").

Chuang and Horn are in the same field of endeavor, namely image processing. Horn teaches a method of computing optical flow based on the brightness rate of change. Chuang teaches using optical flow to generate synthetic textures. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Horn with the method of Chuang to achieve better rendering results.
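Horn's cited method combines the brightness-constancy constraint (Ex·u + Ey·v + Et = 0) with a smoothness constraint and solves the resulting equations iteratively. A simplified sketch of that iteration follows; the periodic edge handling, gradient estimator, and parameter choices are assumptions made for brevity, not Horn's exact discretization.

```python
import numpy as np

def horn_schunck(E1, E2, alpha=1.0, iters=50):
    """Estimate optical flow (u, v) between frames E1 and E2 via the
    classic Horn-Schunck iteration: the brightness-constancy residual
    Ex*u + Ey*v + Et, regularized by smoothness of the flow field."""
    E1 = E1.astype(float); E2 = E2.astype(float)
    Ey, Ex = np.gradient(E1)     # spatial brightness gradients (rows, cols)
    Et = E2 - E1                 # temporal brightness change
    u = np.zeros_like(E1); v = np.zeros_like(E1)

    def local_mean(f):
        # 4-neighbor average (periodic edges for simplicity)
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                       + np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(iters):
        ub, vb = local_mean(u), local_mean(v)
        # Jacobi-style update derived from the regularized objective
        t = (Ex * ub + Ey * vb + Et) / (alpha ** 2 + Ex ** 2 + Ey ** 2)
        u, v = ub - Ex * t, vb - Ey * t
    return u, v
```

As a sanity check, two identical frames have no temporal brightness change, so the estimated flow stays identically zero.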
Regarding claim 2, Chuang in view of Horn teaches the method according to claim 1, wherein the determining target brightness of the target pixel position on the first texture image corresponding to the current moment based on the mapping relationship comprises:

acquiring the initial brightness and the preset brightness change rate of the target pixel position according to the mapping relationship (Horn teaches defining an optical flow with a brightness rate of change at a pixel position; the optical flow is the preset brightness change rate. Chuang teaches using optical flow with an advection step to determine an intermediate source texture; the source texture defines the initial brightness of the target pixel position. It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of the brightness change rate of Horn with the source texture of Chuang. Chuang paragraph [0243] "an optical flow can be determined based at least in part on the projected source texture and the projected target texture. For example, the optical flow can indicate the directions of motion of image data regions from the projected source texture to the projected target texture.");

determining a brightness variation of the target pixel position corresponding to the current moment based on a time parameter at the current moment and the preset brightness change rate of the target pixel position (Chuang teaches generating a sequence of synthetic textures based on a time sequence, Figure 3, paragraph [0185] "Synthetic transition sequence 310, shown dashed for clarity, can include a plurality of synthetic frames 312(−k+1)-312(k−1) (individually or collectively 312) …… mesh-synthesis module 206 and/or texture-synthesis module 208 can determine frames 312 of synthetic transition sequence 310 so that playing back source frame 304(s k), followed by synthetic frames 312(−k+1), 312(−k+2), . . . , 312(−1), 312(0), 312(1), . . . , 312(k−2), 312(k−1) of sequence 310, followed by target frame 308(t+k) will provide a visually smooth transition from source frame 304(s−k) to target frame 308(t+k).");

and determining the target brightness of the target pixel position corresponding to the current moment based on the brightness variation of the target pixel position corresponding to the current moment and the initial brightness of the target pixel position (Chuang paragraph [0244] "an intermediate source texture can be determined based at least in part on the projected source texture and the optical flow, e.g., by advecting the projected source texture according to the optical flow. Examples are discussed herein, e.g., with reference to the texture-synthesis module 208; Table 4, line 9; procedure Advect; and Eqs. (29)-(34).").

Chuang and Horn are in the same field of endeavor, namely image processing. Horn teaches a method of computing optical flow based on the brightness rate of change. Chuang teaches using optical flow to generate synthetic textures. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Horn with the method of Chuang to achieve better rendering results.

Regarding claim 15, it recites similar limitations to claim 1 but in electronic device form. The rationale of the claim 1 rejection is applied to reject claim 15. In addition, Chuang teaches an electronic device, comprising: a processor and a memory having stored therein a computer program which, when executed by the processor (Chuang paragraph [0310] "A device comprising: a processor; and a computer-readable medium, e.g., a computer storage medium, having thereon computer-executable instructions, the computer-executable instructions upon execution by the processor configuring the device to perform operations as any of paragraphs H-N recites.").
Regarding claim 16, it recites similar limitations to claim 1 but in non-transitory computer-readable storage medium form. The rationale of the claim 1 rejection is applied to reject claim 16. In addition, Chuang teaches a non-transitory computer-readable storage medium having stored therein a computer program which, when executed by a processor (Chuang paragraph [0317] "The methods and processes described above can be embodied in, and fully automated via, software code modules executed by one or more general purpose computers or processors. The code modules can be stored in any type of computer-readable storage medium or other computer storage medium.").

Regarding claim 18, claim 18 has similar limitations to claim 2; therefore it is rejected under the same rationale as claim 2. Regarding claim 24, claim 24 has similar limitations to claim 2; therefore it is rejected under the same rationale as claim 2.

Claims 3, 19 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Chuang et al. (US 20180012407 A1), hereinafter Chuang, in view of NPL Horn et al. ("Determining Optical Flow"), hereinafter Horn, further in view of NPL Bellini et al. ("Time-varying Weathering in Texture Space"), hereinafter Bellini.

Regarding claim 3, Chuang in view of Horn teaches the method according to claim 1, wherein the acquiring the first texture image corresponding to the current moment comprises: but fails to teach randomly sampling, for the current moment, a texture on a pre-obtained reference texture image onto at least a portion of pixel positions of a preset template so as to obtain the first texture image corresponding to the current moment.

Bellini teaches randomly sampling, for the current moment, a texture on a pre-obtained reference texture image onto at least a portion of pixel positions of a preset template so as to obtain the first texture image corresponding to the current moment (Bellini teaches an input texture in Figure 7(a) as the pre-obtained reference texture image of time t, a square/rectangle shape as the preset template, and a texture randomly chosen as patch P in Figure 7, Page 7, left column, third paragraph, "The weathering process is illustrated in Figure 7. A patch P is randomly picked in texture ti (a).").

Chuang, Horn and Bellini are in the same field of endeavor, namely image processing. Bellini teaches a method to randomly choose a region of the source texture to generate a time-varying texture sequence, for ease of use and better rendering results (Bellini Page 2, right column, second paragraph, "Our method incorporates two notable advantages. First, in most cases it does not require user interaction or assistance, since the weathered and un-weathered regions of the input texture are automatically detected. Second, new structures within the texture can be created over time, in addition to smooth color variations."). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Bellini with the method of Chuang and Horn for ease of use and better rendering results.

Regarding claim 19, claim 19 has similar limitations to claim 3; therefore it is rejected under the same rationale as claim 3. Regarding claim 25, claim 25 has similar limitations to claim 3; therefore it is rejected under the same rationale as claim 3.

Claims 6, 22 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Chuang et al. (US 20180012407 A1), hereinafter Chuang, in view of NPL Horn et al. ("Determining Optical Flow"), hereinafter Horn, further in view of NPL Kwatra et al.
("Texturing Fluids"), hereinafter Kwatra.

Regarding claim 6, Chuang in view of Horn teaches the method according to claim 1, and further teaches wherein the three-dimensional reconstruction data comprises vertex coordinate information and normal direction information of a three-dimensional mesh (Chuang paragraphs [0033-0034] "Various aspects can provide visually seamless transitions between synthetic meshes and captured meshes. In some examples of meshes, edges can define where two mathematically continuous smooth surfaces meet or can connect vertices using straight lines or curves. A vertex can include a position along with other information such as color, normal vector and texture coordinates"); the mapping the second texture image onto the video image based on the three-dimensional reconstruction data so as to obtain the target video image comprises (Chuang paragraph [0252] "at block 1306, the synthetic meshes can be presented on a display, e.g., as video or still frame(s) of a production. The synthetic meshes can be textured with the respective synthetic textures for presentation. The synthetic meshes (and textures) can be presented sequentially."):

but fails to teach performing differential processing on the three-dimensional mesh so as to obtain a fragment in the three-dimensional mesh and an offset position of the fragment in the three-dimensional mesh; determining coordinates and a normal direction of the fragment based on the offset position and vertex coordinates and a normal direction of the three-dimensional mesh; sampling the second texture image based on the coordinates and the normal direction of the fragment; and mapping a texture, which is sampled, onto the fragment.

Kwatra teaches performing differential processing on the three-dimensional mesh so as to obtain a fragment in the three-dimensional mesh and an offset position of the fragment in the three-dimensional mesh (Kwatra teaches a geodesic distance as the offset position, and obtaining a mesh fragment based on the distance, Page 5, right column, fifth paragraph, "A vertex neighborhood in a mesh is defined as the set of vertices connected to each other and lying within a certain geodeisc distance – distance measured in the local orientation space of the mesh – to a central vertex. Given the vertex c as the center, the 2D location of a vertex in its neighborhood is computed as its displacement from c along the orientation field on the mesh."); determining coordinates and a normal direction of the fragment based on the offset position and vertex coordinates and a normal direction of the three-dimensional mesh (Kwatra Page 7, Figure 5, "VERTEX NEIGHBORHOOD CONSTRUCTION: A SET OF CONNECTED POINTS ON THE MESH ARE MAPPED ONTO THE 2D PLANE. THE ORIENTATION VECTOR AT THE CENTRAL VERTEX ALIGNS ITSELF WITH THE PRIMARY AXIS OF THE PLANE. EACH VERTEX MAPS TO A REAL-VALUED 2D LOCATION, AS SHOWN EXPLICITLY FOR THE YELLOW CIRCLED VERTEX."); sampling the second texture image based on the coordinates and the normal direction of the fragment (Kwatra Page 7, Figure 5 teaches deciding the 2D mapped location of a vertex on the second texture image based on the vertex coordinates and normal); and mapping a texture, which is sampled, onto the fragment (Kwatra Page 7, Figure 5, "GIVEN A PLANAR PIXEL NEIGHBORHOOD, THE COLOR AT A MESH VERTEX IS DETERMINED THROUGH BILINEAR INTERPOLATION OF THE FOUR PIXELS THAT ITS 2D MAPPED LOCATION LIES BETWEEN.").

Chuang, Horn and Kwatra are in the same field of endeavor, namely image processing.
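The bilinear lookup described in Kwatra's Figure 5 caption (a vertex color blended from the four pixels surrounding its real-valued 2D-mapped location) can be sketched as follows; the function name and the edge-clamping behavior are illustrative assumptions, not Kwatra's implementation.

```python
import numpy as np

def sample_bilinear(img, x, y):
    """Bilinearly interpolate `img` (H x W) at real-valued (x, y),
    blending the four surrounding pixels. Out-of-range neighbors
    are clamped to the image border (an assumption for this sketch)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

# Sampling at the center of a 2x2 image averages all four pixels.
img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
center = sample_bilinear(img, 0.5, 0.5)   # (0 + 1 + 2 + 3) / 4 = 1.5
```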
Kwatra teaches a method to map synthetic texture to a dynamically changing 3D surface to improve rendering results (Kwatra Page 10, right column, third paragraph, "Our work successfully demonstrates transport of textures along 3D fluid flows, which undergo complex topological changes between successive frames, while preserving visual similarity between the input and the output textures."). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kwatra with the method of Chuang and Horn to improve rendering results.

Regarding claim 22, claim 22 has similar limitations to claim 6; therefore it is rejected under the same rationale as claim 6. Regarding claim 28, claim 28 has similar limitations to claim 6; therefore it is rejected under the same rationale as claim 6.

Allowable Subject Matter

Claims 4, 5, 7, 20, 21, 23 and 26-27 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claims 4, 20 and 26, the closest prior art of NPL Neyret et al. ("Advected Textures"), hereinafter Neyret, teaches using a Perlin noise texture in Figure 2 to achieve texture animation.
However, Neyret fails to teach the below combined limitation as a whole: "wherein the randomly sampling the texture on the pre-obtained reference texture image onto at least a portion of pixel positions of the preset template comprises: randomly selecting, for any pixel position of the at least a portion of pixel positions of the preset template, one pixel position from a pre-obtained noise texture image as a sampling position, the noise texture image comprising information of random coordinates corresponding to the sampling position; extracting the random coordinates corresponding to the sampling position from the noise texture image; and sampling a texture at a position in the reference texture image corresponding to the random coordinates onto the any pixel position." Furthermore, no prior art of record, either alone or in combination, teaches the above limitation as a whole. Therefore, claims 4, 20 and 26 are considered allowable.

Regarding claims 5, 21 and 27, the closest prior art of Neyret teaches using a Perlin noise texture in Figure 2 to achieve texture animation. However, Neyret fails to teach the combined limitation below as a whole: "wherein the randomly sampling the texture on the pre-obtained reference texture image onto at least a portion of pixel positions of the preset template comprises: performing, for any pixel position of the at least a portion of pixel positions of the preset template, coordinate offset processing on coordinates of the any pixel position on the preset template so as to obtain offset coordinates corresponding to the any pixel position; acquiring random coordinates at a corresponding position from a preset noise texture image based on the offset coordinates; and capturing a texture at a position in the reference texture image corresponding to the random coordinates onto the any pixel position." Furthermore, no prior art of record, either alone or in combination, teaches the above limitation as a whole. Therefore, claims 5, 21 and 27 are considered allowable.

Regarding claims 7 and 23, the closest prior art of Kwatra teaches aligning the orientation vector of the central vertex with the primary axis of the texture plane. However, Kwatra fails to teach the below combined limitation as a whole: "in a case where a distance between a normal and a first coordinate axis in a preset three-dimensional coordinate system is the shortest, mapping the texture, which is sampled, onto the fragment along a direction of the first coordinate axis." Furthermore, no prior art of record, either alone or in combination, teaches the above limitation as a whole. Therefore, claims 7 and 23 are considered allowable.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOMING WEI whose telephone number is (571)272-3831. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEE M TUNG/ Supervisory Patent Examiner, Art Unit 2611
/XIAOMING WEI/ Examiner, Art Unit 2611

Prosecution Timeline

Feb 28, 2024 — Application Filed
Dec 16, 2025 — Non-Final Rejection (§103)
Mar 17, 2026 — Response Filed
Mar 26, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603064 — CIRCUIT AND METHOD FOR VIDEO DATA CONVERSION AND DISPLAY DEVICE — granted Apr 14, 2026 (2y 5m to grant)
Patent 12597246 — METHOD AND APPARATUS FOR GENERATING ADVERSARIAL PATCH — granted Apr 07, 2026 (2y 5m to grant)
Patent 12597175 — Avatar Creation From Natural Language Description — granted Apr 07, 2026 (2y 5m to grant)
Patent 12586280 — TECHNIQUES FOR GENERATING DUBBED MEDIA CONTENT ITEMS — granted Mar 24, 2026 (2y 5m to grant)
Patent 12586318 — METHOD AND APPARATUS FOR LABELING ROAD ELEMENT, DEVICE, AND STORAGE MEDIUM — granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA rounds: 3-4
Grant probability: 82% (99% with interview, +26.1% lift)
Median time to grant: 2y 5m
PTA risk: Moderate

Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
