Prosecution Insights
Last updated: April 19, 2026
Application No. 18/294,557

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Non-Final OA (§101, §103)
Filed
Feb 02, 2024
Examiner
WEI, XIAOMING
Art Unit
2611
Tech Center
2600 — Communications
Assignee
Sony Group Corporation
OA Round
1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average; 28 granted / 34 resolved; +20.4% vs TC avg)
Interview Lift: +26.1% (strong; among resolved cases with vs. without an interview)
Avg Prosecution: 2y 5m (typical timeline; 24 applications currently pending)
Total Applications: 58 (across all art units)

Statute-Specific Performance

§101: 7.1% (-32.9% vs TC avg)
§103: 83.6% (+43.6% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§112: 2.2% (-37.8% vs TC avg)
Tech Center average shown as an estimate for comparison • Based on career data from 34 resolved cases
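The headline figures above follow from simple arithmetic on the examiner's counts (28 granted of 34 resolved rounds to the 82% shown). A minimal Python sketch of that arithmetic; the helper names are illustrative, not part of the report's tooling:

```python
# Reproduce the dashboard arithmetic from the counts in the report above.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

def delta_vs_tc(examiner_rate: float, tc_avg: float) -> float:
    """Examiner's rate relative to the Tech Center average."""
    return examiner_rate - tc_avg

rate = allow_rate(28, 34)   # ~82.4%, displayed as 82%
tc_avg = rate - 20.4        # implied TC 2600 average, ~62.0%
print(round(rate, 1), round(tc_avg, 1))
```

The same subtraction explains the statute table: e.g., a 7.1% §101 rate at -32.9% vs the TC average implies a TC average near 40%.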

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

“movement amount calculation unit” in claim 1: the corresponding structure in the disclosure is defined in paragraph [0043] “The optical flow calculation unit 122 functions as a movement amount calculation unit that calculates a movement vector associated with a vertex included in a target frame (hereinafter, also represented as a "frame N") among two consecutive frames” and Figure 5, information processing apparatus, optical flow calculation unit 122;

“effect position calculation unit” in claim 2: the corresponding structure in the disclosure is defined in Figure 5, the information processing apparatus 10, and paragraph [0035] “As illustrated in Fig. 5, the control unit 120 includes a motion capture unit 121, an optical flow calculation unit 122, an effect position calculation unit 123”;

“output control unit” in claim 3: the corresponding structure in the disclosure is defined in paragraph [0079] “the effect position proposal unit 124 functions as an example of an output control unit that controls output of information regarding the movement destination position of the effect” and Figure 5, information processing apparatus 10 and an effect position proposal unit 124;

“output unit” in claim 3: the corresponding structure in the disclosure is defined in paragraph [0082] “a case where the output unit includes the display unit 130”;

“recording control unit” in claim 5: the corresponding structure in the disclosure is defined in paragraph [0084] “The recording control unit 126 controls recording of the effect position in the frame N+1 to the storage unit 150” and Figure 5, information processing apparatus 10 and a recording control unit 126;

“communication unit” in claim 6: the corresponding structure in the disclosure is defined in Figure 13 and paragraph [0088] “As illustrated in Fig. 13, an information processing apparatus 20 according to the first modification is implemented by a computer and includes a control unit 120 and a communication unit 160.”;

“display unit” in claim 6: the corresponding structure in the disclosure is defined in paragraph [0036] “the display unit 130 can include a display. The type of the display is not limited. For example, the display included in the display unit 130 may be a liquid crystal display (LCD), an organic electro-luminescence (EL) display, a plasma display panel (PDP), or the like”;

“transmission control unit” in claim 7: the corresponding structure in the disclosure is defined in Figure 13, information processing apparatus 20, a transmission control unit 127, and paragraph [0090] “the transmission control unit 127 controls transmission of the frame N+1 to which the effect has been assigned, to the terminal of the user by the communication unit 160”.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because claim 20 recites: “A program ...”; the body of the claim recites computer program steps, such as “comprising a movement amount calculation unit……”, which are nothing more than programmed instructions to be performed by the system. Therefore, the steps/elements recited in claim 20 are non-statutory. Similarly, computer programs claimed as computer listings per se, i.e., the descriptions or expressions of the programs, are not physical “things.” They are neither computer components nor statutory processes, as they are not “acts” being performed. Such claimed computer programs do not define any structural and functional interrelationships between the computer program and other claimed elements of a computer which permit the computer program’s functionality to be realized. In contrast, a claimed non-transitory computer-readable medium encoded with a computer program is a computer element which defines structural and functional interrelationships between the computer program and the rest of the computer which permit the computer program’s functionality to be realized, and is thus statutory. Accordingly, it is important to distinguish claims that define descriptive material per se from claims that define statutory inventions.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 7-8, 10-11, 15, 19 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chuang et al. (US 20180012407 A1), hereinafter as Chuang.

Regarding Claim 1, Chuang teaches An information processing apparatus (Chuang paragraph [0025] “FIG. 17 is a block diagram depicting an example computing device configured to participate in character synthesis or presentation according to various examples described herein.”) comprising a movement amount calculation unit that calculates a movement amount (Chuang teaches a processing unit as the movement amount calculation unit to compute the optical flow amount, paragraph [0145] “A processing unit can do this using an optical flow algorithm for signals on meshes. Computing optical flow directly on the surface can permit synthesizing textures in a way that is geometry-aware and agnostic to occlusions and texture seams.”, paragraph [0162] “With reference to advection (e.g., Eq.
(30)), given a point p ∈, a processing unit can evaluate the scalar field s advected along {right arrow over (υ)} for time t by taking N geodesic steps along the mesh.”) associated with a first vertex included in a first frame on a basis of statistical processing (Chuang teaches the averaging as the statistical processing, paragraph [0163] “When the p is a vertex, a processing unit can offset p by a small amount into each adjacent triangle and proceed as above, using the average over adjacent triangles to set the value of the advected scalar field at p.”) according to color information associated with the first vertex, color information associated with a second vertex included in a second frame after the first frame, three-dimensional coordinates associated with the first vertex, and three-dimensional coordinates associated with the second vertex (Chuang teaches a source mesh with source texture to form the source frame as the first frame, a target mesh with target texture to form the target frame as the second frame, and further teaches a vertex with color information and 3D vectors of position and normal, Table 5 MeshOpticalFlow and paragraph [0034] “A vertex can include a position along with other information such as color, normal vector and texture coordinates.”, paragraph [0114] “a processing unit can receive as input, source and target meshes M α = (v α, t α), with α between 0 and 1. The vertex positions and normals can be denoted by the 3|υ.sub.α|-dimensional vectors”, paragraph [0219] “a first candidate mesh of the mesh sequence and a second candidate mesh of the mesh sequence can be determined. The candidate meshes can be candidates to become source frame F.sub.s or target frame F.sub.t.”). Chuang and the current application are in the same field of endeavor, namely computer graphics, especially in the field of motion generation for 3D model in video data.
Chuang teaches in various embodiments a method to determine synthetic mesh with synthetic texture based on optical flow to improve image quality (Chuang paragraph [0030] “Some example techniques can determine synthetic textures registered to a synthetic mesh, which can permit rendering a character from any viewpoint without re-computing the synthetic texture for each viewpoint. This can reduce the time required to determine and render each frame, e.g., permitting higher frame rates in productions compared to prior schemes. Some example techniques can determine synthetic textures exhibiting reduced ghosting and improved image quality compared to prior schemes.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of various embodiments of Chuang to achieve better image rendering quality.

Regarding Claim 7, Chuang teaches The information processing apparatus according to claim 1, and further teaches wherein the information processing apparatus further comprises a transmission control unit that controls transmission of the first frame and the movement amount associated with the first vertex by a communication unit (Chuang teaches a computing device as the transmission control unit, networks 130 as the communication unit, paragraph [0043] “computing devices 106 operate synthesis engines discussed herein to determine (e.g., generate or synthesize) productions or components thereof (e.g., meshes, textures, or paths), and transmit data of the productions (e.g., mesh data or rendered image data of frames of the production) to computing device 102, e.g., a smartphone.
Computing device 102 can, e.g., present the production to entity 110.” and paragraph [0048] “network(s) 130 can be any type of network, wired or wireless, using any type of network topology and any network communication protocol, and can be represented or otherwise implemented as a combination of two or more networks.”).

Regarding Claim 8, Chuang teaches The information processing apparatus according to claim 1, and further teaches wherein the statistical processing includes processing of extracting a mode value or processing of calculating an average value (Chuang paragraph [0163] “When the p is a vertex, a processing unit can offset p by a small amount into each adjacent triangle and proceed as above, using the average over adjacent triangles to set the value of the advected scalar field at p.”).

Regarding Claim 10, Chuang teaches The information processing apparatus according to claim 1, and further teaches wherein the first vertex forms a first surface, the second vertex forms a second surface (Chuang teaches vertex forming edges and edges forming surface, paragraph [0034] “A vertex can include a position along with other information such as color, normal vector and texture coordinates. A face is a closed set of edges, in which a triangle face has three edges, and a quad face has four edges. A polygon is a face having at least three edges.” and paragraph [0026] “A mesh includes data indicating the positions of multiple vertices in a virtual space and data indicating connections (“edges”) between the vertices.
A texture can include image data associated with specific vertices of the mesh.”), the color information associated with the first vertex includes the color information on the first surface, the color information associated with the second vertex includes the color information on the second surface (Chuang teaches the color information for vertex, and vertex are used to form surfaces, paragraph [0034] “ A vertex can include a position along with other information such as color, normal vector and texture coordinates”), the three-dimensional coordinates associated with the first vertex include the three-dimensional coordinates of the first surface, the three-dimensional coordinates associated with the second vertex include the three-dimensional coordinates of the second surface (Chuang teaches the 3D vectors as the position and normal of vertex, and vertex are used to form surfaces, paragraph [0114] “The vertex positions and normals can be denoted by the 3|υ.sub.α|-dimensional vectors”), and the movement amount associated with the first vertex includes the movement amount of the first surface (Chuang teaches the advected scalar field as the movement amount of vertex and vertex are used to form surfaces, paragraph [0163] “When the p is a vertex, a processing unit can offset p by a small amount into each adjacent triangle and proceed as above, using the average over adjacent triangles to set the value of the advected scalar field at p.”). 
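The "statistical processing" that the rejection maps to Chuang's averaging in paragraph [0163] (taking the mean of an advected value over the triangles adjacent to a vertex) can be illustrated with a short sketch. The mesh layout, function name, and data structures here are assumptions for illustration, not Chuang's actual implementation:

```python
from statistics import mean

# Illustrative sketch: average a per-triangle advected scalar at a vertex,
# in the spirit of Chuang paragraph [0163]. Triangles are vertex-index
# triples; `advected` maps each triangle to its advected scalar value.

def advected_value_at_vertex(vertex, triangles, advected):
    """Average the advected scalar over triangles adjacent to `vertex`."""
    adjacent = [t for t in triangles if vertex in t]
    if not adjacent:
        raise ValueError("vertex has no adjacent triangles")
    return mean(advected[t] for t in adjacent)

tris = [(0, 1, 2), (0, 2, 3), (4, 5, 6)]
values = {(0, 1, 2): 1.0, (0, 2, 3): 3.0, (4, 5, 6): 9.0}
advected_value_at_vertex(0, tris, values)  # averages 1.0 and 3.0 -> 2.0
```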
Regarding Claim 11, Chuang teaches The information processing apparatus according to claim 10, wherein the movement amount calculation unit: and further teaches calculates the color information on the first surface on a basis of the color information on the first vertex, and also calculates the color information on the second surface on a basis of the color information on the second vertex (Chuang teaches projecting a 3D vertex to an image plane, and deciding the color of triangle based on the projection of vertex in the texture, paragraph [0143] “a processing unit can project a point in 3-D into the camera-image plane. The processing unit can sample the texture color of each triangle by determining the projection of that triangle on the camera that provides a desired visibility…… a processing unit can use nearest-point sampling to assign texture colors”); and calculates the three-dimensional coordinates of the first surface on a basis of the three-dimensional coordinates of the first vertex, and also calculates the three-dimensional coordinates of the second surface on a basis of the three-dimensional coordinates of the second vertex (Chuang Equation 20, “the tetrahedral meshes (given by linear interpolation of vertex positions)”, paragraph [0128] “ When synthesizing the geometry of in-between frames (e.g., as discussed herein with reference to Eqs. (23), (24), or (25)) a processing unit can use the weighted linear blend of the vertex positions in the source and aligned target as the initial guess.”). 
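The claim 11 mapping relies on Chuang's linear interpolation of vertex positions (Eq. 20 and paragraph [0128]), i.e., a weighted linear blend of source and target positions. A minimal sketch under an assumed data layout (lists of 3-D tuples; the function name is illustrative):

```python
# Sketch of a weighted linear blend of source/target vertex positions,
# as used for the initial guess of in-between frames in Chuang [0128].
# The array layout and blend weight `alpha` are illustrative assumptions.

def blend_vertices(src, tgt, alpha):
    """Interpolate each 3-D vertex position: (1 - alpha)*src + alpha*tgt."""
    return [tuple((1 - alpha) * s + alpha * t for s, t in zip(vs, vt))
            for vs, vt in zip(src, tgt)]

source = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
target = [(1.0, 1.0, 0.0), (4.0, 2.0, 2.0)]
blend_vertices(source, target, 0.5)  # midpoints of each vertex pair
```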
Regarding Claim 15, Chuang teaches The information processing apparatus according to claim 1, and further teaches wherein the color information associated with the first vertex includes the color information on the first vertex, the color information associated with the second vertex includes the color information on the second vertex (Chuang teaches color information for vertex, paragraph [0034] “A vertex can include a position along with other information such as color, normal vector and texture coordinates”), the three-dimensional coordinates associated with the first vertex include the three-dimensional coordinates of the first vertex, the three-dimensional coordinates associated with the second vertex include the three-dimensional coordinates of the second vertex (Chuang teaches the 3D vectors as the position and normal of vertex, paragraph [0114] “The vertex positions and normals can be denoted by the 3|υ.sub.α|-dimensional vectors”), and the movement amount associated with the first vertex includes the movement amount of the first vertex (Chuang teaches the advected scalar field as the movement amount of vertex, paragraph [0163] “When the p is a vertex, a processing unit can offset p by a small amount into each adjacent triangle and proceed as above, using the average over adjacent triangles to set the value of the advected scalar field at p.”).

Regarding Claim 19, it recites similar limitations of claim 1 but in a method form. The rationale of the claim 1 rejection is applied to reject claim 19. In addition, Chuang teaches An information processing method comprising calculating, by a processor (Chuang paragraph [0003] “This disclosure describes systems, methods, and computer-readable media for synthesizing processor-generated characters, e.g., for use in rendering computer-generated videos. Some example techniques described herein can permit synthesizing motions that appear natural or that do not distort the shapes of characters.
Some example techniques described herein can permit synthesizing motions based on multiple meshes, even if those meshes differ in connectivity.”). Chuang and the current application are in the same field of endeavor, namely computer graphics, especially in the field of motion generation for 3D model in video data. Chuang teaches in various embodiments a method to determine synthetic mesh with synthetic texture based on optical flow to improve image quality (Chuang paragraph [0030] “Some example techniques can determine synthetic textures registered to a synthetic mesh, which can permit rendering a character from any viewpoint without re-computing the synthetic texture for each viewpoint. This can reduce the time required to determine and render each frame, e.g., permitting higher frame rates in productions compared to prior schemes. Some example techniques can determine synthetic textures exhibiting reduced ghosting and improved image quality compared to prior schemes.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of various embodiments of Chuang to achieve better image rendering quality.

Regarding Claim 20, it recites similar limitations of claim 1 but in a program form. The rationale of the claim 1 rejection is applied to reject claim 20. In addition, Chuang teaches A program that causes a computer to function as an information processing apparatus (Chuang paragraph [0316] “the operations represent computer-executable instructions stored on one or more computer-readable media that, when executed by one or more processors, enable the one or more processors to perform the recited operations.”). Chuang and the current application are in the same field of endeavor, namely computer graphics, especially in the field of motion generation for 3D model in video data.
Chuang teaches in various embodiments a method to determine synthetic mesh with synthetic texture based on optical flow to improve image quality (Chuang paragraph [0030] “Some example techniques can determine synthetic textures registered to a synthetic mesh, which can permit rendering a character from any viewpoint without re-computing the synthetic texture for each viewpoint. This can reduce the time required to determine and render each frame, e.g., permitting higher frame rates in productions compared to prior schemes. Some example techniques can determine synthetic textures exhibiting reduced ghosting and improved image quality compared to prior schemes.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of various embodiments of Chuang to achieve better image rendering quality.

Claim(s) 2-4, 6, 14 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chuang et al. (US 20180012407 A1), hereinafter as Chuang, in view of Safreed et al. (US 9478033 B1), hereinafter as Safreed.

Regarding Claim 2, Chuang teaches The information processing apparatus according to claim 1, but fails to teach wherein the information processing apparatus further comprises an effect position calculation unit that calculates a movement destination position of an effect on a basis of the movement amount associated with the first vertex and a position in the first frame of the effect. Safreed teaches wherein the information processing apparatus further comprises an effect position calculation unit that calculates a movement destination position of an effect on a basis of the movement amount associated with the first vertex and a position in the first frame of the effect.
(Safreed teaches a computer circuit as the effect position calculation unit, teaches defining a region of interest on the initial frame for effect tracking and deformation, further teaches using optical flow to estimate the destination position, Chuang teaches a processing unit to advect source mesh and target mesh based on optical flow, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Safreed with the method of Chuang, Safreed Col. 2, lines 5-11, “The user interface circuit is configured to receive user inputs for defining a region of interest in an image represented by a first one of the video frames. The computer circuit is configured to generate a particle mesh from a set of feature points for the defined region of interest in the first one of the video frames,”, Col. 15, lines 57-64, “image data is merged with image data that is created or otherwise presented to achieve one or more effects. In one implementation, objects are dynamically modified, such as to make the objects thinner or to re-light the objects to increase brightness. In these contexts, mask data that describes the area is generated along with other data, such as point or motion path data, to vary other settings. “, Col. 13, lines 11-21, “An optical flow approach is used to find each tracked feature's location in Frame 2, …… For the second and subsequent frame pairs in a span, the transform calculated by the previous frame pair is used to generate a guess for the positions of the features in Frame 2”, Chuang paragraph [0145-0146] “A processing unit can do this using an optical flow algorithm for signals on meshes. 
Computing optical flow directly on the surface can permit synthesizing textures in a way that is geometry-aware and agnostic to occlusions and texture seams…… As meshes do not carry a hierarchical structure, a processing unit can adapt the algorithm by introducing a penalty term that encourages the flow to be smooth…… The source and target are then advected along the estimated flow so that they are roughly aligned”).

Chuang and Safreed are in the same field of endeavor, namely computer graphics, especially in the field of motion generation for 3D model in video data. Safreed teaches a method to define a region of interest on an initial frame, and use feature point mesh in the region to track the motion of 3D model to improve accuracy. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Safreed with the method of Chuang to improve accuracy.

Regarding Claim 3, Chuang in view of Safreed teach The information processing apparatus according to claim 2, and further teach wherein the information processing apparatus further comprises an output control unit that controls an output of information regarding the movement destination position of the effect and the second frame by an output unit (Safreed teaches a coherence evaluator 350 in Figure 3 as the output control unit, further teaches a user display interface as the output unit, Col. 11, lines 1-9, “For each iterative generation of boundary/mesh points, the tracked video data as corresponding to these points can be selectively output for a variety of uses, such as for adding special effects or otherwise modifying the video data 310. For example, when an object such as a human face is tracked in a scene, image data modification that is tailored to the person's face can be carried out upon pixels in the tracked mesh, ensuring that the image data modification follows through the scenes.”, Col.
9, lines 39-41, “The image in FIG. 2 represents a user display showing a video frame in which features are tracked.”). Chuang and Safreed are in the same field of endeavor, namely computer graphics, especially in the field of motion generation for 3D model in video data. Safreed teaches a method to define a region of interest on an initial frame, and use feature point mesh in the region to track the motion of 3D model to improve accuracy. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Safreed with the method of Chuang to improve accuracy.

Regarding Claim 4, Chuang in view of Safreed teach The information processing apparatus according to claim 3, and further teach wherein the information regarding the movement destination position of the effect includes the effect moved to the movement destination position of the effect in the second frame (Safreed Col. 10, lines 31-41, “A ballistics engine 330 receives the video data 310 and the mesh point data 321, and uses this received data to generate ballistics data 332 that can be used to estimate movement of the mesh point data 321 from frame-to-frame. This ballistics data 332 may, for example, include processing parameters, code and/or an algorithm-type of data that can be used to estimate motion for propagating boundary/mesh points throughout a scene. A propagation estimation engine 340 uses the ballistics data 332 to generate new boundary/mesh points 342 for a subsequent frame in the video data.”). Chuang and Safreed are in the same field of endeavor, namely computer graphics, especially in the field of motion generation for 3D model in video data. Safreed teaches a method to define a region of interest on an initial frame, and use feature point mesh in the region to track the motion of 3D model to improve accuracy.
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Safreed with the method of Chuang to improve accuracy. Regarding Claim 6, Chuang in view of Safreed teaches The information processing apparatus according to claim 3, and further teach wherein the output unit includes a communication unit or a display unit (Safreed teaches a computer screen as the display unit, Col. 8, lines 5-10, “The user interface can be implemented upon a variety of different types of devices operating upon different platforms. In some embodiments, the user interface includes a computer that executes programming functions to generate and display user interface selections to a user viewing a computer screen.”), and the output control unit controls transmission of the information regarding the movement destination position of the effect and the second frame by the communication unit or display of the information regarding the movement destination position of the effect and the second frame by the display unit (Safreed Col. 9, lines 39-41, “The image in FIG. 2 represents a user display showing a video frame in which features are tracked. “, Col. 15, lines 31-34, “referring again to FIG. 2, boundary 240, similar to boundary 210, can be used to track the other individual in the scene. Features to be tracked (such as eyes, chin) are identified via mesh points and used in tracking the face.”) Chuang and Safreed are in the same field of endeavor, namely computer graphics, especially in the field of motion generation for 3D model in video data. Safreed teaches a method to define a region of interest on an initial frame, and use feature point mesh in the region to track the motion of 3D model to improve accuracy. 
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Safreed with the method of Chuang to improve accuracy. Regarding Claim 14, Chuang teaches The information processing apparatus according to claim 10, but fails to teach wherein the information processing apparatus further comprises an effect position calculation unit that, in a case where the first surface and a fifth surface are present within a predetermined distance from a position of an effect in the first frame, calculates a movement destination position of the effect on a basis of the statistical processing for the movement amount of the first surface and the movement amount of the fifth surface. Safreed teaches wherein the information processing apparatus further comprises an effect position calculation unit that, in a case where the first surface and a fifth surface are present within a predetermined distance from a position of an effect in the first frame, calculates a movement destination position of the effect on a basis of the statistical processing for the movement amount of the first surface and the movement amount of the fifth surface (Safreed teaches a computer circuit as effect position calculation unit, further teaches deciding feature points that lies within a user defined mask as the position of an effect, the points are used to define triangle surfaces and computing averaging as the statistical processing of transformation, Col. 13, lines 11-21, “An optical flow approach is used to find each tracked feature's location in Frame 2, …… For the second and subsequent frame pairs in a span, the transform calculated by the previous frame pair is used to generate a guess for the positions of the features in Frame 2”, Col. 12, lines 46-49, “A list of suitable features is created for subsequent tracking. 
Such suitable features may, for example, include features that lie inside of the mask, aren't too close together”, Col. 13, lines 52-64 and Col. 14, 1-20, “Point pairs are randomly selected to form triangles, including one triangle in Frame 1 and one triangle in Frame 2……A non-iterative (flattened) linear equation solver is used to calculate the affine transform that would transform the Frame 1 triangle into the Frame 2 triangle. Each transform is used to calculate an average result triangle by transforming a unit triangle and averaging the corresponding vertex positions of the resultant triangles……the transforms are averaged to generate an affine transform. This process may occur iteratively with the optical flow calculation while increasing the optical flow window size.”). Chuang and Safreed are in the same field of endeavor, namely computer graphics, especially in the field of motion generation for 3D model in video data. Safreed teaches a method to define a region of interest on an initial frame, and use feature point mesh in the region to track the motion of 3D model to improve accuracy. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Safreed with the method of Chuang to improve accuracy. Regarding Claim 18, Chuang teaches The information processing apparatus according to claim 15, but fails to teach wherein the information processing apparatus further comprises an effect position calculation unit that, in a case where the first vertex and a fifth vertex are present within a predetermined distance from a position of an effect in the first frame, calculates a movement destination position of the effect on a basis of the statistical processing for the movement amount of the first vertex and the movement amount of the fifth vertex. 
Safreed teaches wherein the information processing apparatus further comprises an effect position calculation unit that, in a case where the first vertex and a fifth vertex are present within a predetermined distance from a position of an effect in the first frame, calculates a movement destination position of the effect on a basis of the statistical processing for the movement amount of the first vertex and the movement amount of the fifth vertex (Safreed teaches a computer circuit as effect position calculation unit, further teaches deciding feature points within a user defined mask as the position of an effect and computing averaging as the statistical processing of transformation, Col. 13, lines 11-21, “An optical flow approach is used to find each tracked feature's location in Frame 2, …… For the second and subsequent frame pairs in a span, the transform calculated by the previous frame pair is used to generate a guess for the positions of the features in Frame 2”, Col. 12, lines 46-49, “A list of suitable features is created for subsequent tracking. Such suitable features may, for example, include features that lie inside of the mask, aren't too close together”, Col. 13, lines 52-64 and Col. 14, 1-20, “Point pairs are randomly selected to form triangles, including one triangle in Frame 1 and one triangle in Frame 2……A non-iterative (flattened) linear equation solver is used to calculate the affine transform that would transform the Frame 1 triangle into the Frame 2 triangle. Each transform is used to calculate an average result triangle by transforming a unit triangle and averaging the corresponding vertex positions of the resultant triangles……the transforms are averaged to generate an affine transform. This process may occur iteratively with the optical flow calculation while increasing the optical flow window size.”). 
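The Safreed passages quoted above describe a concrete procedure: pair tracked points into triangles in Frame 1 and Frame 2, solve a linear system for the affine transform mapping each Frame 1 triangle onto its Frame 2 counterpart, and average the transforms. The sketch below is an illustrative reconstruction of that idea, not Safreed's actual implementation; the function names and the NumPy-based solver are assumptions.

```python
import numpy as np

def triangle_affine(tri1, tri2):
    """Solve for the 2x3 affine matrix A such that A @ [x, y, 1] maps each
    vertex of tri1 (3x2 array) onto the matching vertex of tri2 (3x2 array)."""
    src = np.hstack([tri1, np.ones((3, 1))])  # 3x3 rows of [x, y, 1]
    # One linear solve per output axis: src @ M = tri2, M is 3x2.
    M = np.linalg.solve(src, tri2)
    return M.T                                 # 2x3 affine matrix

def average_affine(tris1, tris2):
    """Average the per-triangle transforms over many sampled triangles,
    a stand-in for the 'statistical processing' of per-feature motion."""
    transforms = [triangle_affine(a, b) for a, b in zip(tris1, tris2)]
    return np.mean(transforms, axis=0)

# A pure translation by (2, 3) is recovered exactly:
tri1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tri2 = tri1 + np.array([2.0, 3.0])
A = average_affine([tri1], [tri2])
# A[:, :2] is the 2x2 identity and A[:, 2] is (2, 3)
```

Averaging the transforms (rather than trusting any single triangle) damps the influence of mistracked feature points, which is consistent with the robustness rationale the examiner draws from Safreed.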
Chuang and Safreed are in the same field of endeavor, namely computer graphics, especially in the field of motion generation for 3D model in video data. Safreed teaches a method to define a region of interest on an initial frame, and use feature point mesh in the region to track the motion of 3D model to improve accuracy. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Safreed with the method of Chuang to improve accuracy. Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chuang et al. (US 20180012407 A1), hereinafter as Chuang, in view of Safreed et al. (US 9478033 B1), hereinafter as Safreed, and further in view of Yamamoto et al. (US 20180144486 A1), hereinafter as Yamamoto. Regarding Claim 5, Chuang and Safreed teach The information processing apparatus according to claim 4, but fail to teach wherein the information processing apparatus further comprises a recording control unit that controls recording of the position after correction of the effect on a basis of a fact that a correction has been made to the movement destination position of the effect. Yamamoto teaches wherein the information processing apparatus further comprises a recording control unit that controls recording of the position after correction of the effect on a basis of a fact that a correction has been made to the movement destination position of the effect (Yamamoto teaches a storage unit 130 with tracking table 132 and recalculation table 133 in Figure 2 as the recording control unit, paragraph [0071] “The correcting unit 144 is a processing unit that corrects a tentative position of each person when the evaluating unit 143 has evaluated that the tentative position is not at an appropriate position. 
For example, the correcting unit 144 identifies an estimation position of each person at which the Eval value indicated in Equation (4) is maximized, and corrects the tentative position to the identified estimation position.” and paragraph [0075] “The correcting unit 144 registers information of a modified estimation position and the Eval value in an associated manner in the recalculation table 133”). Chuang, Safreed and Yamamoto are in the same field of endeavor, namely computer graphics, especially in the field of object motion tracking. Yamamoto teaches a method to correct a tentative position and store the modified position in order to achieve an accurate tracking result (Yamamoto paragraph [0121] “Accurate tracking of more than one object is enabled.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Yamamoto with the combination of Chuang and Safreed to improve object tracking accuracy. Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chuang et al. (US 20180012407 A1), hereinafter as Chuang, in view of Blasch et al. (US 20160110885 A1), hereinafter as Blasch. Regarding Claim 9, Chuang teaches The information processing apparatus according to claim 1, but fails to teach wherein each of the color information associated with the first vertex and the color information associated with the second vertex includes a hue. 
Blasch teaches wherein each of the color information associated with the first vertex and the color information associated with the second vertex includes a hue (Blasch paragraph [0009] “a method for video detection and tracking of a target, comprising the steps of defining a target by selecting a dataset of target image frames from a database and selecting the desired color of the target; converting the color to a template hue histogram representation; initializing an image detector; performing target image frame alignment and registration in which homography matrices are generated; producing an optical flow field”). Chuang and Blasch are in the same field of endeavor, namely computer graphics. Blasch teaches using a hue histogram based on color information in target tracking for video data to achieve a robust detection result (Blasch paragraph [0042] “The goal of the present invention is the development of a general purpose detection framework to increase the robustness of detection by utilizing the information from the optical flow generator (OFG) and an active-learning histogram (AHM) matcher at the same time and in real-time.”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Blasch with the method of Chuang to achieve a better motion tracking result. Allowable Subject Matter Claims 12-13 and 16-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Regarding Claim 12, the closest prior art of Chuang teaches computing optical flow on a surface and generating a synthetic mesh with a synthetic texture based on optical flow. 
However, Chuang fails to teach the combined limitation below as a whole, “wherein the movement amount calculation unit extracts one or a plurality of surfaces having the three-dimensional coordinates whose distance from the three-dimensional coordinates of the first surface is smaller than a first threshold value, from among a plurality of surfaces included in the second frame, and extracts a surface with the color information having a smallest difference from the color information on the first surface, from among the extracted one or plurality of surfaces, as the second surface.” Furthermore, no prior art of record either alone or in combination teaches the above limitation as a whole. Therefore, claim 12 is considered to be allowable. Claim 13 contains allowable subject matter because it depends on claim 12, which contains allowable subject matter. Regarding Claim 16, the closest prior art of Chuang teaches computing optical flow on a surface and generating a synthetic mesh with a synthetic texture based on optical flow. However, Chuang fails to teach the combined limitation below as a whole, “wherein the movement amount calculation unit extracts one or a plurality of vertices having the three-dimensional coordinates whose distance from the three-dimensional coordinates of the first vertex is smaller than a first threshold value, from among a plurality of vertices included in the second frame, and extracts a vertex with the color information having a smallest difference from the color information on the first vertex, from among the extracted one or plurality of vertices, as the second vertex.” Furthermore, no prior art of record either alone or in combination teaches the above limitation as a whole. Therefore, claim 16 is considered to be allowable. Claim 17 contains allowable subject matter because it depends on claim 16, which contains allowable subject matter. 
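The allowed claim-12/16 limitation describes a concrete two-stage matching rule: gate the second-frame candidates by 3D distance against a first threshold value, then pick the surviving candidate with the smallest color difference. A minimal sketch of that rule, with hypothetical names and Euclidean distance as an assumed metric (the claims do not specify one):

```python
import numpy as np

def match_surface(first_pos, first_color, candidates, threshold):
    """Among second-frame surfaces within `threshold` of the first surface's
    3D coordinates, return the index of the one whose color differs least
    from the first surface's color; None if no candidate is close enough.

    candidates: list of (position (x, y, z), color (r, g, b)) tuples.
    """
    best_idx, best_diff = None, np.inf
    for i, (pos, color) in enumerate(candidates):
        # Stage 1: distance gate against the first threshold value.
        if np.linalg.norm(np.asarray(pos) - np.asarray(first_pos)) >= threshold:
            continue
        # Stage 2: smallest color difference among the gated candidates.
        diff = np.linalg.norm(np.asarray(color) - np.asarray(first_color))
        if diff < best_diff:
            best_idx, best_diff = i, diff
    return best_idx

candidates = [((0.1, 0.0, 0.0), (250, 10, 10)),   # close, red-ish
              ((0.2, 0.1, 0.0), (10, 250, 10)),   # close, green
              ((5.0, 5.0, 5.0), (255, 0, 0))]     # exact color but too far
idx = match_surface((0.0, 0.0, 0.0), (255, 0, 0), candidates, threshold=1.0)
# idx == 0: the nearby red-ish surface wins; the far exact-color one is gated out
```

The distance gate is what distinguishes this limitation from plain color matching: a distant surface with an identical color can never be selected as the second surface, which is presumably why the examiner found no single reference teaching the combination as a whole.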
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOMING WEI whose telephone number is (571)272-3831. The examiner can normally be reached M-F 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611 /XIAOMING WEI/Examiner, Art Unit 2611

Prosecution Timeline

Feb 02, 2024
Application Filed
Jan 27, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603064
CIRCUIT AND METHOD FOR VIDEO DATA CONVERSION AND DISPLAY DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12597246
METHOD AND APPARATUS FOR GENERATING ADVERSARIAL PATCH
2y 5m to grant Granted Apr 07, 2026
Patent 12597175
Avatar Creation From Natural Language Description
2y 5m to grant Granted Apr 07, 2026
Patent 12586280
TECHNIQUES FOR GENERATING DUBBED MEDIA CONTENT ITEMS
2y 5m to grant Granted Mar 24, 2026
Patent 12586318
METHOD AND APPARATUS FOR LABELING ROAD ELEMENT, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+26.1%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
