Prosecution Insights
Last updated: April 19, 2026
Application No. 18/410,090

NORMAL AND MESH DETAIL SEPARATION FOR PHOTOMETRIC TANGENT MAP CREATION

Final Rejection §103

Filed: Jan 11, 2024
Examiner: MINKO, DENIS VASILIY
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Sony Corporation Of America
OA Round: 2 (Final)

Grant Probability: 62% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 2y 5m
Grant Probability with Interview: 79%

Examiner Intelligence

Career Allow Rate: 62% (grants 62% of resolved cases; 10 granted / 16 resolved; +0.5% vs TC avg)
Interview Lift: +16.7% (strong), measured across resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 25 applications currently pending
Career History: 41 total applications across all art units

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§103: 61.4% (+21.4% vs TC avg)
§102: 18.7% (-21.3% vs TC avg)
§112: 9.9% (-30.1% vs TC avg)

Tech Center averages are estimates; based on career data from 16 resolved cases.

Office Action

§103

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are pending. Claims 1-7 and 9-20 are amended. Claims 1-20 are rejected.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 10, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Pi et al. (CN 108898665) in view of Hunter et al. (US 20220130098).

Regarding claim 1, Pi teaches:

A system (Pi [Pg 1 Par 2] The invention relates to the electronic technology field, concretely, claims a three-dimensional human face re-establishing method, device, apparatus and computer-readable storage medium.), comprising circuitry configured to:

acquire a base three-dimensional (3D) mesh of an object (Pi [Pg 3 Par 1] step 103, performing global outline distorting the reference model based on feature point pixel coordinate of the calibration to obtain the initial overall outline deformed face model, the reference model into a known three-dimensional face model;);

acquire a photometric surface normal corresponding to the object (Pi [ABSTRACT] performing global outline distorting the reference model based on feature point pixel coordinate. the primary face model deformed based on photometric stereo technique of the normal photometric reconstruction based on the surface normal of the target face model reconstruction);

compute a base normal map based on vertex normal information included in the base 3D mesh (Pi [Pg 2 Par 9] the initial face model surface are dispersed to a three-dimensional grid containing vertex p, the corresponding three dimensional coordinate is expressed as a matrix, is calculated by using the following formula to obtain three-dimensional coordinate matrix X global profile after deformation of the initial vertices of face model:);

determine a correction on the photometric surface normal based on the base normal map and the mesh density map (Pi [Pg 7 Par 1] step 103, realizing the overall outline deformation of the reference model, but lost a lot of detail information. To this end, step 104, to optimize the model detail based on photometric stereo technique of normal use, the initial face model X1 using an improved absorbency normal technology for improving the smooth change condition of normal constraints on the model surface normal estimation, the photometric reconstruction, obtaining the surface normal vector of the target face model:), wherein the correction on the photometric surface normal corresponds to an amount of rotation on the photometric surface normal (Pi [Pg 8 Par 10 – Pg 9 Par 2] As shown in FIG. 7, there are three different face image sets of the LDiCaprio using the experimental result of the present embodiment, from the results it can be seen that the solution based on no specific constraint face image set can realize three-dimensional reconstruction of high quality and has robustness. wherein, the picture of each image 3-7 can be color, or grey. In the embodiment, the claims based on photometric normal robustness of the three-dimensional face rebuilding technology combined with the photometric stereo technique and mesh deformation technique, using two kinds of method of advantages, it also avoids reconstruction of defects when face independently using a method, to improve the three-dimensional face rebuilding quality;). In Figure 7 it is visible that the image is “corrected” by being shown from a different angle, i.e., rotated. It would have been obvious to one of ordinary skill in the art that a 3D model can be rotated to different degrees, as seen in Figure 7;

and generate a corrected photometric surface normal based on an application of the correction on the photometric surface normal (Pi [Pg 7 Par 1] step 103, realizing the overall outline deformation of the reference model, but lost a lot of detail information. To this end, step 104, to optimize the model detail based on photometric stereo technique of normal use, the initial face model X1 using an improved absorbency normal technology for improving the smooth change condition of normal constraints on the model surface normal estimation, the photometric reconstruction, obtaining the surface normal vector of the target face model:).
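As context for the "amount of rotation" limitation mapped above: a normal-correction step of this kind is commonly implemented by rotating each photometric normal toward the corresponding base-mesh normal. The sketch below is only an illustration of that general technique, not the applicant's or Pi's actual algorithm; the function name `rotate_toward` and the blending weight `alpha` are hypothetical.

```python
import numpy as np

def rotate_toward(n_photo, n_base, alpha):
    """Rotate unit normal n_photo toward unit normal n_base by a
    fraction alpha of the angle between them (Rodrigues' formula)."""
    axis = np.cross(n_photo, n_base)
    s = np.linalg.norm(axis)
    if s < 1e-12:                       # normals already parallel: no correction
        return n_photo
    axis = axis / s
    angle = alpha * np.arccos(np.clip(np.dot(n_photo, n_base), -1.0, 1.0))
    # Rodrigues' rotation of n_photo about axis by angle
    return (n_photo * np.cos(angle)
            + np.cross(axis, n_photo) * np.sin(angle)
            + axis * np.dot(axis, n_photo) * (1.0 - np.cos(angle)))

# Example: rotate the x-axis normal halfway toward the y-axis normal
n = rotate_toward(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), 0.5)
```

A per-vertex `alpha` (0 = keep the photometric normal, 1 = snap to the base normal), driven for instance by local mesh density or confidence, would play the role of the claimed correction amount.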
Pi fails to teach: compute a mesh density map based on the base 3D mesh.

Hunter teaches: compute a mesh density map based on the base 3D mesh (Hunter [0009] The computer-implement method may further include applying a density map over the target mesh to assign a density to each of a plurality of portions of the surface wherein at least two of the portions are assigned different densities, assigning application points of the application point set to locations on the surface according to the density map and a scattering function, wherein the scattering function is based on one or more repulsion forces, effects of which are modeled as to a first application point and as to one or more neighbor points neighboring the first application point, wherein the one or more repulsion forces are treated as pushing each of the first application point and the one or more neighbor points apart, and providing the target mesh having the application points of the application point set scattered across the surface based on the one or more repulsion forces.).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi with Hunter. Having a mesh density map, as in Hunter, would benefit the Pi teachings by allowing the mesh density information to be included in the data. Additionally, this is the application of a known technique (having mesh density data and combining it with other data to create a mesh) to yield predictable results.

Regarding claim 10, Pi teaches:

A method (Pi [Pg 1 Par 2] The invention relates to the electronic technology field, concretely, claims a three-dimensional human face re-establishing method, device, apparatus and computer-readable storage medium.), comprising, in a system:

acquiring a base three-dimensional (3D) mesh of an object (Pi [Pg 3 Par 1] step 103, performing global outline distorting the reference model based on feature point pixel coordinate of the calibration to obtain the initial overall outline deformed face model, the reference model into a known three-dimensional face model;);

acquiring a photometric surface normal corresponding to the object (Pi [ABSTRACT] performing global outline distorting the reference model based on feature point pixel coordinate. the primary face model deformed based on photometric stereo technique of the normal photometric reconstruction based on the surface normal of the target face model reconstruction);

computing a base normal map based on vertex normal information included in the base 3D mesh (Pi [Pg 2 Par 9] the initial face model surface are dispersed to a three-dimensional grid containing vertex p, the corresponding three dimensional coordinate is expressed as a matrix, is calculated by using the following formula to obtain three-dimensional coordinate matrix X global profile after deformation of the initial vertices of face model:);

determining a correction on the photometric surface normal based on the base normal map and the mesh density map (Pi [Pg 7 Par 1] step 103, realizing the overall outline deformation of the reference model, but lost a lot of detail information. To this end, step 104, to optimize the model detail based on photometric stereo technique of normal use, the initial face model X1 using an improved absorbency normal technology for improving the smooth change condition of normal constraints on the model surface normal estimation, the photometric reconstruction, obtaining the surface normal vector of the target face model:), wherein the correction on the photometric surface normal corresponds to an amount of rotation on the photometric surface normal (Pi [Pg 8 Par 10 – Pg 9 Par 2] As shown in FIG. 7, there are three different face image sets of the LDiCaprio using the experimental result of the present embodiment, from the results it can be seen that the solution based on no specific constraint face image set can realize three-dimensional reconstruction of high quality and has robustness. wherein, the picture of each image 3-7 can be color, or grey. In the embodiment, the claims based on photometric normal robustness of the three-dimensional face rebuilding technology combined with the photometric stereo technique and mesh deformation technique, using two kinds of method of advantages, it also avoids reconstruction of defects when face independently using a method, to improve the three-dimensional face rebuilding quality;). In Figure 7 it is visible that the image is “corrected” by being shown from a different angle, i.e., rotated. It would have been obvious to one of ordinary skill in the art that a 3D model can be rotated to different degrees, as seen in Figure 7;

and generating a corrected photometric surface normal based on an application of the correction on the photometric surface normal (Pi [Pg 7 Par 1] step 103, realizing the overall outline deformation of the reference model, but lost a lot of detail information. To this end, step 104, to optimize the model detail based on photometric stereo technique of normal use, the initial face model X1 using an improved absorbency normal technology for improving the smooth change condition of normal constraints on the model surface normal estimation, the photometric reconstruction, obtaining the surface normal vector of the target face model:).
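For background on the technique Pi builds on: photometric stereo recovers per-pixel surface normals from several images of the same object under different light directions. A minimal textbook Lambertian formulation is shown below purely for illustration; the function name and data layout are assumptions, not anything from Pi or the application.

```python
import numpy as np

def photometric_normals(intensities, lights):
    """Classic Lambertian photometric stereo.
    intensities: (k, m) observed intensities for m pixels under k lights.
    lights: (k, 3) unit lighting directions.
    Returns (m, 3) unit surface normals and (m,) albedos."""
    # Solve lights @ (albedo * n) = intensities per pixel, least squares.
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)   # (3, m)
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.clip(albedo, 1e-12, None)).T
    return normals, albedo

# Example: one pixel with true normal [0, 0, 1] and albedo 0.8 under 3 lights
L = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.6, 0.8]])
I = 0.8 * L @ np.array([[0.0], [0.0], [1.0]])
n_est, rho = photometric_normals(I, L)
```

With three or more non-coplanar lights the per-pixel system is overdetermined, which is why least squares is the standard solver here.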
Pi fails to teach: computing a mesh density map based on the base 3D mesh.

Hunter teaches: computing a mesh density map based on the base 3D mesh (Hunter [0009], quoted above for claim 1).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi with Hunter, for the same reasons set forth for claim 1: having a mesh density map, as in Hunter, would benefit the Pi teachings by allowing the mesh density information to be included in the data, and this is the application of a known technique (having mesh density data and combining it with other data to create a mesh) to yield predictable results.

Regarding claim 19, Pi teaches:

A non-transitory computer-readable medium having stored thereon computer-executable instructions which, when executed by a system, cause the system to execute operations (Pi [Pg 1 Par 2] The invention relates to the electronic technology field, concretely, claims a three-dimensional human face re-establishing method, device, apparatus and computer-readable storage medium.), the operations comprising: acquiring a base three-dimensional (3D) mesh of an object; acquiring a photometric surface normal corresponding to the object; computing a base normal map based on vertex normal information included in the base 3D mesh; determining a correction on the photometric surface normal based on the base normal map and the mesh density map, wherein the correction corresponds to an amount of rotation on the photometric surface normal; and generating a corrected photometric surface normal based on an application of the correction. Each of these operations is taught by Pi as cited above for claims 1 and 10 (Pi [Pg 3 Par 1]; [ABSTRACT]; [Pg 2 Par 9]; [Pg 7 Par 1]; [Pg 8 Par 10 – Pg 9 Par 2], including Figure 7).
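The "base normal map based on vertex normal information" limitation presupposes per-vertex normals derived from the base mesh. A standard way to obtain them, shown here only as background (the function name is hypothetical and this is not the application's disclosed method), is to average the normals of the faces adjacent to each vertex:

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Per-vertex normals as the area-weighted average of adjacent face
    normals; a base normal map would typically rasterize these normals
    into the mesh's UV space."""
    v = np.asarray(vertices, dtype=float)
    normals = np.zeros_like(v)
    for i0, i1, i2 in faces:
        # The cross product's length is proportional to the triangle's
        # area, so summing raw cross products area-weights the average.
        fn = np.cross(v[i1] - v[i0], v[i2] - v[i0])
        for i in (i0, i1, i2):
            normals[i] += fn
    lens = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.clip(lens, 1e-12, None)

# Example: a single triangle lying in the z = 0 plane
vn = vertex_normals([[0, 0, 0], [1, 0, 0], [0, 1, 0]], [(0, 1, 2)])
```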
Pi fails to teach: computing a mesh density map based on the base 3D mesh.

Hunter teaches: computing a mesh density map based on the base 3D mesh (Hunter [0009], quoted above for claim 1).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi with Hunter, for the same reasons set forth for claims 1 and 10.

Claims 2-4 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Pi et al. (CN 108898665) in view of Hunter et al. (US 20220130098) and Gao et al. (CN 105957154).

Regarding claim 2, Pi and Hunter teach: The system according to claim 1, wherein the circuitry is further configured to:

acquire a photometric scan of the object, wherein the photometric scan includes a plurality of images of the object from at least one of a plurality of viewpoints in a 3D space (Pi [Pg 2 Par 2] photometric stereo technique is a driven reconstruction technique is widely used, is initially by restoring shape (Shape from Shading, SFS) algorithm developed to from Woodham to 1980 years, is a three-dimensional reconstruction method using the same object at the same position and a plurality of images under different light sources to recover the shape of the surface of the object.); and

reconstruct a 3D mesh based on the plurality of images (Pi [Pg 3 Par 1] step 103, performing global outline distorting the reference model based on feature point pixel coordinate of the calibration to obtain the initial overall outline deformed face model, the reference model into a known three-dimensional face model;).

Pi and Hunter fail to teach: refine the 3D mesh based on an input from a 3D artist, wherein the refined 3D mesh corresponds to the acquired base 3D mesh.

Gao teaches: refine the 3D mesh based on an input from a 3D artist, wherein the refined 3D mesh corresponds to the acquired base 3D mesh (Gao [Pg 5 Par 12] inputs have the same mesh topology model library, the model library for editing three-dimensional model by three-dimensional scanning technology or artist;).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Gao. Refining the mesh based on input from an artist, as in Gao, would benefit the Pi and Hunter teachings by allowing the mesh information to be adjusted manually. Additionally, this is the application of a known technique (manually adjusting mesh data) to yield predictable results.

Regarding claim 3, Pi, Hunter, and Gao teach: The system according to claim 2, wherein the circuitry is further configured to generate the photometric surface normal based on a fitment of a surface reflectance model to the plurality of images (Pi [Pg 7 Par 9] In step 104, photometric normal technology can high-precisely estimated surface normal and the surface normal can well reflect the surface detail, so step 105 can use the average curvature FORMULA Δx = -Hn reconstructing the three dimensional face surface.). The motivation to combine Pi and Hunter with Gao is the same as set forth for claim 2.

Regarding claim 4, Pi, Hunter, and Gao teach: The system according to claim 3, wherein an exposure of the object to a plurality of dynamic lighting conditions is in a duration of the acquisition of the photometric scan of the object (Pi [Pg 8 Par 12] In the embodiment, the claims based on photometric normal robustness of the three-dimensional face rebuilding technology combined with the photometric stereo technique and mesh deformation technique, using two kinds of method of advantages, it also avoids reconstruction of defects when face independently using a method, to improve the three-dimensional face rebuilding quality; moreover, the invention reduces the limitation of the input face reference image, allowing the face in the reference image with different lighting, different expressions, even the image of human face posture deflection, improves the applicability). The motivation to combine Pi and Hunter with Gao is the same as set forth for claim 2.

Regarding claim 11:
Pi and Hunter teach: The method according to claim 10, further comprising: receiving a photometric scan of the object that includes a plurality of images of the object captured from one or more viewpoints in a 3D space (Pi [Pg 2 Par 2] photometric stereo technique is a driven reconstruction technique is widely used, is initially by restoring shape (Shape from Shading, SFS) algorithm developed to from Woodham to 1980 years, is a three-dimensional reconstruction method using the same object at the same position and a plurality of images under different light sources to recover the shape of the surface of the object.); reconstructing a 3D mesh based on the plurality of images included in the photometric scan (Pi [Pg 3 Par 1] step 103, performing global outline distorting the reference model based on feature point pixel coordinate of the calibration to obtain the initial overall outline deformed face model, the reference model into a known three-dimensional face model;); and Pi and Hunter fail to teach: refining the 3D mesh based on an input from a 3D artist, wherein the refined 3D mesh corresponds to the acquired base 3D mesh (Gao [Pg 5 Par 12] inputs have the same mesh topology model library, the model library for editing three-dimensional model by three-dimensional scanning technology or artist;). Gao teaches: refining the 3D mesh based on an input from a 3D artist, wherein the refined 3D mesh corresponds to the acquired base 3D mesh (Gao [Pg 5 Par 12] inputs have the same mesh topology model library, the model library for editing three-dimensional model by three-dimensional scanning technology or artist;). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Gao. Refining the mesh based on input from an artist, as in Gao, would benefit the Pi and Hunter teachings by allowing for the mesh information to be adjusted by someone. 
Additionally, this is the application of a known technique, manually adjusting mesh data, to yield predictable results. Regarding claim 12. Pi, Hunter, and Gao teach: The method according to claim 11, Further comprising generating the photometric surface normal based on a fitment of a surface reflectance model to the plurality of images (Pi [Pg 7 Par 9] In step 104, photometric normal technology can high-precisely estimated surface normal and the surface normal can well reflect the surface detail, so step 105 can use the average curvature FORMULA Δ x=-Hn reconstructing the three dimensional face surface.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Gao. Refining the mesh based on input from an artist, as in Gao, would benefit the Pi and Hunter teachings by allowing for the mesh information to be adjusted by someone. Additionally, this is the application of a known technique, manually adjusting mesh data, to yield predictable results. Regarding claim 13. Pi, Hunter, and Gao teach: The method according to claim 11, wherein the object is exposed to dynamic lighting conditions throughout a duration of acquisition of the photometric scan (Pi [Pg 8 par 2] In the embodiment, the claims based on photometric normal robustness of the three-dimensional face rebuilding technology combined with the photometric stereo technique and mesh deformation technique, using two kinds of method of advantages, it also avoids reconstruction of defects when face independently using a method, to improve the three-dimensional face rebuilding quality; moreover, the invention reduces the limitation of the input face reference image, allowing the face in the reference image with different lighting, different expressions, even the image of human face posture deflection, improves the applicability). 
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Gao. Refining the mesh based on input from an artist, as in Gao, would benefit the Pi and Hunter teachings by allowing for the mesh information to be adjusted by someone. Additionally, this is the application of a known technique, manually adjusting mesh data, to yield predictable results. Claim(s) 5, 6, 14, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pi et al. (CN 108898665) in view of Hunter et al. (US 20220130098) and Rhyu et al. (US 20220028119). Regarding claim 5. Pi and Hunter teach: The system according to claim 1, Pi and Hunter fail to teach: wherein the mesh density map includes a plurality of points, each of the plurality of points represents a local vertex density of a corresponding vertex of the base 3D mesh (Rhyu [0007] In compressing mesh content by using V-PCC, vertexes included in the mesh content may correspond to points of a point cloud. However, in the case of mesh content, spaces between vertexes are padded with triangular planes, whereas, in point cloud data, spaces between points are padded with points. Accordingly, a density of vertexes included in mesh content is different from that of points included in point cloud data.). Rhyu teaches: wherein the mesh density map includes a plurality of points, each of which represents a local vertex density of a corresponding vertex of the base 3D mesh (Rhyu [0007] In compressing mesh content by using V-PCC, vertexes included in the mesh content may correspond to points of a point cloud. However, in the case of mesh content, spaces between vertexes are padded with triangular planes, whereas, in point cloud data, spaces between points are padded with points. Accordingly, a density of vertexes included in mesh content is different from that of points included in point cloud data.). 
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Rhyu. Applying vertex data in the mesh data, as in Rhyu, would benefit the Pi and Hunter teachings by allowing for the vertices to have their own data and density information regarding that data. Additionally, this is the application of a known technique, having vertex data, to yield predictable results. Regarding claim 6. Pi and Hunter teach: The system according to claim 1, Pi and Hunter fail to teach: wherein the circuitry is further configured to compute the base normal map based on vertex location information and the vertex location information is associated with vertices of the base 3D mesh (Rhyu [0002] A point cloud, which is a method of representing 3-dimensional (3D) data, refers to a group of a massive amount of points, and a massive amount of 3D data that can be represented as a point cloud. That is, a point cloud refers to samples extracted in a process of obtaining a 3D model. [0003] A point cloud is a value that can be compared with a 2-dimensional (2D) image, and also is a method of representing a point in a 3D space. A point cloud has a vector form that can include both location coordinates and colors. For example, a point cloud can be represented as (x, y, z, R, G, B). A point cloud forming a spatial configuration by collecting numerous colors and location data converges on more specific data as the density thereof is higher, thereby having significance as a 3D model. [0082] The method of compressing the mesh content, according to an embodiment of the disclosure, may include operation 360 of translating face commands of the mesh content based on the occupancy map. Translating the face commands of the mesh content based on the occupancy map may mean representing the face commands based on a bitmap of the occupancy map. 
For example, locations of vertexes may be determined with reference to the occupancy map, and vertex information may be signaled through face commands connecting the individual vertexes by a decoder. The signaled vertex information may be used to reconstruct V-PCC compressed mesh content.). Rhyu teaches: wherein the circuitry is further configured to compute the base normal map based on vertex location information associated with vertices of the base 3D mesh (Rhyu [0002] A point cloud, which is a method of representing 3-dimensional (3D) data, refers to a group of a massive amount of points, and a massive amount of 3D data that can be represented as a point cloud. That is, a point cloud refers to samples extracted in a process of obtaining a 3D model. [0003] A point cloud is a value that can be compared with a 2-dimensional (2D) image, and also is a method of representing a point in a 3D space. A point cloud has a vector form that can include both location coordinates and colors. For example, a point cloud can be represented as (x, y, z, R, G, B). A point cloud forming a spatial configuration by collecting numerous colors and location data converges on more specific data as the density thereof is higher, thereby having significance as a 3D model. [0082] The method of compressing the mesh content, according to an embodiment of the disclosure, may include operation 360 of translating face commands of the mesh content based on the occupancy map. Translating the face commands of the mesh content based on the occupancy map may mean representing the face commands based on a bitmap of the occupancy map. For example, locations of vertexes may be determined with reference to the occupancy map, and vertex information may be signaled through face commands connecting the individual vertexes by a decoder. The signaled vertex information may be used to reconstruct V-PCC compressed mesh content.). 
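The claim-6 limitation, computing a base normal map from vertex location information alone, corresponds to a standard mesh operation. As a hedged sketch (the function name and the area-weighting choice are illustrative, not taken from the application or the cited references): per-face normals come from edge cross products and are accumulated onto each face's vertices.

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Area-weighted vertex normals from vertex positions alone.

    `vertices`: (N, 3) float array; `faces`: (M, 3) int index array.
    The cross product of two face edges is the face normal scaled by
    twice the face area, so summing unnormalized cross products gives
    an area-weighted average at each vertex.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    face_n = np.cross(v1 - v0, v2 - v0)            # per-face normal * 2*area
    normals = np.zeros_like(vertices)
    for i in range(3):                             # scatter-add onto vertices
        np.add.at(normals, faces[:, i], face_n)
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1.0)

# One triangle in the z=0 plane: every vertex normal is +z.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
faces = np.array([[0, 1, 2]])
n = vertex_normals(verts, faces)
```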
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Rhyu. Applying vertex data in the mesh data, as in Rhyu, would benefit the Pi and Hunter teachings by allowing for the vertices to have their own data and density information regarding that data. Additionally, this is the application of a known technique, having vertex data, to yield predictable results. Regarding claim 14. Pi and Hunter teach: The method according to claim 10, Pi and Hunter fail to teach: wherein the mesh density map includes a plurality of points, each of which represents a local vertex density of a corresponding vertex of the base 3D mesh (Rhyu [0007] In compressing mesh content by using V-PCC, vertexes included in the mesh content may correspond to points of a point cloud. However, in the case of mesh content, spaces between vertexes are padded with triangular planes, whereas, in point cloud data, spaces between points are padded with points. Accordingly, a density of vertexes included in mesh content is different from that of points included in point cloud data.). Rhyu teaches: wherein the mesh density map includes a plurality of points, each of which represents a local vertex density of a corresponding vertex of the base 3D mesh (Rhyu [0007] In compressing mesh content by using V-PCC, vertexes included in the mesh content may correspond to points of a point cloud. However, in the case of mesh content, spaces between vertexes are padded with triangular planes, whereas, in point cloud data, spaces between points are padded with points. Accordingly, a density of vertexes included in mesh content is different from that of points included in point cloud data.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Rhyu. 
Applying vertex data in the mesh data, as in Rhyu, would benefit the Pi and Hunter teachings by allowing for the vertices to have their own data and density information regarding that data. Additionally, this is the application of a known technique, having vertex data, to yield predictable results. Regarding claim 15. Pi and Hunter teach: The method according to claim 10, Pi and Hunter fail to teach: further comprising computing the base normal map based on vertex location information associated with vertices of the base 3D mesh (Rhyu [0002] A point cloud, which is a method of representing 3-dimensional (3D) data, refers to a group of a massive amount of points, and a massive amount of 3D data that can be represented as a point cloud. That is, a point cloud refers to samples extracted in a process of obtaining a 3D model. [0003] A point cloud is a value that can be compared with a 2-dimensional (2D) image, and also is a method of representing a point in a 3D space. A point cloud has a vector form that can include both location coordinates and colors. For example, a point cloud can be represented as (x, y, z, R, G, B). A point cloud forming a spatial configuration by collecting numerous colors and location data converges on more specific data as the density thereof is higher, thereby having significance as a 3D model. [0082] The method of compressing the mesh content, according to an embodiment of the disclosure, may include operation 360 of translating face commands of the mesh content based on the occupancy map. Translating the face commands of the mesh content based on the occupancy map may mean representing the face commands based on a bitmap of the occupancy map. For example, locations of vertexes may be determined with reference to the occupancy map, and vertex information may be signaled through face commands connecting the individual vertexes by a decoder. The signaled vertex information may be used to reconstruct V-PCC compressed mesh content.). 
Rhyu teaches: further comprising computing the base normal map based on vertex location information associated with vertices of the base 3D mesh (Rhyu [0002] A point cloud, which is a method of representing 3-dimensional (3D) data, refers to a group of a massive amount of points, and a massive amount of 3D data that can be represented as a point cloud. That is, a point cloud refers to samples extracted in a process of obtaining a 3D model. [0003] A point cloud is a value that can be compared with a 2-dimensional (2D) image, and also is a method of representing a point in a 3D space. A point cloud has a vector form that can include both location coordinates and colors. For example, a point cloud can be represented as (x, y, z, R, G, B). A point cloud forming a spatial configuration by collecting numerous colors and location data converges on more specific data as the density thereof is higher, thereby having significance as a 3D model. [0082] The method of compressing the mesh content, according to an embodiment of the disclosure, may include operation 360 of translating face commands of the mesh content based on the occupancy map. Translating the face commands of the mesh content based on the occupancy map may mean representing the face commands based on a bitmap of the occupancy map. For example, locations of vertexes may be determined with reference to the occupancy map, and vertex information may be signaled through face commands connecting the individual vertexes by a decoder. The signaled vertex information may be used to reconstruct V-PCC compressed mesh content.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Rhyu. Applying vertex data in the mesh data, as in Rhyu, would benefit the Pi and Hunter teachings by allowing for the vertices to have their own data and density information regarding that data. 
Additionally, this is the application of a known technique, having vertex data, to yield predictable results. Claim(s) 7, 16, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pi et al. (CN 108898665) in view of Hunter et al. (US 20220130098), Rhyu et al. (US 20220028119) and Ghosh et al. (US 20240135645). Regarding claim 7. Pi and Hunter teach: The system according to claim 1, Pi and Hunter fail to teach: wherein the correction on the photometric surface normal is associated with a removal of inconsistent low-frequency information in the photometric surface normal (Rhyu [0044] For example, for packing, resizing, transforming, rotating and/or re-sampling (for example, up-sampling, down-sampling, or differential sampling according to locations in a region) of a region, etc. may be performed.). Rhyu and Ghosh teach: wherein the correction corresponds to an amount of rotation that is applicable on the photometric surface normal for a removal of inconsistent low-frequency information included in the photometric surface normal (Rhyu [0044] For example, for packing, resizing, transforming, rotating and/or re-sampling (for example, up-sampling, down-sampling, or differential sampling according to locations in a region) of a region, etc. may be performed.) (Ghosh [0027] Blending may include any techniques known in the art of multi-view texture capturing, such as, for example, averaging the low frequency response of two or more maps (or images), and embossing (superposing) high frequency responses (high pass filtered) from a single map (or image) corresponding to a view direction closest to the mesh normal at that UV coordinate. The low frequency response of a map (or image) may be obtained by blurring the map (or image), for example a Gaussian blurring. High frequency responses may be obtained by subtracting the low-frequency response of a map (or image) from that map (or image).). 
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Rhyu. Applying vertex data in the mesh data, as in Rhyu, would benefit the Pi and Hunter teachings by allowing for the vertices to have their own data and density information regarding that data. Additionally, this is the application of a known technique, having vertex data, to yield predictable results. Regarding claim 16. Pi and Hunter teach: The method according to claim 10, Pi and Hunter fail to teach: wherein the correction corresponds to an amount of rotation that is applicable on the photometric surface normal for a removal of inconsistent low-frequency information included in the photometric surface normal (Rhyu [0044] For example, for packing, resizing, transforming, rotating and/or re-sampling (for example, up-sampling, down-sampling, or differential sampling according to locations in a region) of a region, etc. may be performed.). Rhyu and Ghosh teach: wherein the correction corresponds to an amount of rotation that is applicable on the photometric surface normal for a removal of inconsistent low-frequency information included in the photometric surface normal (Rhyu [0044] For example, for packing, resizing, transforming, rotating and/or re-sampling (for example, up-sampling, down-sampling, or differential sampling according to locations in a region) of a region, etc. may be performed.) (Ghosh [0027] Blending may include any techniques known in the art of multi-view texture capturing, such as, for example, averaging the low frequency response of two or more maps (or images), and embossing (superposing) high frequency responses (high pass filtered) from a single map (or image) corresponding to a view direction closest to the mesh normal at that UV coordinate.
The low frequency response of a map (or image) may be obtained by blurring the map (or image), for example a Gaussian blurring. High frequency responses may be obtained by subtracting the low-frequency response of a map (or image) from that map (or image).). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Rhyu and Ghosh. Rotating around and finding things based on the low-frequency information, as in Rhyu and Ghosh, would benefit the Pi and Hunter teachings by allowing for adjusting and fixing the data. Additionally, this is the application of a known technique, rotating around and finding things based on the low-frequency information, to yield predictable results. Regarding claim 20. Pi and Hunter teach: The non-transitory computer-readable medium according to claim 19, Pi and Hunter fail to teach: wherein the correction corresponds to an amount of rotation that is applicable on the photometric surface normal for a removal of inconsistent low-frequency information included in the photometric surface normal (Rhyu [0044] For example, for packing, resizing, transforming, rotating and/or re-sampling (for example, up-sampling, down-sampling, or differential sampling according to locations in a region) of a region, etc. may be performed.) (Ghosh [0027] Blending may include any techniques known in the art of multi-view texture capturing, such as, for example, averaging the low frequency response of two or more maps (or images), and embossing (superposing) high frequency responses (high pass filtered) from a single map (or image) corresponding to a view direction closest to the mesh normal at that UV coordinate. The low frequency response of a map (or image) may be obtained by blurring the map (or image), for example a Gaussian blurring. 
High frequency responses may be obtained by subtracting the low-frequency response of a map (or image) from that map (or image).). Rhyu and Ghosh teach: wherein the correction corresponds to an amount of rotation that is applicable on the photometric surface normal for a removal of inconsistent low-frequency information included in the photometric surface normal (Rhyu [0044] For example, for packing, resizing, transforming, rotating and/or re-sampling (for example, up-sampling, down-sampling, or differential sampling according to locations in a region) of a region, etc. may be performed.) (Ghosh [0027] Blending may include any techniques known in the art of multi-view texture capturing, such as, for example, averaging the low frequency response of two or more maps (or images), and embossing (superposing) high frequency responses (high pass filtered) from a single map (or image) corresponding to a view direction closest to the mesh normal at that UV coordinate. The low frequency response of a map (or image) may be obtained by blurring the map (or image), for example a Gaussian blurring. High frequency responses may be obtained by subtracting the low-frequency response of a map (or image) from that map (or image).). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Rhyu and Ghosh. Rotating around and finding things based on the low-frequency information, as in Rhyu and Ghosh, would benefit the Pi and Hunter teachings by allowing for adjusting and fixing the data. Additionally, this is the application of a known technique, rotating around and finding things based on the low-frequency information, to yield predictable results. Claim(s) 8, 9, 17, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pi et al. (CN 108898665) in view of Hunter et al. (US 20220130098), Rhyu et al. (US 20220028119) and Ghosh et al. (US 20240135645). 
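The low/high-frequency split that Ghosh [0027] describes — the low-frequency response obtained by blurring a map, the high-frequency response obtained by subtracting that blur from the original — can be sketched as follows. A separable box blur is used here as an assumed stand-in for the Gaussian blur the reference names, to keep the sketch dependency-free.

```python
import numpy as np

def high_frequency(normal_map, k=5):
    """High-frequency residual of a map: the map minus its blurred copy.

    `normal_map` is an (H, W, 3) float array. The blur is a separable
    box filter of width `k`, applied along rows and then columns.
    """
    kernel = np.ones(k) / k
    low = normal_map.copy()
    for axis in (0, 1):                     # separable blur: columns, then rows
        low = np.apply_along_axis(
            lambda line: np.convolve(line, kernel, mode="same"), axis, low)
    return normal_map - low                 # high-pass residual

# A constant map has no detail: away from the borders (where 'same'-mode
# convolution shrinks the window), the residual is ~0.
flat = np.ones((8, 8, 3))
hf = high_frequency(flat)
```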
Regarding claim 8. Pi and Hunter teach: The system according to claim 1, Pi and Hunter fail to teach: wherein the circuitry is further configured to: extract a UV coordinate map of the base 3D mesh (Ghosh [0233] A photometric normal map PN.sub.UV is then generated by superposing the high-frequency normal map HFN.sub.UV with a geometric normal at each UV-coordinate. Geometric normal at a UV-coordinate are obtained by interpolation of the normal corresponding to the surrounding vertices. This process may sometimes be described as “embossing” the high frequency details of the tangent normal map TN.sub.UV onto the geometric normal from the mesh 20.); and convert the corrected photometric surface normal into a tangent map based on the UV coordinate map (Ghosh [0233] A photometric normal map PN.sub.UV is then generated by superposing the high-frequency normal map HFN.sub.UV with a geometric normal at each UV-coordinate. Geometric normal at a UV-coordinate are obtained by interpolation of the normal corresponding to the surrounding vertices. This process may sometimes be described as “embossing” the high frequency details of the tangent normal map TN.sub.UV onto the geometric normal from the mesh 20. [0011] The method also includes determining a tangent normal map corresponding to the target region of the object surface based on high-pass filtering each object image of the second subset. The method also includes storing and/or outputting the mesh, the diffuse map, the specular map and the tangent normal map.). Ghosh teaches: wherein the circuitry is further configured to: extract a UV coordinate map of the base 3D mesh (Ghosh [0233] A photometric normal map PN.sub.UV is then generated by superposing the high-frequency normal map HFN.sub.UV with a geometric normal at each UV-coordinate. Geometric normal at a UV-coordinate are obtained by interpolation of the normal corresponding to the surrounding vertices. This process may sometimes be described as “embossing” the high frequency details of the tangent normal map TN.sub.UV onto the geometric normal from the mesh 20.); and convert the corrected photometric surface normal into a tangent map based on the UV coordinate map (Ghosh [0233] A photometric normal map PN.sub.UV is then generated by superposing the high-frequency normal map HFN.sub.UV with a geometric normal at each UV-coordinate. Geometric normal at a UV-coordinate are obtained by interpolation of the normal corresponding to the surrounding vertices. This process may sometimes be described as “embossing” the high frequency details of the tangent normal map TN.sub.UV onto the geometric normal from the mesh 20. [0011] The method also includes determining a tangent normal map corresponding to the target region of the object surface based on high-pass filtering each object image of the second subset. The method also includes storing and/or outputting the mesh, the diffuse map, the specular map and the tangent normal map.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Ghosh. Using a UV coordinate map, as in Ghosh, would benefit the Pi and Hunter teachings by allowing for adjusting parts based on the UV coordinate map. Additionally, this is the application of a known technique, using a UV coordinate map, to yield predictable results. Regarding claim 9. Pi, Hunter, and Ghosh teach: The system according to claim 8, wherein the circuitry is further configured to apply the tangent map to the base 3D mesh and generate, based on the application of the tangent map to the base 3D mesh, a 3D mesh that carries texture details associated with the tangent map (Ghosh [0233] A photometric normal map PN.sub.UV is then generated by superposing the high-frequency normal map HFN.sub.UV with a geometric normal at each UV-coordinate.
Geometric normal at a UV-coordinate are obtained by interpolation of the normal corresponding to the surrounding vertices. This process may sometimes be described as “embossing” the high frequency details of the tangent normal map TN.sub.UV onto the geometric normal from the mesh 20. [0011] The method also includes determining a tangent normal map corresponding to the target region of the object surface based on high-pass filtering each object image of the second subset. The method also includes storing and/or outputting the mesh, the diffuse map, the specular map and the tangent normal map.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Ghosh. Using a UV coordinate map, as in Ghosh, would benefit the Pi and Hunter teachings by allowing for adjusting parts based on the UV coordinate map. Additionally, this is the application of a known technique, using a UV coordinate map, to yield predictable results. Regarding claim 17. Pi and Hunter teach: The method according to claim 10, Pi and Hunter fail to teach: Further comprising: extracting a UV coordinate map of the base 3D mesh (Ghosh [0233] A photometric normal map PN.sub.UV is then generated by superposing the high-frequency normal map HFN.sub.UV with a geometric normal at each UV-coordinate. Geometric normal at a UV-coordinate are obtained by interpolation of the normal corresponding to the surrounding vertices. This process may sometimes be described as “embossing” the high frequency details of the tangent normal map TN.sub.UV onto the geometric normal from the mesh 20.); and converting the corrected photometric surface normal into a tangent map based on the UV coordinate map (Ghosh [0233] A photometric normal map PN.sub.UV is then generated by superposing the high-frequency normal map HFN.sub.UV with a geometric normal at each UV-coordinate. 
Geometric normal at a UV-coordinate are obtained by interpolation of the normal corresponding to the surrounding vertices. This process may sometimes be described as “embossing” the high frequency details of the tangent normal map TN.sub.UV onto the geometric normal from the mesh 20. [0011] The method also includes determining a tangent normal map corresponding to the target region of the object surface based on high-pass filtering each object image of the second subset. The method also includes storing and/or outputting the mesh, the diffuse map, the specular map and the tangent normal map.). Ghosh teaches: Further comprising: extracting a UV coordinate map of the base 3D mesh (Ghosh [0233] A photometric normal map PN.sub.UV is then generated by superposing the high-frequency normal map HFN.sub.UV with a geometric normal at each UV-coordinate. Geometric normal at a UV-coordinate are obtained by interpolation of the normal corresponding to the surrounding vertices. This process may sometimes be described as “embossing” the high frequency details of the tangent normal map TN.sub.UV onto the geometric normal from the mesh 20.); and converting the corrected photometric surface normal into a tangent map based on the UV coordinate map (Ghosh [0233] A photometric normal map PN.sub.UV is then generated by superposing the high-frequency normal map HFN.sub.UV with a geometric normal at each UV-coordinate. Geometric normal at a UV-coordinate are obtained by interpolation of the normal corresponding to the surrounding vertices. This process may sometimes be described as “embossing” the high frequency details of the tangent normal map TN.sub.UV onto the geometric normal from the mesh 20. [0011] The method also includes determining a tangent normal map corresponding to the target region of the object surface based on high-pass filtering each object image of the second subset. 
The method also includes storing and/or outputting the mesh, the diffuse map, the specular map and the tangent normal map.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Ghosh. Using a UV coordinate map, as in Ghosh, would benefit the Pi and Hunter teachings by allowing for adjusting parts based on the UV coordinate map. Additionally, this is the application of a known technique, using a UV coordinate map, to yield predictable results. Regarding claim 18. Pi, Hunter, and Ghosh teach: The system according to claim 8, Further comprising applying the tangent map to the base 3D mesh to generate a 3D mesh that carries texture details associated with the tangent map (Ghosh [0233] A photometric normal map PN.sub.UV is then generated by superposing the high-frequency normal map HFN.sub.UV with a geometric normal at each UV-coordinate. Geometric normal at a UV-coordinate are obtained by interpolation of the normal corresponding to the surrounding vertices. This process may sometimes be described as “embossing” the high frequency details of the tangent normal map TN.sub.UV onto the geometric normal from the mesh 20. [0011] The method also includes determining a tangent normal map corresponding to the target region of the object surface based on high-pass filtering each object image of the second subset. The method also includes storing and/or outputting the mesh, the diffuse map, the specular map and the tangent normal map.). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Pi and Hunter with Ghosh. Using a UV coordinate map, as in Ghosh, would benefit the Pi and Hunter teachings by allowing for adjusting parts based on the UV coordinate map. 
Additionally, this is the application of a known technique, using a UV coordinate map, to yield predictable results. Response to Arguments Applicant's arguments filed 12/26/2025 have been fully considered but they are not persuasive. The applicant’s arguments state that “Pi and Hunter fail to teach: wherein the correction corresponds to an amount of rotation that is applicable on the photometric surface normal for a removal of inconsistent low-frequency information included in the photometric surface normal ... Rhyu and Ghosh teach: wherein the correction corresponds to an amount of rotation that is applicable on the photometric surface normal for a removal of inconsistent low-frequency information included in the photometric surface normal (Rhyu [0044] For example, for packing, resizing, transforming, rotating and/or re-sampling (for example, up-sampling, down-sampling, or differential sampling according to locations in a region) of a region, etc. may be performed.) Rhyu describes "[i]n operation 120, the transmitter 100 may project the 3D image in a space on a 2D plane to generate a 2D image ... [t]o project a 3D image to a 2D image, any one of equirectangular projection (ERP), octahedron projection (OHP), cylinder projection, cube projection, and various projections which are usable in the related technical field may be used." See Rhyu at [0042-0043]. Further, Rhyu describes "[i]n operation 130, the transmitter 100 may pack the projected 2D image. Packing may mean changing a location, size, and direction of at least a part of a plurality of regions constructing a projected 2D image to generate a new 2D image (that is, a packed 2D image). For example, for packing, resizing, transforming, rotating and/or re-sampling (for example, up-sampling, down-sampling, or differential sampling according to locations in a region) of a region, etc. may be performed." See Rhyu at [0044].
Rhyu describes that the transmitter packs the projected 2D image by rotating of the regions of the 2D image. However, Rhyu does not teach or suggest correction on a photometric surface normal of the 3D object. Further, Rhyu does not teach or suggest that a correction on the photometric surface normal corresponds to an amount of the rotation on the photometric surface normal. Furthermore, the Examiner has failed to provide "articulated reasoning with some rational underpinning to support the legal conclusion of obviousness" in the detailed manner described in KSR. Rather, the Examiner merely provides cursory statements in an attempt to support the claim rejections. In the present instance, Rhyu describes a method of compressing mesh content representing a 3-dimensional (3D) object. Further, Rhyu merely describes that the transmitter packs the projected 2D image by performing a rotation of the regions of the projected 2D image. In contrast, amended independent” However, the applicant’s arguments are unpersuasive because it is well known in the art that correction refers to manipulating a 3d object to closer match the projection, this includes: scaling, moving, rotating etc. As well, in Pi, figure 7 shows different angles of the 3d model which appear to be rotated. Pi also teaches: [Pg 8 Par 10 – Pg 9 Par 2] (“As shown in FIG. 7, there are three different face image sets of the LDiCaprio using the experimental result of the present embodiment, from the results it can be seen that the solution based on no specific constraint face image set can realize three-dimensional reconstruction of high quality and has robustness. Also, the picture of each image 3-7 can be color, or grey.
In the embodiment, the claims based on photometric normal robustness of the three-dimensional face rebuilding technology combined with the photometric stereo technique and mesh deformation technique, using two kinds of method of advantages, it also avoids reconstruction of defects when face independently using a method, to improve the three-dimensional face rebuilding quality”). In figure 7 of Pi it is visible that the image is “corrected” by having a different angle seen or rotated. It is obvious to one skilled in the art that a 3d model can be rotated to different degrees as seen in figure 7. Therefore, the 35 U.S.C. 103 rejection of claims 1-20 stand. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENIS VASILIY MINKO whose telephone number is (571)270-5226. The examiner can normally be reached Monday-Thursday 8:30-6:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DENIS VASILIY MINKO/Examiner, Art Unit 2612 /Said Broome/Supervisory Patent Examiner, Art Unit 2612

Prosecution Timeline

Jan 11, 2024
Application Filed
Sep 22, 2025
Non-Final Rejection — §103
Dec 26, 2025
Response Filed
Mar 17, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597195
METHOD FOR GENERATING PHOTOGRAPHED IMAGE DATA USING VIRTUAL ORGANOID
2y 5m to grant Granted Apr 07, 2026
Patent 12579732
Face-Oriented Geometry Streaming
2y 5m to grant Granted Mar 17, 2026
Patent 12518497
MODEL ALIGNMENT METHOD
2y 5m to grant Granted Jan 06, 2026
Patent 12518641
SYSTEMS AND METHODS FOR GENERATING AVIONIC DISPLAYS INDICATING WAKE TURBULENCE
2y 5m to grant Granted Jan 06, 2026
Patent 12462476
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR GENERATING THREE-DIMENSIONAL MODEL
2y 5m to grant Granted Nov 04, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
62%
Grant Probability
79%
With Interview (+16.7%)
2y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
