Prosecution Insights
Last updated: April 19, 2026
Application No. 18/425,130

Systems and Methods for Improving Compression of Normal Maps

Status: Final Rejection (§103)
Filed: Jan 29, 2024
Examiner: SALVUCCI, MATTHEW D
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Six Impossible Things Before Breakfast Limited
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72%, above average (348 granted / 485 resolved; +9.8% vs TC avg)
Interview Lift: strong, +28.5% (resolved cases with an interview vs. without)
Typical Timeline: 3y average prosecution; 17 applications currently pending
Career History: 502 total applications across all art units
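
As a cross-check, the headline rate falls directly out of the career totals above: 348 granted / 485 resolved ≈ 0.718, which rounds to the 72% career allow rate shown.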

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§103: 60.8% (+20.8% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Tech Center average estimates are shown for comparison. Based on career data from 485 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Applicant's amendments filed on 30 January 2026 have been entered. Claims 1, 7, 13, 15-17, 19, and 20 have been amended. No claims have been canceled. No claims have been added. Claims 1-20 are still pending in this application, with claims 1, 7, 13, and 17 being independent.

Response to Arguments

Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on every reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-11, and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US Pub. 2017/0039731), hereinafter Liu, in view of Rump (US Pub. 2023/0237738), and further in view of Prasad et al. (US Pub. 2023/0085156), hereinafter Prasad.

Regarding claim 1, Liu discloses a computer-implemented method of compressing normal maps using an integrating conversion, the method comprising: applying an integrating conversion to a normal map, wherein each pixel in the integrated image is represented as floating-point numbers (Paragraph [0022]: where (xp, yp, zp) are the coordinates in the units (such as meters) of the content (or object or feature point) in the image and for every pixel or point in the image. To perform the conventional methods, the parameters for the equation (1) h(ah, bh, ch, dh) is first converted into a Polar coordinate system representation: h(ah, bh, ch, dh)=h(cos θ sin φ, sin θ sin φ, cos φ, dh), the parameter space (θ, φ, dh) is quantized, and the system effectively searches the parameter space for every possible combination of (θ, φ, dh), where θ is the angle of the normal vector (ah, bh, ch) on the xy plane and φ is the angle between the xy plane and the normal vector (ah, bh, ch) in z direction. θ ranges from 0 to 360 degrees and φ ranges from 0 to 180 degrees. Assuming the target application operates within a 5 meter range, and hence dh ranges from 0 to 5, 5 degrees is used for both θ and φ, then 0.05 meters for dh to have reasonable plane detection results. The final plane hypotheses are those that have sufficient pixel support. A pixel p(xp, yp, zp) supports a plane hypothesis: h(ah, bh, ch, dh)…where δ is a threshold to account for depth noise. Since {ah, bh, ch, dh} are floating point numbers, in practice, the parameter space is quantized with large bin size to reduce the search time, but even with quantization, the number of hypotheses is still large); and compressing the integrated image using a compression method (Paragraph [0089]: image processing system 1000 may have one or more processors 1020 which may include a dedicated image signal processor (ISP) 1022 such as the Intel Atom, memory stores 1024, one or more displays 1028 to provide images 1030, a coder 1032, and antenna 1026. In one example implementation, the image processing system 1000 may have the display 1028, at least one processor 1020 communicatively coupled to the display, and at least one memory 1024 communicatively coupled to the processor. A coder 1032, which may be an encoder, decoder, or both, also may be provided. As an encoder 1032 and antenna 1034 may be provided to compress the modified image date for transmission to other devices that may display or store the image. It will be understood that the image processing system 1000 also may include a decoder (or encoder 1032 may include a decoder) to receive and decode image data for processing by the system 1000. Otherwise, the processed image 1030 may be displayed on display 1028 or stored in memory 1024. As illustrated, any of these components may be capable of communication with one another and/or communication with portions of logic modules 1004 and/or imaging device 1002. Thus, processors 1020 may be communicatively coupled to both the image device 1002 and the logic modules 1004 for operating those components. By one approach, although image processing system 1000, as shown in FIG. 10, may include one particular set of blocks or actions associated with particular components or modules, these blocks or actions may be associated with different components or modules than the particular component or module illustrated here).

Liu does not explicitly disclose obtaining a normal map for a texture used to render a three-dimensional model in a virtual scene. However, Rump teaches rendering of models including texturing (Abstract; Paragraph [0098]), further comprising obtaining a normal map for a texture used to render a three-dimensional model in a virtual scene (Paragraph [0098]: rendering the scene comprising the at least one virtual object based on the first and second sets of appearance attributes and the three-dimensional geometry data; Paragraph [0128]: Texture as understood in the present disclosure includes phenomena like coarseness, sparkle, and macroscopic variations of surface topography. Texture can be described by “texture attributes”. In the context of the present disclosure, the term “texture attributes” is to be understood broadly as encompassing any form of data that is able to quantify at least one aspect of texture. Examples of texture attributes include global texture attributes such as a global coarseness parameter or a global sparkle parameter. In some embodiments, texture attributes can include a normal map or a height map. In some embodiments, texture attributes can include image data. In some embodiments, the image data can be associated with particular combinations of illumination and viewing directions. In such embodiments, the texture attributes can comprise a plurality of sets of image data, each set of image data associated with a different combination of illumination and viewing directions. A “discrete texture table” is to be understood as relating to a collection of sets of texture attributes, preferably in the form of image data, each set of texture attributes associated with a different combination of illumination and viewing directions. In some embodiments of the present disclosure, images in a discrete texture table are generated from a set of source textures, and these images are accordingly called “destination textures”). Rump teaches that this will allow for enhanced realism of texture and scenes (Paragraph [0038]; Paragraph [0145]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu with the features of obtaining a normal map for a texture used to render a three-dimensional model in a virtual scene as taught by Rump so as to allow for enhanced realism as presented by Rump.

While Liu teaches applying the integrating conversion (Paragraph [0022]: where (xp, yp, zp) are the coordinates in the units (such as meters) of the content (or object or feature point) in the image and for every pixel or point in the image. To perform the conventional methods, the parameters for the equation (1) h(ah, bh, ch, dh) is first converted into a Polar coordinate system representation: h(ah, bh, ch, dh)=h(cos θ sin φ, sin θ sin φ, cos φ, dh), the parameter space (θ, φ, dh) is quantized, and the system effectively searches the parameter space for every possible combination of (θ, φ, dh), where θ is the angle of the normal vector (ah, bh, ch) on the xy plane and φ is the angle between the xy plane and the normal vector (ah, bh, ch) in z direction. θ ranges from 0 to 360 degrees and φ ranges from 0 to 180 degrees. Assuming the target application operates within a 5 meter range, and hence dh ranges from 0 to 5, 5 degrees is used for both θ and φ, then 0.05 meters for dh to have reasonable plane detection results. The final plane hypotheses are those that have sufficient pixel support. A pixel p(xp, yp, zp) supports a plane hypothesis: h(ah, bh, ch, dh)), Liu, in view of Rump does not explicitly disclose wherein applying the integrating conversion to the normal map produces an integrated image of the normal map. However, Prasad teaches image compression (Paragraphs [0021]-[0024]; Paragraphs [0052]-[0058]), further comprising wherein applying the integrating conversion to the normal map produces an integrated image of the normal map (Fig. 2E; Paragraph [0029]: one or more of the frame 220D and the frame 220E—e.g., corresponding to a depth map and a surface normal map, respectively—may be used to generate the edge map of frame 220F. In other embodiments, the frame 220C—without first determining depth and/or surface normal information—may be used to generate the edge map of frame 220F. For example, changes in pixel values beyond a threshold amount may indicate edge locations, and this information may be used to determine the edges for the edge map. However, using the pixel values alone without the depth map and/or the surface normal map may lead to less accurate results than using the depth map and/or the surface normal map). Prasad teaches that this will allow for increasingly accurate results (Paragraphs [0029]-[0031]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu, in view of Rump with the features wherein applying the integrating conversion to the normal map produces an integrated image of the normal map as taught by Prasad so as to provide accurate results as presented by Prasad.

Regarding claim 2, Liu, in view of Rump, and further in view of Prasad teaches the computer-implemented method of claim 1, Liu discloses wherein the normal map comprises vectors at each fragment that are normalized in three-dimensional space (Paragraph [0022]: where (xp, yp, zp) are the coordinates in the units (such as meters) of the content (or object or feature point) in the image and for every pixel or point in the image. To perform the conventional methods, the parameters for the equation (1) h(ah, bh, ch, dh) is first converted into a Polar coordinate system representation: h(ah, bh, ch, dh)=h(cos θ sin φ, sin θ sin φ, cos φ, dh), the parameter space (θ, φ, dh) is quantized, and the system effectively searches the parameter space for every possible combination of (θ, φ, dh), where θ is the angle of the normal vector (ah, bh, ch) on the xy plane and φ is the angle between the xy plane and the normal vector (ah, bh, ch) in z direction. θ ranges from 0 to 360 degrees and φ ranges from 0 to 180 degrees. Assuming the target application operates within a 5 meter range, and hence dh ranges from 0 to 5, 5 degrees is used for both θ and φ, then 0.05 meters for dh to have reasonable plane detection results. The final plane hypotheses are those that have sufficient pixel support. A pixel p(xp, yp, zp) supports a plane hypothesis: h(ah, bh, ch, dh); Paragraph [0044]: the plane hypothesis parameters in the plane hypothesis equation may be determined by finding the normal of the plane. For example, the three sample points forming the triangular area or plane first may be used to determine two vectors forming the plane both stemming from the main sample point. Thus, sample points 502 and 504 may form a first vector {right arrow over (a)}. while sample points 502 and 510 form a second vector {right arrow over (b)}. The three dimensional points of each of the three sample points then may be used to determine the normal of the plane by finding the cross product of the two vectors where: where Ns is a Normal vector of the plane hypothesis based on the sample points, and continuing with the example of plane 534 on image 500, where ps is the main sample point).

Regarding claim 3, Liu, in view of Rump, and further in view of Prasad teaches the computer-implemented method of claim 1, Prasad discloses wherein applying the integrating conversion to the normal map comprises optimizing per-pixel difference between the normal map and a map obtained when a differential operator is applied to each pixel and its surrounding pixels (Paragraph [0023]: the entropy loss function may measure pixel gradients within portions of the pre-filtered frame—except for portions corresponding to edges as identified using the edge maps—in order to reduce the gradient between neighboring pixels. In such an example, the higher the entropy control parameter value, the more a higher gradient is penalized using the entropy loss function. As such, where a high value for the entropy control parameter is used (e.g., indicative of lower frame entropy), the gradient between neighboring or surrounding pixels may be reduced such that differences in pixel values between neighboring or surrounding pixels are minimal. Similarly, for lower entropy control parameter values (e.g., indicative of higher frame entropy), the gradient between neighboring or surrounding pixels may be reduced less such that differences in pixels are allowed to be greater (but not as great as a full entropy frame). Where more than one loss function is used, the loss functions may be weighted during training; Paragraph [0037]: the entropy loss function may measure pixel gradients within portions of the pre-filtered frame 112—except for portions corresponding to edges as identified using the edge maps 106—in order to reduce the gradient between neighboring pixels. In such an example, the higher the entropy control parameter value, the more a higher gradient is penalized using the entropy loss function. As such, where a high value for the entropy control parameter is used (e.g., indicative of lower frame entropy), the gradient between neighboring or surrounding pixels may be reduced such that differences in pixel values between neighboring or surrounding pixels are minimal. Similarly, for lower entropy control parameter values (e.g., indicative of higher frame entropy), the gradient between neighboring or surrounding pixels may be reduced less such that differences in pixels are allowed to be greater (but not as great as a full entropy frame); Paragraph [0044]: method 300, at block B308, includes computing, using a second loss function, a second loss value based at least in part on comparing pixel values of proximately located pixels within the pre-filtered frame. For example, the entropy loss function may measure pixel gradients within portions of the pre-filtered frame—except for portions corresponding to edges as identified using the edge maps—in order to reduce the gradient between neighboring pixels. For example, for a DNN 110 corresponding to a higher entropy control parameter (and thus a lower frame entropy), differences between the neighboring pixels may be penalized more. As another example, for a DNN 110 corresponding to a lower entropy control parameter (and thus a higher frame entropy), differences between the neighboring pixels may be penalized less—but still penalized in order to reduce the frame entropy from that of the original frame).

Regarding claim 4, Liu, in view of Rump, and further in view of Prasad teaches the computer-implemented method of claim 1, Liu discloses the method further comprising applying quantization to the integrated image, wherein applying quantization to the integrated image produces an RGB integrated image of the normal map, wherein the integrated image compressed using the compression method comprises the RGB integrated image (Paragraphs [0026]-[0038]: guided by the given depth image, the present method samples the plane parameter space in an educated manner, and hence achieves a much higher frame rate (about 300 fps at video graphics array (VGA) depth) than the conventional methods. Since the present method also includes gravity-sensitive (such as an inertial measurement unit (IMU)) sensor data incorporation to further improve the frame rate when only planes of specific orientation (e.g. horizontal planes, such as ground) are needed, a different voting scheme that takes into account local geometry to improve the accuracy at plane boundaries, and a color-based refinement stage to fill holes and unfit areas, this results in a method and system of plane detection that may be hundreds of times faster than the known methods. The current method also produces a smoother boundary of the planes with the incorporation of the local geometry and color information versus that of the known methods… operations also may be performed to further eliminate plane hypotheses and refine the quality of the planes before providing a final index map of the plane hypotheses. This may include suppression of the non-largest planes when planes overlap, and color-based hole and gap filling using region erosion and region growth as described in detail below… the sample points are used to compute the plane hypotheses. By forming a plane hypothesis for each sample point rather than for each bin in the quantized parameter space, the computational load for detecting the planar surfaces in the image is reduced significantly resulting in a large increase in frame rate during the detection process; Paragraph [0086]: such technology may include a camera such as a digital camera system, a dedicated camera device, or an imaging phone or tablet, whether a still picture or video camera, camera that provides a preview screen, or some combination of these. Thus, in one form, imaging device 1002 may include camera hardware and optics including one or more sensors as well as auto-focus, zoom, aperture, ND-filter, auto-exposure, flash, and actuator controls. These controls may be part of a sensor module or component for operating the sensor that can be used to generate images for a viewfinder and take still pictures or video. The imaging device 1002 also may have a lens, an image sensor with a RGB Bayer color filter, an analog amplifier, an A/D converter, other components to convert incident light into a digital signal, the like, and/or combinations thereof. The digital signal also may be referred to as the raw image data herein).

Regarding claim 5, Liu, in view of Rump, and further in view of Prasad teaches the computer-implemented method of claim 4, Liu discloses wherein a map obtained after applying quantization to the integrated image corresponds to a height map of the normal map (Paragraphs [0181]-[0184]: instead of the depth difference, the derivate of the depth gradient is used to determine the interpolation function by the decoder. This could either be the 1D derivate for x and y direction (performed individually as z/dx and z/dy) or the 2D-derviate to perform the interpolation in one step (z/dxdy). Such an approach can represent non-linear depth gradients better than the simple difference approach…where the depth information in the geometry does not follow linear quantization, the same quantization is applied for determining the new depth values for the interpolated 3D points…the depth interpolation weighting is derived from a 2D gaussian filter centred at the desired reconstruction coordinate in the geometry picture (spatial kernel), taking only valid values into account…instead of the depth weighted interpolation, a depth-guided bilateral filter is used for interpolating new geometry values, where depth z (geometry image) guides the range kernel (intensity) and the spatial kernel is based on the coordinate differences in the geometry image).
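
To ground the claim language the rejection keeps returning to: claims 1-5, as characterized above, describe integrating a normal map into a floating-point image, quantizing that integrated image to RGB, and compressing it with a standard still-image codec. The application's actual integrating conversion is not reproduced in this action, so the following Python sketch is only one plausible reading (a Frankot-Chellappa-style FFT integrator); every function name here is illustrative, not the applicant's or the examiner's.

```python
import numpy as np

def integrate_normal_map(normals):
    """Integrating conversion (sketch): recover a floating-point height
    field whose gradients best match the slopes implied by the normals.
    `normals` is an (H, W, 3) array of unit vectors with nz > 0."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    p, q = -nx / nz, -ny / nz                      # dz/dx, dz/dy per pixel
    h, w = p.shape
    u = 2j * np.pi * np.fft.fftfreq(w)[None, :]    # d/dx in the Fourier domain
    v = 2j * np.pi * np.fft.fftfreq(h)[:, None]    # d/dy in the Fourier domain
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = np.abs(u) ** 2 + np.abs(v) ** 2
    denom[0, 0] = 1.0                              # dodge 0/0 at the DC term
    Z = (-u * P - v * Q) / denom                   # least-squares solution
    Z[0, 0] = 0.0                                  # height known up to a constant
    return np.fft.ifft2(Z).real

def quantize_to_rgb(height):
    """Quantization step (sketch): map the floating-point integrated image
    to 8-bit RGB, keeping scale/offset so a decoder can undo it."""
    lo, hi = float(height.min()), float(height.max())
    scale = (hi - lo) or 1.0
    q8 = np.round((height - lo) / scale * 255.0).astype(np.uint8)
    rgb = np.repeat(q8[..., None], 3, axis=2)      # grayscale replicated to RGB
    return rgb, scale, lo

# The uint8 `rgb` array would then be handed to an ordinary image codec --
# per claim 6, e.g. an AVIF or JPEG XL encoder (codec choice is outside
# this sketch).
```

Notably, this FFT solution is the closed-form least-squares minimizer of the per-pixel difference between the supplied slopes and the gradients of the recovered surface, which lines up with one reading of the optimization language in claim 3; whether the application uses this integrator or another cannot be determined from the action alone.
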
Regarding claim 7, the limitations of this claim substantially correspond to the limitations of claim 1 (except for the one or more processors, which are disclosed by Liu, Fig. 10; Paragraph [0089]: image processing system 1000 may have one or more processors 1020 which may include a dedicated image signal processor (ISP) 1022 such as the Intel Atom, memory stores 1024, one or more displays 1028 to provide images 1030, a coder 1032, and antenna 1026. In one example implementation, the image processing system 1000 may have the display 1028, at least one processor 1020 communicatively coupled to the display, and at least one memory 1024 communicatively coupled to the processor); thus they are rejected on similar grounds.

Regarding claim 8, the limitations of this claim substantially correspond to the limitations of claim 2; thus they are rejected on similar grounds.

Regarding claim 9, the limitations of this claim substantially correspond to the limitations of claim 3; thus they are rejected on similar grounds.

Regarding claim 10, the limitations of this claim substantially correspond to the limitations of claim 4; thus they are rejected on similar grounds.

Regarding claim 11, the limitations of this claim substantially correspond to the limitations of claim 5; thus they are rejected on similar grounds.

Regarding claim 13, Liu discloses a computer-implemented method of improving rendering of three-dimensional models by decompressing texture maps compressed using an integrated conversion (Paragraph [0089]: image processing system 1000 may have one or more processors 1020 which may include a dedicated image signal processor (ISP) 1022 such as the Intel Atom, memory stores 1024, one or more displays 1028 to provide images 1030, a coder 1032, and antenna 1026. In one example implementation, the image processing system 1000 may have the display 1028, at least one processor 1020 communicatively coupled to the display, and at least one memory 1024 communicatively coupled to the processor. A coder 1032, which may be an encoder, decoder, or both, also may be provided. As an encoder 1032 and antenna 1034 may be provided to compress the modified image date for transmission to other devices that may display or store the image. It will be understood that the image processing system 1000 also may include a decoder (or encoder 1032 may include a decoder) to receive and decode image data for processing by the system 1000. Otherwise, the processed image 1030 may be displayed on display 1028 or stored in memory 1024. As illustrated, any of these components may be capable of communication with one another and/or communication with portions of logic modules 1004 and/or imaging device 1002. Thus, processors 1020 may be communicatively coupled to both the image device 1002 and the logic modules 1004 for operating those components. By one approach, although image processing system 1000, as shown in FIG. 10, may include one particular set of blocks or actions associated with particular components or modules, these blocks or actions may be associated with different components or modules than the particular component or module illustrated here); decompressing a compressed integrated image corresponding to a normal map (Paragraph [0089]: image processing system 1000 may have one or more processors 1020 which may include a dedicated image signal processor (ISP) 1022 such as the Intel Atom, memory stores 1024, one or more displays 1028 to provide images 1030, a coder 1032, and antenna 1026. In one example implementation, the image processing system 1000 may have the display 1028, at least one processor 1020 communicatively coupled to the display, and at least one memory 1024 communicatively coupled to the processor. A coder 1032, which may be an encoder, decoder, or both, also may be provided. As an encoder 1032 and antenna 1034 may be provided to compress the modified image date for transmission to other devices that may display or store the image. It will be understood that the image processing system 1000 also may include a decoder (or encoder 1032 may include a decoder) to receive and decode image data for processing by the system 1000. Otherwise, the processed image 1030 may be displayed on display 1028 or stored in memory 1024. As illustrated, any of these components may be capable of communication with one another and/or communication with portions of logic modules 1004 and/or imaging device 1002. Thus, processors 1020 may be communicatively coupled to both the image device 1002 and the logic modules 1004 for operating those components. By one approach, although image processing system 1000, as shown in FIG. 10, may include one particular set of blocks or actions associated with particular components or modules, these blocks or actions may be associated with different components or modules than the particular component or module illustrated here).

Liu does not explicitly disclose a normal map for a texture used to render a three-dimensional model in a virtual scene. However, Rump teaches rendering of models including texturing (Abstract; Paragraph [0098]), further comprising a normal map for a texture used to render a three-dimensional model in a virtual scene (Paragraph [0098]: rendering the scene comprising the at least one virtual object based on the first and second sets of appearance attributes and the three-dimensional geometry data; Paragraph [0128]: Texture as understood in the present disclosure includes phenomena like coarseness, sparkle, and macroscopic variations of surface topography. Texture can be described by “texture attributes”. In the context of the present disclosure, the term “texture attributes” is to be understood broadly as encompassing any form of data that is able to quantify at least one aspect of texture. Examples of texture attributes include global texture attributes such as a global coarseness parameter or a global sparkle parameter. In some embodiments, texture attributes can include a normal map or a height map. In some embodiments, texture attributes can include image data. In some embodiments, the image data can be associated with particular combinations of illumination and viewing directions. In such embodiments, the texture attributes can comprise a plurality of sets of image data, each set of image data associated with a different combination of illumination and viewing directions. A “discrete texture table” is to be understood as relating to a collection of sets of texture attributes, preferably in the form of image data, each set of texture attributes associated with a different combination of illumination and viewing directions. In some embodiments of the present disclosure, images in a discrete texture table are generated from a set of source textures, and these images are accordingly called “destination textures”). Rump teaches that this will allow for enhanced realism of texture and scenes (Paragraph [0038]; Paragraph [0145]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu with the features of a normal map for a texture used to render a three-dimensional model in a virtual scene as taught by Rump so as to allow for enhanced realism as presented by Rump.

While Liu teaches decompressing maps compressed using an integrated conversion (Paragraph [0022]: where (xp, yp, zp) are the coordinates in the units (such as meters) of the content (or object or feature point) in the image and for every pixel or point in the image. To perform the conventional methods, the parameters for the equation (1) h(ah, bh, ch, dh) is first converted into a Polar coordinate system representation: h(ah, bh, ch, dh)=h(cos θ sin φ, sin θ sin φ, cos φ, dh), the parameter space (θ, φ, dh) is quantized, and the system effectively searches the parameter space for every possible combination of (θ, φ, dh), where θ is the angle of the normal vector (ah, bh, ch) on the xy plane and φ is the angle between the xy plane and the normal vector (ah, bh, ch) in z direction. θ ranges from 0 to 360 degrees and φ ranges from 0 to 180 degrees. Assuming the target application operates within a 5 meter range, and hence dh ranges from 0 to 5, 5 degrees is used for both θ and φ, then 0.05 meters for dh to have reasonable plane detection results. The final plane hypotheses are those that have sufficient pixel support. A pixel p(xp, yp, zp) supports a plane hypothesis: h(ah, bh, ch, dh)), Liu does not explicitly disclose decompressing texture maps compressed using an integrated conversion; or applying a differential conversion to the decompressed integrated image, wherein an image obtained after applying the differential conversion to the decompressed integrated image comprises the normal map. However, Prasad teaches image compression (Paragraphs [0021]-[0024]; Paragraphs [0052]-[0058]), further comprising wherein applying the integrating conversion to the normal map produces an integrated image of the normal map (Fig. 1B; Paragraph [0029]: one or more of the frame 220D and the frame 220E—e.g., corresponding to a depth map and a surface normal map, respectively—may be used to generate the edge map of frame 220F. In other embodiments, the frame 220C—without first determining depth and/or surface normal information—may be used to generate the edge map of frame 220F. For example, changes in pixel values beyond a threshold amount may indicate edge locations, and this information may be used to determine the edges for the edge map. However, using the pixel values alone without the depth map and/or the surface normal map may lead to less accurate results than using the depth map and/or the surface normal map; Paragraph [0051]: DNN(s) 430 may store several DNNs, each corresponding to different levels of filtering—e.g., to a different entropy control parameter. Depending on the level of filtering, a pre-filtered frame may include more or less visual detail when displayed on the display 408 of the end-user device 420. Turning briefly to FIGS. 5A-5D, for example, each of frames 500A/B/C/D correspond to a different entropy control parameters (e.g., 0.1, 0.5, 1.0, and 3.0, respectively), which correspond to high entropy, medium entropy, low entropy, and very low entropy, respectively. Each of the frames 500A/B/C/D include an archway 502A/B/C/D. As can be seen, archway 502A includes a significant level of detail with well-defined and visible bricks in the archway 502A. Archway 502B includes less detail when compared to 502A. Archway 502B includes some degree of texture to illustrate bricks, but they are not well defined visually. Archway 502C includes less detail when compared to 502B. The archway 502C includes some degree of detail for the archway 502C, but no bricks are visible. Lastly, archway 502D includes still less detail when compared to archway 502C. The archway 502D includes no texture and no bricks are visible. However, the edges of the archway 502D are maintained, which allows a user to navigate a game corresponding to the frames 500A/B/C/D); and applying a differential conversion to the decompressed integrated image, wherein an image obtained after applying the differential conversion to the decompressed integrated image comprises the normal map (Paragraph [0023]: the entropy loss function may measure pixel gradients within portions of the pre-filtered frame—except for portions corresponding to edges as identified using the edge maps—in order to reduce the gradient between neighboring pixels. In such an example, the higher the entropy control parameter value, the more a higher gradient is penalized using the entropy loss function. As such, where a high value for the entropy control parameter is used (e.g., indicative of lower frame entropy), the gradient between neighboring or surrounding pixels may be reduced such that differences in pixel values between neighboring or surrounding pixels are minimal. Similarly, for lower entropy control parameter values (e.g., indicative of higher frame entropy), the gradient between neighboring or surrounding pixels may be reduced less such that differences in pixels are allowed to be greater (but not as great as a full entropy frame). Where more than one loss function is used, the loss functions may be weighted during training). Prasad teaches that this will allow for increasingly accurate results (Paragraphs [0029]-[0031]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu with the features of wherein applying the integrating conversion to the normal map produces an integrated image of the normal map; and applying a differential conversion to the decompressed integrated image, wherein an image obtained after applying the differential conversion to the decompressed integrated image comprises the normal map as taught by Prasad so as to provide accurate results as presented by Prasad.

Regarding claim 14, Liu, in view of Rump, and further in view of Prasad teaches the computer-implemented method of claim 13, Liu discloses wherein the normal map comprises vectors at each fragment that are normalized in three-dimensional space (Paragraph [0022]: where (xp, yp, zp) are the coordinates in the units (such as meters) of the content (or object or feature point) in the image and for every pixel or point in the image. To perform the conventional methods, the parameters for the equation (1) h(ah, bh, ch, dh) is first converted into a Polar coordinate system representation: h(ah, bh, ch, dh)=h(cos θ sin φ, sin θ sin φ, cos φ, dh), the parameter space (θ, φ, dh) is quantized, and the system effectively searches the parameter space for every possible combination of (θ, φ, dh), where θ is the angle of the normal vector (ah, bh, ch) on the xy plane and φ is the angle between the xy plane and the normal vector (ah, bh, ch) in z direction. θ ranges from 0 to 360 degrees and φ ranges from 0 to 180 degrees. Assuming the target application operates within a 5 meter range, and hence dh ranges from 0 to 5, 5 degrees is used for both θ and φ, then 0.05 meters for dh to have reasonable plane detection results. The final plane hypotheses are those that have sufficient pixel support. A pixel p(xp, yp, zp) supports a plane hypothesis: h(ah, bh, ch, dh); Paragraph [0044]: the plane hypothesis parameters in the plane hypothesis equation may be determined by finding the normal of the plane. For example, the three sample points forming the triangular area or plane first may be used to determine two vectors forming the plane both stemming from the main sample point. Thus, sample points 502 and 504 may form a first vector {right arrow over (a)}. while sample points 502 and 510 form a second vector {right arrow over (b)}. The three dimensional points of each of the three sample points then may be used to determine the normal of the plane by finding the cross product of the two vectors where: where Ns is a Normal vector of the plane hypothesis based on the sample points, and continuing with the example of plane 534 on image 500, where ps is the main sample point).

Regarding claim 15, Liu, in view of Rump, and further in view of Prasad teaches the computer-implemented method of claim 13, Prasad discloses wherein the compressed integrated image is decompressed using a decompression method complimentary to a compression method used to compress the compressed integrated image (Paragraph [0089]: image processing system 1000 may have one or more processors 1020 which may include a dedicated image signal processor (ISP) 1022 such as the Intel Atom, memory stores 1024, one or more displays 1028 to provide images 1030, a coder 1032, and antenna 1026. In one example implementation, the image processing system 1000 may have the display 1028, at least one processor 1020 communicatively coupled to the display, and at least one memory 1024 communicatively coupled to the processor. A coder 1032, which may be an encoder, decoder, or both, also may be provided. As an encoder 1032 and antenna 1034 may be provided to compress the modified image date for transmission to other devices that may display or store the image. It will be understood that the image processing system 1000 also may include a decoder (or encoder 1032 may include a decoder) to receive and decode image data for processing by the system 1000. Otherwise, the processed image 1030 may be displayed on display 1028 or stored in memory 1024. As illustrated, any of these components may be capable of communication with one another and/or communication with portions of logic modules 1004 and/or imaging device 1002. Thus, processors 1020 may be communicatively coupled to both the image device 1002 and the logic modules 1004 for operating those components. By one approach, although image processing system 1000, as shown in FIG. 10, may include one particular set of blocks or actions associated with particular components or modules, these blocks or actions may be associated with different components or modules than the particular component or module illustrated here).

Regarding claim 16, Liu, in view of Rump, and further in view of Prasad teaches the computer-implemented method of claim 13, the method further comprising converting the decompressed integrated image into a floating-point representation of the normal map and applying the differential conversion to the floating-point representation of the normal map, wherein converting the decompressed integrated image into a floating-point representation comprises: multiplying each point in the normal map by a scale parameter (Prasad: Paragraph [0023]: the entropy loss function may measure pixel gradients within portions of the pre-filtered frame—except for portions corresponding to edges as identified using the edge maps—in order to reduce the gradient between neighboring pixels. In such an example, the higher the entropy control parameter value, the more a higher gradient is penalized using the entropy loss function. As such, where a high value for the entropy control parameter is used (e.g., indicative of lower frame entropy), the gradient between neighboring or surrounding pixels may be reduced such that differences in pixel values between neighboring or surrounding pixels are minimal. Similarly, for lower entropy control parameter values (e.g., indicative of higher frame entropy), the gradient between neighboring or surrounding pixels may be reduced less such that differences in pixels are allowed to be greater (but not as great as a full entropy frame). Where more than one loss function is used, the loss functions may be weighted during training); and assigning each pixel a value based on a value map generated by applying quantization to an integrated image of the normal map prior to compression (Liu: Paragraphs [0181]-[0184]: instead of the depth difference, the derivate of the depth gradient is used to determine the interpolation function by the decoder. This could either be the 1D derivate for x and y direction (performed individually as z/dx and z/dy) or the 2D-derviate to perform the interpolation in one step (z/dxdy). Such an approach can represent non-linear depth gradients better than the simple difference approach…where the depth information in the geometry does not follow linear quantization, the same quantization is applied for determining the new depth values for the interpolated 3D points…the depth interpolation weighting is derived from a 2D gaussian filter centred at the desired reconstruction coordinate in the geometry picture (spatial kernel), taking only valid values into account…instead of the depth weighted interpolation, a depth-guided bilateral filter is used for interpolating new geometry values, where depth z (geometry image) guides the range kernel (intensity) and the spatial kernel is based on the coordinate differences in the geometry image).

Regarding claim 17, the limitations of this claim substantially correspond to the limitations of claim 13 (except for the one or more processors, which are disclosed by Liu, Fig. 10; Paragraph [0089]: image processing system 1000 may have one or more processors 1020 which may include a dedicated image signal processor (ISP) 1022 such as the Intel Atom, memory stores 1024, one or more displays 1028 to provide images 1030, a coder 1032, and antenna 1026. In one example implementation, the image processing system 1000 may have the display 1028, at least one processor 1020 communicatively coupled to the display, and at least one memory 1024 communicatively coupled to the processor); thus they are rejected on similar grounds.
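
The decode-side claims (13-16, as characterized above) walk the same pipeline backwards: decompress the integrated image, undo the quantization with a scale parameter, and apply a differential conversion to recover unit normals. Again a hedged sketch under the same assumptions as the encoder sketch above; `scale` and `offset` are the hypothetical values that sketch returned, not terms from the application itself.

```python
import numpy as np

def normals_from_decoded(rgb, scale, offset):
    """Differential conversion (sketch): rebuild unit normals from a
    decoded, quantized integrated image. `rgb` is the uint8 image that
    came back from the codec; `scale`/`offset` undo the quantization."""
    height = rgb[..., 0].astype(np.float64) / 255.0 * scale + offset
    dzdx = np.gradient(height, axis=1)             # differential operator, x
    dzdy = np.gradient(height, axis=0)             # differential operator, y
    n = np.dstack((-dzdx, -dzdy, np.ones_like(height)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)  # renormalize per pixel
    return n
```

In this round trip the 8-bit quantization is the only inherently lossy step before the codec, which is presumably why claim 16 carries the scale parameter and value map through to the decode path.
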
Regarding claim 18, the limitations of this claim substantially correspond to the limitations of claim 14; thus they are rejected on similar grounds.

Regarding claim 19, the limitations of this claim substantially correspond to the limitations of claim 15; thus they are rejected on similar grounds.

Regarding claim 20, the limitations of this claim substantially correspond to the limitations of claim 16; thus they are rejected on similar grounds.

Claims 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Liu, in view of Rump, in view of Prasad, and further in view of Cohen-Tidhar et al. (US Pub. 2023/0177734), hereinafter Cohen-Tidhar.

Regarding claim 6, Liu, in view of Rump, and further in view of Prasad teaches the computer-implemented method of claim 1. Liu, in view of Rump, and further in view of Prasad does not explicitly disclose wherein the compression method comprises a compression method associated with the AVIF file format or a method associated with the JPEG XL file format. However, Cohen-Tidhar teaches image compression (Paragraph [0067]), further comprising wherein the compression method comprises a compression method associated with the AVIF file format or a method associated with the JPEG XL file format (Paragraph [0067]: High Efficiency Image File Format (HEIF) file or container, or other type of a single file or a single container that can support two or more encoding (or compression) techniques that are implemented with regard to content-portions or content-layers of a single image. Some embodiments may thus operate to perform image encoding in a content-aware manner, by selecting the most suitable single media format that would store the composite image, and by compositing a plurality of encoding formats that encode image-portions or image-layers of a single image into a single container (e.g., SVG or PDF or HEIF, or other container that can contain multiple payload codecs or multiple payload-contents that are encoded separately by two or more different encoding techniques), and by selecting the coding techniques and parameters per image-region within an expressive format that can achieve the most efficient performance goals for each content type (e.g., to select an encoding technique that supports macroblocks or layering, such as JXL JPEG XL encoding, or AVIF or AV1 encoding)). Cohen-Tidhar teaches that this will enable achieving the most efficient performance goals in compression (Paragraph [0067]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu, in view of Rump, and further in view of Prasad with the features above as taught by Cohen-Tidhar so as to provide efficient performance as presented by Cohen-Tidhar.

Regarding claim 12, the limitations of this claim substantially correspond to the limitations of claim 6; thus they are rejected on similar grounds.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW D SALVUCCI whose telephone number is (571)270-5748. The examiner can normally be reached M-F: 7:30-4:00PT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XIAO WU, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW SALVUCCI/
Primary Examiner, Art Unit 2613

Prosecution Timeline

Jan 29, 2024: Application Filed
Oct 28, 2025: Non-Final Rejection (§103)
Jan 30, 2026: Response Filed
Feb 27, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597198: "RAY TRACING METHOD AND APPARATUS BASED ON ATTENTION FOR DYNAMIC SCENES" (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597207: "Camera Reprojection for Faces" (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579753: "Phased Capture Assessment and Feedback for Mobile Dimensioning" (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561899: "Vector Graphic Parsing and Transformation Engine" (granted Feb 24, 2026; 2y 5m to grant)
Patent 12548256: "IMAGE PROCESSING APPARATUS FOR GENERATING SURFACE PROFILE OF THREE-DIMENSIONAL GEOMETRIC MODEL, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM" (granted Feb 10, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants; study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
Grant Probability With Interview: 99% (+28.5% lift)
Median Time to Grant: 3y
PTA Risk: Moderate

Based on 485 resolved cases by this examiner. Grant probability is derived from the career allow rate.
