Prosecution Insights
Last updated: April 19, 2026
Application No. 18/764,220

METHOD AND DEVICE FOR GENERATING PHOTOGRAMMETRY DATA

Final Rejection §103
Filed: Jul 04, 2024
Examiner: ZAK, JACQUELINE ROSE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Phase One A/S
OA Round: 4 (Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 5-6
Median Time to Grant: 2y 10m
With Interview: 55%

Examiner Intelligence

Career Allow Rate: 67% (8 granted / 12 resolved), above average (+4.7% vs TC avg)
Interview Lift: -11.4% (minimal negative lift), based on resolved cases with interview
Avg Prosecution: 2y 10m typical timeline; 46 applications currently pending
Total Applications: 58 across all art units (career history)
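The headline numbers are simple ratios. A minimal Python sketch of how they could be reproduced, assuming the allow rate is granted over resolved and the "with interview" figure is the career rate plus the reported lift (assumed formulas, not the tool's actual model):

granted, resolved = 8, 12
career_allow_rate = granted / resolved                  # 0.667, shown as 67%

interview_lift = -0.114                                 # reported lift for cases with an interview
with_interview = career_allow_rate + interview_lift     # ~0.553, shown as 55%

print(f"career allow rate: {career_allow_rate:.1%}")    # 66.7%
print(f"with interview:    {with_interview:.1%}")       # 55.3%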

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 12 resolved cases
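The "vs TC avg" figures read as simple differences; a quick check in Python, assuming each delta equals the examiner's rate minus the Tech Center average, recovers a consistent implied average of about 40% for every statute:

examiner_rate = {"§101": 5.7, "§103": 56.3, "§102": 21.1, "§112": 13.8}    # percent
delta_vs_tc   = {"§101": -34.3, "§103": 16.3, "§102": -18.9, "§112": -26.2}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(implied_tc_avg)   # {'§101': 40.0, '§103': 40.0, '§102': 40.0, '§112': 40.0}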

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Claim Status Claims 1-14 are pending for examination in the application filed 01/21/2026. Priority Acknowledgement is made of Applicant’s claim for foreign priority under 35 U.S.C. 119 (a)-(d). The certified copy has been filed in parent application DKPA202370379 filed on 07/12/2023. Response to Arguments Applicant's arguments filed 01/21/2026 have been fully considered but they are not persuasive. Applicant argues on page 7-8 of the Remarks that the combination of references of Howells and Zimmer fails to teach “using two different sets of conversion parameters to determine spatial coordinates by triangulation and generate surface color data, respectively (via the generation of different first and second ancillary format images, respectively” because the claim “requires the use of two distinct sets of conversion parameters, with a first being applied to the raw file format images to then determine spatial coordinates by triangulation (based on the first ancillary file format images), and the second being used to generate second ancillary file format images from the same (the assessed) raw file format images, in order to generate surface color data for the images”. As stated in the Non-Final Office Action filed 10/27/2025, Howells teaches: processing, using the processor, the raw file format images for generating spatial photogrammetry data ([pg. 4 ln. 1-2] a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data. [pg. 1 ln. 20-26] The recovery of the actual 3D co-ordinates of points on the surface of an object from a set of overlapping 2D images is known as photogrammetry, a word deriving from the Latin photo (light), gram (picture) and metry (measurement). Once 3D co-ordinates are known, a mesh or model formed from the set of co-ordinates can be used to feed into a 3D printer to produce a 3D object, to provide topographic maps, for use in manufacturing, forensics, film making and as a tool in many other fields), comprising: using the processor, determining spatial coordinates by triangulation based on the generated images ([pg. 4 ln 1-3] wherein the processing comprises producing a 3D digital model from the raw data, and wherein the raw data is part-processed in the scanning module in real time to produce an initial image. [pg. 1 ln. 28-33] Triangulation can be used to recover the actual co-ordinates of points which appear in 2D photographs or other images where depth information has been lost. Two images taken of the same point from different locations can be used together in order to recover the 3D co-ordinates by intersecting two rays (or lines of sight from the camera or detector through a particular point on an object) and applying simple trigonometry as shown in Figure 1); and using the processor, generating surface color data based on the images ([pg. 6 ln. 29-32] In an embodiment, processing the images further comprises analysing the images to assess skin conditions. Information such as colour of parts of a subject’s body can be used to assess health of the subject. Changes over time for repeat users of the scanner can also be pinpointed in this way); generating a digital spatial model based on the determined spatial coordinates and the generated surface color data ([pg. 1 ln. 
5-14] The present invention relates to a system, method and scanning module for producing a 3D digital model of a subject. 3D scanning systems recover information about points on the surface of an object, in particular their spatial co-ordinates and/or colour, and use this information to reconstruct digital models of the object which can be stored and are useful for many varied applications. The information collected by the scanner for a set of surface points is known as a “point cloud” of data, and this can be used to reconstruct the shape of the object by extrapolating to create a mesh (a process known as surface reconstruction)). As stated in the Non-Final Office Action filed 10/27/2025, Zimmer teaches: wherein a raw file format image is a digital image file produced by a camera with a digital image sensor, and where the image data in the file corresponds to the light levels captured by each photo site of the image sensor with no demosaicing ([0004] Typically, the digital camera has an image pipeline that performs a demosaicing or de-Bayering process on the RAW image and transforms the image with a compressing algorithm to output a JPEG or other type of compressed file suitable for display and viewing. However, the RAW image captured by the digital camera can be uploaded to a computer, and computer software, such as Apple's Aperture 1.0, operating on the computer can allow a user to perform various manual operations on RAW image. [0031] In use, the imaging sensor 120 of the camera 110 captures a RAW image 122. As discussed previously, the imaging sensor 120 has a color-filtered array that can be arranged in an RGB Bayer pattern. Therefore, the color value at each photo-site in the RAW image 122 represents either a red intensity value, a green intensity value, or a blue intensity value, and each value is typically 10-bits in the range of 0 to 4095); using the processor, generating, first ancillary file format images from the raw file format images using first sets of conversion parameters; using the processor, generating second ancillary file format images from the raw file format images using second sets of conversion parameters different from the first sets of conversion parameters (First and second ancillary file formats are results of different applied RAW processing. [0044] In a RAW processing stage 302, RAW processing is performed on the RAW image 312 using one or more various processes, including a black subtraction process 320, a highlight recovery process 321, a stuck pixel elimination process 322, an auto-exposure adjustment process 323, a demosaic (de-Bayer) process 324, and a chroma-blur process 325. One purpose of the RAW processing stage 302 is to preserve spatial quality and detail of the captured scene in the RAW image 312. [0045] To implement these processes 320-325, the metadata 314 is used to obtain various raw processing algorithms associated with the processes (Block 328). Although shown in a particular order, one or more of these processes 320-325 may be rearranged depending on specific requirements, such as any particulars associated with the camera. In addition, one or more of these processes 320-325 may or may not be used for various reasons, and other processes commonly used in image processing can be used. Furthermore, some of these various processes 320-325 may actually operate in conjunction with one another in ways not necessarily expressed herein. 
[0046] At the end of the RAW processing stage 302, the demosaic process 326 produces a camera RGB image 329 from the initial RAW image 312). For further clarification, Zimmer applies different stages of RAW processing and discusses that the different processes can be done in in different orders, including some and not others, and in conjunction: ([0045] Although shown in a particular order, one or more of these processes 320-325 may be rearranged depending on specific requirements… In addition, one or more of these processes 320-325 may or may not be used for various reasons, and other processes commonly used in image processing can be used. Furthermore, some of these various processes 320-325 may actually operate in conjunction with one another in ways not necessarily expressed herein). The RAW processing pipeline includes distinct transforms with unique conversion parameters for the different processing phases, which is why the first and second ancillary formats are generated from the different conversion parameters during these respective processes. Therefore, the Examiner disagrees with the argument “there is no teaching in either of Zimmer nor Howells, alone or in combination that would suggest or imply using sets of conversion parameters that are different for different uses of the images”. Furthermore, Zimmer describes the demosaicing or de-Bayering process on the RAW image (stage 324) which converts the RAW images to the first or second ancillary file format images. These first or second ancillary file format images are generated from the raw file format images and are generated using different conversion parameters, as explained above in correspondence with the claim language. The Examiner suggests amending the claims to clarify how the first and second ancillary file format images differ from each other beyond the different sets of conversion parameters and incorporating the allowable subject matter of dependent claim 12. Applicant further argues on page 8 of the Remarks that the combination of Howells and Zimmer would not be obvious to a person skilled in the art. In order to establish a prima facie case of obviousness, Examiner must set forth (a) the relevant teachings of the prior art relied upon, (b) the differences between the prior art in the claim and the applied references, (c) the proposed modification of the applied references necessary to arrive at the claimed subject matter, and (d) an explanation as to why the claimed invention would have been obvious to one of ordinary skill in the art at the relevant time. See MPEP 2142. Here, Examiner has mapped the Howells reference to the claim, explained the deficiencies of the Howells reference, proposed a modification of the Howells reference with the Zimmer reference, and provided a motivation for the combination. Therefore, a prima facie case of obviousness has been made. Thus, the 35 USC § 103 rejections of claims 1-11 and 13-14 are maintained. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1-9 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Howells (GB2544268A) in view of Zimmer (US20100271505A1). Regarding claim 1, Howells teaches a method for operating an electronic device comprising a processor (processor) and an interface (user interface) and being configured to generate a digital spatial model ([pg. 1 ln. 5-14] The present invention relates to a system, method and scanning module for producing a 3D digital model of a subject. 3D scanning systems recover information about points on the surface of an object, in particular their spatial co-ordinates and/or colour, and use this information to reconstruct digital models of the object which can be stored and are useful for many varied applications. The information collected by the scanner for a set of surface points is known as a “point cloud” of data, and this can be used to reconstruct the shape of the object by extrapolating to create a mesh (a process known as surface reconstruction)), the method comprising: accessing, via the interface and/or using the processor, digital raw file format images captured from a plurality of different relative positions and comprising metadata providing the capturing position and orientation ([pg. 15 ln. 30 - pg. 16 ln. 3] Figure 2 shows an overview of the system which comprises a scanning module, a client app or web portal and a back end including at least one secure web server and a database on the server or coupled to the server and Figure 3 shows the architecture of the system in more detail. The scanning module or booth includes scanning means for collecting the raw data needed to produce a 3D image. Data capture generally uses cameras or sensors mounted within the module (along with illumination and positioning systems such as those described in detail below). 
Data is transferred to and stored on the server or servers in cloud storage after a scan); processing, using the processor, the raw file format images for generating spatial photogrammetry data ([pg. 4 ln. 1-2] a processor for processing the raw data, wherein the processing comprises producing a 3D digital model from the raw data. [pg. 1 ln. 20-26] The recovery of the actual 3D co-ordinates of points on the surface of an object from a set of overlapping 2D images is known as photogrammetry, a word deriving from the Latin photo (light), gram (picture) and metry (measurement). Once 3D co-ordinates are known, a mesh or model formed from the set of co-ordinates can be used to feed into a 3D printer to produce a 3D object, to provide topographic maps, for use in manufacturing, forensics, film making and as a tool in many other fields), comprising: using the processor, determining spatial coordinates by triangulation based on the generated images ([pg. 4 ln 1-3] wherein the processing comprises producing a 3D digital model from the raw data, and wherein the raw data is part-processed in the scanning module in real time to produce an initial image. [pg. 1 ln. 28-33] Triangulation can be used to recover the actual co-ordinates of points which appear in 2D photographs or other images where depth information has been lost. Two images taken of the same point from different locations can be used together in order to recover the 3D co-ordinates by intersecting two rays (or lines of sight from the camera or detector through a particular point on an object) and applying simple trigonometry as shown in Figure 1); and using the processor, generating surface color data based on the images ([pg. 6 ln. 29-32] In an embodiment, processing the images further comprises analysing the images to assess skin conditions. Information such as colour of parts of a subject’s body can be used to assess health of the subject. Changes over time for repeat users of the scanner can also be pinpointed in this way); generating a digital spatial model based on the determined spatial coordinates and the generated surface color data ([pg. 1 ln. 5-14] The present invention relates to a system, method and scanning module for producing a 3D digital model of a subject. 3D scanning systems recover information about points on the surface of an object, in particular their spatial co-ordinates and/or colour, and use this information to reconstruct digital models of the object which can be stored and are useful for many varied applications. The information collected by the scanner for a set of surface points is known as a “point cloud” of data, and this can be used to reconstruct the shape of the object by extrapolating to create a mesh (a process known as surface reconstruction)). Howells does not teach wherein a raw file format image is a digital image file produced by a camera with a digital image sensor, and where the image data in the file corresponds to the light levels captured by each photo site of the image sensor with no demosaicing; using the processor, generating, first ancillary file format images from the raw file format images using first sets of conversion parameters; using the processor, generating second ancillary file format images from the raw file format images using second sets of conversion parameters different from the first sets of conversion parameters. 
Zimmer, in the same field of endeavor of raw image processing, teaches wherein a raw file format image is a digital image file produced by a camera with a digital image sensor, and where the image data in the file corresponds to the light levels captured by each photo site of the image sensor with no demosaicing ([0004] Typically, the digital camera has an image pipeline that performs a demosaicing or de-Bayering process on the RAW image and transforms the image with a compressing algorithm to output a JPEG or other type of compressed file suitable for display and viewing. However, the RAW image captured by the digital camera can be uploaded to a computer, and computer software, such as Apple's Aperture 1.0, operating on the computer can allow a user to perform various manual operations on RAW image. [0031] In use, the imaging sensor 120 of the camera 110 captures a RAW image 122. As discussed previously, the imaging sensor 120 has a color-filtered array that can be arranged in an RGB Bayer pattern. Therefore, the color value at each photo-site in the RAW image 122 represents either a red intensity value, a green intensity value, or a blue intensity value, and each value is typically 10-bits in the range of 0 to 4095); using the processor, generating, first ancillary file format images from the raw file format images using first sets of conversion parameters; using the processor, generating second ancillary file format images from the raw file format images using second sets of conversion parameters different from the first sets of conversion parameters (First and second ancillary file formats are results of different applied RAW processing. [0044] In a RAW processing stage 302, RAW processing is performed on the RAW image 312 using one or more various processes, including a black subtraction process 320, a highlight recovery process 321, a stuck pixel elimination process 322, an auto-exposure adjustment process 323, a demosaic (de-Bayer) process 324, and a chroma-blur process 325. One purpose of the RAW processing stage 302 is to preserve spatial quality and detail of the captured scene in the RAW image 312. [0045] To implement these processes 320-325, the metadata 314 is used to obtain various raw processing algorithms associated with the processes (Block 328). Although shown in a particular order, one or more of these processes 320-325 may be rearranged depending on specific requirements, such as any particulars associated with the camera. In addition, one or more of these processes 320-325 may or may not be used for various reasons, and other processes commonly used in image processing can be used. Furthermore, some of these various processes 320-325 may actually operate in conjunction with one another in ways not necessarily expressed herein. [0046] At the end of the RAW processing stage 302, the demosaic process 326 produces a camera RGB image 329 from the initial RAW image 312). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Howells with the teachings of Zimmer to use a raw file format with no demosaicing and generate a first and second ancillary format images for the purpose of "preserving the spatial quality and the detail of the digital image… [and] sufficiently representing the color of the digital image" [Zimmer 0007]. Regarding claim 2, Howells and Zimmer teach the method of claim 1. 
Zimmer teaches wherein the first and second sets of conversion parameters are selected for the first ancillary file format images to be generated with: a higher resolution; and/or a lower bit depth; and/or a lower noise suppression; than the second ancillary file format images (Second ancillary file format is, for example, the result of RAW processing stuck pixel elimination/ noise processing 430. [0082] The RAW images are then processed to quantify the noise response (e.g., amount of noise) and to quantify the stuck pixel response in each RAW image (Block 606). [0083] Ultimately, the stored profile data characterizing various cameras can be used to decide how much to adjust a given RAW image for noise). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Howells with the teachings of Zimmer for the second ancillary file format image to have greater noise suppression than the first because "Noise in a given image produces channel samples with aberrant values. This noise can be amplified when an RGB image is constructed from the Bayer-encoded RAW image. Typically, the rendered noise is undesirably colorful. This noise can be correlated to the ISO setting (i.e., the gain on the sensor readout) and the amount of exposure time used when the RAW image 402 is captured. Therefore, the noise processing in sub-step 430 preferably uses a correlation between ISO setting, exposure time, and amount of noise to reduce or control the colorfulness of noise for the RAW image 402 being processed" [Zimmer 0079]. Regarding claim 3, Howells and Zimmer teach the method of claim 1. Zimmer teaches generating third ancillary file format images from the raw file format image using a third set of conversion parameters different from the first and second sets of conversion parameters (First, second, and third ancillary file formats are results of different applied RAW processing. [0044] In a RAW processing stage 302, RAW processing is performed on the RAW image 312 using one or more various processes, including a black subtraction process 320, a highlight recovery process 321, a stuck pixel elimination process 322, an auto-exposure adjustment process 323, a demosaic (de-Bayer) process 324, and a chroma-blur process 325. One purpose of the RAW processing stage 302 is to preserve spatial quality and detail of the captured scene in the RAW image 312. [0045] To implement these processes 320-325, the metadata 314 is used to obtain various raw processing algorithms associated with the processes (Block 328). Although shown in a particular order, one or more of these processes 320-325 may be rearranged depending on specific requirements, such as any particulars associated with the camera. In addition, one or more of these processes 320-325 may or may not be used for various reasons, and other processes commonly used in image processing can be used. Furthermore, some of these various processes 320-325 may actually operate in conjunction with one another in ways not necessarily expressed herein. [0046] At the end of the RAW processing stage 302, the demosaic process 326 produces a camera RGB image 329 from the initial RAW image 312). 
Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Howells with the teachings of Zimmer to generate a third ancillary format images for the purpose of "preserving the spatial quality and the detail of the digital image… [and] sufficiently representing the color of the digital image" [Zimmer 0007]. Regarding claim 4, Howells and Zimmer teach the method of claim 1. Zimmer teaches wherein processing the raw file format images comprises generating first ancillary file format images from the raw file format images using first sets of conversion parameters that are different for different raw file format images ([0044] In a RAW processing stage 302, RAW processing is performed on the RAW image 312 using one or more various processes, including a black subtraction process 320, a highlight recovery process 321, a stuck pixel elimination process 322, an auto-exposure adjustment process 323, a demosaic (de-Bayer) process 324, and a chroma-blur process 325. [0045] To implement these processes 320-325, the metadata 314 is used to obtain various raw processing algorithms associated with the processes (Block 328). Although shown in a particular order, one or more of these processes 320-325 may be rearranged depending on specific requirements, such as any particulars associated with the camera. In addition, one or more of these processes 320-325 may or may not be used for various reasons, and other processes commonly used in image processing can be used. Furthermore, some of these various processes 320-325 may actually operate in conjunction with one another in ways not necessarily expressed herein. [0069] The calculation is performed using correlation information that correlates channel saturation values to over-exposed images that have been examined for the given camera make and model. Thus, the correlation information may be predetermined and stored for a plurality of camera makes and models for later access by the highlight recovery process 500. In this way, the camera information obtained from the metadata associated with the RAW image is used to access the corresponding correlation information for the camera make or model that obtained the RAW image). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Howells with the teachings of Zimmer to use different conversion parameters for different raw file format images because "processing stages will depend on attributes of the image 312 and the camera 310" [Zimmer 0043]. Regarding claim 5, Howells and Zimmer teach the method of claim 1. Zimmer teaches wherein processing the raw file format images comprises generating second ancillary file format images from the raw file format images using second sets of conversion parameters that are different for different raw file format images ([0044] In a RAW processing stage 302, RAW processing is performed on the RAW image 312 using one or more various processes, including a black subtraction process 320, a highlight recovery process 321, a stuck pixel elimination process 322, an auto-exposure adjustment process 323, a demosaic (de-Bayer) process 324, and a chroma-blur process 325. [0045] To implement these processes 320-325, the metadata 314 is used to obtain various raw processing algorithms associated with the processes (Block 328). 
Although shown in a particular order, one or more of these processes 320-325 may be rearranged depending on specific requirements, such as any particulars associated with the camera. In addition, one or more of these processes 320-325 may or may not be used for various reasons, and other processes commonly used in image processing can be used. Furthermore, some of these various processes 320-325 may actually operate in conjunction with one another in ways not necessarily expressed herein. [0083] Ultimately, the stored profile data characterizing various cameras can be used to decide how much to adjust a given RAW image for noise. When a RAW image is being processed in the RAW processing stage 400 of FIG. 4, for example, the camera model, the ISO setting, and the exposure time associated with the RAW image is obtained from the associated metadata (Block 614)). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Howells with the teachings of Zimmer to use different conversion parameters for different raw file format images because "processing stages will depend on attributes of the image 312 and the camera 310" [Zimmer 0043]. Regarding claim 6, Howells and Zimmer teach the method of claim 5. Zimmer teaches wherein the different second sets of conversion parameters used to generate second ancillary file format images from different raw file format images are determined based on data from the first ancillary file format images generated from the corresponding raw file format images ([0086] Returning to FIG. 4, the RAW processing stage 400 includes a fourth step 440 where the RAW image 402 undergoes an auto-exposure adjustment. The auto-exposure adjustment adjusts luminance of the RAW image 402 so that its exposure meets predetermined criteria. Preferably, the adjustment uses predetermined luminance variables that are already stored for the adjustment in the RAW processing stage 400. The predetermined luminance variables are based on survey information obtained from a plurality of people viewing various images with adjusted exposure. The survey uses reference images generated with various cameras at a plurality of exposures. The average luminance of these references images is computed. Using a Monte Carlo simulation to vary the luminescence variables of these reference images, survey participants are asked to select examples of the images that are most visually pleasing. Then, the survey results are converged to an acceptable resulting exposure having luminance variables correlated to the original input luminance. [0087] The results are associated with the particular camera and are stored for later processing. When the RAW image 402 is received for processing, the auto-exposure adjustment computes the average luminance of the RAW image 402 and determines the exposure from the associated metadata). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Howells with the teachings of Zimmer to base second ancillary file format images on first ancillary file format images to perform processing "associated with the particular camera" [Zimmer 0087]. Regarding claim 7, Howells and Zimmer teach the method of claim 1. 
Howells further teaches selecting a subset of raw file format images that have overlapping regions; and determining second sets of conversion parameters for the selected subset so that differences in one or more image metrics in corresponding overlapping regions of the images generated from the subset are smaller than a given threshold or minimized over the subset ([pg. 22 ln. 1-11] It is preferred that each point on the subject’s body (other than, of course, the soles of their feet or other obscured portions) is within the field of view of at least two, and more preferably exactly three cameras during the scan. This will require images from different cameras to overlap. Although only two cameras in stereo are required to be directed towards a given point on the surface of an object in order to calculate distance to the point, having three cameras directed at each point makes the calculation of 3D coordinates more accurate/reduces errors whilst still keeping the number of cameras required inside the module at an affordable level for gym owners. The system of the present invention can achieve a resolution of one point per 0.5 to - 5 mm. This resolution will, however, depend on the number of cameras imaging each point on the surface of the subject simultaneously). Howells does not teach generating second ancillary file format images. Zimmer teaches generating second ancillary file format images ([0044] In a RAW processing stage 302, RAW processing is performed on the RAW image 312 using one or more various processes, including a black subtraction process 320, a highlight recovery process 321, a stuck pixel elimination process 322, an auto-exposure adjustment process 323, a demosaic (de-Bayer) process 324, and a chroma-blur process 325. One purpose of the RAW processing stage 302 is to preserve spatial quality and detail of the captured scene in the RAW image 312. [0045] To implement these processes 320-325, the metadata 314 is used to obtain various raw processing algorithms associated with the processes (Block 328). Although shown in a particular order, one or more of these processes 320-325 may be rearranged depending on specific requirements, such as any particulars associated with the camera. In addition, one or more of these processes 320-325 may or may not be used for various reasons, and other processes commonly used in image processing can be used. Furthermore, some of these various processes 320-325 may actually operate in conjunction with one another in ways not necessarily expressed herein. [0046] At the end of the RAW processing stage 302, the demosaic process 326 produces a camera RGB image 329 from the initial RAW image 312). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Howells with the teachings of Zimmer to generate second ancillary format images for the purpose of "preserving the spatial quality and the detail of the digital image… [and] sufficiently representing the color of the digital image" [Zimmer 0007]. Regarding claim 8, Howells and Zimmer teach the method of claim 7. Howells further teaches wherein the second sets of conversion parameters are determined based on a comparison of pixel values and/or image metrics in the overlapping regions of the raw file format images of the subset ([pg. 22 ln. 
1-11] It is preferred that each point on the subject’s body (other than, of course, the soles of their feet or other obscured portions) is within the field of view of at least two, and more preferably exactly three cameras during the scan. This will require images from different cameras to overlap. Although only two cameras in stereo are required to be directed towards a given point on the surface of an object in order to calculate distance to the point, having three cameras directed at each point makes the calculation of 3D coordinates more accurate/reduces errors whilst still keeping the number of cameras required inside the module at an affordable level for gym owners. The system of the present invention can achieve a resolution of one point per 0.5 to - 5 mm. This resolution will, however, depend on the number of cameras imaging each point on the surface of the subject simultaneously). Regarding claim 9, Howells and Zimmer teach the method of claim 1. Howells further teaches wherein generating images comprises: selecting a subset of raw file format images that have overlapping regions; and for a images generated from the subset and in which an image metric is (such as: selected or adjusted to be) different in separate overlapping regions, determining values of a conversion parameter related to the image metric comprises fitting the values across the image to a continuous function connecting the different image metrics in the separate overlapping regions ([pg. 22 ln. 1-11] It is preferred that each point on the subject’s body (other than, of course, the soles of their feet or other obscured portions) is within the field of view of at least two, and more preferably exactly three cameras during the scan. This will require images from different cameras to overlap. Although only two cameras in stereo are required to be directed towards a given point on the surface of an object in order to calculate distance to the point, having three cameras directed at each point makes the calculation of 3D coordinates more accurate/reduces errors whilst still keeping the number of cameras required inside the module at an affordable level for gym owners. The system of the present invention can achieve a resolution of one point per 0.5 to - 5 mm. This resolution will, however, depend on the number of cameras imaging each point on the surface of the subject simultaneously. [pg. 5 ln. 30 - pg. 6 ln. 4] Points in this context refer to sites spaced across the surface of the scanning volume and projected onto the subject within the scanning volume at locations for which 3D coordinates are to be derived. These will preferably number no less than 1 per square millimetre on the surface of the subject. In some embodiments, points can be spaced closer together in regions of the subject which are likely to be more detailed but there should be no less than l point per square millimetre on any part of the surface in order to ensure that the final digital model is of adequate resolution). Howells does not teach generating second ancillary file format images. Zimmer teaches generating second ancillary file format images ([0044] In a RAW processing stage 302, RAW processing is performed on the RAW image 312 using one or more various processes, including a black subtraction process 320, a highlight recovery process 321, a stuck pixel elimination process 322, an auto-exposure adjustment process 323, a demosaic (de-Bayer) process 324, and a chroma-blur process 325. 
One purpose of the RAW processing stage 302 is to preserve spatial quality and detail of the captured scene in the RAW image 312. [0045] To implement these processes 320-325, the metadata 314 is used to obtain various raw processing algorithms associated with the processes (Block 328). Although shown in a particular order, one or more of these processes 320-325 may be rearranged depending on specific requirements, such as any particulars associated with the camera. In addition, one or more of these processes 320-325 may or may not be used for various reasons, and other processes commonly used in image processing can be used. Furthermore, some of these various processes 320-325 may actually operate in conjunction with one another in ways not necessarily expressed herein. [0046] At the end of the RAW processing stage 302, the demosaic process 326 produces a camera RGB image 329 from the initial RAW image 312). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Howells with the teachings of Zimmer to generate second ancillary format images for the purpose of "preserving the spatial quality and the detail of the digital image… [and] sufficiently representing the color of the digital image" [Zimmer 0007]. Regarding claim 13, Howells and Zimmer teach the method of claim 1. Zimmer teaches wherein the first and second sets of conversion parameters comprise at least one or more of: resolution conversion parameters; bit depth conversion parameters; pixel color format ([0069] FIG. 5 shows a highlight recovery process 500 that is applied to recover the luminance and hue of pixels where one or two of the channels are clipped by the limits of the sensors); and noise suppression parameters ([0082] The RAW images are then processed to quantify the noise response (e.g., amount of noise) and to quantify the stuck pixel response in each RAW image (Block 606). [0083] Ultimately, the stored profile data characterizing various cameras can be used to decide how much to adjust a given RAW image for noise). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Howells with the teachings of Zimmer to use pixel color format and noise suppression parameters for the purpose of "preserving the spatial quality and the detail of the digital image… [and] sufficiently representing the color of the digital image" [Zimmer 0007]. Regarding claim 14, Howells and Zimmer teach the method of claim 1. Howells further teaches an image processing device comprising memory circuitry, processor circuitry, and an interface, wherein the image processing device is configured to perform any of the methods ([pg. 15 ln. 30 - pg. 16 ln. 6] Figure 2 shows an overview of the system which comprises a scanning module, a client app or web portal and a back end including at least one secure web server and a database on the server or coupled to the server and Figure 3 shows the architecture of the system in more detail. The scanning module or booth includes scanning means for collecting the raw data needed to produce a 3D image. Data capture generally uses cameras or sensors mounted within the module (along with illumination and positioning systems such as those described in detail below). Data is transferred to and stored on the server or servers in cloud storage after a scan. Processing can take place before, after or during upload to the server. 
Once data has been uploaded a user can either download an app onto a personal device). Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Howells in view of Zimmer and Babiloni (US20220247889A1). Regarding claim 10, Howells and Zimmer teach the method of claim 1. Babiloni, in the same field of endeavor of raw image transformation, teaches wherein, for each raw file format image, the steps of generating the first ancillary file format image from the raw file format image and determining spatial coordinates are performed before the step of generating the second ancillary file format image from the raw file format image ([0055] FIG. 5 illustrates an example of the high-level structure of an image processing system 500 for transforming a raw image 501 to an RGB image 502. The pipeline takes the raw sensor input 501, which is a matrix of height×width×one channel of size H×W×1 sampled on a color filter array (Bayer pattern). The raw image is packed, as shown at 503. The deep learning approach applies two subnetworks based on the LAB color space, shown at 504 and 505. The CNN subnetwork 504 reconstructs the luminance (L) relating to image grayscale brightness, representing texture, edges and image structure including high frequency detail. The input to this module is the raw H×W×1 data and the output is H×W×1 data which gives the luminance channel (L), shown at 506. The CNN subnetwork 505 estimates the image chrominance (AB) relating to image color. The input to this module is the raw H×W×1 data and the output is H×W×2 data corresponding to the two chrominance channels (AB), shown at 507). [0057] The processes performed by the two modules are linked together, as depicted by the dotted arrow indicated at 509. Through the linking mechanism, luminance information is used to produce higher quality color output). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Howells with the teachings of Babiloni to generate the first ancillary file format image from the raw image and determine spatial coordinates before generating the second ancillary file format image because "through the linking mechanism, luminance information is used to produce higher quality color output" [Babiloni 0057]. Regarding claim 11, Howells and Zimmer teach the method of claim 1. Babiloni teaches wherein, for a given raw file format image, the steps of generating the first ancillary file format image from said raw file format image and determining spatial coordinates based thereon are performed before the step of generating a first ancillary file format image from a subsequent raw file format image ([0055] FIG. 5 illustrates an example of the high-level structure of an image processing system 500 for transforming a raw image 501 to an RGB image 502. The pipeline takes the raw sensor input 501, which is a matrix of height×width×one channel of size H×W×1 sampled on a color filter array (Bayer pattern). The raw image is packed, as shown at 503. The deep learning approach applies two subnetworks based on the LAB color space, shown at 504 and 505. The CNN subnetwork 504 reconstructs the luminance (L) relating to image grayscale brightness, representing texture, edges and image structure including high frequency detail. The input to this module is the raw H×W×1 data and the output is H×W×1 data which gives the luminance channel (L), shown at 506. 
The CNN subnetwork 505 estimates the image chrominance (AB) relating to image color. The input to this module is the raw H×W×1 data and the output is H×W×2 data corresponding to the two chrominance channels (AB), shown at 507). [0057] The processes performed by the two modules are linked together, as depicted by the dotted arrow indicated at 509. Through the linking mechanism, luminance information is used to produce higher quality color output). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Howells with the teachings of Babiloni to generate the first ancillary file format image from the raw image and determine spatial coordinates before doing so for a subsequent image so that “the image may be converted to an RGB color space once luminance and color recovery have been addressed in a different color space" [Babiloni 0015]. Allowable Subject Matter Claim 12 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Regarding claim 12, Babiloni teaches the steps of generating the first ancillary file format image from the raw file format image and determining spatial coordinates are performed before the step of generating the second ancillary file format image from the raw file format image ([0055] FIG. 5 illustrates an example of the high-level structure of an image processing system 500 for transforming a raw image 501 to an RGB image 502. The pipeline takes the raw sensor input 501, which is a matrix of height×width×one channel of size H×W×1 sampled on a color filter array (Bayer pattern). The raw image is packed, as shown at 503. The deep learning approach applies two subnetworks based on the LAB color space, shown at 504 and 505. The CNN subnetwork 504 reconstructs the luminance (L) relating to image grayscale brightness, representing texture, edges and image structure including high frequency detail. The input to this module is the raw H×W×1 data and the output is H×W×1 data which gives the luminance channel (L), shown at 506. The CNN subnetwork 505 estimates the image chrominance (AB) relating to image color. The input to this module is the raw H×W×1 data and the output is H×W×2 data corresponding to the two chrominance channels (AB), shown at 507). [0057] The processes performed by the two modules are linked together, as depicted by the dotted arrow indicated at 509. Through the linking mechanism, luminance information is used to produce higher quality color output). The following limitation is not found to be taught in the art: wherein the steps of generating the first ancillary file format images from the raw file format images and determining spatial coordinates are performed for all raw file format images before the steps of generating the second ancillary file format images from the raw file format images. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacqueline R Zak whose telephone number is (571)272-4077. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JACQUELINE R ZAK/Examiner, Art Unit 2666 /EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666
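The limitation at the center of the §103 dispute is deriving two differently converted image sets from the same raw captures, one used for triangulation and one for surface color. A purely illustrative Python sketch of that idea, with hypothetical parameter names and a toy Bayer pipeline rather than the applicant's claimed method or the cited references' implementations:

import numpy as np

# Toy RGGB Bayer mosaic (photo-site values, no demosaicing), standing in for a raw file format image.
rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, size=(8, 8)).astype(np.float64)   # 12-bit-style sensor values

def convert_raw(mosaic, params):
    """Convert a Bayer mosaic to RGB using one set of conversion parameters (illustrative only)."""
    m = np.clip(mosaic - params["black_level"], 0, None)
    # Naive 2x2 demosaic: each RGGB quad becomes one RGB pixel (halves resolution).
    r = m[0::2, 0::2]
    g = (m[0::2, 1::2] + m[1::2, 0::2]) / 2.0
    b = m[1::2, 1::2]
    rgb = np.stack([r * params["wb"][0], g * params["wb"][1], b * params["wb"][2]], axis=-1)
    rgb = rgb / rgb.max()
    if params["denoise"]:
        rgb = (rgb + np.roll(rgb, 1, axis=0) + np.roll(rgb, 1, axis=1)) / 3.0   # crude smoothing
    return rgb ** (1.0 / params["gamma"])

# Two different sets of conversion parameters applied to the same raw data:
geometry_params = {"black_level": 64, "wb": (1.0, 1.0, 1.0), "denoise": False, "gamma": 1.0}
color_params    = {"black_level": 64, "wb": (2.0, 1.0, 1.6), "denoise": True,  "gamma": 2.2}

first_ancillary  = convert_raw(raw, geometry_params)   # e.g. fed to feature matching / triangulation
second_ancillary = convert_raw(raw, color_params)      # e.g. used to texture the 3D model
print(first_ancillary.shape, second_ancillary.shape)   # (4, 4, 3) (4, 4, 3)

In this sketch the first rendering keeps a linear, unsmoothed signal suited to feature matching and triangulation, while the second trades that for white-balanced, denoised color suited to texturing, which is roughly the two-parameter-set distinction the Remarks argue the Howells and Zimmer combination does not teach.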

Prosecution Timeline

Jul 04, 2024: Application Filed
Jan 25, 2025: Non-Final Rejection — §103
Apr 25, 2025: Response Filed
May 07, 2025: Final Rejection — §103
Aug 15, 2025: Interview Requested
Aug 28, 2025: Applicant Interview (Telephonic)
Aug 28, 2025: Examiner Interview Summary
Sep 15, 2025: Response after Non-Final Action
Oct 10, 2025: Request for Continued Examination
Oct 16, 2025: Response after Non-Final Action
Oct 22, 2025: Non-Final Rejection — §103
Dec 07, 2025: Interview Requested
Dec 22, 2025: Applicant Interview (Telephonic)
Dec 22, 2025: Examiner Interview Summary
Jan 21, 2026: Response Filed
Jan 30, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586340
PIXEL PERSPECTIVE ESTIMATION AND REFINEMENT IN AN IMAGE
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12462343
MEDICAL DIAGNOSTIC APPARATUS AND METHOD FOR EVALUATION OF PATHOLOGICAL CONDITIONS USING 3D OPTICAL COHERENCE TOMOGRAPHY DATA AND IMAGES
Granted Nov 04, 2025 • 2y 5m to grant
Patent 12373946
ASSAY READING METHOD
Granted Jul 29, 2025 • 2y 5m to grant

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 67%
With Interview: 55% (-11.4%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
