Prosecution Insights
Last updated: April 19, 2026
Application No. 18/026,479

IMAGE GENERATION SYSTEM, IMAGE GENERATION METHOD, AND RECORDING MEDIUM

Final Rejection — §102, §103
Filed
Mar 15, 2023
Examiner
CASCAIS, JUSTIN PHILIP
Art Unit
2674
Tech Center
2600 — Communications
Assignee
NEC Corporation
OA Round
2 (Final)
70%
Grant Probability
Favorable
3-4
OA Rounds
3y 0m
To Grant
86%
With Interview

Examiner Intelligence

Grants 70% — above average
70%
Career Allow Rate
31 granted / 44 resolved
+8.5% vs TC avg
Strong +15% interview lift
+15.2%
Interview Lift
allow rate among resolved cases with vs. without an interview
Typical timeline
3y 0m
Avg Prosecution
23 currently pending
Career history
67
Total Applications
across all art units
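
A minimal sketch of the arithmetic behind the tiles above, assuming the lift compares resolved cases with vs. without an examiner interview; the without-interview rate is back-solved from the stated lift, so treat it as illustrative:

```python
# Career allow rate: grants as a share of resolved cases (31 / 44 above).
granted, resolved = 31, 44
career_allow_rate = granted / resolved              # ~0.705 -> shown as 70%

# Interview lift: allow rate with an interview minus the rate without.
# 86% (with) is shown above; the without-interview rate is back-solved
# from the +15.2% lift, so both lines below are estimates.
rate_with = 0.86
rate_without = rate_with - 0.152                    # ~0.708
print(f"allow rate {career_allow_rate:.1%}, lift {rate_with - rate_without:+.1%}")
```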

Statute-Specific Performance

§101
15.1%
-24.9% vs TC avg
§103
57.6%
+17.6% vs TC avg
§102
20.9%
-19.1% vs TC avg
§112
6.4%
-33.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 44 resolved cases
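
Reading the percentages as this examiner's mix of rejections by statute (they sum to roughly 100%), each delta is simply the examiner's share minus the Tech Center average. A short sketch of that arithmetic, back-solving the TC averages from the stated deltas (estimates, per the chart note):

```python
# Examiner's statute mix (shares of rejections, read off the panel above).
examiner = {"101": 0.151, "102": 0.209, "103": 0.576, "112": 0.064}
# Deltas reported against the Tech Center average.
delta = {"101": -0.249, "102": -0.191, "103": +0.176, "112": -0.336}

for s, rate in examiner.items():
    tc_avg = rate - delta[s]   # back-solved Tech Center average estimate
    print(f"§{s}: examiner {rate:.1%} vs TC avg {tc_avg:.1%} ({delta[s]:+.1%})")
```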

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Amendment

Applicant submitted amendments on 7/30/2025. The Examiner acknowledges the amendment and has reviewed the claims accordingly.

Priority

Receipt is acknowledged that the application is a National Stage application of PCT/JP2020/036345. Priority to PCT/JP2020/036345 with a priority date of 9/25/2020 is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The IDS dated 3/15/2023 that was previously considered remains placed in the application file.

Overview

Claims 1-13 are pending in this application and have been considered below. Claims 1-13 are rejected.

Applicant Arguments

Regarding Argument 1, Applicant states the cited references do not disclose Claim 1, specifically "…that takes into account an error that occurs when the position information is detected…". Applicant states Kozakaya describes only "detecting facial features from facial images" and "generating different facial pattern images by perturbing the coordinates of facial features in arbitrary direction," with no mention of estimation errors or of how to calculate the perturbation quantity according to the estimation error (see Remarks, page 7).

Regarding Argument 2, Applicant states the cited references do not disclose Claim 5, specifically the "error" evaluated using a test image. Applicant states Lawlor describes only posture error, which is different from the error recited (see Remarks, page 7 bottom, page 8 top).

Examiner's Response

In response to Argument 1, the Examiner respectfully disagrees. The Applicant states Kozakaya does not disclose Claim 1, specifically "…that takes into account an error that occurs when the position information is detected…". The Applicant is reminded that the opinion in In re Hiniker Co., 47 USPQ2d 1523 (Fed. Cir. 1998) stated "...the name of the game is the claim. See Giles Sutherland Rich, Extent of Protection and Interpretation of Claims--American Perspectives, 21 Int'l Rev. Indus. Prop. & Copyright L. 497, 499 (1990) ('The U.S. is strictly an examination country and the main purpose of the examination, to which every application is subjected, is to try to make sure that what each claim defines is patentable. To coin a phrase, the name of the game is the claim.')." Our reviewing Court has made clear that examined claims are interpreted as broadly as is reasonable using ordinary and accustomed term meanings so as to be consistent with the Specification. In re Thrift, 298 F.3d 1357, 1364 (Fed. Cir. 2002).

The Examiner must interpret the phrase "that takes into account an error that occurs when the position information is detected" under the broadest reasonable interpretation (BRI). Under BRI, "takes into account" broadly means considering, incorporating, or addressing errors in some way, without requiring explicit calculation of error values. "Error" could include inaccuracies from detection processes such as sensor noise, camera movement, or estimation under uncertainty. Considering the language of the claim under BRI, the Examiner finds the perturbation is not required to be solely for error correction, and instead could serve multiple purposes as long as it inherently addresses detection errors. The limitation as a whole, under BRI, is interpreted to cover a process where position data is detected, then modified to reflect or compensate for potential detection inaccuracies, resulting in variant position information used in further processing.

Kozakaya in ¶21 discloses "the image recognition apparatus ... includes an image input unit for inputting a face of an objective person, an object detection unit for detecting the face of the person from the inputted image." Additionally, ¶88 discloses "an image (depth map) having a depth as a pixel value may be inputted from a device capable of measuring a three-dimensional shape, such as a range finder." This establishes that position information is detected, including 2D feature points from the image and 3D depth data, which are subject to detection errors, for example from rangefinder inaccuracies. See also ¶¶61 and 72.

Kozakaya in ¶¶88-90 discloses "when the camera motion matrix is obtained from expression (3), not only a method of obtaining a generalized inverse matrix, but also any method may be used. For example, M-estimation as one of robust estimations is used, and the camera motion matrix can be obtained. When an estimated error ε_M of the camera motion matrix is defined as in expression (7), as indicated in expression (8), M̃ which minimizes the estimated error is solved in accordance with the evaluation reference function ρ(x) to obtain the camera motion matrix." This shows explicit error minimization during model creation, addressing errors in position detection due to camera motion. Overall, the perturbations "take into account" errors by generating position variants that compensate for them, aligning with BRI's allowance for functional equivalence without verbatim language. The Examiner interprets the prior art to teach "…that takes into account an error that occurs when the position information is detected…".
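
To make the M-estimation passage concrete: the sketch below is an editorial illustration of robust estimation of a motion matrix by iteratively reweighted least squares, with a Cauchy-style loss standing in for the evaluation reference function ρ(x). It is not Kozakaya's implementation; the shapes, the loop, and all names are assumptions.

```python
import numpy as np

def m_estimate_motion(W, S, sigma=1.0, iters=20):
    """Robustly solve W ~ M @ S for the motion matrix M.
    W: (2F, P) measured feature coordinates; S: (4, P) shape points."""
    M = W @ np.linalg.pinv(S)                    # plain least-squares start
    for _ in range(iters):
        resid = W - M @ S
        e = np.linalg.norm(resid, axis=0)        # per-point error magnitude
        w = 1.0 / (1.0 + (e / sigma) ** 2)       # Cauchy-style rho -> weights
        sw = np.sqrt(w)                          # scale each point's column
        M = (W * sw) @ np.linalg.pinv(S * sw)    # reweighted least squares
    return M
```

Outlier points get small weights, so the recovered matrix is dominated by well-detected features, which is the "error minimization during model creation" the response points to.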
In response to Argument 2, the Examiner respectfully disagrees. The Applicant states the references do not disclose Claim 5, specifically the "error" evaluated using test images. The Applicant is again reminded of In re Hiniker Co., 47 USPQ2d 1523 (Fed. Cir. 1998) ("...the name of the game is the claim."), and that examined claims are interpreted as broadly as is reasonable using ordinary and accustomed term meanings so as to be consistent with the Specification. In re Thrift, 298 F.3d 1357, 1364 (Fed. Cir. 2002).

The Examiner must interpret the phrase "evaluate the error by using an image for a test" under the broadest reasonable interpretation. Under BRI, "error" could include inaccuracies from detection processes such as sensor noise, camera movement, or estimation under uncertainty. Considering the language of the claim under BRI, the Examiner finds that a "test image" is not necessarily required, but that the system evaluates error by using an image with "correct answer data about the position information". The limitation as a whole, under BRI, is interpreted to cover a process where error is evaluated by using an image showing the "true" position information.

The Examiner finds no language in the claim that would exclude "posture error" as alleged by the Applicant. Lawlor in ¶¶29-36 and 50 discloses "a machine learning model to predict a pose error from image data … Image is labeled with survey points with known locations … the sensor system pose data associated with the image to determine the respective capture location with respect to a common or global coordinate system". This shows explicitly that error is evaluated based on an image showing "known" or "correct" position information. Overall, the "error" is evaluated by comparing captured points with known points, aligning with BRI's allowance for functional equivalence without verbatim language. The Examiner interprets the prior art to teach "evaluate the error by using an image for a test".

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 7, and 12-13 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kozakaya et al. (US 2006/0269143 A1, hereafter "Kozakaya").
Claim 1

Regarding Claim 1, Kozakaya teaches an image generation system comprising: at least one memory that is configured to store instructions (Kozakaya teaches an image recognition apparatus that uses an image obtained by photographing an object having a three-dimensional shape and performs image recognition of the object, and includes an image input unit to which the image is inputted, a three-dimensional shape information holding unit configured to store three-dimensional shape information as an origin of a three-dimensional model of the object, a model creation unit configured to create the three-dimensional model by using the input image and the three-dimensional shape information, a pattern creation unit configured to create plural pattern images in which the three-dimensional model is projected on a plane in different directions, a feature extraction unit configured to extract a feature quantity from the plural pattern images, a registration dictionary holding unit configured to register a feature quantity of the object, and a similarity degree calculation unit configured to calculate a similarity degree between the extracted feature quantity and the registered feature quantity of the object and to recognize the object based on the calculated similarity degree, [9], abstract); and

at least one first processor that is configured to execute the instructions to detect a position information about a position of a face or a position of a feature point of the face from an image (Kozakaya teaches the texture perturbation unit 26 uses the texture obtained from the model creation unit 16 and creates plural face pattern images. Since the correspondence between coordinates on the obtained texture and coordinates on the three-dimensional face model is established, the coordinates of face feature points in the texture are known. The face pattern image is cut out by using the coordinates of the face feature points in this texture. With respect to the cutting method of the face pattern image, any cutting method may be used; for example, normalization may be made so that the interval between both pupils becomes equal, or the barycenter of feature points is positioned at the center of the pattern image, [49-51], figures 2, 4, and 8);

obtain a perturbed position information that takes into account an error that occurs when the position information is detected, for the position information (Kozakaya teaches the image recognition apparatus 10 of this embodiment includes an image input unit 12 for inputting a face of an objective person, an object detection unit 14 for detecting the face of the person from the inputted image, a model creation unit 16 for creating a three-dimensional face model by using the detected face and previously held three-dimensional shape information, a texture perturbation unit 26 for creating plural face pattern images from a texture, a feature extraction unit 20 for extracting a feature quantity used for recognition from the created face pattern images, and a similarity degree calculation unit 24 for calculating a similarity degree to a previously registered registration dictionary 22. For example, different feature points perturbed in an arbitrary direction with respect to detected feature points are calculated and can be outputted. At this time, the processing of the model creation and the pattern creation is performed by the number of the sets of the outputted feature points, and the integration is performed in the feature extraction unit 20, so that the processing can be performed independently of the number of the sets of the outputted feature points. Besides, also with respect to the kind of the face feature point to be perturbed, one or all feature points can be arbitrarily combined, and also with respect to the direction in which perturbation is made, the perturbation can be made not only in a direction vertical or horizontal to the image, but also in an arbitrary direction, [26, 46-49, 53-54], figures 2, 4, and 8); and

generate a new image including the face on the basis of the perturbed position information (Kozakaya teaches, in the same passages quoted for the preceding limitation, that a different face pattern image can be created by perturbing the coordinate of the face feature point at the time of cutting in an arbitrary direction, and that the quantity of perturbation may be within any range, [26, 46-49, 53-54], figures 2, 4, and 8).

Claim 2

Regarding Claim 2, Kozakaya teaches the image generation system according to claim 1, wherein the at least one first processor is configured to execute the instructions to generate, as the new image, a normalized image obtained by adjusting at least one of a position, a size, and an angle of the face on the basis of the perturbed position information (Kozakaya teaches that with respect to the cutting method of the face pattern image, any cutting method may be used; for example, normalization may be made so that the interval between both pupils becomes equal, or the barycenter of feature points is positioned at the center of the pattern image. The image recognition apparatus 10 of this embodiment includes an image input unit 12 for inputting a face of an objective person, an object detection unit 14 for detecting the face of the person from the inputted image, a shape input unit 72 for inputting a three-dimensional shape of the face of the objective person, a shape normalization unit 76 for normalizing the inputted face shape by using previously held reference shape information 74, and a model creation unit 16 for creating a three-dimensional face model by using the detected face and normalized three-dimensional shape information 78. Incidentally, as the reference shape information 74, any information may be used; for example, the three-dimensional shape of a general face of a person as a recognition object, which has been described in the first embodiment, can be used. Besides, by performing such an iterative operation that a new reference shape is created from the average of normalized input shapes and the input shape is again created, the precision of the normalization can also be raised. The first embodiment describes that the feature points to be outputted may be plural sets of points. For example, different feature points perturbed in an arbitrary direction with respect to detected feature points are calculated and can be outputted. At this time, the processing of the model creation and the pattern creation is performed by the number of the sets of the outputted feature points, and the integration is performed in the feature extraction unit 20, so that the processing can be performed independently of the number of the sets of the outputted feature points, [26, 51-54, 76, 81-86], figures 7-8).

Claim 3

Regarding Claim 3, Kozakaya teaches the image generation system according to claim 1, further comprising a second processor that is configured to execute instructions to calculate a perturbation quantity corresponding to the error (Kozakaya teaches, for example, that when both eyes are selected as the face feature points and the perturbation is made within the range of -2 to +2 pixels in each of the horizontal and vertical directions, 625 face pattern images can be created from the texture obtained from the model creation unit 16. This range is applied to feature point coordinates to account for detection error, and creates face pattern images that adjust for inaccuracies. Under "Modified Examples," Kozakaya teaches that in the image input unit, an image (depth map) having a depth as a pixel value may be inputted from a device capable of measuring a three-dimensional shape, such as a range finder. In that case, the registration dictionary also uses feature quantities created from the depth map, and the calculation of a similarity degree is performed. In the model creation unit, when the camera motion matrix is obtained from expression (3), not only a method of obtaining a generalized inverse matrix, but also any method may be used. For example, M-estimation as one of robust estimations is used, and the camera motion matrix can be obtained as described below. When an estimated error ε_M of the camera motion matrix is defined as in expression (7), as indicated in expression (8), M̃ which minimizes the estimated error is solved in accordance with the evaluation reference function ρ(x) to obtain the camera motion matrix. Incidentally, M̃ denotes the character "M" with a tilde attached. Although any evaluation reference function ρ(x) may be used, for example, expression (9) is known. Incidentally, σ in expression (9) denotes a scale parameter. The estimated error is calculated and minimized using an evaluation reference function. This error corresponds to the perturbation quantity, i.e., the deviation due to camera motion or feature detection that the perturbation addresses, [48-54, 88-94], figures 4 and 8), wherein the at least one first processor is configured to execute the instructions to obtain the perturbed position information by adding the perturbation quantity to the position information (Kozakaya teaches that, in the texture perturbation unit, with respect to the kind of the face feature point to be perturbed, one or all feature points can be arbitrarily combined, and also with respect to the direction in which perturbation is made, the perturbation can be made not only in a direction vertical or horizontal to the image, but also in an arbitrary direction. For example, when both eyes are selected as the face feature points and the perturbation is made within the range of -2 to +2 pixels in each of the horizontal and vertical directions, 625 face pattern images can be created from the texture obtained from the model creation unit 16. FIG. 4 is a conceptual view of a case where a texture image is changed one-dimensionally. A different face pattern image can be created by perturbing the coordinate of the face feature point at the time of cutting in an arbitrary direction. The quantity of perturbation may be within any range, [48-54, 88-94], figures 4 and 8).

Claim 7

Regarding Claim 7, Kozakaya teaches the image generation system according to claim 1, wherein the at least one first processor is configured to execute the instructions to obtain the perturbed position information on the basis of a plurality of position informations detected by one first processor (Kozakaya teaches an image recognition apparatus that obtains perturbed position information. The system detects a plurality of position informations, such as feature points of a face, using a model creation unit. The position informations are processed to account for perturbations, such as errors in detection, and the perturbed position information is obtained by adjusting the detected positions, [26, 46-49, 53-54], figures 2, 4, and 8).
Claim 12

Regarding Claim 12, Kozakaya teaches an image generation method comprising: detecting a position information about a position of a face or a position of a feature point of the face from an image; obtaining a perturbed position information that takes into account an error that occurs when the position information is detected, for the position information; and generating a new image including the face on the basis of the perturbed position information (Kozakaya teaches each step through the same disclosures quoted for the corresponding limitations of Claim 1: the texture perturbation unit 26 cuts out face pattern images using the known coordinates of face feature points in the texture; different feature points perturbed in an arbitrary direction with respect to the detected feature points are calculated and can be outputted, with the model creation and pattern creation performed per set of outputted feature points and integrated in the feature extraction unit 20; and a different face pattern image is created by perturbing the coordinate of the face feature point at the time of cutting in an arbitrary direction, with the quantity of perturbation within any range, [26, 46-54], figures 2, 4, and 8).

Claim 13

Regarding Claim 13, Kozakaya teaches a non-transitory recording medium on which a computer program that allows a computer to execute an image generation method is recorded, the image generation method including the detecting, obtaining, and generating steps recited in Claim 12 (Kozakaya teaches these steps through the same disclosures quoted for Claim 1, [26, 46-54], figures 2, 4, and 8).
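
As a concrete reading of the perturbation passages cited above: shifting each selected feature point by -2 to +2 pixels in x and y gives 5 x 5 = 25 variants per point, and 25 x 25 = 625 pattern images when both eyes are selected, matching Kozakaya's example count. A minimal sketch of that grid (editorial, not from the reference; the function and coordinates are assumptions):

```python
from itertools import product

def perturbed_feature_sets(points, max_shift=2):
    """points: {name: (x, y)}. Yield one dict per perturbation combination."""
    names = list(points)
    shifts = range(-max_shift, max_shift + 1)                # -2..+2 pixels
    grids = [list(product(shifts, shifts)) for _ in names]   # (dx, dy) per point
    for combo in product(*grids):
        yield {n: (points[n][0] + dx, points[n][1] + dy)
               for n, (dx, dy) in zip(names, combo)}

eyes = {"left_eye": (120, 85), "right_eye": (180, 85)}
assert sum(1 for _ in perturbed_feature_sets(eyes)) == 625   # 5**4 variants
```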
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 4-5 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Kozakaya et al. (US 2006/0269143 A1, hereafter "Kozakaya") in view of Lawlor et al. (US 2021/0089572 A1, hereafter "Lawlor").

Claim 4

Regarding Claim 4, Kozakaya teaches the image generation system according to claim 3. Kozakaya does not explicitly teach wherein the second processor is configured to execute instructions to calculate the perturbation quantity in accordance with the error that is specified by a user. However, Lawlor teaches this limitation (Lawlor teaches a machine learning model that calculates a perturbation quantity (pose error) based on errors between survey points and camera poses. The predicting module flags sensor system pose data when the error exceeds a threshold (minimum distances greater than an error threshold), identifying captures for correction. Under the broadest reasonable interpretation of "an error specified by a user," this limitation can encompass an error threshold that is configured within the system, as user-specified parameters often include thresholds set during system design or operation. Combining familiar elements according to known methods is obvious when it yields predictable results. Here, configuring the error threshold as a user-specified value is a familiar element in image processing systems, where users commonly set error tolerances to customize performance (e.g., setting a maximum allowable error for position detection). This modification yields the predictable result of allowing user control over the perturbation process, improving system flexibility. An error threshold flags pose data, [29-36, 50, 64]).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kozakaya by calculating a perturbation quantity in accordance with error as taught by Lawlor, to make an invention that generates an image in accordance with an error specified by a user; one of ordinary skill in the art would be motivated to combine the references since configuring an error threshold is a common element in image processing systems (Lawlor, [35, 64]). Lawlor teaches the use of an error threshold to control perturbation, allowing the system to adjust the output image based on a predefined error tolerance. It would have been obvious to a POSITA to modify Lawlor's error threshold to be user-defined, as user-defined error thresholds are a well-known and standard practice in image processing systems, especially considering a user interface is incorporated in Kozakaya's and Lawlor's systems. For example, in applications such as image denoising, compression, and generative modeling, users routinely define error tolerances to balance image quality and computational efficiency based on specific use cases. This is a standard technique in the art, as it allows users to configure system performance to their needs.
A POSITA would have been motivated to make this modification because Kozakaya's system would benefit from increased user control, allowing the perturbation process to be adjusted to meet varying user needs. For example, allowing a user to specify the error threshold allows for real-time optimization by reducing computational overhead in cases where high image quality is not critical, while maintaining higher precision in other imaging contexts. The combination yields the predictable result of a system that calculates perturbation in accordance with a user-specified error. See MPEP § 2143. Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Claim 5

Regarding Claim 5, Kozakaya teaches the image generation system according to claim 3. Kozakaya does not explicitly teach further comprising a third processor that is configured to execute instructions to evaluate the error by using an image for a test having correct answer data about the position information, wherein the second processor is configured to execute instructions to calculate the perturbation quantity in accordance with the evaluated error.

However, Lawlor teaches further comprising a third processor that is configured to execute instructions to evaluate the error by using an image for a test having correct answer data about the position information (Lawlor teaches FIG. 4A is a perspective view 400 of an example of projecting rays to train a machine learning model to predict a pose error from image data, according to one embodiment. As shown in the example of FIG. 4A, image 401 is being processed to train a machine learning model to predict a pose error from image data captured using the sensor system (e.g., GPS, IMU, camera, LiDAR, Radar, etc.). Image 401 is labeled with survey points with known locations, detected at pixel locations 403a, 403b, and 403c. The model module 205 uses the sensor system pose data associated with the image 401 to determine the respective capture location 407 with respect to a common or global coordinate system 409. The model module 205 also uses the sensor system pose data and/or the meta-data associated with the sensor system to determine the physical location of the image plane 411, which represents the location and orientation of the image 401 with respect to the coordinate system 409 (e.g., a field of view corresponding to image 401). FIG. 4B is a top view 420 of the example of projecting rays to train the machine learning model to predict a pose error from image data, observing from the top 421 of the common or global coordinate system 409, according to one embodiment. In one embodiment, for each of the labeled or detected pixel locations 403a-403c of the image 401, the model module 205 generates respective rays 413a-413c originating from the capture location 407 through each of the labeled or detected pixel locations 403a-403c. The images are annotated with meta-data such as sensor system pose data and sensor system technical parameters (e.g., field of view, focal length, camera lens used, etc.). To simplify the discussion, a single camera is used as an example of the sensor system (e.g., GPS, IMU, camera, LiDAR, Radar, etc.). The camera pose data includes position data (e.g., locations of the camera when the corresponding images were captured), orientation data (e.g., pointing direction), etc. Using the camera pose data, the system 100 can determine a capture location of the camera for an image. The system 100 can use the camera pose data and the camera parameters to generate a ray from the capture location through a pixel location of the survey point on an image plane of the image. The image plane, for instance, represents the location of the camera's field of view in three-dimensional space, thereby enabling the system 100 to determine the relative orientation of the images with respect to each other. The system 100 then calculates an error between the ray generated for the image and the known physical location. In one embodiment, the error is the minimum distance between the generated ray and the known physical location of the survey point. The position information is the sensor pose data or pixel locations tied to survey points, abstract, [29-36, 50]), wherein the second processor is configured to execute instructions to calculate the perturbation quantity in accordance with the evaluated error (Lawlor teaches, in the same passages quoted above, that the perturbation quantity is calculated in accordance with the evaluated error: the machine learning model uses the evaluated errors during training to predict the necessary correction for new images, and the perturbation quantity is the predicted pose error (correction) derived from the evaluated error, abstract, [29-36, 50]).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kozakaya by evaluating error using an annotated training image and calculating the perturbation quantity in accordance with this error as taught by Lawlor, to make an invention that generates an image in accordance with error defined using a correctly annotated image; one of ordinary skill in the art would be motivated to combine the references since evaluating error by using correctly annotated image data is a common element in machine learning image processing systems (Lawlor, [29-36, 50]). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
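
The error Lawlor's training step evaluates reduces to a point-to-ray distance: project a ray from the capture location through the labeled pixel, then measure the minimum distance to the survey point's known location. A geometry-only sketch (editorial; the function and values are assumptions, not Lawlor's code):

```python
import numpy as np

def ray_point_error(capture_loc, ray_dir, survey_point):
    """Minimum distance from the ray {capture_loc + t * ray_dir, t >= 0}
    to the known survey point location."""
    d = np.asarray(ray_dir, dtype=float)
    d = d / np.linalg.norm(d)                 # unit ray direction
    v = np.asarray(survey_point, dtype=float) - np.asarray(capture_loc, dtype=float)
    t = max(float(v @ d), 0.0)                # closest approach along the ray
    return float(np.linalg.norm(v - t * d))   # perpendicular (or endpoint) distance

# A pose with no error sends the ray through the survey point (distance ~ 0);
# pose error shows up as a nonzero minimum distance.
print(ray_point_error([0, 0, 0], [1, 0, 0], [10.0, 0.5, 0.0]))  # 0.5
```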
Claim 8

Regarding Claim 8, Kozakaya teaches the image generation system according to claim 7, wherein the at least one first processor is configured to execute the instructions to obtain the perturbed position information on the basis of a plurality of position informations (Kozakaya teaches an image recognition apparatus that obtains perturbed position information. The system detects a plurality of position informations, such as feature points of a face, using a model creation unit. The position informations are processed to account for perturbations, such as errors in detection, and the perturbed position information is obtained by adjusting the detected positions, [26, 46-49, 53-54], figures 2, 4, and 8). Kozakaya does not explicitly teach the remaining limitations of Claim 8. However, Lawlor teaches them (Lawlor teaches a system that can include multiple processors or modules (module 205, 411, etc.) that detect position informations from different sources; the use of multiple data sources and associated processing modules is described, [1, 50]).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kozakaya by using a plurality of processors to detect information as taught by Lawlor, to make an image generation system that uses position information detected by a plurality of processors; one of ordinary skill in the art would be motivated to combine the references since using multiple processors to detect position information from different sources would enhance the robustness and accuracy of the system by providing diverse data inputs, a well-known advantage in image processing systems where position detection errors can impact output quality (Lawlor, [1, 50]). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Claims 6 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Kozakaya et al. (US 2006/0269143 A1, hereafter "Kozakaya") in view of Takada et al. (JP 2012-173812 A, hereafter "Takada").

Claim 6

Regarding Claim 6, Kozakaya teaches the image generation system according to claim 3. Kozakaya does not explicitly teach wherein the second processor is configured to execute instructions to calculate the perturbation quantity in accordance with a deviation of a probability distribution of a plurality of position informations. However, Takada teaches this limitation (Takada teaches a face authentication device that calculates a perturbation quantity in accordance with a deviation of a probability distribution of a plurality of position informations. The statistical shape model calculation unit estimates a Gaussian distribution of position informations detected from multiple images under varying conditions, calculating the mean and variance to determine the deviation. The perturbation quantity is then computed based on this deviation and used to adjust the position informations, [38-40, 44-58, 66-69]).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kozakaya by incorporating the method of calculating a perturbation quantity in accordance with a deviation of a probability distribution of a plurality of position informations as taught by Takada; one of ordinary skill in the art would be motivated to combine the references since the statistical method of using a distribution and its deviation to calculate the perturbation quantity would enhance the accuracy and robustness of the system by providing an approach that accounts for variability in position information, a common challenge in image processing systems (Takada, [38-40]). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
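
The statistical step attributed to Takada amounts to fitting a Gaussian to a landmark's detected positions across images and sizing the perturbation from the distribution's deviation. A minimal sketch under that reading (editorial; the data and the 2-sigma scaling are assumptions):

```python
import numpy as np

# Detected (x, y) positions of one landmark across five images.
detections = np.array([[120.2, 85.1], [119.6, 84.7], [121.0, 85.9],
                       [120.4, 84.9], [119.9, 85.3]])

mean = detections.mean(axis=0)
std = detections.std(axis=0, ddof=1)   # per-axis deviation of the distribution

perturbation_quantity = 2.0 * std      # e.g., perturb within +/- 2 sigma
perturbed = mean + np.random.uniform(-perturbation_quantity, perturbation_quantity)
```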
Claim 10

Regarding Claim 10, Kozakaya teaches the image generation system according to claim 1, wherein the at least one first processor is configured to execute the instructions to obtain the perturbed position information on the basis of a plurality of position informations detected for each of the plurality of perturbed images (Kozakaya teaches an image recognition apparatus that obtains perturbed position information. The system detects a plurality of position informations, such as feature points of a face, using a model creation unit. The position informations are processed to account for perturbations, such as errors in detection, and the perturbed position information is obtained by adjusting the detected positions, [26, 46-49, 53-54], figures 2, 4, and 8). Kozakaya does not explicitly teach further comprising a fifth processor that is configured to execute instructions to generate a plurality of perturbed images by perturbing the image. However, Takada teaches further comprising a fifth processor that is configured to execute instructions to generate a plurality of perturbed images by perturbing the image (Takada teaches generating a new image using a perturbed statistical shape model, where the perturbation is based on a plurality of position informations. The statistical shape model calculation unit creates the …

Prosecution Timeline

Mar 15, 2023
Application Filed
May 08, 2025
Non-Final Rejection — §102, §103
Jul 30, 2025
Response Filed
Sep 16, 2025
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597145
MEASURING METHOD AND SYSTEM FOR BODY-SHAPED DATA
2y 5m to grant • Granted Apr 07, 2026
Patent 12586362
METHOD AND APPARATUS WITH MULTI-MODAL FEATURE FUSION
2y 5m to grant • Granted Mar 24, 2026
Patent 12579685
SYSTEM AND METHOD FOR PERFORMING A CAMERA TO GROUND ALIGNMENT FOR A VEHICLE
2y 5m to grant • Granted Mar 17, 2026
Patent 12573178
BRAIN IMAGE CLASSIFICATION METHOD BASED ON DISCRETIZED DATA
2y 5m to grant • Granted Mar 10, 2026
Patent 12568180
APPARATUS, METHOD, AND SYSTEM FOR A PRIVACY MASK FOR VIDEO STREAMS
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
86%
With Interview (+15.2%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 44 resolved cases by this examiner. Grant probability derived from career allow rate.
