DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process (a concept performed in the human mind, including an observation, evaluation, judgment, or opinion), as well as the related abstract-idea groupings of certain methods of organizing human activity and mathematical concepts and calculations.
Claim 1 recites "A device comprising: a memory configured to store a first model and a second model, wherein the first model is configured to perform inference based on a first set of parameters corresponding to a first context; and one or more processors configured to: process, using the second model, the first set of parameters and input corresponding to a second context to generate an output of the second model; and update the first model to perform inference using an updated set of parameters based on the output of the second model."
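For purposes of illustration only, the data flow recited above may be summarized in the following sketch. The sketch forms no part of the claims or the record; every name in it (e.g., FirstModel, SecondModel, process) is hypothetical, and the placeholder arithmetic merely stands in for whatever inference the claimed models perform:

```python
# Illustrative sketch only; all names and the placeholder arithmetic are
# hypothetical and are not drawn from the claims or the specification.
import numpy as np

class FirstModel:
    """Performs inference based on a set of parameters tied to a context."""
    def __init__(self, params: np.ndarray):
        self.params = params  # first set of parameters (first context)

    def infer(self, x: np.ndarray) -> np.ndarray:
        return x @ self.params  # placeholder for model inference


class SecondModel:
    """Maps (current parameters, input for a new context) to an output that
    is used to update the first model's parameters."""
    def process(self, params: np.ndarray, ctx_input: np.ndarray) -> np.ndarray:
        # Placeholder: the output could be the updated parameters themselves
        # or adjustment values applied to the current parameters.
        return params + 0.01 * np.outer(ctx_input, ctx_input @ params)


first = FirstModel(np.eye(3))                      # memory stores both models
second = SecondModel()
output = second.process(first.params, np.ones(3))  # input for a second context
first.params = output                              # update the first model
print(first.infer(np.ones(3)))                     # inference with updated set
```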
This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations showing that the abstract idea is applied to solve a particular technological problem. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be performed mentally, and no additional features in the claim would preclude them from being performed as such.
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that claims 1-24 are directed to an abstract idea as shown below:
STEP 1: Do the claims fall within one of the statutory categories? YES. Claims 1-24 are directed to a device, a method, a non-transitory computer-readable medium, and an apparatus.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? YES, the claims are directed to a mental process (i.e., an abstract idea).
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
Mathematical concepts — mathematical relationships, mathematical formulas or equations, mathematical calculations;
Certain methods of organizing human activity — fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes — concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
The device of claim 1 recites a process that can be practicably performed in the human mind (or by generic computers or components configured to perform the functions) and is, therefore, an abstract idea.
Regarding claim 1: a memory configured to store a first model and a second model, wherein the first model is configured to perform inference based on a first set of parameters corresponding to a first context (mental process performed in the human mind or generic computers or components configured to perform the functions); and one or more processors configured to: process, using the second model, the first set of parameters and input corresponding to a second context to generate an output of the second model (mental process performed in the human mind or generic computers or components configured to perform the functions); and update the first model to perform inference using an updated set of parameters based on the output of the second model (mental process performed in the human mind or generic computers or components configured to perform the functions).
These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas—the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("[M]ental processes and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Claim 1 does not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO, the claims do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure remains in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
Claim 1 does not recite any additional elements that are not well-understood, routine, or conventional. The use of generic computer elements to "store," "perform inference," "process," "generate," and "update" as claimed in claim 1 is a routine, well-understood, and conventional process performed by computers.
Independent claims 22-24 contain the same steps as claim 1; therefore, the same rationale applies.
Thus, because claims 1-24 (a) are directed to an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, claims 1-24 are not directed to eligible subject matter under 35 U.S.C. 101.
Regarding claims 2-21: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4-11, 13-16, and 19-24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lee et al. (hereinafter Lee) (EP-4507319-A1).
Regarding claim 1: Lee discloses a memory (Fig. 31, memory 520) configured to store a first model (Fig. 19, first neural network model 1210) and a second model (Fig. 19, second neural network model 1220), wherein the first model (Fig. 19, first neural network model 1210) is configured to perform inference based on a first set of parameters (Fig. 16, inference image 162 and correction parameter 1610) corresponding to a first context (As described above, a neural network model of the image correction module 1200 may be selected based on at least one of a type of the image sensor 100, a shooting mode, and a shooting condition. In particular, when a type of image sensor 100 and a detailed component of the image reconstruction module 1100 are selected according to a shooting context (a shooting mode and a shooting condition), a neural network model of the image correction module 1200 may be selected accordingly., par. 111); and one or more processors (Fig. 31, processor 530) configured to: process, using the second model (Fig. 19, second neural network model 1220), the first set of parameters (Fig. 16, correction parameter 1610) and input (Fig. 19, input image 161) corresponding to a second context (As described above, neural network models included in the image correction module 1200 may be trained by setting a direction of correction of image characteristics differently for each shooting context (e.g., increasing contrast in a zoom mode, increasing brightness in night shooting mode, etc.), and hereinafter, a method of training a neural network model included in the image correction module 1200 is described with reference to FIGS. 16 to 33 ., par. 137) to generate an output of the second model (Fig. 19, second neural network model 1220); and update the first model (Fig. 17, first neural network model 1210) to perform inference (Fig. 17, inference image 162) using an updated set of parameters (Fig. 17, correction parameter 1610) based on the output of the second model (Fig. 17, second neural network model 1220).
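For clarity, the data flow of Lee relied upon above may be paraphrased in the following sketch. This is the examiner's illustration only, not Lee's code; the function names are hypothetical, and the placeholder operations stand in for Lee's trained neural network models 1210 and 1220:

```python
# Hypothetical paraphrase of the data flow shown in Lee's Figs. 17 and 19.
import numpy as np

def second_model(input_image: np.ndarray) -> float:
    """Stands in for second neural network model 1220: infers a correction
    parameter (cf. element 1610) from the input image (cf. element 161)."""
    return float(input_image.mean())  # placeholder for the inferred parameter

def first_model(input_image: np.ndarray, correction_parameter: float) -> np.ndarray:
    """Stands in for first neural network model 1210: produces an inference
    image (cf. element 162) from the input image and the correction parameter."""
    return np.clip(input_image * (1.0 + correction_parameter), 0.0, 1.0)

image = np.random.rand(8, 8)           # input image for the second context
param = second_model(image)            # output of the second model
inference = first_model(image, param)  # first model conditioned on that output
```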
Regarding claim 2: Lee satisfies all the elements of claim 1. Lee further discloses wherein the first context (As described above, a neural network model of the image correction module 1200 may be selected based on at least one of a type of the image sensor 100, a shooting mode, and a shooting condition. In particular, when a type of image sensor 100 and a detailed component of the image reconstruction module 1100 are selected according to a shooting context (a shooting mode and a shooting condition), a neural network model of the image correction module 1200 may be selected accordingly., par. 111) corresponds to a 2-dimensional (2D) or 3-dimensional (3D) representation of a first scene or a first 3D object (Fig. 23), and wherein the second context (As described above, neural network models included in the image correction module 1200 may be trained by setting a direction of correction of image characteristics differently for each shooting context (e.g., increasing contrast in a zoom mode, increasing brightness in night shooting mode, etc.), and hereinafter, a method of training a neural network model included in the image correction module 1200 is described with reference to FIGS. 16 to 33., par. 137) corresponds to a 2D or 3D representation of a second scene or a second 3D object (Fig. 23).
Regarding claim 4: Lee satisfies all the elements of claim 1. Lee further discloses wherein generation of the output of the second model includes performance of multiple iterations of inference at the second model (Applicant’s SPEC describes iterations as instances of inferences; According to an embodiment of the disclosure, a neural network model included in the image correction module 1200 may include the first neural network model 1210 and the second neural network model 1220, the first neural network model 1210 may be a model trained to minimize a difference between an inference image output when an input image and a correction parameter are fed into the first neural network model 1210 and a label image corresponding to the correction parameter, the label image corresponding to the correction parameter may be an image obtained by correcting the input image by using at least one image correction algorithm to which the correction parameter is applied, the correction parameter fed into the first neural network model 1210 may be a correction parameter inferred when the input image is fed into the second neural network model 1220, and the second neural network model 1220 may be a model trained to minimize a difference between the correction parameter inferred by the second neural network model 1220 when the input image is fed thereinto and a correction parameter that causes the label image to have preset image characteristics., par. 246).
Regarding claim 5: Lee satisfies all the elements of claim 1. Lee further discloses wherein the output of the second model (Fig. 19, second neural network model 1220) includes the updated set of parameters (Fig. 19, correction parameter 1610), or a set of adjustment values to apply to the first set of parameters to generate the updated set of parameters (The second neural network model 1220 may infer the correction parameter 1610 for correcting the input image 161 to have image characteristics that many users would typically find pleasing (e.g., image characteristics determined to be optimal by a designer of a neural network model). Therefore, the second neural network model 1220 may be used to automatically generate (infer) a correction parameter for correcting the input image 161 to have optimal image characteristics without the user needing to set or adjust a correction parameter each time, and the input image 161 may be corrected according to the correction parameter and presented to the user. For example, when the user captures an image via a terminal where the first and second neural network models 1210 and 1220 are embedded, the captured image may be corrected according to the correction parameter 1610 inferred by the second neural network model 1220 and displayed on a screen of the terminal as a preview., par. 178).
Regarding claim 6: Lee satisfies all the elements of claim 1. Lee further discloses wherein the one or more processors (Fig. 31, processor 530) are configured to: access a collection of stored parameter sets corresponding to multiple contexts for the first model (As described above, neural network models included in the image correction module 1200 may be trained by setting a direction of correction of image characteristics differently for each shooting context (e.g., increasing contrast in a zoom mode, increasing brightness in night shooting mode, etc.), and hereinafter, a method of training a neural network model included in the image correction module 1200 is described with reference to FIGS. 16 to 33. In the following embodiment of the disclosure, it is assumed that the neural network models included in the image correction module 1200 each include two neural network models (a first neural network model and a second neural network model). In other words, each correction parameter included in the image correction module 1200 includes both parameters for setting the first neural network model and parameters for setting the second neural network model., pars. 137-138); and identify, based on a similarity measure, a particular context of the multiple contexts that has a closest similarity to the second context (Referring back to FIG. 18 , the label image 163 output from the label image generation module 1810 may be used as training data (in particular, ground truth data) for training the first neural network model 1210. The first neural network model 1210 may be trained to maximize the similarity between the label image 163 and the inference image 162 that is output from the first neural network model 1210 when the input image 161 and the correction parameter 1610 are fed into the first neural network model 1210. That is, according to an embodiment of the disclosure, the first neural network model 1210 may be a model trained to minimize a difference between the inference image 162, which is output when the input image 161 and the correction parameter 1610 are received as an input, and the label image 163 corresponding to the correction parameter 1610. In this case, the label image 163 corresponding to the correction parameter 1610 may be an image obtained by correcting the input image 161 by using at least one image correction algorithm to which the correction parameter 1610 is applied., par. 169).
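The "similarity measure" mapping above may be illustrated with the following sketch. The use of cosine similarity and the example descriptors are assumptions chosen for illustration only; they do not appear verbatim in Lee:

```python
# Illustrative nearest-context lookup; the metric and data are hypothetical.
import numpy as np

def closest_context(stored: dict, query: np.ndarray) -> str:
    """Return the stored context whose feature descriptor is most similar
    (by cosine similarity) to the descriptor of the new (second) context."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(stored, key=lambda name: cos(stored[name], query))

# Hypothetical descriptors for two stored contexts (cf. Lee's daytime and
# nighttime model parameters).
contexts = {"daytime": np.array([1.0, 0.1]), "nighttime": np.array([0.1, 1.0])}
print(closest_context(contexts, np.array([0.9, 0.2])))  # -> "daytime"
```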
Regarding claim 7: Lee satisfies all the elements of claim 6. Lee further discloses wherein the similarity measure is based on a set of extracted feature descriptors associated with the multiple contexts and an extracted feature descriptor associated with the second context (Referring back to FIG. 18 , the label image 163 output from the label image generation module 1810 may be used as training data (in particular, ground truth data) for training the first neural network model 1210. The first neural network model 1210 may be trained to maximize the similarity between the label image 163 and the inference image 162 that is output from the first neural network model 1210 when the input image 161 and the correction parameter 1610 are fed into the first neural network model 1210. That is, according to an embodiment of the disclosure, the first neural network model 1210 may be a model trained to minimize a difference between the inference image 162, which is output when the input image 161 and the correction parameter 1610 are received as an input, and the label image 163 corresponding to the correction parameter 1610. In this case, the label image 163 corresponding to the correction parameter 1610 may be an image obtained by correcting the input image 161 by using at least one image correction algorithm to which the correction parameter 1610 is applied. In other words, the optimizer 1820 may update the first neural network model 1210 to minimize a loss value of a loss function 1620 representing a difference between the inference image 162 and the label image 163. In this case, the loss function 1620 may consist of a combination of mean absolute error (MAE), mean square error (MSE), and structural similarity index measure (SSIM)., pars. 169-170).
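Lee's statement that the loss function "may consist of a combination of mean absolute error (MAE), mean square error (MSE), and structural similarity index measure (SSIM)" (par. 170) can be sketched as a weighted sum. The weights and the simplified single-window SSIM below are assumptions for illustration; practical SSIM implementations compute the index over local windows:

```python
# Illustrative combined loss; weights and the global-window SSIM are assumed.
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, c1=1e-4, c2=9e-4) -> float:
    """Simplified SSIM over the whole image (one window), for illustration."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def combined_loss(pred: np.ndarray, label: np.ndarray,
                  w_mae=1.0, w_mse=1.0, w_ssim=1.0) -> float:
    """Weighted combination of MAE, MSE, and an SSIM term (1 - SSIM, so that
    lower loss corresponds to higher structural similarity)."""
    mae = np.abs(pred - label).mean()
    mse = ((pred - label) ** 2).mean()
    return w_mae * mae + w_mse * mse + w_ssim * (1.0 - ssim_global(pred, label))
```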
Regarding claim 8: Lee satisfies all the elements of claim 7. Lee further discloses wherein the feature descriptors correspond to one or more of: a scene type, an object type (Fig. 15, object recognition module 1510; An object recognition module 1510 may recognize an object in an image output by the image correction module 1200. According to an embodiment of the disclosure, the object recognition module 1510 may request a change of the image sensor 100 based on an object recognition result. Furthermore, according to an embodiment of the disclosure, the object recognition module 1510 may recommend another shooting mode to the user or automatically change the shooting mode, based on the object recognition result., par. 132), a location, features obtained via a large language model, or descriptors obtained via a large language model.
Regarding claim 9: Lee satisfies all the elements of claim 6. Lee further discloses wherein the one or more processors (Fig. 31, processor 530) are configured to select, as the first set of parameters, the stored parameter set that corresponds to the identified particular context (As described above, neural network models included in the image correction module 1200 may be trained by setting a direction of correction of image characteristics differently for each shooting context (e.g., increasing contrast in a zoom mode, increasing brightness in night shooting mode, etc.), and hereinafter, a method of training a neural network model included in the image correction module 1200 is described with reference to FIGS. 16 to 33. In the following embodiment of the disclosure, it is assumed that the neural network models included in the image correction module 1200 each include two neural network models (a first neural network model and a second neural network model). In other words, each correction parameter included in the image correction module 1200 includes both parameters for setting the first neural network model and parameters for setting the second neural network model., pars. 137-138).
Regarding claim 10: Lee satisfies all the elements of claim 6. Lee further discloses wherein the collection of stored parameter sets (As described above, neural network models included in the image correction module 1200 may be trained by setting a direction of correction of image characteristics differently for each shooting context (e.g., increasing contrast in a zoom mode, increasing brightness in night shooting mode, etc.), and hereinafter, a method of training a neural network model included in the image correction module 1200 is described with reference to FIGS. 16 to 33. In the following embodiment of the disclosure, it is assumed that the neural network models included in the image correction module 1200 each include two neural network models (a first neural network model and a second neural network model). In other words, each correction parameter included in the image correction module 1200 includes both parameters for setting the first neural network model and parameters for setting the second neural network model., pars. 137-138) is stored in the memory (Fig. 31, memory 520), and wherein the one or more processors (Fig. 31, processor 530) are configured to, based on the closest similarity (Referring back to FIG. 18, the label image 163 output from the label image generation module 1810 may be used as training data (in particular, ground truth data) for training the first neural network model 1210. The first neural network model 1210 may be trained to maximize the similarity between the label image 163 and the inference image 162 that is output from the first neural network model 1210 when the input image 161 and the correction parameter 1610 are fed into the first neural network model 1210. That is, according to an embodiment of the disclosure, the first neural network model 1210 may be a model trained to minimize a difference between the inference image 162, which is output when the input image 161 and the correction parameter 1610 are received as an input, and the label image 163 corresponding to the correction parameter 1610. In this case, the label image 163 corresponding to the correction parameter 1610 may be an image obtained by correcting the input image 161 by using at least one image correction algorithm to which the correction parameter 1610 is applied., par. 169) failing to satisfy a threshold similarity (In other words, the optimizer 1820 may update the first neural network model 1210 to minimize a loss value of a loss function 1620 representing a difference between the inference image 162 and the label image 163. In this case, the loss function 1620 may consist of a combination of mean absolute error (MAE), mean square error (MSE), and structural similarity index measure (SSIM)., par. 170 using SSIM (pass/fail); In other words, the optimizer 1820 may update the second neural network model 1220 to minimize a loss value of a second loss function 1622 representing a difference between the measured characteristic value 1910 and the target characteristic value 1920. In this case, the second loss function 1622 may consist of a combination of MAE, MSE, and SSIM., par. 183), access a remote (According to an embodiment of the disclosure, training of the first and second neural network models 1210 and 1220 may be performed by an external apparatus (e.g., a computing apparatus 500 of FIG. 31) other than a device (e.g., the user terminal 3000 of FIG. 3) into which the first and second neural network models 1210 and 1220 are loaded., par. 149) collection of parameter sets via a communication network to obtain the first set of parameters (As described above, neural network models included in the image correction module 1200 may be trained by setting a direction of correction of image characteristics differently for each shooting context (e.g., increasing contrast in a zoom mode, increasing brightness in night shooting mode, etc.), and hereinafter, a method of training a neural network model included in the image correction module 1200 is described with reference to FIGS. 16 to 33. In the following embodiment of the disclosure, it is assumed that the neural network models included in the image correction module 1200 each include two neural network models (a first neural network model and a second neural network model). In other words, each correction parameter included in the image correction module 1200 includes both parameters for setting the first neural network model and parameters for setting the second neural network model., pars. 137-138).
Regarding claim 11: Lee satisfies all the elements of claim 10. Lee further discloses wherein the one or more processors (Fig. 31, processor 530) are configured to select whether to access the remote collection at least partially based on a timing criteria (A method of selecting a neural network model in the AI reconstruction ISP 1130 of the image reconstruction module 1100 of FIG. 1 is described. FIG. 5 is a diagram for describing a method of selecting, based on a shooting context, a neural network model included in the image reconstruction module 1100, according to an embodiment of the disclosure. As shown in FIG. 5, it is assumed that the AI reconstruction ISP 1130 includes a neural network model for daytime (a daytime model parameter 1131) and a neural network model for nighttime (a nighttime model parameter 1132)., par. 55) associated with updating the first model (According to an embodiment of the disclosure, training may be performed by updating model parameters in the AI reconstruction ISP 1130 to minimize a difference between an image output when an image including noise is input to the AI reconstruction ISP 1130 and an image from which noise is removed., par. 66).
Regarding claim 13: Lee satisfies all the elements of claim 1. Lee further discloses wherein, after the first model is updated (Fig. 17, correction parameter 1610) based on the output of the second model (Fig. 17, second neural network model 1220), the one or more processors (Fig. 31, processor 530) are further configured to perform one or more training operations (According to an embodiment of the disclosure, training of the first and second neural network models 1210 and 1220 may be performed by an external apparatus (e.g., a computing apparatus 500 of FIG. 31) other than a device (e.g., the user terminal 3000 of FIG. 3) into which the first and second neural network models 1210 and 1220 are loaded. According to an embodiment of the disclosure, the device into which the first and second neural network models 1210 and 1220 are loaded may also train the first and second neural network models 1210 and 1220. Hereinafter, for convenience of description, it is assumed that a processor 530 of the computing apparatus 500 of FIG. 31 executes a program stored in a memory (520 of FIG. 31) to train the first and second neural network models 1210 and 1220 described with reference to FIGS. 18 and 19. That is, it may be considered that operations performed by a label image generation module 1810 or an optimizer 1820 and calculation of a loss function as described below with reference to FIGS. 18 and 19 are actually performed by the processor 530 of the computing apparatus 500., par. 149) on the updated first model (Fig. 17, correction parameter 1610) to enhance an inference accuracy (Fig. 17, inference image 162) of the updated first model (Fig. 17, correction parameter 1610) for the second context (As described above, neural network models included in the image correction module 1200 may be trained by setting a direction of correction of image characteristics differently for each shooting context (e.g., increasing contrast in a zoom mode, increasing brightness in night shooting mode, etc.), and hereinafter, a method of training a neural network model included in the image correction module 1200 is described with reference to FIGS. 16 to 33., par. 137).
Regarding claim 14: Lee satisfies all the elements of claim 13. Lee further discloses wherein the one or more training operations are performed until the inference accuracy reaches an accuracy threshold (An optimizer 630 may update model parameters of the AI reconstruction ISP 1130 to minimize a result output when the inference image 63 and the input image 61 are input to a loss function 620. Therefore, the AI reconstruction ISP 1130 may be trained to infer an image that is as close as possible to a denoised image when an image with added noise is input., par. 72).
Regarding claim 15: Lee satisfies all the elements of claim 13. Lee further discloses wherein the one or more processors (Fig. 31, processor 530) are configured to alternate between parameter updates using training operations (According to an embodiment of the disclosure, training of the first and second neural network models 1210 and 1220 may be performed by an external apparatus (e.g., a computing apparatus 500 of FIG. 31) other than a device (e.g., the user terminal 3000 of FIG. 3) into which the first and second neural network models 1210 and 1220 are loaded. According to an embodiment of the disclosure, the device into which the first and second neural network models 1210 and 1220 are loaded may also train the first and second neural network models 1210 and 1220. Hereinafter, for convenience of description, it is assumed that a processor 530 of the computing apparatus 500 of FIG. 31 executes a program stored in a memory (520 of FIG. 31) to train the first and second neural network models 1210 and 1220 described with reference to FIGS. 18 and 19. That is, it may be considered that operations performed by a label image generation module 1810 or an optimizer 1820 and calculation of a loss function as described below with reference to FIGS. 18 and 19 are actually performed by the processor 530 of the computing apparatus 500., par. 149) and parameter updates using the second model (Fig. 17, second neural network model 1220) until the inference accuracy reaches an accuracy threshold (An optimizer 630 may update model parameters of the AI reconstruction ISP 1130 to minimize a result output when the inference image 63 and the input image 61 are input to a loss function 620. Therefore, the AI reconstruction ISP 1130 may be trained to infer an image that is as close as possible to a denoised image when an image with added noise is input., par. 72).
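The alternation mapped above can be illustrated with a short loop. Here, train_step, second_model_step, and accuracy_fn are hypothetical placeholders for, respectively, an optimizer update of the kind described in Lee's par. 149, a parameter inference by the second model, and an inference-accuracy measurement; the threshold value is likewise an assumption:

```python
# Illustrative alternation loop; all callables and values are hypothetical.
def refine(params, accuracy_fn, train_step, second_model_step,
           threshold=0.95, max_rounds=100):
    """Alternate between training-based parameter updates and second-model
    parameter updates until inference accuracy reaches the threshold."""
    for _ in range(max_rounds):
        params = train_step(params)          # training-based update
        params = second_model_step(params)   # second-model-based update
        if accuracy_fn(params) >= threshold:
            break
    return params

# Toy usage: each update nudges a scalar "parameter" until accuracy passes.
p = refine(0.0,
           accuracy_fn=lambda p: min(1.0, 0.5 + p),
           train_step=lambda p: p + 0.05,
           second_model_step=lambda p: p + 0.05)
print(p)
```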
Regarding claim 16: Lee satisfies all the elements of claim 1. Lee further discloses the second model (Fig. 17, second neural network model 1220) is configured to generate the output based on a difference measurement (When training the second neural network model 1220, a measured characteristic value 1910 obtained by quantitatively digitizing characteristics (e.g., brightness, contrast, color temperature, etc.) of the label image 163 is compared with a preset target characteristic value 1920, and the second neural network model 1220 may be updated to minimize a difference between the measured characteristic value 1910 and the preset target characteristic value 1920. The target characteristic value 1920 may be preset to a value desired by a user (administrator). That is, according to an embodiment of the disclosure, the second neural network model 1220 may be a model trained to minimize the difference between the correction parameter 1610 inferred by the second neural network model 1220 when the input image 161 is fed thereto, and a correction parameter that causes the label image 163 to have preset image characteristics (a correction parameter that causes the label image generation module 1810 to output an image corresponding to the target characteristic value 1920 in FIG. 19)., par. 182) of the first context (As described above, a neural network model of the image correction module 1200 may be selected based on at least one of a type of the image sensor 100, a shooting mode, and a shooting condition. In particular, when a type of image sensor 100 and a detailed component of the image reconstruction module 1100 are selected according to a shooting context (a shooting mode and a shooting condition), a neural network model of the image correction module 1200 may be selected accordingly., par. 111) to the second context (As described above, neural network models included in the image correction module 1200 may be trained by setting a direction of correction of image characteristics differently for each shooting context (e.g., increasing contrast in a zoom mode, increasing brightness in night shooting mode, etc.), and hereinafter, a method of training a neural network model included in the image correction module 1200 is described with reference to FIGS. 16 to 33., par. 137).
Regarding claim 19: Lee satisfies all the elements of claim 1. Lee further discloses further comprising a camera (Fig. 3, camera module 3100) configured to generate context data associated with the second context (As described above, neural network models included in the image correction module 1200 may be trained by setting a direction of correction of image characteristics differently for each shooting context (e.g., increasing contrast in a zoom mode, increasing brightness in night shooting mode, etc.), and hereinafter, a method of training a neural network model included in the image correction module 1200 is described with reference to FIGS. 16 to 33 ., par. 137).
Regarding claim 20: Lee satisfies all the elements of claim 1. Lee further discloses further comprising a modem (Fig. 4, cloud server 400) coupled to the one or more processors (Fig. 4, user terminal 3000) and configured to receive the first model, the second model, the first set of parameters, or a combination thereof, from a remote device (Figs. 26-28 and Hereinafter, a method of correcting an image by using a neural network model is described with reference to FIGS. 26 to 28 . Hereinafter, for convenience of description, it is assumed that the ISP 3130 (corresponding to the ISP 1000 of FIG. 1 ) of the user terminal 3000 of FIG. 3 performs operations illustrated in FIGS. 26 to 28 . However, the disclosure is not limited thereto, and all or some of the operations of FIGS. 26 to 28 may be performed by the main processor 3200 of the user terminal 3000 or a processor of the cloud server 400 of FIG. 4 ., par. 200).
Regarding claim 21: Lee satisfies all the elements of claim 1. Lee further discloses further comprising a display device configured to display image data generated using the updated first model (For example, when the user captures an image via a terminal where the first and second neural network models 1210 and 1220 are embedded, the captured image may be corrected according to the correction parameter 1610 inferred by the second neural network model 1220 and displayed on a screen of the terminal as a preview., par. 178).
Regarding claim 22: The structural elements of apparatus claim 1 perform all of the steps of method claim 22. Thus, claim 22 is rejected for the same reasons discussed in the rejection of claim 1.
Regarding claim 23: Arguments analogous to those stated in the rejection of claim 1 are applicable. A non-transitory computer-readable medium is inherently taught, as evidenced by the computing apparatus 500 (Fig. 31) and the various memories included therein.
Regarding claim 24: Arguments analogous to those stated in the rejection of claim 1 are applicable.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Lee et al. (hereinafter Lee 2) (WO-2024086333-A1).
Regarding claim 3: Lee satisfies all the elements of claim 1. Lee further discloses the first model (Fig. 19, first neural network model 1210).
Lee fails to specifically disclose that the first model corresponds to a neural radiance field (NeRF) model.
Lee 2 discloses a first model that corresponds to a neural radiance field (NeRF) model (Fig. 2A, Neural Radiance Field (NeRF) Model 208).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Lee such that the first model corresponds to a neural radiance field (NeRF) model, in order to infer a 3D shape from a 2D image by performing a plurality of iterations to generate a plurality of sample 2D images of a 3D scene, as taught by Lee 2 (Abstract).
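For context only, the following sketch illustrates the volume-rendering step that characterizes NeRF-type models generally; it is not taken from Lee 2, and the toy density field stands in for the trained multilayer perceptron that a NeRF model would query:

```python
# Schematic illustration of NeRF-style volume rendering; not Lee 2's code.
import numpy as np

def render_ray(field, origin, direction, n_samples=32, near=0.1, far=4.0):
    """Alpha-composite (color, density) samples along one camera ray."""
    ts = np.linspace(near, far, n_samples)
    delta = ts[1] - ts[0]
    color, transmittance = np.zeros(3), 1.0
    for t in ts:
        rgb, sigma = field(origin + t * direction)  # field: 3D point -> (rgb, density)
        alpha = 1.0 - np.exp(-sigma * delta)        # opacity of this segment
        color += transmittance * alpha * rgb        # accumulate visible color
        transmittance *= 1.0 - alpha                # light remaining behind it
    return color

# Toy field standing in for a trained NeRF MLP: a soft sphere at the origin.
toy_field = lambda p: (np.array([1.0, 0.5, 0.2]), 5.0 * np.exp(-np.sum(p ** 2)))
print(render_ray(toy_field, origin=np.zeros(3), direction=np.array([0.0, 0.0, 1.0])))
```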
Allowable Subject Matter
Claims 12 and 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and provided that the rejection under 35 U.S.C. 101 is overcome.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLOTTE M BAKER whose telephone number is (571)272-7459. The examiner can normally be reached Mon - Fri 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JENNIFER MEHMOOD can be reached at (571)272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLOTTE M BAKER/Primary Examiner, Art Unit 2664
08 January 2026