Prosecution Insights
Last updated: April 19, 2026
Application No. 17/348,278

METHODS AND SYSTEMS FOR PREDICTING OPTICAL PROPERTIES OF A SAMPLE USING DIFFUSE REFLECTANCE SPECTROSCOPY

Non-Final OA: §101, §103
Filed
Jun 15, 2021
Examiner
SECK, ABABACAR
Art Unit
2147
Tech Center
2100 — Computer Architecture & Software
Assignee
Samsung Electronics Co., Ltd.
OA Round
1 (Non-Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 7m
Grant Probability with Interview: 55%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 309 granted / 481 resolved; +9.2% vs TC avg)
Interview Lift: -9.2% (minimal; allow rate for resolved cases with interview vs without)
Avg Prosecution: 3y 7m (typical timeline)
Career History: 506 total applications across all art units; 25 currently pending

Statute-Specific Performance

§101: 30.2% (-9.8% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 11.7% (-28.3% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)
Deltas are relative to the Tech Center average estimate; based on career data from 481 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the arguments filed on 06/15/2021. Claims 1-16 are pending in the application and have been considered below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1: For Step 1, the claim is a method, so it recites a statutory category of invention. For Step 2A, Prong 1: The claim recites the limitation of “generating, by a multi-layered Deep Fully Connected Neural Network (DFCNN) in the device, a first set of intermediate values by non-linearly mapping the plurality of diffuse reflectance values to the first set of intermediate values.” The generating limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, nothing in the claim precludes the generating step from practically being performed in the human mind. This limitation is a mathematical concept. See MPEP 2106.04(a)(2)(III)(C).

The claim recites the limitation of “generating, by a One-Dimensional-Convolutional Neural Network (1D-CNN) in the device, a second set of intermediate values by non-linearly mapping the plurality of diffuse reflectance values to the second set of intermediate values.” The generating limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, nothing in the claim precludes the generating step from practically being performed in the human mind. This limitation is a mathematical concept. See MPEP 2106.04(a)(2)(III)(C).

The claim recites the limitation of “predicting, by the device, values of the optical properties of the sample based on the first set of intermediate values and the second set of intermediate values.” The predicting limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, nothing in the claim precludes the predicting step from practically being performed in the human mind. This limitation is a mental process. See MPEP 2106.04(a)(2)(III)(C).

For Step 2A, Prong 2, the claim recites additional elements: “obtaining, by a device, a plurality of diffuse reflectance values based on optical energy diffusely reflected from the sample,” the multi-layered Deep Fully Connected Neural Network (DFCNN), the device, and the One-Dimensional-Convolutional Neural Network (1D-CNN). The “multi-layered Deep Fully Connected Neural Network (DFCNN), device, and One-Dimensional-Convolutional Neural Network (1D-CNN)” are generic computer components that amount to mere instructions to apply the abstract idea. See MPEP 2106.05(f). The additional element of “obtaining, by a device, a plurality of diffuse reflectance values based on optical energy diffusely reflected from the sample” is a form of insignificant extra-solution activity. See MPEP 2106.05(g).

Step 2B: The additional elements “multi-layered Deep Fully Connected Neural Network (DFCNN), device, and One-Dimensional-Convolutional Neural Network (1D-CNN)” do not amount to significantly more for the reasons set forth in Step 2A above. Additionally, under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B. Here the “obtaining (i.e., data gathering), by a device, a plurality of diffuse reflectance values based on optical energy diffusely reflected from the sample” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, and conventional as evidenced by MPEP 2106.05(d)(II)(i) (“Receiving or transmitting data over a network, e.g., using the Internet to gather data”). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “obtaining (i.e., data gathering), by a device, a plurality of diffuse reflectance values based on optical energy diffusely reflected from the sample,” “multi-layered Deep Fully Connected Neural Network (DFCNN),” “device,” and “One-Dimensional-Convolutional Neural Network (1D-CNN)” to perform the claim steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Regarding Claim 2: Claim 2, which incorporates the rejection of claim 1, recites an additional element: “wherein the plurality of diffuse reflectance values is provided as an [input feature vector to the DFCNN], and wherein the plurality of diffuse reflectance values corresponds to features of the input feature vector.” The recited limitation is a form of insignificant extra-solution activity. See MPEP 2106.05(g). Additionally, under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B. Here the “wherein the plurality of diffuse reflectance values is provided (i.e., transmitting) as an [input feature vector to the DFCNN], and wherein the plurality of diffuse reflectance values corresponds to features of the input feature vector” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, and conventional as evidenced by MPEP 2106.05(d)(II)(i) (“Receiving or transmitting data over a network, e.g., using the Internet to gather data”). There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.

Regarding Claim 3: Claim 3, which incorporates the rejection of claim 1, recites further limitations such as “wherein the plurality of diffuse reflectance values is provided as an input tensor to the 1D-CNN, and wherein the 1D-CNN obtains shape characteristics of the plurality of diffuse reflectance values” that are part of the abstract idea. See MPEP 2106.04(a)(2)(III)(C). For Step 2A, Prong 2, the claim recites an additional element: the 1D-CNN, a generic computer component that amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f). The additional element “1D-CNN” does not amount to significantly more for the reasons set forth in Step 2A above. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “1D-CNN” to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 4: Claim 4, which incorporates the rejection of claim 1, recites further limitations such as “generating a merged set of intermediate layer output values by merging the first set of intermediate values and the second set of intermediate values by a merging neural network and reducing the merged set of intermediate values to a predefined number of output values, wherein the intermediate values in the merged set is non-linearly mapped to the predefined number of output values by an output neural network comprising at least one layer” that are part of the abstract idea. See MPEP 2106.04(a)(2)(III)(C). There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.

Regarding Claim 5: Claim 5, which incorporates the rejection of claim 4, recites an additional element: “the DFCNN, the 1D-CNN, the merging neural network, and the output neural network are trained to predict the values of the optical properties based on a mean square weighted error cost function.” This is a generic training recitation that may amount to a generic computer component used to apply an abstract idea under MPEP 2106.05(f), and it does not amount to significantly more for the reasons set forth in Step 2A above. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, this additional element, used to perform the claim steps, amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Regarding Claim 6: Claim 6, which incorporates the rejection of claim 5, recites further limitations such as “wherein a value of the mean square weighted error cost function is determined based on an error vector and a weight vector, [wherein the error vector is a difference between a first vector, corresponding to the values of the optical properties predicted during the training], and a second vector, corresponding to specified reference values of the optical properties, and wherein the weight vector corresponds to weight factors assigned to the optical properties” that are part of the abstract idea. The claim recites an additional element, “wherein the error vector is a difference between a first vector, corresponding to the values of the optical properties predicted during the training,” which is a generic training recitation that may amount to a generic computer component used to apply an abstract idea under MPEP 2106.05(f). This additional element does not amount to significantly more for the reasons set forth in Step 2A above. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, this additional element, used to perform the claim steps, amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Regarding Claim 7: Claim 7, which incorporates the rejection of claim 6, recites further limitations such as “wherein magnitudes of dimensions of the weight vector is inversely proportional to ranges of the specified reference values of the optical properties, and wherein the ranges of the specified reference values of the optical properties correspond to differences between maximum specified reference values of the optical properties and minimum specified reference values of the optical properties” that are part of the abstract idea. There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.

Regarding Claim 8: Claim 8, which incorporates the rejection of claim 6, recites further limitations such as “wherein the specified reference values of the optical properties correspond to reference values of diffuse reflectance, and wherein the reference values of diffuse reflectance is provided as input to the DFCNN, the 1D-CNN, the merging neural network and the output neural network, during the training” that are part of the abstract idea. The claim recites additional elements: the DFCNN, the 1D-CNN, the merging neural network and the output neural network, during the training.
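The mean square weighted error cost function described in claims 6 and 7, an error vector (predicted minus reference values) weighted by factors inversely proportional to each property's reference range, can be sketched in a few lines of numpy. All property values and ranges below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Predicted and reference values for three hypothetical optical properties.
predicted = np.array([0.52, 11.8, 0.91])   # first vector: values predicted during training
reference = np.array([0.50, 12.0, 0.90])   # second vector: specified reference values

# Per-property ranges of the reference values (max minus min), per claim 7.
ref_max = np.array([1.0, 20.0, 1.0])
ref_min = np.array([0.0, 5.0, 0.6])
weights = 1.0 / (ref_max - ref_min)        # weight magnitudes inversely proportional to range

error = predicted - reference              # error vector, per claim 6
cost = np.mean(weights * error ** 2)       # mean square weighted error, ≈ 0.0011
```

Weighting by the inverse range keeps a property with a wide numeric range (here the middle one) from dominating the training loss.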
The recited “the DFCNN, the 1D-CNN, the merging neural network and the output neural network, during the training” are generic computer components that amount to mere instructions to apply the abstract idea, and the recited “training” is a generic training recitation that may amount to a generic computer component used to apply an abstract idea. See MPEP 2106.05(f). These additional elements do not amount to significantly more for the reasons set forth in Step 2A above. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, these additional elements, used to perform the claim steps, amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Regarding Claim 9: For Step 1, the claim is a device, so it recites a statutory category of invention. For Step 2A, Prong 1: The claim recites the limitation of “generate, [by a multi-layered Deep Fully Connected Neural Network (DFCNN) in the device,] a first set of intermediate values by non-linearly mapping the plurality of diffuse reflectance values to the first set of intermediate values.” The generate limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, nothing in the claim precludes the generate step from practically being performed in the human mind. This limitation is a mathematical concept. See MPEP 2106.04(a)(2)(III)(C).

The claim recites the limitation of “generate, by a One-Dimensional-Convolutional Neural Network (1D-CNN) in the device, a second set of intermediate values by non-linearly mapping the plurality of diffuse reflectance values to the second set of intermediate values.” The generate limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, nothing in the claim precludes the generate step from practically being performed in the human mind. This limitation is a mathematical concept. See MPEP 2106.04(a)(2)(III)(C).

The claim recites the limitation of “predict, by the device, values of the optical properties of the sample based on the first set of intermediate values and the second set of intermediate values.” The predict limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, nothing in the claim precludes the predict step from practically being performed in the human mind. This limitation is a mental process. See MPEP 2106.04(a)(2)(III)(C).

For Step 2A, Prong 2, the claim recites additional elements: the processor, “obtain, by a device, a plurality of diffuse reflectance values based on optical energy diffusely reflected from the sample,” the multi-layered Deep Fully Connected Neural Network (DFCNN), the device, and the One-Dimensional-Convolutional Neural Network (1D-CNN). The processor is recited at a high level of generality, i.e., as a generic processor performing a generic computer function of processing data. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component (MPEP 2106.05(f)). The additional element of “obtaining, by a device, a plurality of diffuse reflectance values based on optical energy diffusely reflected from the sample” is a form of insignificant extra-solution activity. See MPEP 2106.05(g). The “multi-layered Deep Fully Connected Neural Network (DFCNN), device, and One-Dimensional-Convolutional Neural Network (1D-CNN)” are generic computer components that amount to mere instructions to apply the abstract idea. See MPEP 2106.05(f).

Step 2B: The additional elements “processor, multi-layered Deep Fully Connected Neural Network (DFCNN), device, and One-Dimensional-Convolutional Neural Network (1D-CNN)” do not amount to significantly more for the reasons set forth in Step 2A above. Additionally, under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B. Here the “obtaining (i.e., data gathering), by a device, a plurality of diffuse reflectance values based on optical energy diffusely reflected from the sample” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, and conventional as evidenced by MPEP 2106.05(d)(II)(i) (“Receiving or transmitting data over a network, e.g., using the Internet to gather data”). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the processor, “obtaining, by a device, a plurality of diffuse reflectance values based on optical energy diffusely reflected from the sample,” “multi-layered Deep Fully Connected Neural Network (DFCNN),” “device,” and “One-Dimensional-Convolutional Neural Network (1D-CNN)” to perform the claim steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Regarding Claim 10: Claim 10, which incorporates the rejection of claim 9, recites further limitations such as “wherein the plurality of diffuse reflectance values is provided as an input feature vector to the DFCNN, and wherein the plurality of diffuse reflectance values correspond to features of the input feature vector” that are part of the abstract idea. See MPEP 2106.04(a)(2)(III)(C). There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.

Regarding Claim 11: Claim 11, which incorporates the rejection of claim 9, recites further limitations such as “wherein the plurality of diffuse reflectance values is provided as an input feature vector to the DFCNN, and wherein the plurality of diffuse reflectance values corresponds to features of the input feature vector” that are part of the abstract idea. See MPEP 2106.04(a)(2)(III)(C). The “1D-CNN” is a generic computer component that amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f). The additional element “1D-CNN” does not amount to significantly more for the reasons set forth in Step 2A above. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “1D-CNN” to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Regarding Claim 12: Claim 12, which incorporates the rejection of claim 9, recites further limitations such as “generating a merged set of intermediate layer output values by merging the first set of intermediate values and the second set of intermediate values by a merging neural network and reducing the merged set of intermediate values to a predefined number of output values” that are part of the abstract idea. See MPEP 2106.04(a)(2)(III)(C). There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.

Regarding Claim 13: Claim 13, which incorporates the rejection of claim 12, recites an additional element: “the DFCNN, the 1D-CNN, the merging neural network, and the output neural network are trained to predict the values of the optical properties based on a mean square weighted error cost function.” This is a generic training recitation that may amount to a generic computer component used to apply an abstract idea under MPEP 2106.05(f).
The additional element “the DFCNN, the 1D-CNN, the merging neural network, and the output neural network are trained to predict the values of the optical properties based on a mean square weighted error cost function” does not amount to significantly more for the reasons set forth in Step 2A above. Training using this cost function is well-understood, routine, and conventional (WURC) activity, as evidenced by Yu et al. (“Model transfer of QoT prediction in optical networks based on artificial neural networks,” B. ANN-Based Transfer Learning Procedures: “train and predict the channel QoT” and “minimized root mean square error”). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, this additional element, used to perform the claim steps, amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Regarding Claim 14: Claim 14, which incorporates the rejection of claim 13, recites further limitations such as “wherein a value of the mean square weighted error cost function is determined based on an error vector and a weight vector, [wherein the error vector is a difference between a first vector, corresponding to the values of the optical properties predicted during the training], and a second vector, corresponding to specified reference values of the optical properties, and wherein the weight vector corresponds to weight factors assigned to the optical properties” that are part of the abstract idea. The claim recites an additional element, “wherein the error vector is a difference between a first vector, corresponding to the values of the optical properties predicted during the training,” which is a generic training recitation that may amount to a generic computer component used to apply an abstract idea under MPEP 2106.05(f). This additional element does not amount to significantly more for the reasons set forth in Step 2A above. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, this additional element, used to perform the claim steps, amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Regarding Claim 15: Claim 15, which incorporates the rejection of claim 14, recites further limitations such as “wherein magnitudes of dimensions of the weight vector is inversely proportional to ranges of the specified reference values of the optical properties, and wherein the ranges of the specified reference values of the optical properties correspond to differences between maximum specified reference values of the optical properties and minimum specified reference values of the optical properties” that are part of the abstract idea. There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.

Regarding Claim 16: Claim 16, which incorporates the rejection of claim 14, recites further limitations such as “wherein the specified reference values of the optical properties correspond to reference values of diffuse reflectance, and wherein the reference values of diffuse reflectance is provided as input to the DFCNN, the 1D-CNN, the merging neural network and the output neural network, during the training” that are part of the abstract idea. The claim recites additional elements: the DFCNN, the 1D-CNN, the merging neural network and the output neural network, during the training. These recited elements are generic computer components that amount to mere instructions to apply the abstract idea, and the recited “training” is a generic training recitation that may amount to a generic computer component used to apply an abstract idea. See MPEP 2106.05(f). These additional elements do not amount to significantly more for the reasons set forth in Step 2A above. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, these additional elements, used to perform the claim steps, amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Pfefer et al. (“Reflectance-based determination of optical properties in highly attenuating tissue,” herein after referred to as Pfefer), in view of Shen et al. (“Automated spectroscopic modelling with optimized convolutional neural networks,” herein after referred to as Shen), and further in view of Sajedian et al. (“Finding the optical properties of plasmonic structures by image processing using a combination of convolutional neural networks and recurrent neural networks,” herein after referred to as Sajedian). 
As to claim 1, Pfefer teaches a method for predicting optical properties of a sample, the method comprising: obtaining, by a device, a plurality of diffuse reflectance values based on optical energy diffusely reflected from the sample (Abstract: Reflectance datasets were generated by direct measurement of Intralipid-dye tissue phantoms at…and Monte Carlo simulation of light propagation; page 207, 2.3 Tissue Phantoms Measurements, diffuse reflectance measurements are presented in Fig. 10); generating, by a multi-layered Deep Fully Connected Neural Network (DFCNN) in the device, a first set of intermediate values by non-linearly mapping the plurality of diffuse reflectance values to the first set of intermediate values (page 208, right column, the NN algorithm involved a feed-forward backpropagation network based on a Levenberg-Marquardt training function. The input layer had five nodes, corresponding to the five nonzero S values, and the output layer contained two nodes, one for each of the two optical properties predicted; wherein the Examiner interprets the output layer to generate the first set of intermediate values, and the feed-forward backpropagation network, with its five-node input layer and two-node output layer, to be a multi-layered Deep Fully Connected Neural Network (DFCNN) that performs non-linear mapping primarily through the use of non-linear activation functions in its hidden layers).

However, Pfefer fails to explicitly teach: generating, by a One-Dimensional Convolutional Neural Network (1D-CNN) in the device, a second set of intermediate values by non-linearly mapping the plurality of diffuse reflectance values to the second set of intermediate values; and predicting, by the device, values of the optical properties of the sample based on the first set of intermediate values and the second set of intermediate values.
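For technical context only (this is not the applicant's implementation or Pfefer's code), the kind of feed-forward fully connected mapping described in the Pfefer passage quoted above can be sketched in a few lines of NumPy. The five-input, two-output layer sizes follow the quoted text; the hidden width of 8, the tanh activation, and the random weights are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Five diffuse reflectance values, matching Pfefer's five-node input layer.
reflectance = rng.random(5)

# Assumed hidden width of 8; the quoted passage does not specify it.
W1, b1 = rng.standard_normal((8, 5)), np.zeros(8)
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)

hidden = np.tanh(W1 @ reflectance + b1)   # non-linear mapping via tanh
intermediate = W2 @ hidden + b2           # two outputs, per the quoted passage

print(intermediate.shape)  # (2,)
```

The tanh hidden layer is what makes the mapping non-linear; with only linear layers, the whole network would collapse to a single matrix multiplication.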
Shen, in combination with Pfefer, teaches: generating, by a One-Dimensional Convolutional Neural Network (1D-CNN) in the device, a second set of intermediate values by non-linearly mapping the plurality of diffuse reflectance values to the second set of intermediate values (Methods, A parametric representation for 1D-CNNs: We described 1D-CNNs using three types of building blocks: Convolutional blocks (Conv-blocks), Fully-connected blocks (FC-blocks), and an Output block (Fig. 6a–c). A Conv-block stacks a convolutional, batch normalisation, activation, pooling, and a dropout layer in sequence. Similarly, an FC-block consists of a fully-connected, a batch normalisation, an activation, and a dropout layer. The output block is essentially a fully-connected layer that outputs target values. Thus, a 1D-CNN can be defined by a number of Conv-blocks and FC-blocks, joined by a Flatten layer (Fig. 6d); wherein the Examiner interprets the output target values to generate the second set of intermediate values, and the 1D-CNN to perform non-linear mapping primarily through the use of non-linear activation functions in its hidden layers).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pfefer to add a One-Dimensional Convolutional Neural Network (1D-CNN), as taught by Shen, above. The modification would have been obvious because one of ordinary skill would be motivated to maximise a model’s performance, as suggested by Shen (Abstract).

However, Shen and Pfefer fail to explicitly teach: predicting, by the device, values of the optical properties of the sample based on the first set of intermediate values and the second set of intermediate values.
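Shen's parametric description (Conv-blocks, FC-blocks, a Flatten layer, and an Output block) can likewise be sketched at inference time. The following NumPy reduction is illustrative only, not Shen's code: batch normalisation and dropout are omitted because they act as identities (or fixed affine maps) at inference, and the input length, kernel size, pooling stride, and output width are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(16)  # assumed 16-point 1D sequence of reflectance values

# Conv-block: convolution -> activation -> pooling (batch norm and dropout
# are omitted; they behave as identities at inference in this sketch).
kernel = rng.standard_normal(3)
conv = np.convolve(x, kernel, mode="valid")   # length 16 - 3 + 1 = 14
act = np.maximum(conv, 0.0)                   # ReLU activation
pooled = act.reshape(-1, 2).max(axis=1)       # max pool, stride 2 -> length 7

# Flatten layer, then an Output block producing target values.
flat = pooled.ravel()
W_out = rng.standard_normal((2, flat.size))
targets = W_out @ flat                        # second set of intermediate values

print(targets.shape)  # (2,)
```

The ReLU activation after the convolution is the source of the non-linear mapping that the rejection attributes to the 1D-CNN's hidden layers.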
Sajedian, in combination with Shen and Pfefer, teaches: predicting, by the device, values of the optical properties of the sample based on the first set of intermediate values and the second set of intermediate values (Abstract, predict the optical results; Introduction, Fig. 1, Output (Optical results) obtained by combining CNNs with recurrent neural networks (RNNs)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Shen and Pfefer to add optical-properties prediction, as taught by Sajedian, above. The modification would have been obvious because one of ordinary skill would be motivated to have an accurate image processing method that can replace time- and computationally-intensive numerical simulations, as suggested by Sajedian (Abstract).

As to claim 9, Pfefer teaches a device configured to predict optical properties of a sample, the device comprising: at least one processor (Fig. 1, element 20) configured to: obtain, by a device, a plurality of diffuse reflectance values based on optical energy diffusely reflected from the sample (Abstract: Reflectance datasets were generated by direct measurement of Intralipid-dye tissue phantoms at…and Monte Carlo simulation of light propagation; page 207, 2.3 Tissue Phantoms Measurements, diffuse reflectance measurements are presented in Fig.
10; wherein, using the broadest reasonable interpretation, the Examiner interprets the generated reflectance datasets and the diffuse reflectance measurements to teach the limitation); generate, by a multi-layered Deep Fully Connected Neural Network (DFCNN) in the device, a first set of intermediate values by non-linearly mapping the plurality of diffuse reflectance values to the first set of intermediate values (page 208, right column, the NN algorithm involved a feed-forward backpropagation network based on a Levenberg-Marquardt training function. The input layer had five nodes, corresponding to the five nonzero S values, and the output layer contained two nodes, one for each of the two optical properties predicted; wherein the Examiner interprets the output layer to generate the first set of intermediate values, and the feed-forward backpropagation network, with its five-node input layer and two-node output layer, to be a multi-layered Deep Fully Connected Neural Network (DFCNN) that performs non-linear mapping primarily through the use of non-linear activation functions in its hidden layers).

However, Pfefer fails to explicitly teach: generate, by a One-Dimensional Convolutional Neural Network (1D-CNN) in the device, a second set of intermediate values by non-linearly mapping the plurality of diffuse reflectance values to the second set of intermediate values; and predict, by the device, values of the optical properties of the sample based on the first set of intermediate values and the second set of intermediate values.

Shen, in combination with Pfefer, teaches: generate, by a One-Dimensional Convolutional Neural Network (1D-CNN) in the device, a second set of intermediate values by non-linearly mapping the plurality of diffuse reflectance values to the second set of intermediate values (Methods, A parametric representation for 1D-CNNs:
We described 1D-CNNs using three types of building blocks: Convolutional blocks (Conv-blocks), Fully-connected blocks (FC-blocks), and an Output block (Fig. 6a–c). A Conv-block stacks a convolutional, batch normalisation, activation, pooling, and a dropout layer in sequence. Similarly, an FC-block consists of a fully-connected, a batch normalisation, an activation, and a dropout layer. The output block is essentially a fully-connected layer that outputs target values. Thus, a 1D-CNN can be defined by a number of Conv-blocks and FC-blocks, joined by a Flatten layer (Fig. 6d); wherein the Examiner interprets the output target values to generate the second set of intermediate values, and the 1D-CNN to perform non-linear mapping primarily through the use of non-linear activation functions in its hidden layers).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Pfefer to add a One-Dimensional Convolutional Neural Network (1D-CNN), as taught by Shen, above. The modification would have been obvious because one of ordinary skill would be motivated to maximise a model’s performance, as suggested by Shen (Abstract).

However, Shen and Pfefer fail to explicitly teach: predict, by the device, values of the optical properties of the sample based on the first set of intermediate values and the second set of intermediate values.

Sajedian, in combination with Shen and Pfefer, teaches: predicting, by the device, values of the optical properties of the sample based on the first set of intermediate values and the second set of intermediate values (Abstract, predict the optical results; Introduction, Fig. 1, Output (Optical results) obtained by combining CNNs with recurrent neural networks (RNNs)).
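For context, the prediction step that the rejection attributes to the combination, producing optical-property values from both sets of intermediate values, is most simply realised by concatenating the two branches and applying a small output network. The NumPy sketch below is illustrative only; the branch widths, the concatenation-based merge, and the single linear output layer are all assumptions, not a description of the claimed merging and output neural networks:

```python
import numpy as np

rng = np.random.default_rng(2)

# Intermediate values from the two branches (widths are assumptions).
dfcnn_out = rng.random(4)   # first set of intermediate values
cnn1d_out = rng.random(4)   # second set of intermediate values

# Merge by concatenation, then an output layer predicts the optical
# properties (e.g. an absorption and a scattering coefficient).
merged = np.concatenate([dfcnn_out, cnn1d_out])
W = rng.standard_normal((2, merged.size))
optical_properties = W @ merged

print(optical_properties.shape)  # (2,)
```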
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Shen and Pfefer to add optical-properties prediction, as taught by Sajedian, above. The modification would have been obvious because one of ordinary skill would be motivated to have an accurate image processing method that can replace time- and computationally-intensive numerical simulations, as suggested by Sajedian (Abstract).

Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Pfefer, in view of Shen, further in view of Sajedian, and further in view of Zhu et al. (“Diagnosis of Breast Cancer Using Diffuse Reflectance Spectroscopy: Comparison of a Monte Carlo Versus Partial Least Squares Analysis Based Feature Extraction Technique,” hereinafter referred to as Zhu).

As to claim 2, which incorporates the rejection of claim 1, Shen, Pfefer, and Sajedian fail to explicitly teach wherein the plurality of diffuse reflectance values is provided as an input feature vector to the DFCNN, and wherein the plurality of diffuse reflectance values correspond to features of the input feature vector.
However, Zhu, in combination with Shen, Pfefer, and Sajedian, teaches wherein the plurality of diffuse reflectance values is provided as an input feature vector to the DFCNN, and wherein the plurality of diffuse reflectance values correspond to features of the input feature vector (page 715, right column, wherein the Examiner interprets “The features obtained using each method of analysis were then input into a support vector machine (SVM) algorithm based on machine learning theory to classify each sample as malignant or non-malignant” to teach the limitation). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Shen, Pfefer, and Sajedian to add an input feature vector, as taught by Zhu, above. The modification would have been obvious because one of ordinary skill would be motivated to classify samples, as suggested by Zhu (page 715, right column).

As to claim 10, which incorporates the rejection of claim 9, Shen, Pfefer, and Sajedian fail to explicitly teach wherein the plurality of diffuse reflectance values is provided as an input feature vector to the DFCNN, and wherein the plurality of diffuse reflectance values correspond to features of the input feature vector. However, Zhu, in combination with Shen, Pfefer, and Sajedian, teaches wherein the plurality of diffuse reflectance values is provided as an input feature vector to the DFCNN, and wherein the plurality of diffuse reflectance values correspond to features of the input feature vector (page 715, right column, wherein the Examiner interprets “The features obtained using each method of analysis were then input into a support vector machine (SVM) algorithm based on machine learning theory to classify each sample as malignant or non-malignant” to teach the limitation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Shen, Pfefer, and Sajedian to add an input feature vector, as taught by Zhu, above. The modification would have been obvious because one of ordinary skill would be motivated to classify samples, as suggested by Zhu (page 715, right column).

Claims 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Pfefer, in view of Shen, further in view of Sajedian, and further in view of Kim et al. (“Determining shape and reflectance properties of objects by using diffuse illumination,” hereinafter referred to as Kim).

As to claim 3, which incorporates the rejection of claim 1, Shen, Pfefer, and Sajedian fail to explicitly teach wherein the plurality of diffuse reflectance values is provided as an input tensor to the 1D-CNN, and wherein the 1D-CNN obtains shape characteristics of the plurality of diffuse reflectance values. However, Kim, in combination with Shen, Pfefer, and Sajedian, teaches wherein the plurality of diffuse reflectance values is provided as an input tensor to the 1D-CNN, and wherein the 1D-CNN obtains shape characteristics of the plurality of diffuse reflectance values (Abstract; 3. Diffuse illumination and shape reconstruction).
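As a point of technical context, providing measured values "as an input tensor" to a 1D-CNN usually just means reshaping the 1D sequence of reflectance values into the (batch, length, channels) layout that convolutional frameworks expect. A minimal illustration follows; the 16-point sequence length is an assumption, not a value from the application:

```python
import numpy as np

# Assumed 16-point diffuse reflectance curve (illustrative values only).
reflectance = np.linspace(0.1, 0.9, 16)

# A 1D-CNN typically expects a (batch, length, channels) tensor, so the
# measured 1D sequence is reshaped before being fed to the network.
tensor = reflectance.reshape(1, -1, 1)

print(tensor.shape)  # (1, 16, 1)
```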
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Shen, Pfefer, and Sajedian to add an input tensor, as taught by Kim, above. The modification would have been obvious because one of ordinary skill would be motivated to determine shape, as suggested by Kim (Abstract).

As to claim 11, which incorporates the rejection of claim 9, Shen, Pfefer, and Sajedian fail to explicitly teach wherein the plurality of diffuse reflectance values is provided as an input tensor to the 1D-CNN, and wherein the 1D-CNN obtains shape characteristics of the plurality of diffuse reflectance values. However, Kim, in combination with Shen, Pfefer, and Sajedian, teaches wherein the plurality of diffuse reflectance values is provided as an input tensor to the 1D-CNN, and wherein the 1D-CNN obtains shape characteristics of the plurality of diffuse reflectance values (Abstract; 3. Diffuse illumination and shape reconstruction). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Shen, Pfefer, and Sajedian to add an input tensor, as taught by Kim, above. The modification would have been obvious because one of ordinary skill would be motivated to determine shape, as suggested by Kim (Abstract).

Examiner’s Comments

For the record, a complete prior art search was made for claims 4-8 and 12-16. No art rejection is made for these claims; they are rejected only under 35 USC 101, as explained above in this Office action.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Patents and patent-related publications are cited in the Notice of References Cited (Form PTO-892) attached to this action to further show the state of the art with respect to the invention. Phebus et al. (US 11,182,666 B1) teach an integrated circuit for implementing a neural network, comprising a field-programmable gate array programmed to provide memory circuits, where each of the memory circuits is configured to receive an input vector. Balooch et al. (US 20150085279 A1) teach a method for measuring and categorizing colors and spectra of surfaces by mapping a color or spectral measurement into a multi-dimensional feature space and classifying the mapped measurement into a color category.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABABACAR SECK, whose telephone number is (571) 270-7146. The examiner can normally be reached Monday-Friday, 8:00 A.M.-6:00 P.M. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ABABACAR SECK/
Examiner, Art Unit 2147

/VIKER A LAMARDO/
Supervisory Patent Examiner, Art Unit 2147

Prosecution Timeline

Jun 15, 2021
Application Filed
Feb 17, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572797
ELECTRONIC APPARATUS IMPLEMENTING ARTIFICIAL INTELLIGENCE MODEL HAVING LOW RESOURCE UTILIZATION AND CONTROL METHOD THEREOF
2y 5m to grant Granted Mar 10, 2026
Patent 12561573
CONTINUOUS CONTROL WITH DEEP REINFORCEMENT LEARNING
2y 5m to grant Granted Feb 24, 2026
Patent 12524583
PROPERTY MODELING USING ATTENTIVE NEURAL PROCESSES
2y 5m to grant Granted Jan 13, 2026
Patent 12475392
AUXILIARY DECISION-MAKING METHOD FOR URBAN SUBWAY WATERLOGGING RISK DISPOSAL BASED ON BAYESIAN NETWORK
2y 5m to grant Granted Nov 18, 2025
Patent 12412085
DATA AND COMPUTE EFFICIENT EQUIVARIANT CONVOLUTIONAL NETWORKS
2y 5m to grant Granted Sep 09, 2025
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
64%
Grant Probability
55%
With Interview (-9.2%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 481 resolved cases by this examiner. Grant probability derived from career allow rate.
