DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant claims the benefit of prior-filed provisional application No. 63/313,701, filed February 24, 2022, which is acknowledged by the examiner.
Drawings
The drawings were received on 9/20/2022. These drawings are acceptable.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/14/2022 has been considered by the examiner.
Response to Arguments
Applicant's arguments filed 11/25/2025 have been fully considered but they are not persuasive.
Regarding 35 USC 101 Rejection: Abstract idea
Applicant argues the claims are patent eligible for at least two reasons: first, the claims do not recite an abstract idea enumerated in MPEP 2106.04(a) (see remarks, pgs. 7-8); and second, the claims recite limitations that integrate any purported abstract idea into a practical application under MPEP 2106.04(d) (see remarks, pgs. 8-13).
Examiner disagrees and notes the following:
With regard to the remarks that the claims do not recite an abstract idea:
Examiner notes that the rejection clearly stated, in accordance with MPEP 2106.04(a), that the claims recite a limitation deemed directed to a mental process; see the Step 2A Prong 1 analysis for the independent claims in the non-final office action.
Applicant argues that the USPTO has issued a Memorandum and Examples 37-42, in which claims that train a neural network were considered not to be directed to a mathematical concept.
Examiner notes that these citations provide no objective analysis of the rejection made in the previous office action, which specifies that the claims recite a mental process. An analysis directed to mathematical concepts and the training of a neural network does not address the question raised by the Step 2A Prong 1 analysis.
The cited memorandum also notes that the MPEP guidance remains applicable, and the analysis used in the previous office action is in line with the cited memorandum, MPEP 2106, and the January 2019 Guidance. Under Step 1, the eligibility analysis evaluates whether the claim falls within a statutory category (MPEP 2106.03); in the instant case, the claims were determined to fall within one of the four statutory categories. Under Step 2A Prong 1 of the Alice analysis, the claims were determined to recite an abstract idea in the third enumerated grouping: mental processes – concepts performed in the human mind (including an observation, evaluation, judgment, opinion) (see MPEP § 2106.04(a)(2), subsection III). The applicant's remarks amount to mere allegations of patentability.
The rejection made in the previous office action has been maintained.
With regard to the arguments that the additional elements integrate any purported abstract idea into a practical application:
Examiner notes that the additional elements were analyzed per MPEP 2106 and under Step 2A Prong 2 and Step 2B as noted in the previous rejection.
Specifically, the applicant alleges the claims are directed to the improvement of generating machine learning models that construct more accurate images, based on sparsely sampled visibilities, relative to what can be achieved using conventional processes.
Examiner notes that MPEP 2106.04(d)(1) addresses the evaluation of claimed improvements in the functioning of a computer, or improvements to a technical field, under Step 2A Prong Two. The MPEP section states: "if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. That is, the claim includes the components or steps of the invention that provide the improvement described in the specification…" The claims in the instant case fail to reflect the disclosed improvement. Rather, as highlighted by applicant's remarks with regard to the 35 USC 103 rejection (see pages 13-14 of remarks), the claims recite generating a set of predicted data points and a trained machine learning model whose output is based on the claimed neural networks; these limitations merely invoke the neural networks at a high level, as a tool for achieving the claimed outcome. MPEP 2106.04(a)(2)(II)(C) provides:
A Claim That Requires a Computer May Still Recite a Mental Process
Claims can recite a mental process even if they are claimed as being performed on a computer. The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures "can be carried out in existing computers long in use, no new machinery being necessary." 409 U.S. at 67, 175 USPQ at 675. See also Mortgage Grader, 811 F.3d at 1324, 117 USPQ2d at 1699 (concluding that concept of "anonymous loan shopping" recited in a computer system claim is an abstract idea because it could be "performed by humans without a computer").
In evaluating whether a claim that requires a computer recites a mental process, examiners should carefully consider the broadest reasonable interpretation of the claim in light of the specification. For instance, examiners should review the specification to determine if the claimed invention is described as a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept. In these situations, the claim is considered to recite a mental process.
… 2. Performing a mental process in a computer environment. An example of a case identifying a mental process performed in a computer environment as an abstract idea is Symantec Corp., 838 F.3d at 1316-18, 120 USPQ2d at 1360. In this case, the Federal Circuit relied upon the specification when explaining that the claimed electronic post office, which recited limitations describing how the system would receive, screen and distribute email on a computer network, was analogous to how a person decides whether to read or dispose of a particular piece of mail and that "with the exception of generic computer-implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper". 838 F.3d at 1318, 120 USPQ2d at 1360. Another example is FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 120 USPQ2d 1293 (Fed. Cir. 2016). The patentee in FairWarning claimed a system and method of detecting fraud and/or misuse in a computer environment, in which information regarding accesses of a patient’s personal health information was analyzed according to one of several rules (i.e., related to accesses in excess of a specific volume, accesses during a pre-determined time interval, or accesses by a specific user) to determine if the activity indicates improper access. 839 F.3d. at 1092, 120 USPQ2d at 1294. The court determined that these claims were directed to a mental process of detecting misuse, and that the claimed rules here were "the same questions (though perhaps phrased with different words) that humans in analogous situations detecting fraud have asked for decades, if not centuries." 839 F.3d. at 1094-95, 120 USPQ2d at 1296.
3. Using a computer as a tool to perform a mental process. An example of a case in which a computer was used as a tool to perform a mental process is Mortgage Grader, 811 F.3d. at 1324, 117 USPQ2d at 1699. The patentee in Mortgage Grader claimed a computer-implemented system for enabling borrowers to anonymously shop for loan packages offered by a plurality of lenders, comprising a database that stores loan package data from the lenders, and a computer system providing an interface and a grading module. The interface prompts a borrower to enter personal information, which the grading module uses to calculate the borrower’s credit grading, and allows the borrower to identify and compare loan packages in the database using the credit grading. 811 F.3d. at 1318, 117 USPQ2d at 1695. The Federal Circuit determined that these claims were directed to the concept of "anonymous loan shopping", which was a concept that could be "performed by humans without a computer." 811 F.3d. at 1324, 117 USPQ2d at 1699. Another example is Berkheimer v. HP, Inc., 881 F.3d 1360, 125 USPQ2d 1649 (Fed. Cir. 2018), in which the patentee claimed methods for parsing and evaluating data using a computer processing system. The Federal Circuit determined that these claims were directed to mental processes of parsing and comparing data, because the steps were recited at a high level of generality and merely used computers as a tool to perform the processes. 881 F.3d at 1366, 125 USPQ2d at 1652-53.
…
The claims in the instant case are not similar to the cited court cases and are, at best, a mere use of the computer as a tool to perform a mental process, or a performance of the noted mental process in a computer environment.
The remarks made by the applicant amount to mere allegations of patentability, as the claim limitations fail to reflect the purported/alleged improvement and are not similar to any of the cited court cases.
The rejection made in the previous office action has been maintained.
Regarding 35 USC 103 Rejection
Applicant argues that the primary reference Ozcan (US 20230153600, hereinafter ‘Oz’) fails to teach the limitation directed to "generating a trained machine learning model based on the first neural network, the second neural network, and the first set of predicted data points" and that none of the cited references teach the noted claim limitation. Specifically, applicant argues that the teachings in Oz refer to only one neural network and that no mention of more than one network is found in the cited references.
Examiner disagrees. The applicant appears to have ignored at least pages 23-25 of the office action and appears to argue an interpretation of the claim limitations that is not explicitly recited in the claims.
MPEP 2111 discloses:
CLAIMS MUST BE GIVEN THEIR BROADEST REASONABLE INTERPRETATION IN LIGHT OF THE SPECIFICATION
During patent examination, the pending claims must be "given their broadest reasonable interpretation consistent with the specification." The Federal Circuit’s en banc decision in Phillips v. AWH Corp., 415 F.3d 1303, 1316, 75 USPQ2d 1321, 1329 (Fed. Cir. 2005) expressly recognized that the USPTO employs the "broadest reasonable interpretation" standard:
The Patent and Trademark Office ("PTO") determines the scope of claims in patent applications not solely on the basis of the claim language, but upon giving claims their broadest reasonable construction "in light of the specification as it would be interpreted by one of ordinary skill in the art." In re Am. Acad. of Sci. Tech. Ctr., 367 F.3d 1359, 1364[, 70 USPQ2d 1827, 1830] (Fed. Cir. 2004). Indeed, the rules of the PTO require that application claims must "conform to the invention as set forth in the remainder of the specification and the terms and phrases used in the claims must find clear support or antecedent basis in the description so that the meaning of the terms in the claims may be ascertainable by reference to the description." 37 CFR 1.75(d)(1).
See also In re Suitco Surface, Inc., 603 F.3d 1255, 1259, 94 USPQ2d 1640, 1643 (Fed. Cir. 2010); In re Hyatt, 211 F.3d 1367, 1372, 54 USPQ2d 1664, 1667 (Fed. Cir. 2000).
Patented claims are not given the broadest reasonable interpretation during court proceedings involving infringement and validity, and can be interpreted based on a fully developed prosecution record. In contrast, an examiner must construe claim terms in the broadest reasonable manner during prosecution as is reasonably allowed in an effort to establish a clear record of what applicant intends to claim. Thus, the Office does not interpret claims when examining patent applications in the same manner as the courts. In re Morris, 127 F.3d 1048, 1054, 44 USPQ2d 1023, 1028 (Fed. Cir. 1997); In re Zletz, 893 F.2d 319, 321-22, 13 USPQ2d 1320, 1321-22 (Fed. Cir. 1989).
Examiner notes that the claimed neural network is considered a collection of artificial neurons that are networked together. As noted on pg. 23 of the non-final office action, Oz teaches the required first neural network as neural network device 10, which is connected to the trained artificial neural network 110. Specifically, device 10 is an artificial neural network as denoted in the cited portions of the office action and, in addition, in paragraph [0085]: It is important to note that if the material absorption of the diffractive layers 16 is lower and/or the signal-to-noise ratio of the single-pixel detector 32 is increased, the optical inference accuracy of the presented network designs could be further improved by e.g., increasing the number of diffractive layers 16 or the number of learnable features (i.e., neurons) within the diffractive optical neural network device 10…
Paragraphs [0085] and [0099] disclose that device 10, considered the first neural network as noted on page 23 of the previous office action, includes “[a] network device 10”… “[0099] Based on the diffractive network layout reported in FIG. 4D, the half diffraction cone angle that enables full connectivity between the diffractive features/neurons on two successive layers 16…” The claim limitations do not exclude this type of artificial neural network, in which the network learns the learnable features of the optical neural network. And, per applicant's remarks, the prior art discloses the second neural network, wherein both networks are used in examining the spectral data, as noted in the previous office action.
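For illustration only, the two-network arrangement mapped above (a front-end network that encodes an input into a small set of spectral scores, followed by a shallow decoder network) can be sketched in schematic form. The following Python/PyTorch sketch is an editorial hypothetical: the module names, layer sizes, and use of standard dense layers are assumptions for illustration and do not reproduce Oz's diffractive hardware.

import torch
import torch.nn as nn

# Hypothetical stand-in for the diffractive front-end (device 10): encodes an
# input object image into a small vector of spectral class scores s.
class SpectralEncoder(nn.Module):
    def __init__(self, n_pixels=784, n_scores=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_pixels, 256), nn.ReLU(),
                                 nn.Linear(256, n_scores))
    def forward(self, x):
        return self.net(x)

# Hypothetical stand-in for the shallow decoder ANN (trained neural network 110):
# reconstructs an image of >780 pixels from only the 10 score values.
class ShallowDecoder(nn.Module):
    def __init__(self, n_scores=10, n_pixels=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_scores, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, n_pixels))
    def forward(self, s):
        return self.net(s)

encoder, decoder = SpectralEncoder(), ShallowDecoder()
x = torch.rand(1, 784)        # toy input object image
scores = encoder(x)           # compressed spectral encoding s
recon = decoder(scores)       # image reconstructed from the encoding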
The applicant should amend the claims such that the teachings in the cited prior art references no longer overlap in scope with the claim language.
The rejection made in the previous office action has been maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Claim 1: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
… generating a first set of predicted data points that are associated with both the first item and the spectral domain (Considered directed to a Mental Process: making observations and formulating evaluations and judgments as claimed; see MPEP § 2106.04(a)(2), subsection III)
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
executing a first neural network on a first set of data points associated with both a first item and the spectral domain to generate a second neural network; generating … via the second neural network; and generating a trained machine learning model based on the first neural network, the second neural network, and the first set of predicted data points, (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus, the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f). An illustrative sketch of the recited arrangement follows this claim's analysis.)
… wherein the trained machine learning model maps one or more positions within the spectral domain to one or more values associated with an item based on a set of data points associated with both the item and the spectral domain. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See MPEP 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements invoking computers or other machinery merely as a tool to perform the claimed process/judicial exception. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
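For illustration of the level of generality discussed above, the recited steps of claim 1 can be paraphrased in a minimal hypothetical sketch. The hypernetwork-style generation below, and all names (FirstNet, second_net, etc.), are editorial assumptions for illustration only and are not applicant's disclosed implementation.

import torch
import torch.nn as nn

HIDDEN = 32  # width of the generated "second" network (illustrative)

# "Executing a first neural network on a first set of data points ... to
# generate a second neural network": here the first network emits the weights
# of a tiny second network that maps spectral positions to predicted values.
class FirstNet(nn.Module):
    def __init__(self, n_points=64, hidden=HIDDEN):
        super().__init__()
        n_out = 3 * hidden + 1  # W1 (hidden x 1), b1 (hidden), W2 (1 x hidden), b2 (1)
        self.net = nn.Sequential(nn.Linear(n_points * 2, 128), nn.ReLU(),
                                 nn.Linear(128, n_out))
    def forward(self, points):               # points: (n_points, 2) = (position, value)
        return self.net(points.flatten())

def second_net(params, pos, hidden=HIDDEN):  # the generated "second" network
    W1 = params[:hidden].view(hidden, 1)
    b1 = params[hidden:2 * hidden].view(hidden, 1)
    W2 = params[2 * hidden:3 * hidden].view(1, hidden)
    b2 = params[3 * hidden:]
    h = torch.tanh(W1 @ pos.view(1, 1) + b1)
    return (W2 @ h + b2).squeeze()           # predicted value at a spectral position

first_net = FirstNet()
data_points = torch.rand(64, 2)               # (position, value) pairs in the spectral domain
params = first_net(data_points)               # generate the second network
pred = second_net(params, torch.tensor(0.3))  # generate a predicted data point
loss = (pred - torch.tensor(0.5)) ** 2        # compare against a known value
loss.backward()                               # updating FirstNet = "generating a trained model"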
Claim 2: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Abstract idea recited in claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein executing the first neural network on the first set of data points comprises modifying a third neural network based on the first set of data points to generate the second neural network. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus, the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements considered mere instructions to apply the judicial exception using a computer/computing environment as a tool. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 3: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
(Considered directed to a Mental Process: making evaluations and judgments of observations as claimed; see MPEP § 2106.04(a)(2), subsection III)
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein executing the first neural network on the first set of data points comprises computing … parameters associated with the second neural network based on the first set of data points. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f).)
… parameters associated with the second neural network… (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements invoking computers or other machinery merely as a tool to perform the claimed process/judicial exception. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 4: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
wherein generating the first set of predicted data points comprises computing a set of predicted values corresponding to a set of positions within the spectral domain based on one or more learnable parameters (Considered directed to a Mental Process: making evaluations and judgments of observations as claimed; see MPEP § 2106.04(a)(2), subsection III)
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
… one or more learnable parameters included in the second neural network. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 5: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
wherein generating the first set of predicted data points comprises mapping a set of positions within the spectral domain to a set of predicted values in the spectral domain (Considered directed to a Mental Process: making evaluations and judgments of observations as claimed; see MPEP § 2106.04(a)(2), subsection III)
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein generating the first set of predicted data points comprises mapping … via the second neural network. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements invoking computers or other machinery merely as a tool to perform the claimed process/judicial exception. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 6: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
(Considered directed to a Mental Process: making evaluations and judgments of observations as claimed; see MPEP § 2106.04(a)(2), subsection III)
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein generating the trained machine learning model comprises modifying a first learnable parameter associated with the first neural network to reduce an error ... (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f).)
… first learnable parameter associated with the first neural network …(Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements invoking computers or other machinery merely as a tool to perform the claimed process/judicial exception. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 7: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
(Considered directed to a Mental Process: making evaluations and judgments of observations as claimed; see MPEP § 2106.04(a)(2), subsection III)
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein generating the trained machine learning model comprises modifying a first learnable parameter associated with the first neural network to reduce an error… (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f).)
… first learnable parameter associated with the first neural network …(Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 8: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Incorporates abstract ideas recited in claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the spectral domain comprises a Fourier domain, a k-space, a cepstral domain, or a wavelet domain. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements invoking computers or other machinery merely as a tool to perform the claimed process/judicial exception. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 9: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Incorporates abstract ideas recited in claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the first set of data points comprises a set of visibility data points, a set of magnetic resonance imaging measurements, or a set of surface measurements. (Claimed limitations are generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h))
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 10: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Incorporates abstract ideas recited in claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the first item comprises an astronomical object, a body organ, a surface, or an image. (Claimed limitations are generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h))
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Regarding claim 11, the claim limitations are similar to claim 1 and are rejected under the same rationale.
Regarding claim 12, the claim limitations are similar to claim 2 and are rejected under the same rationale.
Claim 13: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
(Considered directed to a Mental Process: making evaluations and judgments of observations as claimed; see MPEP § 2106.04(a)(2), subsection III)
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein executing the first neural network on the first set of data points comprises computing at least one value for at least one parameter associated with a layer included in the second neural network… (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f).)
… one parameter associated with a layer included in the second neural network… (Claimed limitations are generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h))
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements directed to mere instructions to implement an abstract idea on a computer. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 14: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
(Considered directed to a Mental Process: making evaluations and judgments of observations as claimed; see MPEP § 2106.04(a)(2), subsection III)
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein generating the first set of predicted data points comprises computing a set of predicted values… based … the second neural network… (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f).)
… a plurality of parameter values that are … (Claimed limitations are generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h))
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements directed to mere instructions to implement an abstract idea on a computer. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Regarding claims 15-18, the claim limitations are similar to those of claims 5-7 and 9, respectively, and are rejected under the same rationale.
Claim 19: Does claim fall within a statutory category? Yes.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
further comprising performing one or more Fourier transform operations on an image associated with both the first item and a spatial domain to generate the first set of data points. (Considered directed to Mathematical Concepts: mathematical relationships/mathematical calculations; see MPEP § 2106.04(a)(2), subsection I. An illustrative sketch follows this claim's analysis.)
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
… an image associated with both the first item and a spatial domain to generate the first set of data points… (Claimed limitations are generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h))
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole/in combination amounts to significantly more than the recited judicial exception
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use. These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
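For illustration of the mathematical operation recited in claim 19, a minimal numpy sketch follows. The image contents are arbitrary placeholders; the sketch shows only that a 2-D Fourier transform maps spatial-domain pixels to data points (position, complex value) in the spectral domain.

import numpy as np

image = np.random.rand(64, 64)        # an image in the spatial domain (placeholder)
spectrum = np.fft.fft2(image)         # Fourier transform: spectral-domain data points
u, v = np.meshgrid(np.fft.fftfreq(64), np.fft.fftfreq(64), indexing="ij")
positions = np.stack([u, v], axis=-1) # positions within the spectral domain
values = spectrum                     # complex value at each spectral position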
Regarding claim 20, the claim limitations are similar to claim 1 and are rejected under the same rationale.
Therefore, claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception and does not recite additional elements that, whether examined individually or as an ordered combination, amount to what the courts have identified as "significantly more" than the identified abstract idea; see MPEP 2106.05.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ozcan et al. (US 20230153600, hereinafter ‘Oz’) in view of Guan et al. (US 20170083747, hereinafter ‘Guan’).
Regarding independent claim 1, Oz teaches a computer-implemented method for representing items in a spectral domain, the method comprising: (in [0121]: For the training of the models, a desktop computer with a TITAN RTX graphical processing unit (GPU, Nvidia Inc.) and Intel® Core™ i9-9820X central processing unit (CPU, Intel Inc.) and 128 GB of RAM was used, running Windows 10 operating system (Microsoft Inc.)…)
executing a first neural network on a first set of data points associated with both a first item and the spectral domain to generate a second neural network; (As depicted in Fig. 3B, in [0065]: For the same 3D-printed diffractive optical neural network device 10 (FIGS. 4A, 4B), a shallow, fully-connected trained neural network 110 was trained with only 2 hidden layers in order to reconstruct images 120 of the input objects 4 based on the detected s [executing a first neural network on a first set of data points associated with both a first item and the spectral domain to generate a second neural network]. The training of this decoder neural network 110 is based on the knowledge of: (1) class scores (s=[s.sub.0, s.sub.1, . . . , s.sub.9]) resulting from the numerical diffractive network model, and (2) the corresponding input object images. Without any fine tuning of the network parameters for possible deviations between the numerical forward model and the experimental setup, when the shallow, trained neural network 110 was blindly tested on the experimental measurements (s), the reconstructions of the images of the handwritten digits were successful as illustrated in FIG. 3B (also see FIGS. 13A-13B, 14A-14B), further validating the presented framework as well as the experimental robustness of the diffractive optical neural network device 10 […executing a first neural network…] (see Materials and Methods section for further details). It should be emphasized that this shallow, trained neural network 110 ANN is trained to decode a highly compressed form of information that is spectrally-encoded by a diffractive front-end and it uses only ten (10) numbers (i.e., s.sub.0, s.sub.1, . . . , s.sub.9) at its input to reconstruct an image 120 that has >780 pixels [generating a first set of predicted data points that are associated with both the first item and the spectral domain via the second neural network]. Stated differently this trained neural network 110 performs task-specific super-resolution, the task being the reconstruction of the images 120 of handwritten digits based on spectrally-encoded inputs. In addition to performing task-specific image reconstruction, the proposed machine vision framework can possibly be extended for the design of a general-purpose, high-resolution, single-pixel imaging system based on spectral encoding.)
and generating a trained machine learning model based on the first neural network, the second neural network, and the first set of predicted data points, wherein the trained machine learning model maps one or more positions within the spectral domain to one or more values associated with an item based on a set of data points associated with both the item and the spectral domain. (As depicted in Fig. 8B, in [0076]: The function of the decoder trained neural network 110, up to this point, has been to reconstruct the images 120 of the unknown input objects 4 based on the encoding present in the spectral class scores, s=[s.sub.0, s.sub.1, . . . , s.sub.9], which also helped to improve the classification accuracy of the diffractive optical neural network device 10 by feeding these reconstructed images 120 back to it [and generating a trained machine learning model based on the first neural network, the second neural network, and the first set of predicted data points]. As an alternative strategy, the decoder trained neural network 110 was investigated for a different task: to directly classify the objects 4 based on the spectral encoding (s) provided by the diffractive optical neural network device 10. In this case, the decoder trained neural network 110 is solely focused on improving the classification performance with respect to the optical inference results that are achieved using max(s)…; And in [0118]: where custom-character.sub.S stands for the pixel-wise structural loss between the reconstructed image of the object O.sub.recon and the ground truth object structure O.sub.input. custom-character.sub.I is the same loss function defined in Eq. (17); except, instead of ŝ, it computes the loss SCE(custom-character, g) using custom-character and ground truth label vector g [wherein the trained machine learning model maps one or more positions within the spectral domain to one or more values associated with an item based on a set of data points associated with both the item and the spectral domain]…; [0084]: An optical-based machine vision system 2 is presented that uses trainable matter composed of diffractive layers 16 to encode the spatial information of objects 4 into the power spectrum of the diffracted light, which is used to perform optical classification of unknown objects 4 with a single-pixel spectroscopic detector 32. A shallow, low-complexity trained neural networks 110 can be used as decoders to reconstruct images 120 of the input objects 4 based on the spectrally-encoded class scores [wherein the trained machine learning model maps one or more positions within the spectral domain to one or more values associated with an item based on a set of data points associated with both the item and the spectral domain], demonstrating task-specific super-resolution… And in [0081]: To further explore the capabilities of the system 2 for more challenging image classification tasks beyond handwritten digits, the EMNIST dataset was used, containing 26 object classes, corresponding to handwritten capital letters (see FIG. 20A). For this EMNIST image dataset, non-differential and differential diffractive classification networks were trained, encoding the information of the object data classes into the output power of 26 (FIGS. 20A, FIG. 20C) and 52 distinct wavelengths (FIGS. 20B, 20D), respectively.
Furthermore, to better highlight the benefits of the collaboration between the optical and electronic networks [wherein the trained machine learning model maps one or more positions within the spectral domain to one or more values associated with an item based on a set of data points associated with both the item and the spectral domain], hybrid network systems that use a shallow classification ANN 110 (with 2 hidden layers) described earlier were jointly-trained to extract the object class from the spectral encoding performed by the diffractive optical front-end, through a single-pixel detector 32 [wherein the trained machine learning model maps one or more positions within the spectral domain to one or more values associated with an item based on a set of data points associated with both the item and the spectral domain], same as before. Table 2 summarizes the results on this 26-class handwritten capital letter image dataset. First, a comparison between the all-optical diffractive classification networks and the jointly-trained hybrid network systems highlight the importance of the collaboration between the optical and electronic networks, the jointly-trained hybrid systems (where a diffractive optical neural network device 10 is followed by a classification encoder [generating a trained machine learning model based on the first neural network, the second neural network, and the first set of predicted data points] (i.e., electronic trained neural network 110) can achieve higher object classification accuracies (see Table 2). For example, a jointly-trained hybrid network using 52 encoding wavelengths that are processed through 3 diffractive layers 16 and a shallow decoder trained neural network 110 [wherein the trained machine learning model maps one or more positions within the spectral domain to one or more values associated with an item based on a set of data points associated with both the item and the spectral domain] achieved a classification accuracy of 87.68% for EMNIST test dataset,...)
While Oz teaches the image processing system noted above and the mapping process for classifying objects to pixels in an image, Guan alternatively discloses the process for mapping pixels in an image to spectral domain bands, as claimed "one or more positions within the spectral domain to one or more values associated with an item based on a set of data points associated with both the item and the spectral domain" (in [0157]: In the present problem domain the features are the various spectral bands recorded for each pixel of the satellite image [maps one or more positions within the spectral domain to one or more values associated with an item based on a set of data points associated with both the item and the spectral domain] and the class is binary, potentially classifying the pixel as a water pixel or dry-land pixel [associated with both the item and the spectral domain]. The techniques described herein are not limited to any particular type of classifier. For example, the classifier utilized by the spectral analysis logic 171 [trained machine learning model maps one or more positions within the spectral domain to one or more values associated with an item based on a set of data points associated with both the item and the spectral domain] may include support vector machines (SVMs), neural networks, logistic regression,…; And in [0183]: The classification of water pixels based on some machine learning techniques, such as logistic regression, do not consider the coupling effects between neighboring pixels. For example, for two pixels with the same NIR band values, the pixel surrounded by water pixels is more likely to actually represent water than the pixel surrounded by dry-land pixels. The coupling logic 173 models such effects to increase the reliability and accuracy of the probability estimations produced by the spectral analysis logic 171 [maps one or more positions within the spectral domain to one or more values associated with an item based on a set of data points associated with both the item and the spectral domain]. Furthermore, the spectral analysis logic 171 is also reconciled with the results of the flow simulation logic 172. Pixels which have a high probability of being water as determined by the spectral analysis logic 171 and are also in locations where ponding water is likely to occur as determined by the flow simulation logic 172 are more likely to represent ponding water. Whereas pixels for which the spectral analysis logic 171 and the flow simulation logic 172 disagree are considered less likely to represent ponding water.)
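For illustration only, the kind of per-pixel spectral-band classification described in Guan's cited paragraphs can be sketched as follows. This is an editorial hypothetical using synthetic data and scikit-learn logistic regression (one of the classifier types Guan lists); the band layout and toy labeling rule are assumptions, not Guan's implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
bands = rng.random((1000, 4))                  # per-pixel spectral band values (e.g., R, G, B, NIR)
labels = (bands[:, 3] < 0.4).astype(int)       # toy ground truth: low NIR -> water pixel
clf = LogisticRegression().fit(bands, labels)  # classifier mapping band values to water/dry-land
water_prob = clf.predict_proba(bands[:5])[:, 1]  # probability estimates for five pixels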
Oz and Guan are analogous art because both involve developing information retrieval and object recognition techniques using machine learning systems and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art: namely, to combine the models and techniques for image processing using a classifier trained on spectral bands, as disclosed by Guan, with the method of developing information retrieval and object recognition using neural networks and machine learning models based on encoding the spatial information of image objects, as disclosed by Oz.
One of ordinary skill in the art would have been motivated to combine the methods of Guan and Oz noted above; doing so would reduce the time and resources required to train the classifier based on spectral analysis (Guan, [0171]).
Regarding claim 2, the rejection of claim 1 is incorporated and Oz in combination with Guan further teaches the computer-implemented method of claim 1, wherein executing the first neural network on the first set of data points comprises modifying a third neural network based on the first set of data points to generate the second neural network. (As depicted in Fig. 8B, in [0053] The trained neural network 110 [to generate the second neural network] may be trained using at least one of the following: (i) a structural loss term, (ii) a cross entropy loss term, (iii) softmax-cross-entropy loss term [wherein executing the first neural network on the first set of data points comprises modifying a third neural network based on the first set of data points], (iv) a diffractive network inference accuracy related penalty term, or (v) combinations of (i-iv) with different weights [wherein executing the first neural network on the first set of data points comprises modifying a third neural network based on the first set of data points]…. As explained herein, in some embodiments, the reconstructed images 120 are fed back to the same diffractive optical neural network device 10 as new inputs to improve the inference accuracy of the same [modifying a third neural network based on the first set of data points to generate the second neural network]. This operation is illustrated by dashed arrows B in FIG. 1D. FIG. 1A illustrates how the diffractive optical neural network device 10 may work without the trained neural network 110 and perform classification (or other task) as seen in arrow C. Arrow D illustrates an alternative path in which the trained neural network 110 is used to improve the results by generating a reconstructed image 120 that is fed back to the diffractive optical network device 10 [modifying a third neural network based on the first set of data points to generate the second neural network]. This may be accomplished by projecting the reconstructed image 120 back through the diffractive optical network device 10 using a projector or the like (not shown). In addition, as explained herein, in some embodiments, the trained neural network 110 may instead be trained to assist in object classification (or other task) instead of outputting or generating a reconstructed image 120.; Examiner notes that this feedback for improving accuracy corresponds to the claimed training, resulting in the claimed third neural network (or an updated instance thereof in the noted plurality), see [0076]: The function of the decoder trained neural network 110, up to this point, has been to reconstruct the images 120 of the unknown input objects 4 based on the encoding present in the spectral class scores, s=[s.sub.0, s.sub.1, . . . , s.sub.9], which also helped to improve the classification accuracy of the diffractive optical neural network device 10 by feeding these reconstructed images 120 back to it. As an alternative strategy, the decoder trained neural network 110 was investigated for a different task: to directly classify the objects 4 based on the spectral encoding (s) provided by the diffractive optical neural network device 10. In this case, the decoder trained neural network 110 is solely focused on improving the classification performance with respect to the optical inference results that are achieved using max(s).
For example, based on the spectral class scores encoded by the diffractive optical neural network models/devices 10 [modifying a third neural network based on the first set of data points to generate the second neural network].)
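As an illustrative aside (no such code appears in Oz's disclosure as filed), the weighted combination of loss terms recited in [0053] and [0120] can be sketched in Python as follows. The specific weight γ, the array sizes, and the use of mean absolute error as the structural term are assumptions for this example.

```python
import numpy as np

def structural_loss(recon, target):
    """Mean absolute error between reconstructed and input images."""
    return np.abs(recon - target).mean()

def cross_entropy(scores, label):
    """Softmax cross-entropy over spectral class scores."""
    e = np.exp(scores - scores.max())
    return -np.log(e[label] / e.sum())

def combined_loss(recon, target, scores, label, gamma=0.8):
    # Weighted mixture of the loss terms listed in [0053]; gamma = 1
    # drops the feedback (classification) term, as described in [0120].
    return gamma * structural_loss(recon, target) + \
           (1 - gamma) * cross_entropy(scores, label)

rng = np.random.default_rng(2)
print(combined_loss(rng.random(784), rng.random(784), rng.random(10), 8))
```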
Regarding claim 3, the rejection of claim 1 is incorporated and Oz in combination with Guan further teaches the computer-implemented method of claim 1, wherein executing the first neural network on the first set of data points comprises computing one or more values for one or more parameters associated with the second neural network based on the first set of data points. (As depicted in Fig. 8B and in [0053] The trained neural network 110 may be trained using at least one of the following: (i) a structural loss term, (ii) a cross entropy loss term, (iii) softmax-cross-entropy loss term, (iv) a diffractive network inference accuracy related penalty term, or (v) combinations of (i-iv) with different weights [wherein executing the first neural network on the first set of data points comprises computing one or more values for one or more parameters associated with the second neural network based on the first set of data points]...)
Regarding claim 4, the rejection of claim 1 is incorporated and Oz in combination with Guan further teaches the computer-implemented method of claim 1, wherein generating the first set of predicted data points comprises computing a set of predicted values corresponding to a set of positions within the spectral domain based on one or more learnable parameters included in the second neural network. (FIG. 3A illustrates the machine vision framework based on spectral encoding using the diffractive optical neural network device 10. A broadband diffractive optical neural network device 10 is trained to transform the spatial information of the objects 4 into the spectral domain through a pre-selected set of class-specific [corresponding to a set of positions within the spectral domain] wavelengths measured by a single-pixel spectroscopic detector 32 at the output plane; the resulting spectral class scores are denoted by the vector s=[s.sub.0, s.sub.1, . . . ,s.sub.9] [wherein generating the first set of predicted data points comprises computing a set of predicted values corresponding to a set of positions within the spectral domain](FIG. 3A). Because the learning task assigned to the diffractive optical neural network device 10 is the optical classification of handwritten digits (MNIST database), after its training and design phase, for a given input image it learns to channel more power to the spectral component assigned to the correct class (e.g., digit ‘8’ in FIG. 3A) compared to the other class scores [based on one or more learnable parameters included in the second neural network]; therefore, max(s) reveals the correct data class. As demonstrated in FIG. 3B, the same class score vector, s, can also be used as an input to a shallow trained neural network 110 [based on one or more learnable parameters included in the second neural network] to reconstruct an image of the input object 4, decoding the spectral encoding performed by the broadband diffractive network 10 [wherein generating the first set of predicted data points comprises computing a set of predicted values corresponding to a set of positions within the spectral domain based on one or more learnable parameters included in the second neural network]. Of course, other learning tasks may be used in accordance with the invention.)
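For illustration only, the mapping described above — spectral class scores s feeding both a max(s) classification and a shallow decoder that expands ten scores into a >780-pixel image — might be sketched as follows. The single-matrix decoder, the 28×28 output size, and the random values are simplifying assumptions (Oz's decoder uses two hidden layers).

```python
import numpy as np

rng = np.random.default_rng(3)

s = rng.random(10)                    # spectral class scores s = [s0, ..., s9]
predicted_class = int(np.argmax(s))   # max(s) reveals the optical inference

# Hypothetical shallow decoder: one learnable weight matrix expanding
# the 10 scores into a 28x28 (=784 pixel) image; deliberately simplified.
W = rng.normal(0, 0.1, (10, 784))     # learnable parameters of the decoder
image = (s @ W).reshape(28, 28)       # predicted pixel values
print(predicted_class, image.shape)
```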
Regarding claim 5, the rejection of claim 1 is incorporated and Oz in combination with Guan further teaches the computer-implemented method of claim 1, wherein generating the first set of predicted data points comprises mapping a set of positions within the spectral domain to a set of predicted values in the spectral domain via the second neural network. (FIG. 3A illustrates the machine vision framework based on spectral encoding using the diffractive optical neural network device 10. A broadband diffractive optical neural network device 10 is trained to transform the spatial information of the objects 4 into the spectral domain through a pre-selected set of class-specific wavelengths measured by a single-pixel spectroscopic detector 32 at the output plane; the resulting spectral class scores are denoted by the vector s=[s.sub.0, s.sub.1, . . . ,s.sub.9] [, wherein generating the first set of predicted data points comprises mapping a set of positions within the spectral domain to a set of predicted values in the spectral domain via the second neural network](FIG. 3A). Because the learning task assigned to the diffractive optical neural network device 10 is the optical classification of handwritten digits (MNIST database), after its training and design phase, for a given input image it learns to channel more power to the spectral component assigned to the correct class (e.g., digit ‘8’ in FIG. 3A) compared to the other class scores; therefore, max(s) reveals the correct data class. As demonstrated in FIG. 3B, the same class score vector, s, can also be used as an input to a shallow trained neural network 110 to reconstruct an image of the input object 4 [, wherein generating the first set of predicted data points comprises mapping a set of positions within the spectral domain to a set of predicted values in the spectral domain via the second neural network], decoding the spectral encoding performed by the broadband diffractive network 10. Of course, other learning tasks may be used in accordance with the invention.)
Regarding claim 6, the rejection of claim 1 is incorporated and Oz in combination with Guan further teaches the computer-implemented method of claim 1, wherein generating the trained machine learning model comprises modifying a first learnable parameter associated with the first neural network to reduce an error associated with the first set of predicted data points. ([0120] Training-related details. Both the diffractive optical neural network models/devices 10 and the corresponding decoder trained neural network 110 used herein were simulated and trained using Python (v3.6.5) and TensorFlow (v1.15.0, Google Inc.)…. FIGS. 8A, 8B show two different training schemes for image reconstruction ANNs. If there is no feedback cycle, i.e., γ=1 in Eq. (22), the remaining loss factor is the structural loss, ℒ.sub.S(O.sub.recon, O.sub.input). In this case, the best trained neural network 110 model was selected based on the minimum loss value over the validation data set [wherein generating the trained machine learning model comprises modifying a first learnable parameter associated with the first neural network to reduce an error associated with the first set of predicted data points]. If there was an image feedback cycle, i.e., γ<1 in Eq. (22), the best trained neural network 110 model was selected based on the classification performance provided by ℒ over the validation set.; And in [0078] This was demonstrated using the MNIST dataset by jointly-training a diffractive network with an image reconstruction trained neural network 110 at the back-end. The same approach may also be extended to jointly-train a diffractive network with a classification trained neural network 110 at the back-end, covering a different dataset (EMNIST). In the joint-training of hybrid network systems composed of a diffractive optical neural network model (for ultimate use as a device 10) and a reconstruction trained neural network 110, a linear superposition of two different loss functions was used to optimize both the optical classification accuracy and the image reconstruction fidelity: see Eq. 24 and Table 3.)
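As a purely illustrative sketch of the model-selection rule quoted from [0120] — minimum validation structural loss when γ=1, best validation classification performance when γ<1 — with placeholder numbers that are not from the record:

```python
# Placeholder validation history; the epochs and metric values below are
# invented solely to demonstrate the selection rule.
val_history = [
    {"epoch": 1, "struct_loss": 0.31, "cls_acc": 0.84},
    {"epoch": 2, "struct_loss": 0.24, "cls_acc": 0.88},
    {"epoch": 3, "struct_loss": 0.26, "cls_acc": 0.90},
]

gamma = 0.8
if gamma == 1.0:
    # No feedback cycle: keep the checkpoint with the lowest structural loss.
    best = min(val_history, key=lambda r: r["struct_loss"])
else:
    # Feedback cycle: keep the checkpoint with the best classification.
    best = max(val_history, key=lambda r: r["cls_acc"])
print("selected checkpoint:", best["epoch"])
```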
Regarding claim 7, the rejection of claim 1 is incorporated and Oz in combination with Guan further teaches the computer-implemented method of claim 1, wherein generating the trained machine learning model comprises modifying at least one learnable parameter associated with the second neural network based on the first set of predicted data points and a set of ground-truth values that are associated with both the first item and the spectral domain. ([0023] FIGS. 8A and 8B: Different strategies for training an image reconstruction ANN to decode spectral encoding. FIG. 8A illustrates the training strategy for image reconstruction ANN based on a structural loss function that pixel-wise compares the reconstructed image, O.sub.recon, with the ground truth O.sub.input [and a set of ground-truth values that are associated with both the first item and the spectral domain]. FIG. 8B illustrates the application of the image feedback mechanism used for tailoring the image reconstruction space of the decoder ANN [wherein generating the trained machine learning model comprises modifying at least one learnable parameter associated with the second neural network based on the first set of predicted data points] in order to collaborate with the corresponding diffractive optical network and help its optical classification. And in [0069] where ℒ.sub.S refers to structural loss, e.g., Mean Absolute Error (MAE) or reversed Huber (“BerHu”) loss, which are computed through pixel-wise comparison of the reconstructed image (O.sub.recon) with the ground truth object image (O.sub.input) (see Materials and Methods section for details). The second term in Eq. (2), ℒ.sub.I, refers to the same loss function used in the training of the diffractive optical neural network model/device 10 (front-end) as in Eq. (1), except this time it is computed over the new class scores, s′, obtained by feeding the reconstructed image, O.sub.recon, back to the same diffractive optical neural network model/device 10 (see FIG. 7B and FIGS. 8A and 8B) [wherein generating the trained machine learning model comprises modifying at least one learnable parameter associated with the second neural network based on the first set of predicted data points and a set of ground-truth values that are associated with both the first item and the spectral domain]. Eq. (2) is only concerned with the training of the image reconstruction trained neural network 110, and therefore, the parameters of the decoder trained neural network 110 are updated through standard error backpropagation, while the diffractive optical neural network model is preserved.)
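For illustration, one common formulation of the reversed Huber ("BerHu") structural loss named in [0069] — L1 for small pixel-wise errors, scaled L2 beyond a threshold — can be written in a few lines; the threshold value c and the synthetic images are assumptions, as Oz does not reproduce the formula in the quoted passage.

```python
import numpy as np

def berhu(recon, target, c=0.2):
    """Reversed Huber ("BerHu") structural loss: absolute error for
    pixel-wise errors up to c, quadratic growth beyond c. The threshold
    c is an assumption for this sketch."""
    err = np.abs(recon - target)
    return np.where(err <= c, err, (err**2 + c**2) / (2 * c)).mean()

rng = np.random.default_rng(4)
recon, target = rng.random(784), rng.random(784)   # image vs ground truth
print(float(berhu(recon, target)))
```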
Regarding claim 8, the rejection of claim 1 is incorporated and Oz in combination with Guan further teaches the computer-implemented method of claim 1, wherein the spectral domain comprises a Fourier domain, a k-space, a cepstral domain, or a wavelet domain. (in [0008] The system and methods presented herein can be used for the development of various new machine vision systems that utilize spectral encoding of object information to achieve a specific inference task in a resource-efficient manner, with low-latency, low power and low pixel count. The teachings can also be extended to spectral domain interferometric measurement systems, such as Fourier-Domain Optical Coherence Tomography (FDOCT) [wherein the spectral domain comprises a Fourier domain, …], Fourier Transform Infrared Spectroscopy (FTIR), interferometric measurement devices, and others to create fundamentally new 3D imaging and sensing modalities integrated with spectrally encoded classification tasks performed through diffractive optical networks…; And in [0058] In one embodiment, an acoustic source 220 is provided and configured to expose the object 4 and generate the input acoustic signal 214… As seen in FIG. 2, the sensed temporal signal from the detector 230 may then be converted into, for example, a power spectrum using a Fourier transform to identify the particular peak frequency (or wavelength) [the spectral domain comprises a Fourier domain, … or a wavelet domain] that is used to classify the object 4.)
Regarding claim 9, the rejection of claim 1 is incorporated and Oz in combination with Guan further teaches the computer-implemented method of claim 1, wherein the first set of data points comprises a set of visibility data points, a set of magnetic resonance imaging measurements, or a set of surface measurements. (in [0065] For the same 3D-printed diffractive optical neural network device 10 (FIGS. 4A, 4B), a shallow, fully-connected trained neural network 110 was trained with only 2 hidden layers in order to reconstruct images 120 of the input objects 4 based on the detected s. The training of this decoder neural network 110 is based on the knowledge of: (1) class scores (s=[s.sub.0, s.sub.1, . . . , s.sub.9]) resulting from the numerical diffractive network model, and (2) the corresponding input object images... It should be emphasized that this shallow, trained neural network 110 ANN is trained to decode a highly compressed form of information that is spectrally-encoded by a diffractive front-end and it uses only ten (10) numbers (i.e., s.sub.0, s.sub.1, . . . , s.sub.9) at its input to reconstruct an image 120 that has >780 pixels [the first set of data points comprises a set of visibility data points..]…)
Regarding claim 10, the rejection of claim 1 is incorporated and Oz in combination with Guan further teaches the computer-implemented method of claim 1, wherein the first item comprises an astronomical object, a body organ, a surface, or an image. (in [0052] With reference to FIG. 1D, the system 2 may include, in some embodiments, an optional computing device 100 that may be used to run software 102 that receives/transmits signals and/or data from/to the detector 32… For example, the Fourier transform functionality of off-the-shelf software like MATLAB may be used for this purpose. The software 102 may also be used to perform object classification or object typing using spectral class scores as described herein. This embodiment and the communication between the detector 32 and the software 102 is illustrated as arrow A in FIG. 1D. In some embodiments, the computing device 100 may run or execute a trained neural network 110 (sometimes also referred to as ANN) that performs image reconstruction (i.e., an image reconstruction neural network) as part of the software 102. For example, the trained neural network 110 may be implemented on Python or TensorFlow software as examples. In one example, the trained neural network 110 is a three layer fully-connected neural network with two hidden layers. Generally, the trained neural network 110 should have five or less hidden layers. As explained herein, in one embodiment, the trained neural network 110 receives an input of spectral class score vector (s) and outputs a reconstructed image 120 of the object 4 [wherein the first item comprises … an image].)
Regarding claim 11, the claim limitations are similar to claim 1 and are rejected under the same rationale.
Regarding claim 12, the claim limitations are similar to claim 2 and are rejected under the same rationale.
Regarding claim 13, the rejection of claim 11 is incorporated and Oz in combination with Guan further teaches the one or more non-transitory computer readable media of claim 11, wherein executing the first neural network on the first set of data points comprises computing at least one value for at least one parameter associated with a layer included in the second neural network based on the first set of data points. (As depicted in Fig. 8B, Fig. 3B, in [0018] FIGS. 3A-3B schematically illustrate spectrally-encoded machine vision/task/classification framework for object classification and image reconstruction. FIG. 3A shows the optical layout of the single detector machine vision concept for spectrally-encoded classification of handwritten digits. As an example, digit ‘8’ is illuminated with a broadband pulsed light, and the subsequent diffractive optical network transforms the object information into the power spectrum of the diffracted light collected by a single detector. The object class is determined by the maximum of the spectral class scores, s, defined over a set of discrete wavelengths, each representing a data class (i.e., digit). FIG. 3B schematically illustrates the task-specific image reconstruction using the diffractive network's spectral class scores as an input. A separately-trained shallow neural network (e.g., ANN) recovers the images of handwritten digits from the spectral information encoded in s [wherein executing the first neural network on the first set of data points comprises computing at least one value for at least one parameter associated with a layer included in the second neural network based on the first set of data points]. Each reconstructed image is composed of >780 pixels [computing at least one value for at least one parameter associated with a layer included in the second neural network based on the first set of data points], whereas the input vector, s, has 10 spectral values. And in [0065] For the same 3D-printed diffractive optical neural network device 10 (FIGS. 4A, 4B), a shallow, fully-connected trained neural network 110 was trained with only 2 hidden layers in order to reconstruct images 120 of the input objects 4 based on the detected s [executing a first neural network on a first set of data points associated with both a first item and the spectral domain to generate a second neural network]. The training of this decoder neural network 110 is based on the knowledge of: (1) class scores (s=[s.sub.0, s.sub.1, . . . , s.sub.9]) resulting from the numerical diffractive network model, and (2) the corresponding input object images. Without any fine tuning of the network parameters for possible deviations between the numerical forward model and the experimental setup, when the shallow, trained neural network 110 [wherein executing the first neural network on the first set of data points comprises computing at least one value for at least one parameter associated with a layer included in the second neural network based on the first set of data points] was blindly tested on the experimental measurements (s), the reconstructions of the images of the handwritten digits were successful as illustrated in FIG. 3B (also see FIGS. 13A-13B, 14A-14B), further validating the presented framework as well as the experimental robustness of the diffractive optical neural network device 10 […executing a first neural network…].)
Regarding claim 14, the rejection of claim 11 is incorporated and Oz in combination with Guan further teaches the one or more non-transitory computer readable media of claim 11, wherein generating the first set of predicted data points comprises computing a set of predicted values corresponding to a set of positions within the spectral domain based on a plurality of parameter values that are derived from the first set of data points and associated with the second neural network. (FIG. 3A illustrates the machine vision framework based on spectral encoding using the diffractive optical neural network device 10. A broadband diffractive optical neural network device 10 is trained to transform the spatial information of the objects 4 into the spectral domain through a pre-selected set of class-specific [corresponding to a set of positions within the spectral domain] wavelengths measured by a single-pixel spectroscopic detector 32 at the output plane; the resulting spectral class scores are denoted by the vector s=[s.sub.0, s.sub.1, . . . ,s.sub.9] [wherein generating the first set of predicted data points comprises computing a set of predicted values corresponding to a set of positions within the spectral domain based on a plurality of parameter values that are derived from the first set of data points and associated with the second neural network](FIG. 3A). Because the learning task assigned to the diffractive optical neural network device 10 is the optical classification of handwritten digits (MNIST database), after its training and design phase, for a given input image it learns to channel more power to the spectral component assigned to the correct class (e.g., digit ‘8’ in FIG. 3A) compared to the other class scores [based on a plurality of parameter values that are derived from the first set of data points]; therefore, max(s) reveals the correct data class. As demonstrated in FIG. 3B, the same class score vector, s, can also be used as an input to a shallow trained neural network 110 [and associated with the second neural network] to reconstruct an image of the input object 4, decoding the spectral encoding performed by the broadband diffractive network 10 [wherein generating the first set of predicted data points comprises computing a set of predicted values corresponding to a set of positions within the spectral domain based on a plurality of parameter values that are derived from the first set of data points and associated with the second neural network]. Of course, other learning tasks may be used in accordance with the invention.)
Regarding claims 15-18, the claim limitations are similar to those of claims 5-7 and 9, respectively, and are rejected under the same rationale.
Regarding claim 19, the rejection of claim 11 is incorporated and Oz in combination with Guan further teaches the one or more non-transitory computer readable media of claim 11, further comprising performing one or more Fourier transform operations on an image associated with both the first item and a spatial domain to generate the first set of data points. (in [0052] With reference to FIG. 1D, the system 2 may include, in some embodiments, an optional computing device 100 that may be used to run software 102 that receives/transmits signals and/or data from/to the detector 32. The computing device 100 may include a computer or the like such as a personal computer, laptop, server, mobile computing device. The computing device 100 may run software 102 that performs a number of functions via one or more processors 104. This includes, for example, converting the raw temporal signal/data from the detector 32 into, for example, a power spectrum using a Fourier transform operation [performing one or more Fourier transform operations on an image associated with both the first item and a spatial domain to generate the first set of data points]. For example, the Fourier transform functionality of off-the-shelf software like MATLAB may be used for this purpose…; And in [0053] The trained neural network 110 may be trained using at least one of the following: (i) a structural loss term, (ii) a cross entropy loss term, (iii) softmax-cross-entropy loss term, (iv) a diffractive network inference accuracy related penalty term, or (v) combinations of (i-iv) with different weights. The computing device 100 may execute an algorithm or software program 102 (or other dedicated hardware) that may also be used to perform various post-processing operations of the output signals or data from the detector 32. This includes, by way of illustration, one or more operations of: Fourier transform [performing one or more Fourier transform operations on an image associated with both the first item and a spatial domain to generate the first set of data points], addition, subtraction, multiplication, standardization, peak detection or combinations thereof. As explained herein, in some embodiments, the reconstructed images 120 [an image associated with both the first item and a spatial domain to generate the first set of data points] are fed back to the same diffractive optical neural network device 10 as new inputs to improve the inference accuracy of the same…; And in [0050] … The detector 32 or set of detectors 32 generates output signals or data 34 that are used to perform the machine vision task [performing one or more Fourier transform operations on an image associated with both the first item and a spatial domain to generate the first set of data points], machine learning task, and/or classification of objects 4 [performing one or more Fourier transform operations on an image associated with both the first item and a spatial domain to generate the first set of data points]. The output signals or data 34 may be used directly or indirectly to perform the desired task or classification.
For example, the detector 32 may generate a time domain signal (temporal signal) or data that contains (e.g., in its Fourier transform) [performing one or more Fourier transform operations on an image associated with both the first item and a spatial domain to generate the first set of data points] output information of the diffractive optical neural network device 10 from the object 4 [an image associated with both the first item and a spatial domain to generate the first set of data points]. The detector 32 may also generate a spectral domain signal or data that directly reveals the output information of the diffractive optical neural network device 10 from the object 4.)
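For illustration only, converting a detector's temporal signal into a power spectrum with a Fourier transform, as the quoted passages describe, can be sketched as follows; the sample rate, tone frequencies, and noise level are arbitrary assumptions for this example.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for the detector's temporal signal: two tones plus
# noise. Sample rate and frequencies are illustrative assumptions.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
signal += 0.1 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2        # power spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(freqs[np.argmax(spectrum)])                  # dominant peak (~50 Hz)
```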
Regarding claim 20, the claim limitations are similar to claim 1 and are rejected under the same rationale.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Windrim et al. (NPL: “Unsupervised Feature-Learning for Hyperspectral Data with Autoencoders”): teaches using a plurality of neural networks to process spectral data with autoencoders.
Zhou et al. (NPL: “In situ optical backpropagation training of diffractive optical neural networks”): teaches, as depicted in Fig. 1, that an optical neural network is a neural network.
Shazeer et al. (US 20200089755): teaches Neural networks are machine learning models that employ one or more layers of nonlinear units to predict an output for a received input. Some neural networks include one or more hidden layers in addition to an output layer. The output of each hidden layer is used as input to the next layer in the network, i.e., the next hidden layer or the output layer. Each layer of the network generates an output from a received input in accordance with current values of a respective set of parameters. Neural networks may be trained on machine learning tasks using training data to determine trained values of the layer parameters and may be used to perform machine learning tasks on neural network inputs
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUWATOSIN ALABI whose telephone number is (571)272-0516. The examiner can normally be reached Monday-Friday, 8:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLUWATOSIN ALABI/ Primary Examiner, Art Unit 2129