DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/25/2025 is in compliance with the provisions of 37 CFR 1.97. Examiner notes that the following disclosed U.S. patent documents were not considered by the examiner because their filing dates, as shown below, occur after Applicant’s claimed priority date (02/24/2022):
US 20230081531 A1 to Akino, published 2023-03-16, filed 03/28/2022;
US 20220414953 A1 to Shen, published 2022-12-29, filed 06/28/2022;
US 20220114771 A1 to Arberet, published 2022-04-14, filed 04/14/2022.
The other documents on the information disclosure statement have been considered by the examiner.
Response to Amendment
Applicant's amendments, filed 11/25/2025, are accepted and appreciated by Examiner, with independent Claims 1, 11, and 20 amended. Examiner agrees with Applicant that no new matter is added, with support found in the specification as originally filed.
Arguments filed 11/25/2025 have been fully considered but they are not persuasive.
With regard to the rejection of Claims 1-20 under 35 USC § 101, Examiner maintains the rejection for the claims as currently amended. In detailed response to Applicant's arguments concerning the evaluation of eligibility, particularly with respect to Step 2A and Step 2B, Examiner acknowledges Applicant’s reference to USPTO Subject Matter Eligibility Examples 37 to 42, specifically Example 39, in the USPTO Memorandum published 08/04/2025. In considering Applicant’s arguments and fully reviewing published guidance directed to the evaluation of applications under 35 USC 101, Examiner finds Example 47, specifically Claim 2, as incorporated into the Manual of Patent Examining Procedure (MPEP) 2106 and discussed in the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, to be consistent with the fact patterns of the instant application, specifically for independent claims 1, 11, and 20. As discussed below, Claim 2 of Example 47 was held to be patent ineligible. Example 47 provides a relevant reference for guidance in evaluation of the instant application for several reasons, as explained below. The rationale for evaluation and rejection of claims 1-20 as currently amended is discussed in detail below.
With regard to the rejection of Claims 1-20 under 35 USC § 103 over an obvious combination of prior art, based on further consideration and search as necessitated by the amendments, Examiner finds the arguments are not persuasive. Detailed responses to Applicant's arguments are presented below with new grounds of rejection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. These claims fall into statutory categories as set forth in 35 U.S.C. 101 (See MPEP § 2106.03).
The following evaluation follows Example 47, with focus on independent claims 1, 11, and 20 as currently amended.
Claim 1 is held to be patent ineligible, as explained below.
Claim 1 recites (Emphasis and line numbers added to facilitate discussion):
“(1) A computer-implemented method for reconstructing representations of items in a spectral domain, the method comprising:
(2) generating conditioning information using a first trained machine learning model that maps a first set of data points associated with both a first item and the spectral domain to conditioning information via a first trained machine learning model;
(3) training a second trained machine learning model based on the conditioning information to generate a model that represents the first item within the spectral domain;
(4) generating a second set of data points associated with both the first item and the spectral domain via the model; and
(5) constructing an image associated with the first item based on the second set of data points.”
Claim Interpretation: Examiner applies the broadest reasonable interpretation, with guidance from the specification, such that claim limitations are presumed to have their plain meaning as would be understood by one of ordinary skill in the art. (MPEP 2111)
Examiner interprets the phrase in line (1) in its plain meaning as reciting a computer-based mathematical method for producing a representation, with the term “computer” recited at a high level of generality, i.e., a generic computer performing generally known computer functions, as would be understood by one of ordinary skill. This interpretation is supported in at least specification [0018], which recites generic computational components used for implementation of the method.
Lines (2), (3), (4), and (5) recite mathematical processes (emphasized in bold) to be carried out by the computer. This interpretation of “reconstructing representations” as a mathematical process carried out by “generate” or “generating” is supported by the specification in at least [0029]: “generating dense spectral data points associated with both an astronomical object and a Fourier domain”, followed by [0030], which recites a description of the mathematical functions used to represent data, “astronomical object is a complex function that is denoted herein as V(u, v)”, and the introduction in [0031] of the mathematical van Cittert-Zernike theorem and use of the Fourier transform method, followed by mathematical equations (1)-(4). Further, line (2) recites a process of “maps”. Again with reference to the specification, this would be understood in its plain meaning to involve a mathematical function, or “mapping function”, as would be understood by one of ordinary skill, and as supported in the specification in at least [0094]: ““mapping” refers to any transformation between one or more “inputs” and one or more “outputs” that is defined via any number and/or types of mapping functions.” The term “training” is likewise interpreted in its plain meaning as a mathematical process, as would be understood by one of ordinary skill. One of ordinary skill would understand, in a machine-learning context, the term “training” in its plain meaning to involve the use of algorithms to minimize a loss function (error rate) by iteratively optimizing model parameters (weights and biases) based on input data. This interpretation is supported in the specification in at least Figures 3A and 3B, as described in [0014], for the process of training; [0034], which describes a “training application”; [0039]: “the training application 130 includes, without limitation, a spectral reconstruction model 140, sparse visibility data points 122(1), dense spectral positions 132(1), dense ground-truth visibilities 124, dense predicted visibilities 148(1), and a reconstruction loss function 136.”, with details of the loss function found in at least [0065]; and [0040]: “training application 130 executes any number and/or types of machine learning operations and/or algorithms on the spectral reconstruction model 140”.
The term found in (5), “constructing”, is interpreted in its plain meaning as a mathematical process, as would be understood by one of ordinary skill, to mean generally a mathematical optimization process based on an input data set, using a functional representation of the data, with a mapping process involving a reconstruction loss function to minimize error. This interpretation is supported in the specification in at least [0009]: “constructing an image associated with the first item based on the second set of data points”.
Referring now to the terms in Claim 1 emphasized with underline, phrase (2) recites “first set of data points”, as associated with the “first item and the spectral domain”, interpreted in plain meaning to mean quantitative information used as input to the mathematical processes discussed above. The term “data points” is recognized in its plain meaning as some collection of numerical values without limitation on possible values. The limitation of “first item” likewise refers to some generic feature of the “data points” without limitation on how the data is associated with the item, or what data is associated with the item. The term “spectral domain” is interpreted in its plain meaning to be a mathematical representation of data in terms of frequency components (or eigenvalues), as opposed to spatial or temporal representations, as would be understood by one of ordinary skill in the art. The plain meaning interpretation of these terms is supported in the specification in at least [0028], FIG. 1, “Inference Application”, and [0037]: “spectral-to-spatial application 190 uses an inverse 2D Fourier transform to generate an astronomical image for a target astronomical object based on the explicit dense spectral representation associated with the target astronomical object”.
Other terms as emphasized in underline, including “trained machine learning model” or “model”, “conditioning information”, “second set of data points”, and “image”, are interpreted as involving generic output results of a mathematical process. One of ordinary skill would understand a “model” as the output of a machine learning process. Likewise, “conditioning information” is interpreted in its plain meaning to be the result of a conditioning process wherein additional information is incorporated into a mathematical process to guide a model toward a targeted output. The claim does not impose any limits on how the data is output or require any particular components that are used to output the image.
Using the above interpretation based on the plain meaning of the words in Claim 1, the broadest reasonable interpretation of Claim 1 is a method implemented by a generic computer, in which input values are used in mathematical calculations to generate a model representing the input data, using any number or type of algorithms, and to produce an optimized reconstruction of the data.
In evaluation of eligibility, Examiner finds:
STEP 1: Claim 1 recites an eligible statutory category (MPEP 2106.03), namely, a process or method.
STEP 2A, Prong 1: Claim 1 recites a judicial exception. (MPEP 2106.04) Claim 1 describes, as discussed above using broadest reasonable interpretation, processes that fall within the definition of Abstract Idea in the Mathematical Concepts grouping. (MPEP 2106.04(a)(2), subsection I) Examiner acknowledges Applicant's position that the claims do not explicitly recite mathematical relationships, formulas, or calculations. (Remarks, P. 7) However, as discussed above, Examiner is directed to use the specification as guidance for understanding and interpretation of claim limitations, and asserts that the language of Claim 1 does recite mathematical concepts, as would be known and understood by one of ordinary skill in the art, in carrying out a machine learning method. Examiner further acknowledges Applicant’s reference to Example 39, directed to use of a neural network. As noted above, Examiner finds the fact patterns of Example 47 to be more salient for guidance in evaluation of the instant application, and notes that Example 47 is also directed to use of neural networks, machine learning, and training models. Examiner further points to the mathematical equations, relationships, and formulas disclosed in the specification as pertinent in evaluating whether the limitations recite a judicial exception.
As noted by Applicant (Remarks, P. 7), the claims are not directed to mental processes, because the claimed steps are computer-implemented techniques that cannot be practically performed in the human mind or using pen and paper. However, using broadest reasonable interpretation as discussed above, and with guidance from the specification, Examiner finds that the Claim 1 limitations describe processes which may include observation, evaluation, judgment, and opinion in the described mathematical concepts, even in the context of a computer-implemented process. Example 47 notes that “the examiner should consider the limitations together as a single abstract idea rather than as a plurality of separate abstract ideas to be analyzed individually.” In light of the discussion above, using broadest reasonable interpretation and the plain meaning of terms which are understood to be mathematical processes and/or calculations, with support in the specification, including equations, mathematical relationships, and formulas necessary for understanding and interpreting the claim limitations, Claim 1 sets forth a judicial exception under the STEP 2A, Prong 1 analysis.
STEP 2A, Prong 2: Claim 1 does not integrate the recited judicial exception into a practical application. Claim 1 is directed to the judicial exception, as discussed above. Claim 1 does recite additional elements, as emphasized above in underline. The additional elements of “item”, “first item”, and “first set of data points”, as discussed above using broadest reasonable interpretation and plain meaning, represent input values to be used in carrying out the mathematical concept. These terms represent mere data gathering. The terms “trained machine learning model” or “model”, “conditioning information”, “second set of data points”, and “image” are interpreted as involving generic output results of a mathematical process as implemented by a computer, where the computer is recited at a high level of generality. Thus, these limitations are mere instructions to apply the judicial exception using a generic computer. This limitation language also merely indicates a field of use or technological environment in which the judicial exception is carried out. (MPEP 2106.05(f))
Individually, or when viewed in combination, additional elements as identified in Claim 1 do not integrate the recited judicial exception into a practical application. Thus, Claim 1 is directed to the judicial exception.
STEP 2B: Claim 1 does not amount to significantly more than the recited judicial exception. Consideration of the additional elements as identified above, either individually or in combination, does not reveal an inventive concept in the claim. As discussed above, additional elements (emphasized in underline above) are identified, but amount to mere instructions to carry out the judicial exception, or are interpreted as extra-solution activity that is well understood, routine, and conventional, as would be understood by one of ordinary skill in the art. (MPEP 2106.05(g)) For example, “using a first trained machine learning model that maps a first set of data points” is interpreted as input and use of data in carrying out the judicial exception. As in STEP 2A, Prong 2, even when considered in combination, the identified additional elements represent mere instructions to implement the judicial exception, or recite insignificant extra-solution activity, which do not provide an inventive concept.
As in the previous Office action, Examiner further notes evidence attesting that the identified additional elements/steps are well understood, routine, and conventional in view of relevant prior art of record cited herein, including, for example: BUCHHOLZ (Buchholz, et al., "Fourier Image Transformer," arXiv preprint arXiv:2104.02555v2, 2021); RAVISHANKAR ("Image Reconstruction: From Sparsity to Data-Adaptive Methods and Machine Learning," PROCEEDINGS OF THE IEEE, Vol. 108, No. 1, January 2020); and MOU (Mou, et al., "Learning to Pay Attention on Spectral Domain: A Spectral Attention Module-Based Convolutional Network for Hyperspectral Image Classification," IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, Vol. 58, No. 1, January 2020).
Claims 11 and 20 are likewise held to be patent ineligible. Examiner applies rationale and reasoning used to evaluate Claim 1 to parallel limitations as recited in independent Claims 11 and 20, using plain meaning and interpreted using broadest reasonable interpretation, and viewed with the guidance of Example 47. Claims 11 and 20 fall within a statutory category. Both Claims 11 and 20 recite a judicial exception, and in keeping with analysis as presented above for Claim 1, are directed to the judicial exception of Abstract Idea in the Mathematical Concept grouping, with additional elements which do not amount to significantly more and do not provide an inventive concept.
Dependent Claims 2-10, with dependency on Claim 1, and Claims 12-19, with dependency on Claim 11, recite limitations and additional elements that require evaluation. The dependent claims are evaluated in groups below.
Considering dependent Claims 2-10, with dependency on Claim 1, Examiner finds the claims fall into an eligible statutory category and recite limitations which are incorporated into the judicial exception of Abstract Idea in the Mathematical Concepts grouping.
Applying evaluation under STEP 2A, Prongs 1 and 2, dependent Claims 2-10 do contain limitations using terms identified as carrying out the mathematical concept process steps, including: “mapping”, “positional encoding operations” (Claim 2); “updating”, “modifying” (Claim 3); “generating”, “executing”, “generate” (Claim 4); “maps” (Claim 5); “constructing”, “computing”, “generate”, “computing an inverse Fourier transform” (Claim 6); “is constructed” (Claim 7); “generating” (Claim 8). Claims 2-10 recite additional elements, including “first set of data points”, “second trained machine learning model”, “first trained machine learning model”, “model”, “image”, and “second set of data points”. Further additional elements recited in the Claim 2-10 limitations include: “predicted values”, “neural network”, “third set of data points”, “target level of fidelity”, “astronomical object”, “body organ”, “surface”, “first image”, and “interferometric observation”. Using the reasoning and rationale as applied to the additional elements recited in Claim 1, the additional elements recited by Claims 2-10 are interpreted to be insignificant extra-solution activity, generally known or mere data gathering, necessary to provide numerical values as input of a set of data points, as would be known by one of ordinary skill in the art as necessary to carry out a mathematical process. Similarly, these additional elements further modify steps performed by a generic computer system, recited at a high level of generality. Following the rationale and reasoning applied to the Claim 1 rejection, the input data and output data simply recite a field of use/technological environment. The additional elements in Claims 2-10 do not impose meaningful limits on practicing the abstract idea (mathematical concept), nor do they integrate the judicial exception into a practical application.
Further evaluation under STEP 2B reveals that the additional elements found in the Claims 2-10 limitations are not sufficient to amount to significantly more than the judicial exception because, as noted above, the additional elements amount to mere instructions, or modifications to instructions, to apply the mathematical concept judicial exception (MPEP 2106.05(h)); the additional elements including data input (data gathering) of “first data points” and generation of a “model” and “image” amount to insignificant extra-solution activity of selecting a data source and outputting data after the mathematical processes (MPEP 2106.05(g)); and, likewise, the additional elements of computer and machine learning processes, including machine learning models, amount to mere instructions to apply the judicial exception using generic computer components (MPEP 2106.05(f)).
Thus, the identified additional elements as recited in Claims 2-10, with dependency on Claim 1, do not apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claims, independently or as a whole, are no more than a drafting effort designed to monopolize the exception. (See MPEP 2106.05(e) and the Vanda Memo.)
Using the same rationale and reasoning as applied to Claims 2-10, consideration of dependent Claims 12-19, with dependency on Claim 11, reveals the same conclusions. These dependent claims do fall into an eligible statutory category, and do contain limitations which modify the judicial exception of abstract idea (mathematical concept) and/or recite additional elements. As discussed above, Claims 12-19 recite further mathematical process steps reciting the judicial exception. Claims 12-19 recite additional elements as found in dependent Claims 2-10. Using the reasoning and rationale as applied to the additional elements recited in Claim 11, the additional elements recited by Claims 12-19 are interpreted to be insignificant extra-solution activity, generally known or mere data gathering, necessary to provide numerical values as input of a set of data points, as would be known by one of ordinary skill in the art as necessary to carry out a mathematical process. Similarly, these additional elements further modify steps performed by a generic computer system, recited at a high level of generality. Following the rationale and reasoning applied to the Claim 11 rejection, the input data and output data simply recite a field of use/technological environment. The additional elements in Claims 12-19 do not impose meaningful limits on practicing the abstract idea (mathematical concept) sufficient to integrate the abstract idea into a practical application. And, as above, the additional elements recited in Claims 12-19 are not sufficient to amount to significantly more than the judicial exception.
Thus, the identified additional elements as recited in Claims 12-19, with dependency on Claim 11, do not apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claims, independently or as a whole, are no more than a drafting effort designed to monopolize the exception. (See MPEP 2106.05(e) and the Vanda Memo.)
When analyzed independently or in combination, dependent Claims 2-10 and Claims 12-19, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitation(s) therein represent additional elements that describe insignificant extra solution activity, additional mathematical calculations, instructions or definitions for calculations, and/or numerical data to be used according to the recitation of the abstract ideas as discussed above for independent Claims 1 and 11.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over BUCHHOLZ (Buchholz, et al., "Fourier Image Transformer," arXiv:2104.02555v2 [cs.CV], 3 May 2021), in view of PEREZ (Perez, et al., "FiLM: Visual Reasoning with a General Conditioning Layer," arXiv:1709.07871v2, 18 Dec 2017), and further in view of MEYERS-NORMAND (US 20220181007 A1).
With respect to Claims 1, 11, 20, BUCHHOLZ teaches:
A computer-implemented method for reconstructing representations of items in a spectral domain (BUCHHOLZ is in the same technical field, Abstract: “a sequential image representation… Using such Fourier Domain Encodings (FDEs), an autoregressive image completion task is equivalent to predicting a higher resolution output given a low-resolution input”; and Section 3.4, Fourier Image Transformer setup for tomographic reconstruction (“FIT: TRec”); Examiner interprets “spectral domain” as analogous to the method of using the frequency domain with Fourier methods, as would be understood by one of ordinary skill; BUCHHOLZ teaches a computer-implemented method for image reconstruction, Fig. 3 with caption: “encoder-decoder based Fourier Image Transformer setup for tomographic reconstruction…2D computed tomography, 1D projections of an imaged sample…are back-transformed into a 2D image.”; Examiner interprets “items” as analogous to “imaged sample”; BUCHHOLZ teaches non-transitory computer readable media and processors to execute instructions, as would be understood by one of ordinary skill, Abstract: “computed tomography image reconstruction”, §2.3 “Transformers in Computer Vision”, and §2.4 “Tomographic Image Reconstruction”)
the method comprising: using a first trained machine learning model that maps a first set of data points associated with both a first item and the spectral domain to conditioning information (BUCHHOLZ teaches a trained machine learning model used for mapping, Fig. 2, with Pg. 1, §1: “we show how an encoder-decoder based Fourier Image Transformer (“FIT: TRec”) can be trained on a set of Fourier measurements and then used to query arbitrary Fourier coefficients, which we use to improve sparse-view computed tomography (CT) image restoration mapping”; BUCHHOLZ teaches use of conditioning information and transformation to the frequency domain, Pg. 2, §2.3: “first n pixels of the flattened input image are used to condition a generative transformer setup that then predicts the remaining image in an auto-regressive manner”, and Pg. 3, §3.1, FIG. 2 and caption: “Low-resolution input images are first transformed into Fourier space and then unrolled into an FDE sequence…fed to a FIT, that, conditioned on this input, extends the FDE sequence to represent a higher resolution image…FIT is conditioned on the first 39 entries of the FDE”)
training a second trained machine learning model (BUCHHOLZ teaches iterative model development generally, Pg. 1: “Section 3 we introduce our novel Fourier Domain Encoding (FDE) and training strategies for auto-regressive and encoder-decoder transformer models”; and Pg. 6, Table 1, which discloses details of iterative, progressive model development; and §5: “Discussion…idea of Fourier Domain Encodings (FDEs), a novel sequential image encoding”)
generate a model that represents the first item within the spectral domain; (BUCHHOLZ teaches a generative model in the spectral domain, as above, Pg. 4, §3.3: “final prediction images x̂ are generated by computing the inverse Fourier transform on predictions Ĉ, which are rearranged (rolled) into X̂h and completed to a full predicted Fourier spectrum X̂”, and as above, Section 3.4, Fourier Image Transformer setup for tomographic reconstruction (“FIT: TRec”); Examiner interprets “spectral domain” as discussed above as analogous to the frequency domain involving Fourier methods.)
constructing an image associated with the first item (BUCHHOLZ teaches reconstruction for each dataset, as above, Fig. 6, Pg. 6, §4.6; and predicted reconstruction, Pg. 4, Section 3.4: “…such that the predicted reconstruction [equation image] of [equation image] can be computed by [equation image], where roll arranges the 1D sequence back into a discrete 2D Fourier spectrum…” (“FIT: TRec”))
BUCHHOLZ does not teach:
the method comprising: generating conditioning information using a first trained machine learning model
the method comprising: training a second trained machine learning model based on the conditioning information
the method comprising: generating a second set of data points associated with both the first item and the spectral domain via the model
the method comprising: output associated with the first item based on the second set of data points.
PEREZ teaches:
generating conditioning information using a first trained machine learning model (PEREZ is in the same technical field, Abstract: “general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation… answering image-related questions which require a multi-step, high-level process”; PEREZ explicitly teaches several methods for generating conditioning information using a trained model, Pg. 2, §2 “Method”, and Pg. 4, §4 “Experiments”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify BUCHHOLZ to include a first model that generates conditioning information, such as that of PEREZ, because it would result in improved mapping by adding responsive flexibility based on changes in the conditioning information. Examiner notes BUCHHOLZ does teach the idea of a conditioning model (BUCHHOLZ, Pg. 2, §2.3, as above), but does not explicitly teach how the conditioning information is achieved, as does PEREZ. One of ordinary skill would see the obvious connection and advantage of using the teaching of PEREZ with the suggestion of BUCHHOLZ to improve the method and system for reconstruction. One of ordinary skill would understand that generation of conditioning data would ultimately allow for more effective control of the generative process for image reconstruction, particularly for inputs with low resolution or sparse data sets, and would see the value of using a pre-trained model to provide conditioning information to accelerate the training process of the second model or make it more data-efficient.
BUCHHOLZ, as modified by PEREZ and taught above, does not teach:
the method comprising: training a second trained machine learning model based on the conditioning information
the method comprising: generating a second set of data points associated with both the first item and the spectral domain via the model
the method comprising: output associated with the first item based on the second set of data points.
MEYERS-NORMAND teaches:
the method comprising: training a second trained machine learning model based on the conditioning information (MEYERS-NORMAND is in a related technical field, teaching a machine-learning-based image analysis method/system, Abstract: “device can train a first machine learning model (“model”) based on a first group of the retinal images”, and [0002]: “relates to deep learning…computing systems and processes for utilizing and training a computing system to predict geographic atrophy progression based on retinal images.”, and using the spectral domain, [0047]: “retinal image includes…Spectral domain-optical coherence tomography (SD-OCT) retinal images”; MEYERS-NORMAND teaches a conditioning-based training method, FIG. 2 and [0037]: “machine learning system 200 includes a data conditioning module 212”; Examiner notes BUCHHOLZ teaches use of conditioning information but does not explicitly disclose incorporation into a second model, as does MEYERS-NORMAND.)
the method comprising: generating a second set of data points associated with both the first item and the spectral domain via the model (MEYERS-NORMAND teaches generation of predictions using iteratively trained models, Abstract: “[t]raining the first model and the second model, the device can train a third model to predict”, and FIG. 4 and FIG. 8, with [0009]: “instructions for training a first machine learning model based on a first group of the retinal images and patient data corresponding to the first group, wherein the first group includes a first type of retinal image…instructions for training a second machine learning model…instructions for generating, using the trained second machine learning model, a second prediction of geographic atrophy progression based on a second subset of the third group of the retinal images and patient data corresponding to the second subset. After training the first machine learning model and the second machine learning model, the one or more programs can also include instructions for training a third machine learning model…based on the first prediction, the second prediction, the first subset, the second subset and patient data corresponding to the first subset and the second subset.”)
the method comprising: output associated with the first item based on the second set of data points. (As above, MEYERS-NORMAND teaches a generative model with prediction outputs based on the first item and second model, [0009], and [0008]: “generating, using the trained first machine learning model, a first prediction of geographic atrophy progression based on a first subset of a third group of the retinal images and patient data corresponding to the first subset…generating, using the trained second machine learning model, a second prediction of geographic atrophy progression based on a second subset”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify BUCHHOLZ, as modified by PEREZ and taught above, to include the method comprising: training a second trained machine learning model based on the conditioning information, generating a second set of data points associated with both the first item and the spectral domain via the model, and output associated with the first item based on the second set of data points, such as that of MEYERS-NORMAND, because these techniques take advantage of unique properties contained in frequency domain data in a wide range of imaging application fields. One of ordinary skill would understand the advantage of these explicit steps in a robust machine-learning method as an obvious way to leverage the method taught by BUCHHOLZ as modified by PEREZ to improve removal of undesired artifacts, including blur and noise, and to improve the ability to extract meaningful data even from low signal-to-noise or sparsely constructed input image data. One of ordinary skill would be motivated to combine the explicit disclosure of machine-learning methods using multiple iterative steps and incorporation of conditioning information into a trained model, as taught by MEYERS-NORMAND, with the method of image reconstruction using the related Fourier techniques as taught by BUCHHOLZ to yield an improved reconstructed image, even for sparse input data, by allowing the generative model to follow intentional structural constraints.
With respect to Claims 2 and 12, BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, teaches the limitations of Claims 1 and 11.
BUCHHOLZ further teaches:
wherein mapping the first set of data points to the conditioning information comprises performing one or more positional encoding operations on the first set of data points. (BUCHHOLZ teaches mapping and conditioning in an encoding process, Fig. 2 and Pg1, 1 and Pg2,2.3)
With respect to Claims 3 and 13, BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, teaches the limitations of Claims 1 and 11.
BUCHHOLZ further teaches:
wherein updating the second trained machine learning model comprises modifying one or more values of one or more parameters associated with the second trained machine learning model based on the conditioning information. (BUCHHOLZ teaches an iterative process, see Pg1-2,2.1: “an encoder-decoder structure, where the encoder maps frequency Fourier coefficients, an input encoding [equation image] into a continuous latent space [equation image], with N corresponding to the number of input tokens and F representing the feature dimensionality per token…latent space embedding z is then given to the decoder, which generates an M long output sequence [equation image] iteratively, element by element…auto-regressive decoding scheme means that the decoder generated the i-th output token while not only observing z, but also all i−1 output tokens generated previously.”; and, as above, BUCHHOLZ teaches using conditioning, Pg3, Fig.2; and teaches model construction details, Pg3.3: “Methods”.)
BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, does not teach:
updating the second trained machine learning model comprises modifying one or more values of one or more parameters associated with the second trained machine learning model based on the conditioning information.
MEYERS-NORMAND further teaches:
updating the second trained machine learning model comprises modifying one or more values of one or more parameters associated with the second trained machine learning model based on the conditioning information. (As above, MEYERS-NORMAND teaches repetitive use of conditioning module in iterative model progression, FIG.4, and [0008]: “generating, using the trained first machine learning model, a first prediction of geographic atrophy progression based on a first subset of a third group of the retinal images and patient data corresponding to the first subset…generating, using the trained second machine learning model, a second prediction of geographic atrophy progression based on a second subset”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, to include the technique wherein updating the second trained machine learning model comprises modifying one or more values of one or more parameters associated with the second trained machine learning model based on the conditioning information, as further disclosed by MEYERS-NORMAND, because these steps add iterative modeling details that would result in a more accurate model for improving the image reconstruction method/system of BUCHHOLZ as modified above. One of ordinary skill would be familiar with the advantages of including adaptability steps for developing iterative models where each model is improved from the previous model. One of ordinary skill would recognize the explicit training steps disclosed by MEYERS-NORMAND, which avoid full retraining of the first model for each new input dataset, as a way to improve the method/system of BUCHHOLZ as modified above, by allowing the resulting model to produce a more accurate image reconstruction without sacrificing efficiency and speed.
With respect to Claims 4 and 14, BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, teaches the limitations of Claims 1 and 11.
BUCHHOLZ further teaches:
wherein generating the second set of data points comprises executing the model on a first set of two-dimensional positions within the spectral domain to generate a set of predicted values that correspond to the first set of two-dimensional positions and are associated with the first item. (BUCHHOLZ teaches a 2D transformation process from the original image, FIG. 1, “Fast-Transformers (7), that operate on images via a novel sequential image representation we call Fourier Domain Encoding (FDE)”; and Pg2,2.1: “positional encodings are required whenever specific input topologies need to be made accessible to the transformer. In (5), a useful 1D positional encoding scheme was proposed. Later, Wang et al. (8) generalized this scheme to 2D topologies.”)
With respect to Claim 5, BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, teaches the limitations of Claim 1.
BUCHHOLZ further teaches:
wherein the model maps one or more positions within the spectral domain to one or more predicted values associated with both the first item and the spectral domain. (BUCHHOLZ teaches mapping positions to predictions, as above, Fig. 2, with Pg1, 1: “we show how an encoder-decoder based Fourier Image Transformer (“FIT: TRec”) can be trained on a set of Fourier measurements and then used to query arbitrary Fourier coefficients, which we use to improve sparse-view computed tomography (CT) image restoration mapping”; BUCHHOLZ teaches predictive modeling and transformation to the frequency domain, Pg2,2.3: “first n pixels of the flattened input image are used to condition a generative transformer setup that then predicts the remaining image in an auto-regressive manner”, and Pg.3, Section 3.1, FIG.2 and caption: “Low-resolution input images are first transformed into Fourier space and then unrolled into an FDE sequence…fed to a FIT, that, conditioned on this input, extends the FDE sequence to represent a higher resolution image…FIT is conditioned on the first 39 entries of the FDE”; Examiner notes interpretation of “spectral domain” as above.)
BUCHHOLZ does not teach:
the model comprises a neural network that maps one or more positions within the spectral domain to one or more predicted values
MEYERS-NORMAND teaches:
the model comprises a neural network that maps one or more positions within the spectral domain to one or more predicted values (MEYERS-NORMAND teaches use of a neural network for generating predictive models, [0041]: “machine learning algorithm can be implemented using a variety of techniques, including the use of one or more an artificial neural network, a deep neural network, a convolutional neural network, a multilayer perceptron, and the like”, and, as above, an iterative method including spectral domain data for generating predictive models.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify BUCHHOLZ, as modified by PEREZ and taught above, to include a model that comprises a neural network mapping one or more positions within the spectral domain to one or more predicted values, such as that of MEYERS-NORMAND, because it would take advantage of the unsupervised training power made possible by a neural network. One of ordinary skill would be motivated to include neural network techniques, as explicitly taught as part of the machine-learning method of MEYERS-NORMAND, into the method taught by BUCHHOLZ as modified above to arrive at a more efficient and robust method/system for image reconstruction.
With respect to Claims 6 and 16, BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, teaches the limitations of Claims 1 and 11.
BUCHHOLZ further teaches:
wherein constructing the image comprises computing an inverse Fourier transform of the second set of data points to generate a third set of data points associated with both the first item and a spatial domain. (BUCHHOLZ teaches analogous iterative data sets, Fig.6, and Pg6, 4.6: “Qualitative tomographic reconstruction results for all three datasets we used are shown in Figure 6. For each dataset, we show three input sinograms, the reconstruction baseline obtained via filtered back projection (FBP), our results obtained via “FIT: TRec” and “FIT: TRec + FBP”, and the corresponding ground truth images.” – different inputs yield different reconstruction models; and teaches the technique of using the inverse FT, Fig. 4, 3.3: “All final prediction images [equation image] are generated by computing the inverse Fourier transform on predictions [equation image]”)
With respect to Claims 7 and 17, BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, teaches the limitations of Claims 1 and 11.
BUCHHOLZ teaches:
An image is constructed. (BUCHHOLZ teaches, as above, construction of an image from a trained model.)
BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, does not teach:
wherein the image is constructed to have a target level of fidelity.
MEYERS-NORMAND teaches:
wherein output is constructed to have a target level of fidelity. (MEYERS-NORMAND teaches a “correct” target for the result, [0040]: “supervised machine learning algorithm builds a machine learning model by processing training data that includes both input data and desired outputs (e.g., for each input data, the correct answer (also referred to as the “target” or “target attribute”)”; Examiner interprets “target level of fidelity” as analogous to “correct answer” or “target attribute”, as taught in the reference.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, to incorporate a target level of fidelity for output, such as that further disclosed by MEYERS-NORMAND, because it would be understood as a way to increase confidence in interpretation of results produced by a generative predictive model. It would be obvious to combine the target attribute comparison as taught by MEYERS-NORMAND into the method/system disclosed by BUCHHOLZ as modified above, resulting in the ability to ascertain reliability and accuracy of a reconstructed image. One of ordinary skill would understand that including a reliable fidelity limit or “target attribute” would enhance the ability to balance the competing objectives of image construction, namely accuracy compared to ‘ground truth’ against noise level and spatial resolution, among other factors.
With respect to Claim 8, BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, teaches the limitations of Claim 1.
BUCHHOLZ further teaches:
generating, via the first trained machine learning model, a second model (As above, BUCHHOLZ teaches iterative model generation with multiple items, different inputs yielding different models for reconstruction, see Pg6, 4.6: “For each dataset, we show three input sinograms, the reconstruction baseline obtained via filtered back projection (FBP), our results obtained via “FIT: TRec” and “FIT: TRec + FBP”, and the corresponding ground truth images.”)
BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, does not teach:
generating, via the first trained machine learning model, a second model that represents a second item based on a third set of data points associated with both the second item and the spectral domain
MEYERS-NORMAND further teaches:
generating, via the first trained machine learning model, a second model that represents a second item based on a third set of data points associated with both the second item and the spectral domain (MEYERS-NORMAND teaches generation of multiple iterative models, as above, FIG. 4, with [0008] and [0009])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, to include generating, via the first trained machine learning model, a second model that represents a second item based on a third set of data points associated with both the second item and the spectral domain, such as that further disclosed by MEYERS-NORMAND, because it would improve the ability to extract meaningful data even from low signal-to-noise acquisition through multiple iterative training steps. One of ordinary skill would be motivated to combine the machine-learning focused steps of MEYERS-NORMAND with the method of image reconstruction using the related Fourier techniques as taught by BUCHHOLZ to make the resulting image models and reconstruction more accurate and reliable.
With respect to Claims 9 and 19, BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, teaches the limitations of Claim 1.
BUCHHOLZ further teaches:
wherein the first item comprises an astronomical object, a body organ, a surface, or a first image. (BUCHHOLZ teaches a variety of applications, Fig.5, caption: “input images”)
With respect to Claim 15, BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, teaches the limitations of Claim 11.
BUCHHOLZ further teaches:
wherein the first trained machine learning model comprises at least one of a transformer encoder, a variational encoder, or a learnable neural spline. (BUCHHOLZ is directed to use of Fourier Image transformer, see Title; based on encoding/decoding, see Pg1,1: “We demonstrate this by providing a given set of projection Fourier coefficients to our encoder-decoder setup and use it to predict Fourier coefficients at arbitrary query points.”)
With respect to Claim 18, BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, teaches the limitations of Claim 11.
BUCHHOLZ further teaches:
wherein the spectral domain comprises a frequency domain, a k-space, a cepstral domain, or a wavelet domain. (BUCHHOLZ teaches, as above, Fourier methods, with Fourier transforms to frequency domain, Pg.1, FIG. 1, w/caption: “Fourier Image Transformers (FITs), realizations of Fast-Transformers (7), that operate on images…Fourier Domain Encoding (FDE) …second task is tomographic reconstruction (bottom row), where a given set of projection images (a sinogram) is transformed by an encoder-decoder FIT to improve the quality of the reconstructed image transformation (iFFT)”)
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over BUCHHOLZ, in view of PEREZ and MEYERS-NORMAND, as applied above to Claim 1, and further in view of HONMA (Honma, et al., "Super-Resolution Imaging with Radio Interferometry using Sparse Modeling", Publications of the Astronomical Society of Japan, Vol. 66, No. 5, 2014).
With respect to Claim 10, BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, teaches the limitations of Claim 1.
BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above does not teach:
wherein the first set of data points comprises an interferometric observation of the first item.
HONMA teaches:
wherein the first set of data points comprises an interferometric observation of the first item. (HONMA is in the same technical field of image reconstruction, directed toward astronomical images, Abstract: “new technique to obtain super-resolution images with radio interferometer using sparse modeling” and Pg2,1.2: “different approaches to reconstruct radio interferometer images”; HONMA teaches interferometric image data for analysis, Abstract: “present results of one-dimensional and two-dimensional simulations of interferometric imaging,”; HONMA teaches image analysis for reconstruction of interferometric data, Pg2,2.1: “imaging synthesis, side-lobe levels can be reduced by using taper function instead of equal weight to all the sampled data.”; and Pg6-7, 4.3, see Fig.4: “image reconstruction is done in the same manner using LASSO”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify BUCHHOLZ, as modified by PEREZ and MEYERS-NORMAND, as taught above, to include the first set of data points comprising an interferometric observation of the first item, such as that of HONMA, because this would be a useful and important application of the method/system of image reconstruction disclosed by BUCHHOLZ as modified above. One of ordinary skill would be motivated to use the teaching of HONMA to broaden the range of use for a machine learning-based image reconstruction method/system by including the option of interferometric data. One of ordinary skill would realize the advantage of combining the high resolution spatial frequency information of interferometric data with trained machine learning algorithms designed to work in a spectral/frequency domain, as taught by BUCHHOLZ, as modified above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
BROCK (Brock, et al., “Large scale GAN training for high fidelity natural image synthesis”, arXiv:1809.11096, 2018) – teaches multiple methods for generation of conditioning information for image reconstruction.
CHEN (Chen, et al., “InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets”, Advances in Neural Information Processing Systems, 2016, pp. 2172–2180) – teaches use of conditioning information to improve reconstruction.
GIAMBAGLI (Giambagli, et al., “Machine learning in spectral domain”, Nat Commun 12, 1330, 2021) – teaches techniques for machine learning on data represented in the spectral domain.
NGUYEN (Nguyen, et al., “Plug & play generative networks: Conditional
iterative generation of images in latent space”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4467–4477, 2017) – teaches foundational techniques and methods for iterative model generation using machine learning.
VAN OORT (Van Oort, “Leveraging Domain Knowledge in Deep Learning Systems”, dissertation, The University of Vermont and State Agricultural College, published 01 January 2021, ISBN (print): 979-8-5355-3445-9, available online) – teaches multiple methods for implementing conditioning in machine learning systems for improving model outcomes.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TONI D SAUNCY whose telephone number is (703)756-4589. The examiner can normally be reached Monday - Friday 8:30 a.m. - 5:30 p.m. ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Catherine Rastovski can be reached at (571) 270-0349. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TONI D SAUNCY/Examiner, Art Unit 2857
/Catherine T. Rastovski/Supervisory Primary Examiner, Art Unit 2863