DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
35 U.S.C. 112(f)
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
In claim 1: “a properties adjuster to optimize a loss function between said predicted transducer output and a measured transducer output and to generate an improved set of said physical properties, said properties adjuster operating a backpropagator using said neural network representation of said wave function”;
“a medium properties recoverer to output a current improved set of said physical properties”.
In claim 6: “said wave field modeler also comprises a restriction operator to restrict an output of said neural network […]”
In claim 7: “a plurality of error calculators each to generate an error vector […]”
“a loss accumulator to accumulate said error vectors”
“a non-linear gradient calculator, implemented by said backpropagator, to exploit said non-linear wave function to backpropagate gradients through said neural network, […]”
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Upon review of Applicant’s specification, the instant disclosure provides:
“Embodiments of the present invention may include apparatus for performing the operations herein. This apparatus may be specially constructed for the desired purposes, or it may comprise a computing device or system typically having at least one processor and at least one memory, selectively activated or reconfigured by a computer program stored in the computer. The resultant apparatus when instructed by software may turn the general purpose computer into inventive elements as discussed herein. The instructions may define the inventive device in operation with the computer platform for which it is desired. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk, including optical disks, magnetic-optical disks, read-only memories (ROMs), volatile and non-volatile memories, random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, Flash memory, disk-on-key or any other type of media suitable for storing electronic instructions and capable of being coupled to a computer system bus. The computer readable storage medium may also be implemented in cloud storage”
Specification [0077] (emphasis added)
“The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the desired method. The desired structure for a variety of these systems will appear from the description below. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.”
Specification [0079] (emphasis added)
In view of Applicant’s instant specification, for the purposes of examination, the claim limitations discussed above that are interpreted under §112(f) are interpreted as generic processing/computing components.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites the limitations:
“A system for recovering physical properties from a non-linear medium, the system comprising:
a wave field modeler to model a wave field generated by at least one transmitted pulse as it travels in said non-linear medium and to generate a predicted transducer output from said modeled wave field, said wave field modeler being implemented as a neural network […]
a properties adjuster to optimize a loss function […] operating a backpropagator using said neural network representation of said wave function, […]
a medium properties recoverer to output a current improved set of said physical properties once said properties adjuster finishes operation”;
and claim 9 recites the limitations:
“A method for recovering physical properties from a non-linear medium, the method comprising:
modeling a wave field generated by at least one transmitted pulse as it travels in said non-linear medium, said modeling using a neural network […]
generating a predicted transducer output from said modeled wave field;
optimizing a loss function […] using backpropagation with said neural network representation of said wave function;
activating said modeling with said improved set of said physical properties; and
providing a current improved set of said physical properties”.
The limitations provided above, as drafted, describe processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer processing steps. That is, other than reciting “being implemented as a neural network” in claim 1 and “using a neural network” in claim 9, nothing in the claims precludes the steps from practically being performed in the mind. For example, but for the “being implemented as a neural network” and “using a neural network” language, “modeling” in the context of this claim encompasses the user manually plotting a wave field describing physical properties (e.g., density, conductivity, etc.) as generated by a transmitted ‘pulse’. Similarly, the limitations as drafted of “to optimize a loss function” in claim 1 and “optimizing a loss function” in claim 9 are processes that, under the broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of the “operating a backpropagator using said neural network” and “using backpropagation with said neural network representation” language in claims 1 and 9, respectively. For example, but for the using/with “said neural network” language, optimizing a loss function using backpropagation in the context of these claims encompasses the user performing backpropagation by calculating a gradient, i.e., manually calculating the slope of a function. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer processing, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim only recites one additional element – “being implemented as a neural network” in claim 1 and “using a neural network” in claim 9. The ‘neural network’ is recited at a high level of generality (i.e., as a generic neural network performing a generic processing function of modeling a wave field and generating a predicted output) such that it amounts to no more than mere instructions to apply the exception using a generic neural network. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a neural network to perform the steps of modeling a wave field and generating a predicted output, as well as optimizing a loss function using backpropagation, amounts to no more than mere instructions to apply the exception using a generic neural network. Mere instructions to apply an exception using a generic neural network cannot provide an inventive concept. Accordingly, claims 1 and 9 are not patent eligible.
Dependent claims 2-8 and 10-16 are also rejected under 35 U.S.C. §101. That is, neither the generic neural network in independent claims 1 and 9, nor any other additional element introduced in the dependent claims, adds meaningful limitations to the abstract idea, because these additional elements represent insignificant extra-solution activity. When viewed as a combination, these above-identified dependent claims merely instruct the practitioner to implement the claimed functions with well-understood, routine and conventional activity specified at a high level of generality in a particular technological environment. As such, the above-identified additional elements, when viewed as a whole, do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself.
Claim Rejections - 35 USC § 112
35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claims 2-8 and 10-16 are rejected at least by virtue of dependency upon a rejected claim.
Claims 1 and 9 use inconsistent language in the recitation of multiple limitations. The recited limitations of “said modeled wave field”; “an improved set of said physical properties”; “using said neural network representation of said wave function”; “a current improved set of said physical properties”; and “are said recovered physical properties of said non-linear medium” in claim 1 are unclear, because there is insufficient antecedent basis for these limitations in the claim. The interchangeable use of ‘said modeled wave field’ vs. ‘said wave field’, and of ‘said neural network representation of said wave function’ vs. either ‘a neural network’ or the ‘non-linear wave function’, is unclear for this reason. Similarly, the use of ‘a set of physical properties’ throughout the claim lacks clear antecedent basis in the multiple limitations in which it is recited (e.g., ‘represented’, ‘improved’, ‘current improved’, ‘recovered’), and one of ordinary skill in the art would be unable to distinguish between the recitations based on the claim language as drafted. Claim 9 is similarly rejected for the analogous language discussed regarding claim 1. The claims must be amended to clearly point out what is being referred to by using consistent language for the different terms.
Claim 1 further recites the limitations “said wave field modeler being implemented as a neural network having a neural network representation of a non-linear wave function of a set of physical properties of said wave field”; and claim 9 recites the limitations “said modeling using a neural network having a neural network representation of a non-linear wave function of a set of physical properties of said wave field”, which renders the claims indefinite. It is not clear what this language is claiming with the use of ‘a neural network representation of a non-linear wave function of a set of physical properties of said wave field’: whether the neural network ‘represents’ a set of physical properties, whether the neural network is the ‘non-linear wave function’ or applies the ‘non-linear wave function’, whether the ‘non-linear wave function’ ‘represents’ a set of physical properties, etc. It is suggested to amend the claims to clearly define the architecture and functional structure of the ‘neural network’, including the input(s) and output(s) of the ‘neural network’, within the context of the system.
Claim 1 further recites the limitations “said properties adjuster operating a backpropagator using said neural network representation of said wave function, said properties adjuster to activate said wave field modeler with said improved set of said physical properties”; and claim 9 recites the limitations “said generating using backpropagation with said neural network representation of said wave function; activating said modeling with said improved set of said physical properties”, which renders the claims indefinite, because the claim language does not clearly describe the functions of the ‘properties adjuster’. It is not clear how the ‘properties adjuster’ operates (or performs the function of) backpropagation relative to the ‘neural network’ using a ‘representation of said wave function’; furthermore, it is unclear what is meant by ‘activating’ the wave field modeler (e.g., applying an activation function, using the ‘improved set of said physical properties’ as input, etc.). The claim language fails to clearly describe the relationship between the different elements being claimed; accordingly, it is suggested to amend the claim to clearly define the functions of the ‘properties adjuster’.
Claim 1 further recites the limitations “a medium properties recoverer to output a current improved set of said physical properties once said properties adjuster finishes operation”; and claim 9 recites the limitations “providing a current improved set of said physical properties once said optimizing finishes operation”, which renders the claims indefinite. It is not clear when the ‘operation’ is finished, as there is no indication of any type or form of threshold (e.g., accuracy, sensitivity, etc.), number of iterations/training epochs, error tolerance, etc. that indicates completion of operation of the ‘properties adjuster’. For the purposes of examination, the broadest reasonable interpretation of the language recited in independent claims 1 and 9, including the limitations discussed above, is applied to the claim limitations. Appropriate correction is required.
Claim 2 and claim 10 recite the limitation “wherein said medium is a two or three dimensional body tissue”. There is insufficient antecedent basis for this limitation in the claim. It is not clear what ‘said medium’ is referring to; under one interpretation it may refer to ‘said non-linear medium’ as recited in claim 1 and claim 9, and under another interpretation it may refer to a new ‘medium’. It is suggested to amend the claims to use consistent language, in view of the discussion regarding claim 1 and claim 9 above.
Claim 5 recites the limitation “wherein said neural network of said wave field modeler comprises a non-linear function receiving said improved set of said physical parameters as input and to which two previous wave samples and a pulse sample are provided”; and claim 13 recites the limitation “wherein said neural network comprises a non-linear function receiving said improved set of said physical parameters as input and to which two previous wave samples and a pulse sample are provided”, which renders the claims indefinite. First, the use of “said neural network of said wave field modeler” is unclear and lacks antecedent basis, because the ‘neural network’ recited in claim 5 appears to be comprised within the ‘wave field modeler’; however, the ‘wave field modeler’ of claim 1 is ‘implemented’ as a neural network, which implies that the ‘wave field modeler’ is a neural network. Accordingly, it is uncertain what the claim language is referring to, and whether there are multiple structures (i.e., a ‘wave field modeler’ distinct from the ‘neural network’) or a single structure. Similarly, the use of “a non-linear function” lacks antecedent basis because it is unclear whether this element refers to the ‘non-linear wave function’ in claim 1 and claim 9 or to another new, distinct ‘function’. Furthermore, it is unclear what data elements are being input and to where: under one interpretation, the ‘non-linear function’ receives as input all of the “improved set of said physical parameters”, “two previous wave samples” and “a pulse sample”; under another interpretation, the “two previous wave samples” and “a pulse sample” are provided to the ‘neural network’ or ‘wave field modeler’; or some other distinct combination of inputs is intended. Additionally, it is not clear what distinction (if any) exists between the ‘wave samples’ and the ‘pulse sample’.
It is suggested to amend the claims to clarify what data elements are input to which structures, and the relationship between the ‘wave field modeler’ and the ‘neural network’. For the purposes of examination, the broadest reasonable interpretation of the claim language, including the limitations discussed above, is applied to the limitations.
Claim 6 and claim 14 recite the limitations “restrict an output of said neural network to one of a linear array of elements, a convex array of elements, an elliptic array of elements, and an endo-cavitary array of elements”, which renders the claims indefinite and lacks antecedent basis. First, it is not clear what is being ‘output’ by the neural network, whether the claim limitation refers to either or both of the generated ‘predicted transducer output’ and ‘measured transducer output’ in claim 1 or claim 9, or to a new, distinct ‘output’. Next, it is not clear what the restriction to an ‘array of elements’ means (e.g., a simulated output, a model of a transducer, etc.) or whether it refers to the ‘predicted transducer output’ or ‘measured transducer output’ from claim 1 and claim 9. It is suggested to amend the claim language to clarify these limitations. For the purposes of examination, the broadest reasonable interpretation of the claim language, including the limitations discussed above, is applied to the limitations.
Claim 7 recites the limitation “a plurality of error calculators each to generate an error vector between said predicted transducer output and said measured transducer output for an associated wave field sample; a loss accumulator to accumulate said error vectors;”; and claim 15 recites the limitation “generating an error vector between said predicted transducer output and said measured transducer output for associated wave field samples; accumulating said error vectors;”, which renders the claims indefinite. There is insufficient antecedent basis for these limitations in the claims, due to the inconsistent use of a singular ‘error vector’ and a plurality of ‘error vectors’. It is suggested to amend the language to recite a plurality of ‘error vectors’ consistently throughout the claim. In addition, it is not clear how the ‘wave field sample(s)’ are generated (e.g., from the ‘predicted transducer output’, the ‘measured transducer output’, etc.), whether they refer to the modeled ‘wave field’ from claim 1 and claim 9, or are provided by another resource. For the purposes of examination, the broadest reasonable interpretation of the claim language, including the limitations discussed above, is applied to the limitations. Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6, 8-14, and 16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sanchez et al. (US20210169336A1, 2021-06-10; hereinafter “Sanchez”).
Regarding claim 1, Sanchez teaches a system for recovering physical properties from a non-linear medium (“methods and systems for identifying a tissue characteristic in a subject” [Abstract]; “Provided herein are methods and apparatuses that improve information that may be used to identify characteristics in tissue. Methods and apparatuses described herein may improve machine learning algorithms and applications of such algorithms.” [0004]; “The present disclosure provides computer systems that are programmed to implement methods of the disclosure.” [0322]; [0173-0334], [fig. 1, 6, 8-13, 16A-16D]), the system comprising:
a wave field modeler to model a wave field generated by at least one transmitted pulse as it travels in said non-linear medium and to generate a predicted transducer output from said modeled wave field (“For example, the different signals (e.g., two-photon or three-photon signals) can be used to form a map which may be indicative of different elements of a tissue. In some cases, the map is used to train machine learning based diagnosis algorithms.” [0100]; “an apparatus for generating a depth profile of a tissue of a subject may comprise an optical probe that transmits an excitation light beam from a light source towards a surface of the tissue,” [0173]; “the first and second images can be obtained in vivo. […] The optical scanning pattern can be set or determined by a trained algorithm, and can be modified during use, for example as different features are identified and used to model the data file(s)” [0218]; “The one or more machine learning algorithms may be used to process the data. The data from the second image may be used as a control. For example, the second image can be used in part to develop a model of the appearance of a healthy tissue,” [0243]; “The trained machine learning algorithm may be trained to generate a spatial map of the tissue. The spatial map may be a three-dimensional model of the tissue.” [0317]; An optical probe transmits an excitation light beam into tissue to induce a plurality of multi-photon excitations and receive a plurality of corresponding multi-photon signals, wherein the signals may be used to identify tissue characteristics and map elements of the tissue by applying a trained algorithm to the data [0173-0334], [fig. 1, 6, 8-13, 16A-16D]),
said wave field modeler being implemented as a neural network having a neural network representation of a non-linear wave function of a set of physical properties of said wave field (“Examples of features include, but are not limited to a property; physiology; anatomy; composition; histology; function; treatment; size; geometry; regularity; irregularity; optical property; chemical property; mechanical property or other property;” [0084] “An activation function may be a linear or non-linear function.” [0275]; “Neural networks may be programmed by training them with a sample set (data collected from one or more sensors) and allowing them to modify themselves during (and after) training so as to provide an output such as an output value. A trained algorithm may comprise convolutional neural networks, recurrent neural networks,” [0276]; “A nonlinear regression may be a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of model parameters and depends on one or more independent variables.” [0290]; The trained algorithm may be a recurrent neural network which models observational data using a nonlinear function [0211-0334], [fig. 1, 6, 8-13, 16A-16D]);
a properties adjuster to optimize a loss function between said predicted transducer output and a measured transducer output and to generate an improved set of said physical properties, said properties adjuster operating a backpropagator using said neural network representation of said wave function, said properties adjuster to activate said wave field modeler with said improved set of said physical properties (“Neural networks may be programmed by training them with a sample set (data collected from one or more sensors) and allowing them to modify themselves during (and after) training so as to provide an output such as an output value.” [0276]; “parameters may be trained using input data from a training data set and a gradient descent or backward propagation method” [0277]; “A recurrent neural network may be configured to receive sequential data as an input, such as consecutive data inputs, and a recurrent neural network software module may update an internal state at every time step. A recurrent neural network can use internal state (memory) to process sequences of inputs. […] A recurrent neural network may comprise fully recurrent neural network,” [0284]; A fully recurrent neural network utilizes recurrent connections and may receive sequential data as input, provide the output of a prior time step as input to the subsequent time step, and be trained using backpropagation through time to optimize a loss function between time steps [0211-0334], [fig. 1, 6, 8-13, 16A-16D]); and
a medium properties recoverer to output a current improved set of said physical properties once said properties adjuster finishes operation, wherein said current improved set of said physical properties are said recovered physical properties of said non-linear medium (“the second image can be used in part to develop a model of the appearance of a healthy tissue, which can improve the accuracy of the machine learning algorithm in determining the presence of the tissue characteristic in the first region.” [0243]; “A neural network may comprise an input layer, to which data is presented; one or more internal, and/or “hidden,” layers; and an output layer.” [0274]; “Neural networks may be programmed by training them with a sample set (data collected from one or more sensors) and allowing them to modify themselves during (and after) training so as to provide an output such as an output value. A trained algorithm may comprise convolutional neural networks, recurrent neural networks,” [0276]; “The trained machine learning algorithm may be trained to generate a spatial map of the tissue. The spatial map may be a three-dimensional model of the tissue.” [0317]; The machine learning algorithm is trained to output a model of the tissue including features [0211-0334], [fig. 1, 6, 8-13, 16A-16D]).
Regarding claim 2, Sanchez teaches the system according to claim 1,
Sanchez further teaching wherein said medium is a two or three dimensional body tissue (“A depth profile can provide information at various depths of the sample, for example at various depths of a skin tissue. […] A depth profile may or may not correspond to a planar slice of tissue. A depth profile may correspond to a slice of tissue on a slanted plane. A depth profile may correspond to a tissue region that is not precisely a planar slice (e.g., the slice may have components in all three dimensions).” [0096]; “The image frame may be in any design, shape, or size. Examples of shapes or designs include but are not limited to: […] two-dimensional geometric shapes, multi-dimensional geometric shapes,” [0122]; “an apparatus for generating a depth profile of a tissue of a subject may comprise an optical probe that transmits an excitation light beam from a light source towards a surface of the tissue,” [0173]; [0173-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]).
Regarding claim 3, Sanchez teaches the system according to claim 1
Sanchez further teaching wherein said wave field is one of: an acoustic, an electromagnetic, an elastic, a photo-acoustic, and an acousto-optic wave (“The term “light,” as used herein, generally refers to electromagnetic radiation.” [0089]; “an apparatus for generating a depth profile of a tissue of a subject may comprise an optical probe that transmits an excitation light beam from a light source towards a surface of the tissue,” [0173]; [0173-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]).
Regarding claim 4, Sanchez teaches the system according to claim 1
Sanchez further teaching wherein said at least one transmitted pulse is one of: a plane wave, a focused beam, and a diverging wave (“The term “excitation light beam,” as used herein, generally refers to the focused light beam directed to tissue to create a generated signal. […] An excitation light beam can be a pulsed single beam of light. An excitation beam of light can be a plurality of light beams.” [0105]; [0173-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]).
Regarding claim 5, Sanchez teaches the system according to claim 1
Sanchez further teaching wherein said neural network of said wave field modeler comprises a non-linear function receiving said improved set of said physical parameters as input and to which two previous wave samples and a pulse sample are provided (“An activation function may be a linear or non-linear function.” [0275]; “Neural networks may be programmed by training them with a sample set (data collected from one or more sensors) and allowing them to modify themselves during (and after) training so as to provide an output such as an output value. A trained algorithm may comprise convolutional neural networks, recurrent neural networks,” [0276]; “parameters may be trained using input data from a training data set and a gradient descent or backward propagation method” [0277]; “A recurrent neural network may be configured to receive sequential data as an input, such as consecutive data inputs, and a recurrent neural network software module may update an internal state at every time step. A recurrent neural network can use internal state (memory) to process sequences of inputs. […] A recurrent neural network may comprise fully recurrent neural network, independently recurrent neural network, […], or any combination thereof.” [0284]; “A nonlinear regression may be a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of model parameters and depends on one or more independent variables.” [0290]; Sequential data inputs (i.e., wave samples, pulse samples) are received by the recurrent neural network as input during each time step [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]).
Regarding claim 6, Sanchez teaches the system according to claim 5
Sanchez further teaching wherein said wave field modeler also comprises a restriction operator to restrict an output of said neural network to one of a linear array of elements, a convex array of elements, an elliptic array of elements, and an endo-cavitary array of elements (“Other generated signals may include but are not limited to Optical Coherence Tomography (OCT), single or multi-photon fluorescence/autofluorescence lifetime imaging, polarized light microscopy signals, additional confocal microscopy signals, and ultrasonography signals.” [0106]; “The one or more computer processors may be operatively coupled to the one or more sensors. The one or more sensors may comprise an infrared sensor, optical sensor, microwave sensor, ultrasonic sensor, radio-frequency sensors, magnetic sensor, vibration sensor, acceleration sensor, gyroscopic sensor, tilt sensor, piezoelectric sensor,” [0210]; The ultrasonic sensor, piezoelectric sensor are transducer elements [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]).
Regarding claim 8, Sanchez teaches the system according to claim 1
Sanchez further teaching wherein said neural network is one of: a recurrent neural network and a deep neural network whose layers express the time-dependence of said non-linear wave function (“The trained algorithm may comprise one or more neural networks.” [0274]; “A trained algorithm may comprise convolutional neural networks, recurrent neural networks,” [0276]; [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]).
Regarding claim 9, Sanchez teaches a method for recovering physical properties from a non-linear medium (“A method for generating a dataset comprising a plurality of images of a tissue of a subject,” [clm 76]; “Provided herein are methods and apparatuses that improve information that may be used to identify characteristics in tissue. Methods and apparatuses described herein may improve machine learning algorithms and applications of such algorithms.” [0004]; “The present disclosure provides computer systems that are programmed to implement methods of the disclosure.” [0322]; [0173-0334], [fig. 1, 6, 8-13, 16A-16D]), the method comprising:
modeling a wave field generated by at least one transmitted pulse as it travels in said non-linear medium (“(a) obtaining, via a handheld imaging probe, a first set of images from a first part of said tissue of said subject and a second set of images from a second part of said tissue of said subject,” [clm 76]; “For example, the different signals (e.g., two-photon or three-photon signals) can be used to form a map which may be indicative of different elements of a tissue. In some cases, the map is used to train machine learning based diagnosis algorithms.” [0100]; “an apparatus for generating a depth profile of a tissue of a subject may comprise an optical probe that transmits an excitation light beam from a light source towards a surface of the tissue,” [0173]; “the first and second images can be obtained in vivo. […] The optical scanning pattern can be set or determined by a trained algorithm, and can be modified during use, for example as different features are identified and used to model the data file(s)” [0218]; [0173-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]),
said modeling using a neural network having a neural network representation of a non-linear wave function of a set of physical properties of said wave field (“(i) applying a trained machine learning algorithm to said data” [clm 89]; “Examples of features include, but are not limited to a property; physiology; anatomy; composition; histology; function; treatment; size; geometry; regularity; irregularity; optical property; chemical property; mechanical property or other property;” [0084] “An activation function may be a linear or non-linear function.” [0275]; “Neural networks may be programmed by training them with a sample set (data collected from one or more sensors) and allowing them to modify themselves during (and after) training so as to provide an output such as an output value. A trained algorithm may comprise convolutional neural networks, recurrent neural networks,” [0276]; “A nonlinear regression may be a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of model parameters and depends on one or more independent variables.” [0290]; [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]);
generating a predicted transducer output from said modeled wave field (“(ii) classifying said subject as being positive or negative for said tissue characteristic based on a presence or absence of one or more features indicative of said tissue characteristic” [clm 89]; “The one or more machine learning algorithms may be used to process the data. The data from the second image may be used as a control. For example, the second image can be used in part to develop a model of the appearance of a healthy tissue,” [0243]; “The trained machine learning algorithm may be trained to generate a spatial map of the tissue. The spatial map may be a three-dimensional model of the tissue.” [0317]; [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]);
optimizing a loss function between said predicted transducer output and a measured transducer output to generate an improved set of said physical properties, said generating using backpropagation with said neural network representation of said wave function (“parameters may be trained using input data from a training data set and a gradient descent or backward propagation method” [0277]; “A recurrent neural network may be configured to receive sequential data as an input, such as consecutive data inputs, and a recurrent neural network software module may update an internal state at every time step. A recurrent neural network can use internal state (memory) to process sequences of inputs. […] A recurrent neural network may comprise fully recurrent neural network,” [0284]; [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]);
activating said modeling with said improved set of said physical properties (“(c) training a machine learning algorithm using said data.” [clm 97]; “Neural networks may be programmed by training them with a sample set (data collected from one or more sensors) and allowing them to modify themselves during (and after) training so as to provide an output such as an output value.” [0276]; “A recurrent neural network may be configured to receive sequential data as an input, such as consecutive data inputs, and a recurrent neural network software module may update an internal state at every time step. A recurrent neural network can use internal state (memory) to process sequences of inputs. […] A recurrent neural network may comprise fully recurrent neural network, independently recurrent neural network, […], or any combination thereof.” [0284]; [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]); and
providing a current improved set of said physical properties once said optimizing finishes operation, wherein said current improved set of said physical properties are said recovered physical properties of said non-linear medium (“the second image can be used in part to develop a model of the appearance of a healthy tissue, which can improve the accuracy of the machine learning algorithm in determining the presence of the tissue characteristic in the first region.” [0243]; “A neural network may comprise an input layer, to which data is presented; one or more internal, and/or “hidden,” layers; and an output layer.” [0274]; “Neural networks may be programmed by training them with a sample set (data collected from one or more sensors) and allowing them to modify themselves during (and after) training so as to provide an output such as an output value. A trained algorithm may comprise convolutional neural networks, recurrent neural networks,” [0276]; “The trained machine learning algorithm may be trained to generate a spatial map of the tissue. The spatial map may be a three-dimensional model of the tissue.” [0317]; [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]).
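For purposes of illustration only (and not as part of the claimed invention or the cited references), the iterative recovery loop recited in claim 9 — modeling a wave field from two previous wave samples and a pulse sample, generating a predicted transducer output, and optimizing a loss against a measured output to improve the physical properties — may be sketched as follows. This is a minimal numerical sketch under stated assumptions: all names, grid sizes, and parameters are hypothetical, and a finite-difference gradient with a backtracking line search stands in for backpropagation through a neural-network representation of the wave function.

```python
import numpy as np

def simulate(c, n=64, steps=200, dt=0.01, dx=0.1):
    """Model a 1-D wave field: each time step combines the two previous
    wave samples with a pulse sample (cf. the recurrence of claims 5/13)."""
    u_prev = np.zeros(n)
    u_curr = np.zeros(n)
    trace = []                                    # predicted transducer output
    for t in range(steps):
        lap = np.roll(u_curr, 1) - 2 * u_curr + np.roll(u_curr, -1)
        u_next = 2 * u_curr - u_prev + (c * dt / dx) ** 2 * lap
        u_next[n // 2] += np.exp(-((t * dt - 0.3) / 0.1) ** 2)  # pulse sample
        trace.append(u_next[n // 2 + 4])          # sample at one "element"
        u_prev, u_curr = u_curr, u_next
    return np.array(trace)

def recover(measured, c0=0.9, iters=40, eps=1e-4):
    """Adjust the physical property (wavespeed c) to reduce the loss
    between predicted and measured transducer output."""
    loss = lambda cc: float(np.mean((simulate(cc) - measured) ** 2))
    c, step = c0, 0.05
    for _ in range(iters):
        g = (loss(c + eps) - loss(c - eps)) / (2 * eps)  # stand-in gradient
        if g == 0.0:
            break
        while step > 1e-6 and loss(c - step * np.sign(g)) >= loss(c):
            step *= 0.5                           # backtracking line search
        c -= step * np.sign(g)
    return c

measured = simulate(1.0)      # synthetic "measured" data, true wavespeed 1.0
c_rec = recover(measured)     # recovered physical property
```

The time-stepping update can be viewed as one unrolled layer of a recurrent network; a learned model would replace the finite-difference gradient with automatic differentiation.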
Regarding claim 10, Sanchez teaches the method according to claim 9
Sanchez further teaching wherein said medium is a two or three dimensional body tissue (“A method for generating a dataset comprising a plurality of images of a tissue of a subject,” [clm 76]; “A depth profile can provide information at various depths of the sample, for example at various depths of a skin tissue. […] A depth profile may or may not correspond to a planar slice of tissue. A depth profile may correspond to a slice of tissue on a slanted plane. A depth profile may correspond to a tissue region that is not precisely a planar slice (e.g., the slice may have components in all three dimensions).” [0096]; “The image frame may be in any design, shape, or size. Examples of shapes or designs include but are not limited to: […] two-dimensional geometric shapes, multi-dimensional geometric shapes,” [0122]; [0173-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 9 rejection]).
Regarding claim 11, Sanchez teaches the method according to claim 9
Sanchez further teaching wherein said wave field is one of: an acoustic, an electromagnetic, an elastic, a photo-acoustic, and an acousto-optic wave (“The term “light,” as used herein, generally refers to electromagnetic radiation.” [0089]; “an apparatus for generating a depth profile of a tissue of a subject may comprise an optical probe that transmits an excitation light beam from a light source towards a surface of the tissue,” [0173]; [0173-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 9 rejection]).
Regarding claim 12, Sanchez teaches the method according to claim 9
Sanchez further teaching wherein said at least one transmitted pulse is one of: a plane wave, a focused beam, and a diverging wave (“The term “excitation light beam,” as used herein, generally refers to the focused light beam directed to tissue to create a generated signal. […] An excitation light beam can be a pulsed single beam of light. An excitation beam of light can be a plurality of light beams.” [0105]; [0173-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 9 rejection]).
Regarding claim 13, Sanchez teaches the method according to claim 9
Sanchez further teaching wherein said neural network comprises a non-linear function receiving said improved set of said physical parameters as input and to which two previous wave samples and a pulse sample are provided (“An activation function may be a linear or non-linear function.” [0275]; “Neural networks may be programmed by training them with a sample set (data collected from one or more sensors) and allowing them to modify themselves during (and after) training so as to provide an output such as an output value. A trained algorithm may comprise convolutional neural networks, recurrent neural networks,” [0276]; “parameters may be trained using input data from a training data set and a gradient descent or backward propagation method” [0277]; “A recurrent neural network may be configured to receive sequential data as an input, such as consecutive data inputs, and a recurrent neural network software module may update an internal state at every time step. A recurrent neural network can use internal state (memory) to process sequences of inputs. […] A recurrent neural network may comprise fully recurrent neural network, independently recurrent neural network, […], or any combination thereof.” [0284]; “A nonlinear regression may be a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of model parameters and depends on one or more independent variables.” [0290]; [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 5 rejection]).
Regarding claim 14, Sanchez teaches the method according to claim 13
Sanchez further teaching wherein said modeling also comprises restricting an output of said neural network to one of a linear array of elements, a convex array of elements, an elliptic array of elements, and an endo-cavitary array of elements (“Other generated signals may include but are not limited to Optical Coherence Tomography (OCT), single or multi-photon fluorescence/autofluorescence lifetime imaging, polarized light microscopy signals, additional confocal microscopy signals, and ultrasonography signals.” [0106]; “The one or more computer processors may be operatively coupled to the one or more sensors. The one or more sensors may comprise an infrared sensor, optical sensor, microwave sensor, ultrasonic sensor, radio-frequency sensors, magnetic sensor, vibration sensor, acceleration sensor, gyroscopic sensor, tilt sensor, piezoelectric sensor,” [0210]; [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 6 rejection]).
Regarding claim 16, Sanchez teaches the method according to claim 9
Sanchez further teaching wherein said neural network is one of: a recurrent neural network and a deep neural network whose layers express the time-dependence of said non-linear wave function (“(i) applying a trained machine learning algorithm to said data” [clm 89]; “(c) training a machine learning algorithm using said data.” [clm 97]; “The trained algorithm may comprise one or more neural networks.” [0274]; “A trained algorithm may comprise convolutional neural networks, recurrent neural networks,” [0276]; [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 9 rejection]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez as applied to claims 1 and 9 above, and further in view of Lucka et al. (Felix Lucka et al 2022 Inverse Problems 38 025008, 2021-12-30; hereinafter “Lucka”) as provided by Applicant.
Regarding claim 7, Sanchez teaches the system according to claim 1
Sanchez further teaching the properties adjuster [see claim 1 rejection]; and
a non-linear gradient calculator, implemented by said backpropagator, to exploit said non-linear wave function to backpropagate gradients through said neural network, thereby to return gradients of said wave field with respect to said set of physical properties (“Neural networks may be programmed by training them with a sample set (data collected from one or more sensors) and allowing them to modify themselves during (and after) training so as to provide an output such as an output value.” [0276]; “parameters may be trained using input data from a training data set and a gradient descent or backward propagation method” [0277]; “A recurrent neural network may be configured to receive sequential data as an input, such as consecutive data inputs, and a recurrent neural network software module may update an internal state at every time step. A recurrent neural network can use internal state (memory) to process sequences of inputs. […] A recurrent neural network may comprise fully recurrent neural network,” [0284]; [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 1 rejection]);
but Sanchez fails to teach an error vector.
However, in the same field of endeavor, Lucka teaches a system for recovering physical properties from a non-linear medium (“The key focus and contribution of our work is to develop and demonstrate a comprehensive computational strategy that achieves accurate high-resolution, 3D FWI for breast UST with a hemispherical array using only moderate computational resources” [1.3 Paper Scope and Structure, p.4]; [abst], [1. Introduction, p.2-4]);
Lucka further teaching a plurality of error calculators each to generate an error vector between said predicted transducer output and said measured transducer output for an associated wave field sample (“In FWI, we assume that for a given u, we can solve (7) to simulate data, i.e., f_i(u) := M_i A(u)^{-1} s_i. Then, we try to optimize u such that the discrepancies between simulated and measured data become small: […] where D(f, g) is a loss function (see [98, 25] for a discussion of suitable loss functions), and U is a set of constraints on u, e.g., bound constraints” [2.3. Full Waveform Inversion, p.5]; The loss function D(f, g) finds the error between simulated and measured data (i.e., error vectors) for each of the n_s temporal sequences [2. Full Waveform Inversion for Ultrasound Tomography, 3. Improving Memory Footprint, Efficiency and Convergence, p.4-9]);
a loss accumulator to accumulate said error vectors (“First-order optimization schemes solve (8) using only the gradient ∇J(u), which is given by the sum over terms of the form ∇_u D(M A^{-1}(u)s, f^d)” [2.3. Full Waveform Inversion, p.5]; The sum over these terms accumulates the error for each temporal sequence [2. Full Waveform Inversion for Ultrasound Tomography, 3. Improving Memory Footprint, Efficiency and Convergence, p.4-9]); and
a non-linear gradient calculator, implemented by said backpropagator, to exploit said non-linear wave function to backpropagate gradients through said neural network, thereby to return gradients of said wave field with respect to said set of physical properties (“First-order optimization schemes solve (8) using only the gradient ∇J(u), which is given by the sum over terms of the form ∇_u D(M A^{-1}(u)s, f^d)” [2.3. Full Waveform Inversion, p.5]; “TR was developed for focusing ultrasound waves through inhomogenous media [18, 34, 97] and is used for image reconstruction in photoacoustic tomography (PAT) [9, 57]. Here, we use it as a numerical trick to approximately replay the forward field p(x, t) backwards in time, in parallel to solving the adjoint wave equation. […] The integral in (10) can directly be accumulated during the parallel time stepping scheme solving the both TR and adjoint wave equations: the state variables in each computation are exactly what is needed to compute the contribution to the integral.” [3.1. Memory-Efficient Gradient Computation Using Time Reversal, p.7]; [2. Full Waveform Inversion for Ultrasound Tomography, 3. Improving Memory Footprint, Efficiency and Convergence, p.4-9]).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to combine the system for recovering physical properties from a non-linear medium taught by Sanchez with the error vectors taught by Lucka. Evaluation of tissue characteristics can be slow and inefficient due to the biopsy process used to generate the tissue samples; furthermore, biopsies can be invasive, thus limiting the number and/or size of excised tissue samples taken from a subject. There is a need for improved methods for identifying and detecting tissue characteristics. The combined system may improve information that may be used to identify characteristics in tissue, as well as may improve machine learning algorithms and applications of such algorithms (Sanchez [0003-0004]). Even when using the adjoint state method, solving time-domain full waveform inversion for such scenarios is computationally challenging: even an efficient k-space pseudospectral method takes at least 10 minutes to solve a single wave simulation on a recent GPU. As J(u) involves the sum over n_s sources, computing ∇J(u) requires n_s parallel gradient computations, each involving two wave simulations. On a single GPU, this would take 14 days. In addition to these difficulties, first-order optimization methods for FWI suffer from slow convergence and may get stuck in suboptimal local minima of the non-convex function J(u). The described techniques may circumvent these problems and combine them into a comprehensive computational strategy to achieve high-resolution 3D TD-FWI for ultrasonic breast imaging (Lucka [2.4. Challenges of High Resolution 3D Time Domain Full Waveform Inversion, p.6]).
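For purposes of illustration only (and not as part of either cited reference), the per-source error vectors and loss accumulator attributed to Lucka above may be sketched as follows. The function `simulate` is a hypothetical stand-in for the forward model M_i A(u)^{-1} s_i, and all names and values are illustrative assumptions, not drawn from Sanchez or Lucka.

```python
import numpy as np

def fwi_loss(simulate, sources, measured, u):
    """Accumulate J(u) = sum_i D(f_i(u), d_i) over transmit events,
    keeping each per-source error vector f_i(u) - d_i."""
    errors, J = [], 0.0
    for s_i, d_i in zip(sources, measured):
        r = simulate(u, s_i) - d_i        # error vector for one source
        errors.append(r)
        J += 0.5 * float(r @ r)           # squared-error loss accumulator
    return J, errors

# Toy forward model: the medium parameter u simply scales each source trace.
toy = lambda u, s: u * s
srcs = [np.array([1.0, 2.0]), np.array([0.5, -1.0])]
data = [toy(2.0, s) for s in srcs]        # synthetic measured data, true u = 2
J_true, _ = fwi_loss(toy, srcs, data, 2.0)
J_off, errs = fwi_loss(toy, srcs, data, 1.0)
```

In a full FWI scheme the accumulated loss would then drive a gradient step on u, with one error vector retained per transmit event.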
Regarding claim 15, Sanchez teaches the method according to claim 9
Sanchez further teaching exploiting said non-linear wave function to backpropagate gradients through said neural network, thereby to return gradients of said wave field with respect to said set of physical properties (“Neural networks may be programmed by training them with a sample set (data collected from one or more sensors) and allowing them to modify themselves during (and after) training so as to provide an output such as an output value.” [0276]; “parameters may be trained using input data from a training data set and a gradient descent or backward propagation method” [0277]; “A recurrent neural network may be configured to receive sequential data as an input, such as consecutive data inputs, and a recurrent neural network software module may update an internal state at every time step. A recurrent neural network can use internal state (memory) to process sequences of inputs. […] A recurrent neural network may comprise fully recurrent neural network,” [0284]; [0211-0334], [fig. 1, 6, 8-13, 16A-16D], [see claim 7 rejection]);
but Sanchez fails to teach an error vector.
However, in the same field of endeavor, Lucka teaches a method for recovering physical properties from a non-linear medium (“The key focus and contribution of our work is to develop and demonstrate a comprehensive computational strategy that achieves accurate high-resolution, 3D FWI for breast UST with a hemispherical array using only moderate computational resources” [1.3 Paper Scope and Structure, p.4]; [abst], [1. Introduction, p.2-4]);
Lucka further teaching generating an error vector between said predicted transducer output and said measured transducer output for associated wave field samples (“In FWI, we assume that for a given u, we can solve (7) to simulate data, i.e., fi(u) := MiA(u)-1si. Then, we try to optimize u such that the discrepancies between simulated and measured data become small: […] where D(f, g) is a loss function (see [98, 25] for a discussion of suitable loss functions), and U is a set of constraints on u, e.g., bound constraints” [2.3. Full Waveform Inversion, p.5]; [2. Full Waveform Inversion for Ultrasound Tomography, 3. Improving Memory Footprint, Efficiency and Convergence, p.4-9], [see claim 7 rejection]);
accumulating said error vectors (“First-order optimization schemes solve (8) using only the gradient ∇J(u), which is given by the sum over terms of the form ∇_u D(M A^{-1}(u)s, f^d)” [2.3. Full Waveform Inversion, p.5]; The sum over these terms accumulates the error for each temporal sequence [2. Full Waveform Inversion for Ultrasound Tomography, 3. Improving Memory Footprint, Efficiency and Convergence, p.4-9], [see claim 7 rejection]); and
exploiting said non-linear wave function to backpropagate gradients through said neural network, thereby to return gradients of said wave field with respect to said set of physical properties (“First-order optimization schemes solve (8) using only the gradient ∇J(u), which is given by the sum over terms of the form ∇_u D(M A^{-1}(u)s, f^d)” [2.3. Full Waveform Inversion, p.5]; “TR was developed for focusing ultrasound waves through inhomogenous media [18, 34, 97] and is used for image reconstruction in photoacoustic tomography (PAT) [9, 57]. Here, we use it as a numerical trick to approximately replay the forward field p(x, t) backwards in time, in parallel to solving the adjoint wave equation. […] The integral in (10) can directly be accumulated during the parallel time stepping scheme solving the both TR and adjoint wave equations: the state variables in each computation are exactly what is needed to compute the contribution to the integral.” [3.1. Memory-Efficient Gradient Computation Using Time Reversal, p.7]; [2. Full Waveform Inversion for Ultrasound Tomography, 3. Improving Memory Footprint, Efficiency and Convergence, p.4-9], [see claim 7 rejection]).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to combine the method for recovering physical properties from a non-linear medium taught by Sanchez with the error vectors taught by Lucka. Evaluation of tissue characteristics can be slow and inefficient due to the biopsy process used to generate the tissue samples; furthermore, biopsies can be invasive, thus limiting the number and/or size of excised tissue samples taken from a subject. There is a need for improved methods for identifying and detecting tissue characteristics. The combined system may improve information that may be used to identify characteristics in tissue, as well as may improve machine learning algorithms and applications of such algorithms (Sanchez [0003-0004]). Even when using the adjoint state method, solving time-domain full waveform inversion for such scenarios is computationally challenging: even an efficient k-space pseudospectral method takes at least 10 minutes to solve a single wave simulation on a recent GPU. As J(u) involves the sum over n_s sources, computing ∇J(u) requires n_s parallel gradient computations, each involving two wave simulations. On a single GPU, this would take 14 days. In addition to these difficulties, first-order optimization methods for FWI suffer from slow convergence and may get stuck in suboptimal local minima of the non-convex function J(u). The described techniques may circumvent these problems and combine them into a comprehensive computational strategy to achieve high-resolution 3D TD-FWI for ultrasonic breast imaging (Lucka [2.4. Challenges of High Resolution 3D Time Domain Full Waveform Inversion, p.6]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Sun et al. (WO2021116800A1, 2021-06-17) teaches a system and method for applying a neural network to an optimization problem, and more particularly, to using a neural network for providing a trained misfit function that estimates a distance between measured data and calculated data [0002].
Chiang et al. (US20210015456A1, 2021-01-21) teaches systems and methods of medical ultrasound imaging which can employ a plurality of machine learning applications including, for example, neural network for processing ultrasound image data and quantitative data generated by the system [0005].
Matsuura et al. (US20200311878A1, 2020-10-01) teaches a method and apparatus to perform medical imaging in which feature-aware reconstruction is performed using a neural network [abst].
Pandit et al. (US20180279965A1, 2018-10-04) teaches methods, apparatus and systems for determining one or more physiological parameters, such as for ambulatory blood pressure and other vital sign monitoring [abst], further teaching mapping the measured differential pulse arrival time to a corresponding blood pressure determined by the calibration data, wherein the mapping may be performed using an artificial neural network mapping [0023].
Any inquiry concerning this communication or earlier communications from the examiner should be directed to James F. McDonald III whose telephone number is (571)272-7296. The examiner can normally be reached M-F; 8AM-6PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Koharski, can be reached at 571-272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JAMES FRANKLIN MCDONALD III
Examiner
Art Unit 3797
/CHRISTOPHER KOHARSKI/Supervisory Patent Examiner, Art Unit 3797