DETAILED ACTION
This office action is responsive to the amendment filed November 4, 2025. The application contains claims 1 and 3-14, all examined and rejected.
Please note the change in examiner in this case.
This action contains at least one new ground of rejection not necessitated by applicant’s amendment or by the filing of an IDS. Therefore the action is made non-final.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 3-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
With regard to Claim 1,
Step 2A, Prong 1
This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim.
Claim 1 recites:
A data processing device, comprising:
a processing circuit that includes a complex number neural network with an activation function by which an output varies according to an argument of a complex number that is input,
wherein the activation function is a function that is expressed by a product of a gain control function that extracts a signal component in a given angular direction and the complex number that is input,
wherein the gain control function is zero for an angle input to the gain control function that is not within a given angular range.
Under their broadest reasonable interpretation, the limitations emphasized above are directed to mathematical concepts: the claims recite mathematical relationships and/or formulas.
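For illustration only, the recited relationship — an activation function expressed as the product of a gain control function and the complex input, with the gain zero for angles outside a given angular range — can be sketched as follows. The particular gain function, center angle, and half-width below are hypothetical choices, not limitations taken from the record:

```python
import cmath
import math

def gain(theta, center=0.0, half_width=math.pi / 4):
    # Hypothetical gain control function: passes the component near `center`
    # and is zero for angle inputs outside the given angular range.
    return math.cos(theta - center) if abs(theta - center) <= half_width else 0.0

def activation(z):
    # Claimed form: the product of the gain control function (evaluated at
    # the argument of z) and the complex number z that is input, so the
    # output varies according to the argument of z.
    return gain(cmath.phase(z)) * z
```

An input along the positive real axis passes through scaled by the gain, while an input whose argument falls outside the range (e.g., on the negative real axis) is mapped to zero.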
Step 2A, Prong 1 (Yes).
Step 2A, Prong 2
This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).
The additional element in this claim is “A data processing device comprising a processing circuit.” This element is recited at a high level of generality and is therefore a generic computer component performing generic computer functions. Thus, this is a mere instruction to apply the exception using a generic computer component. See MPEP 2106.05(f).
Even when viewed in combination with the claim as a whole, the additional element does not integrate the recited judicial exception into a practical application.
Step 2A, Prong 2 (No).
Step 2B
This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
As explained with respect to Step 2A, the only additional element is “A data processing device comprising a processing circuit” which at best is mere instructions to apply the abstract ideas and cannot provide an inventive concept, even when considered in combination. See MPEP 2106.05(f).
Step 2B (No).
Claim 1 is ineligible.
With regard to Independent Claim 14,
Claim 14 is similar in scope to Claim 1 and is rejected under a similar rationale and thus is also ineligible.
Dependent Claims:
Claims 3-12: These claims only recite further abstract ideas (mathematical concepts) or elaborate on the existing abstract ideas and thus are ineligible. Note that for claim 12, the “training” step uses the activation function which is a mathematical concept. See e.g., Example 47, claim 2.
Claim 13: This claim merely indicates a field of use or technological environment in which the judicial exception is performed and thus fails to add an inventive concept to the claim. See MPEP 2106.05(h). This claim is ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-6, 8, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Daval (US 2020/0042873 A1) in view of Martinez-Canales (US 2020/0117993 A1, effective filing date May 31, 2018) (hereinafter “Martinez”).
With regard to Independent Claim 1,
Daval teaches a processing circuit that includes a complex number neural network with an activation function by which an output varies according to an argument of a complex number that is input. (in Daval [0012]: The complex valued neural network [a complex number neural network] is defined with a Cardioid or kernel activation function.; in Daval [0065]: The rotated Cardioid, bivariate kernel, and polar kernel activation functions in relationship in the complex plane [according to an argument of a complex number that is input]; in Daval [0063]: The output is for spatially distributed locations, such as outputting T1 and T2 [output varies] values for real 2D or 3D locations based on MR measurements for the same or different locations.; in Daval [0064]: The machine learning model is trained to learn the correspondence between the input and the output where one or both of the input or output have complex values….)
wherein the activation function is a function that is expressed by a product of a gain control function that extracts a signal component in a given angular direction and a complex number that is input. (in Daval [0054]: The Cardioid function [gain control function] is: f(z)=½(1+cos(∠z))z [expressed by a product], where z is the input value [that is input]. The Cardioid function may be considered as a smoother version of a complex ReLU (CReLU), where the real and imaginary parts are separately handled as real values. This Cardioid function has a fixed orientation toward the real axis. To include a complex parameter, the trainable Cardioid function may orient differently in the 2D complex plane for each neuron by introducing a bias term in the phase as: f(z)=½(1+cos(∠z+∠b))z where ∠b is the rotation learned through training. The behavior concerning the phase of the complex numbers as input is to be learned where the magnitude [extracts a signal component] is modulated according to a specific angle [in a given angular direction].; in Daval [0058]: Z and d are both complex numbers [a complex number] represented as 2D vectors in the complex plane or 2D grid.)
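Purely as an illustrative sketch (not part of the record), Daval’s cited Cardioid function f(z)=½(1+cos(∠z+∠b))z from [0054] can be written out directly; the bias angle of zero below stands in for the learned rotation term and is a hypothetical value:

```python
import cmath
import math

def cardioid(z, b_angle=0.0):
    # Daval [0054]: f(z) = 1/2 * (1 + cos(arg(z) + arg(b))) * z,
    # where b_angle stands in for the learned rotation term arg(b).
    return 0.5 * (1 + math.cos(cmath.phase(z) + b_angle)) * z
```

With b_angle = 0 the function passes inputs along the positive real axis unchanged and suppresses inputs near the negative real axis, i.e., the magnitude is modulated according to the angle of the input.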
Daval does not explicitly disclose “wherein the gain control function is zero for an angle input to the gain control function that is not within a given angular range.”
However, the examiner notes that Daval does disclose using ReLU or complex ReLU for the gain control function; see, e.g., [0008]. These functions are well known in the art and typically output zero when the input is zero or less.
For example, Martinez discloses a function [with output] zero for an angle input to the function that is not within a given angular range. See, e.g., [0170], discussing that so long as each of the real and imaginary components is a positive real number the output is the input, but otherwise (not within the given range) it is zero. The examiner notes that “angular range” is understood in the art simply as the angle formed when the real and imaginary parts are plotted in the complex plane. Thus, here the “given angular range” is the quadrant where both the real and imaginary components are positive.
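For illustration only, the behavior Martinez is cited for — output equals the input when both the real and imaginary components are positive, zero otherwise, i.e., zero outside the first-quadrant angular range — can be sketched as follows (the function name is the examiner’s, hypothetical):

```python
def quadrant_gate(z):
    # Per the cited description: pass the input through only when both the
    # real and imaginary components are positive, i.e. the argument of z
    # lies in (0, pi/2), the "given angular range"; otherwise output zero.
    return z if z.real > 0 and z.imag > 0 else 0j
```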
It would have been obvious to a person of ordinary skill in the art before the effective filing date, having both Daval and Martinez before them, to modify Daval’s gain control function so that the output is zero for angles input to the function that are not within a preferred range, as discussed by Martinez. One would be motivated to do so to improve efficiency by focusing only on relevant signals to pass through.
Regarding claim 3, Daval teaches all the limitations of claim 1 as mentioned above.
Daval further teaches:
wherein the activation function is a function that extracts the signal component within a given angular range from the given angular direction. (in Daval [0054]: This Cardioid function has a fixed orientation toward the real axis. To include a complex parameter, the trainable Cardioid function may orient differently in the 2D complex plane for each neuron by introducing a bias term in the phase as: f(z)=½(1+cos(∠z+∠b))z where ∠b [within a given angular range] is the rotation learned through training. The behavior concerning the phase of the complex numbers as input is to be learned where the magnitude [extracts a signal component] is modulated according to a specific angle [from the given angular direction].)
Regarding claim 4, Daval teaches all the limitations of claim 1 as mentioned above.
Daval further teaches:
wherein the activation function is a function corresponding to an operation of rotation on an origin, an operation of taking a real part of the complex number, and an operation of rotation in a direction opposite to that of the operation of rotation. (in Daval [0053]: For the Cardioid activation function, the learnable parameters are a rotation in the grid [an operation of rotation on an origin]. The rotation of the activation function in the complex value space relates the two components (e.g., X-axis real component [an operation of taking a real part] to the y-axis imaginary component [of a complex number]).; [0067]: For the rotated Cardioid, the rotation in the complex plane is reflected [an operation of rotation in a direction opposite to that of the operation of rotation] by the change in angle of the dashed line as compared to the Cardioid activation function without the learned angle (i.e., B).).
Regarding claim 5, Daval teaches all the limitations of claim 1 as mentioned above.
Daval further teaches:
wherein the processing circuit is further configured to apply the activation function while changing a parameter contained in the activation function. (in Daval [0050]: The model for machine learning is defined to include one or more learnable complex-valued activation functions [configured to apply the activation function]. The complex-valued activation functions each include [contained in the activation function] one or plural learnable parameters [changing a parameter] for a relationship between real and imaginary components or between magnitude and phase components.).
Regarding claim 6, Daval further teaches:
wherein the processing circuit is further configured to apply the activation function while changing the parameter to different nodes. (in Daval [0050]: Each or some of the nodes are defined with an activation function [configured to apply the activation function]. The same or different [while changing the parameter] activation functions are provided for the different nodes [to different nodes].)
Regarding claim 8, Daval teaches all the limitations of claim 5 as mentioned above.
Daval further teaches:
wherein the processing circuit is further configured to apply the activation function while changing the parameter in each layer of the complex number neural network. (in Daval [0029]: The non-linearities are extended for complex values either by adapting them from the real domain to the complex domain or by adding customizable parameters in their definition. Learnable parameters are included in the definition of the non-linearities. “Learning,” “learned,” or “learnable” terms refer to the process of backpropagation used to train the neural networks or another machine learned network. The shape of the different non-linearities [the parameter] for each layer [in each layer of the complex number neural network] or neuron is learned [changing] from the complex-value data.)
Regarding claim 12, Daval teaches all the limitations of claim 1 as mentioned above.
Daval further teaches:
wherein the processing circuit is further configured to optimize a parameter relating to the activation function, (in Daval [0062]: During the optimization [optimizing], the different distinguishing values of learnable parameters [a parameter] are learned, such as learning the angle for a Cardioid activation function [relating to the activation function] of a given node and/or the shifts or variance for a kernel activation function of a given node.)
and perform training using the activation function based on the optimized parameter and generate a trained model. (in Daval [0063]: Once trained [performing training], the model may be applied to generate the output from new inputs.; in [0064]: The many samples in the training data are used to learn. The machine learning model is trained [generate a trained machine learning model] to learn the correspondence between the input and the output where one or both of the input or output have complex values.)
Regarding claim 13, Daval teaches all the limitations of claim 1 as mentioned above.
Daval further teaches:
wherein the processing circuit is further configured to apply the complex number neural network to magnetic resonance data or ultrasound data. (in Daval [0006]: For example, the complex-valued neural network [to apply to complex number neural network] was trained for outputting values of multiple parameters for magnetic resonance [to magnetic resonance data] fingerprinting, and the displayed image is for one of the parameters where another image is displayed for another of the parameters.)
Regarding claim 14, the claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons, using similar teachings and rationale.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Daval in view of Martinez further in view of Proctor et al. (US 2019/0080233 A1).
Regarding claim 7, Daval teaches all the limitations of claim 5 as mentioned above.
Daval further teaches wherein the processing circuit is further configured to apply the activation function (in Daval [0055]: The learnable parameter for the relationship between complex components may be separately learned for each activation function or node [apply the activation function].)
However, Daval does not appear to explicitly teach while changing the parameter to the same node.
However, Proctor teaches while changing the parameter to a same node. (in Proctor [0061]: While the above method 600 is implemented in a multi-node system, it may also be possible to adjust the above method to a single-node [to the same node] system in which a shared memory is used to store and update the gradients of the parameters [changing the parameters] values of layers.).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Daval and Proctor before them, to include Proctor’s adjustable parameters within a single node in Daval’s complex neural network that uses activation functions. One would have been motivated to make such a combination in order to improve efficiency of the model by synchronizing parameters as taught by Proctor ([0016]).
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Daval in view of Martinez further in view of Takeshima (US 2020/0003858 A1).
Regarding claim 9, Daval teaches all the limitations of claim 5 as mentioned above.
Daval further teaches wherein the parameter is an amount corresponding to the angle (in Daval [0053]: the learnable parameters [wherein the parameter] are a rotation in the grid. The rotation of the activation function in the complex value space relates the two components (e.g., X-axis real component to the y-axis imaginary component). The rotation is an angle [is an amount corresponding to an angle] to be machine learned as a bias term for rotation in phase of the Cardioid function.)
However, Daval does not appear to explicitly teach and the processing circuit is further configured to change the parameter to an integer multiple of a first angle.
However, Takeshima teaches and the processing circuit is configured to change the parameter to an integer multiple of a first angle. (in Takeshima [0055]: The acquisition angle of each candidate trajectory is set to a multiple [to a multiple of a first angle] of the basic angle. In other words, assuming that the acquisition angle of the first data acquisition trajectory is 0 degrees, setting would be the acquisition angle of the second candidate trajectory = basic angle, the acquisition angle of the third candidate trajectory = “basic angle×2”, the acquisition angle of the fourth candidate trajectory = “basic angle×3”, ... and so forth.; in [0112]: when the acquisition condition of the target imaging adopts FSE, the value of the element 84 corresponding to FSE is set to “1”, the value of the element 84 corresponding to FE is set to “0” [integer], and the value of the element 84 corresponding to EPI is set to “0”.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Daval and Takeshima before them, to include Takeshima’s use of integer multiples in Daval’s complex neural network that uses activation functions. One would have been motivated to make such a combination in order to improve the model by allowing the basic angle to be substantially equal to the golden angle as taught by Takeshima ([0057]).
Regarding claim 10, Daval in view of Takeshima teach all the limitations of claim 9 as mentioned above.
Takeshima further teaches wherein the first angle is a value obtained by dividing 360 degrees or 180 degrees by a golden ratio. (in Takeshima [0057]: The acquisition angle [wherein the first angle is a value obtained] may be set to a value obtained by dividing 360 degrees [by dividing 360 degrees] by the number of the elements. For example, when the number of the elements is 1000, the basic angle may be set to 360/1000 degrees=0.36 degrees. Also, the basic angle may be set to a multiple of 360/1000 degrees. For example, if the multiple is 309, the basic angle will be set to 360/1000 degrees×309=111.24 degrees. This allows the basic angle to be substantially equal to the golden angle [by a golden ratio].)
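As an arithmetic check only, Takeshima’s 111.24-degree basic angle is within roughly 0.01 degrees of 180 degrees divided by the golden ratio, consistent with the claim 10 mapping; the computation below is the examiner’s illustration, not part of the cited reference:

```python
import math

# Takeshima [0057]: basic angle = (360/1000) * 309 = 111.24 degrees
basic_angle = 360 / 1000 * 309

# Claim 10 alternative: 180 degrees divided by the golden ratio phi
phi = (1 + math.sqrt(5)) / 2   # ~1.6180339887
golden_angle = 180 / phi       # ~111.246 degrees, close to basic_angle
```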
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Daval in view of Martinez further in view of Takeshima further in view of Clark (US 2014/0161520 A1).
Regarding claim 11, Daval in view of Takeshima teach all the limitations of claim 10 as mentioned above.
However, Daval in view of Takeshima do not appear to explicitly teach wherein a number of nodes of each layer of the complex number neural network is a Fibonacci value.
However, Clark teaches wherein the number of nodes of each layer of the complex number neural network is a Fibonacci value. (in [0024]: The relationship between the levels of branching and the number of nodes [wherein the number of nodes] at each level [of each layer of the neural network] may follow the Fibonacci sequence [is a Fibonacci value].)
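Illustratively, node counts per level following the Fibonacci sequence, as in the relationship Clark [0024] describes, can be generated as follows (the layer count and starting values are the examiner’s hypothetical choices, not taken from Clark):

```python
def fibonacci_layer_sizes(n_layers):
    # Hypothetical helper: the number of nodes at each level follows
    # the Fibonacci sequence (1, 1, 2, 3, 5, 8, ...).
    sizes, a, b = [], 1, 1
    for _ in range(n_layers):
        sizes.append(a)
        a, b = b, a + b
    return sizes
```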
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Daval, Takeshima, and Clark before them, to include Clark’s use of the Fibonacci sequence in Daval and Takeshima’s complex neural network that uses activation functions related to an angle. One would have been motivated to make such a combination in order to improve the model by allowing all branches to lie in the same plane as taught by Clark ([0025]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 2022/0047201 A1 to Attia et al. – Similar to Martinez in that it discloses zeroing functions when the input is not in a desired range.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATT ELL whose telephone number is (571)270-3264. The examiner can normally be reached 9-5, M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dave Wiley can be reached at 571-272-4150. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW ELL/ Supervisory Patent Examiner, Art Unit 2141