DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“a selecting unit configured to …” and
“an inference processing unit configured to …” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The specification at paragraph [0107] recites that the controlling unit includes the learned model selecting unit and the inference processing unit, and paragraph [0064] recites the structure of the controlling unit as a general computer including a processor.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 7-8, and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Masuda (US 20210365722 A1; hereafter referred to as Masuda) in view of Tsukagoshi et al. (US 20230110665 A1; hereafter referred to as Tsukagoshi).
Regarding Claim 1, Masuda teaches:
An image processing apparatus configured to apply image processing to a moving image including a plurality of frames of radiation images (Masuda, [0057] “the acquiring unit 131 acquires an image group made up of a plurality of image types obtained by imaging a subject by different modalities”; Masuda, [0059] “image group made up of a combination of a plurality of images obtained by difference sequences of a singular modality”), the image processing apparatus comprising:
a selecting unit configured to select a learned model used for the image processing of a frame to be processed from among a plurality of learned models (Masuda, [0018] “The information processing device 130 has an acquiring unit 131, a selecting unit 132”; Masuda, [0021] “the selecting unit 132 then selects, out of the inference models that satisfy the input conditions of the acquired image group…The selecting unit 132 corresponds to selecting means that selects at least one inference model from a plurality of inference models, on the basis of at least one imaging condition out of the plurality of imaging conditions”); and
an inference processing unit configured to perform inference processing using the selected learned model in the image processing of the frame to be processed (Masuda, [0022] “The selecting unit 132 outputs the selected inference model and the image group to the inference unit 133”; Masuda, [0025] “The inference unit 133 applies the selected inference model to the new image group made up of images matching the input conditions of the inference model selected by the selecting unit 132, and performs detection of a brain tumor that is the lesion in the images”).
However, Masuda does not explicitly teach:
which differ in the number of frames to be input, based on the number of frames which have been obtained;
In the same field of endeavor, Tsukagoshi teaches:
which differ in the number of frames to be input, based on the number of frames which have been obtained (Tsukagoshi, [0009] “a selection unit configured to select, based on a high-definition target image selected from the second image group, a pair of supervisory data to be used for learning among a plurality of pairs of supervisory data each including an image included in the first image group as one of a pair of images”; Tsukagoshi, [0032] “An image processing apparatus according to a first embodiment uses, as an input, two moving images A and B simultaneously captured by an identical image capturing apparatus”; Tsukagoshi, [0037] “the moving image a and the moving image b are simultaneously captured by an image capturing apparatus including an image sensor”; Tsukagoshi, [0039] “FIG. 3 illustrates an example of a frame configurations of the moving image A and the moving image B. In FIG. 3, the total number of frames of the moving image A is n, and the total number of frames of the moving image B is m”).
Masuda and Tsukagoshi are considered analogous art as they are reasonably pertinent to the same field of endeavor. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Masuda with the invention of Tsukagoshi to make the invention that selects a learned model used for the image processing of a frame to be processed from among a plurality of learned models which differ in the number of frames to be input, based on the number of frames which have been obtained; doing so can generate a high-definition moving image by inputting all obtained frames into the learning model (Tsukagoshi, [0002]); thus, one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 7, Masuda in view of Tsukagoshi teaches the image processing apparatus according to claim 1, wherein:
the plurality of learned models includes a group of learned models according to an imaging mode (Masuda, [0019] “The data server 120 also holds a plurality of trained inference models that have learned image groups including T1-weighted images, T2-weighted images, and diffusion-weighted images in advance by a deep neural network or the like. Note that it is sufficient for at least one type of images of T1-weighted images, T2-weighted images, and diffusion-weighted images to be included in the image group, and images imaged under other imaging conditions may be included. Also, images to be used for learning are not limited to images imaged by the imaging device 110, and may be selected as appropriate”); and
the selecting unit is configured to select, from the group of learned models corresponding to an imaging mode of the moving image to be processed, the learned model used for the image processing of the frame to be processed based on the number of frames which have been obtained (Masuda, [0018] “The information processing device 130 has an acquiring unit 131, a selecting unit 132”; Masuda, [0021] “the selecting unit 132 then selects, out of the inference models that satisfy the input conditions of the acquired image group…The selecting unit 132 corresponds to selecting means that selects at least one inference model from a plurality of inference models, on the basis of at least one imaging condition out of the plurality of imaging conditions”; Tsukagoshi, [0009] “a selection unit configured to select, based on a high-definition target image selected from the second image group, a pair of supervisory data to be used for learning among a plurality of pairs of supervisory data each including an image included in the first image group as one of a pair of images”; Tsukagoshi, [0032] “An image processing apparatus according to a first embodiment uses, as an input, two moving images A and B simultaneously captured by an identical image capturing apparatus”; Tsukagoshi, [0037] “the moving image a and the moving image b are simultaneously captured by an image capturing apparatus including an image sensor”; Tsukagoshi, [0039] “FIG. 3 illustrates an example of a frame configurations of the moving image A and the moving image B. In FIG. 3, the total number of frames of the moving image A is n, and the total number of frames of the moving image B is m”).
Regarding Claim 8, Masuda in view of Tsukagoshi teaches the image processing apparatus according to claim 1, wherein the imaging mode is set based on at least one of the sensitivity of a detector used for imaging, a bias voltage, noise characteristics, an amplification factor at readout, a frame rate, image size, accumulation time at signal reception, and imaging technique (Masuda, [0020] “The acquiring unit 131 corresponds to acquiring means that acquires an image group obtained by imaging a subject under a plurality of imaging conditions”; Masuda, [0019] “the imaging device 110 is an MRI device that images the head of a subject under the imaging conditions of T1 weighting, T2 weighting, and diffusion weighting. The data server 120 saves an image group of T1-weighted images, T2-weighted images, and diffusion-weighted images received from the imaging device 110”; Tsukagoshi, [0032] “An image processing apparatus according to a first embodiment uses, as an input, two moving images A and B simultaneously captured by an identical image capturing apparatus. The relationship between a resolution XA and a frame rate FA of the moving image A and a resolution XB and a frame rate FB of the moving image B satisfies ‘XA > XB and FA < FB’”) (Note: since the claim recites “at least one of the,” the Examiner is mapping the limitations such that at least one of the conditions is mapped).
Regarding Claim 11, Masuda in view of Tsukagoshi teaches:
A radiation imaging system (Masuda, [0019] “the imaging device 110 is an MRI device that images the head of a subject”), including
the image processing apparatus according to claim 1 (see rejection for claim 1 above in step 9);
a radiation detecting apparatus for detecting radiation irradiated by a radiation generating apparatus (Masuda, [0017] “images obtained by imaging the head of a subject using an MRI device”; Masuda, [0019] “the imaging device 110 is an MRI device that images the head of a subject”).
Regarding Claim 12, Masuda teaches:
A method of operating an image processing apparatus configured to apply image processing to a moving image including a plurality of frames of radiation images (Masuda, [0057] “the acquiring unit 131 acquires an image group made up of a plurality of image types obtained by imaging a subject by different modalities”; Masuda, [0059] “image group made up of a combination of a plurality of images obtained by difference sequences of a singular modality”), the method comprising:
selecting a learned model used for the image processing of a frame to be processed from among a plurality of learned models (Masuda, [0018] “The information processing device 130 has an acquiring unit 131, a selecting unit 132”; Masuda, [0021] “the selecting unit 132 then selects, out of the inference models that satisfy the input conditions of the acquired image group…The selecting unit 132 corresponds to selecting means that selects at least one inference model from a plurality of inference models, on the basis of at least one imaging condition out of the plurality of imaging conditions”); and
performing inference processing using the selected learned model in the image processing of the frame to be processed (Masuda, [0022] “The selecting unit 132 outputs the selected inference model and the image group to the inference unit 133”; Masuda, [0025] “The inference unit 133 applies the selected inference model to the new image group made up of images matching the input conditions of the inference model selected by the selecting unit 132, and performs detection of a brain tumor that is the lesion in the images”).
However, Masuda does not explicitly teach:
which differ in the number of frames to be input, based on the number of frames which have been obtained;
In the same field of endeavor, Tsukagoshi teaches:
which differ in the number of frames to be input, based on the number of frames which have been obtained (Tsukagoshi, [0009] “a selection unit configured to select, based on a high-definition target image selected from the second image group, a pair of supervisory data to be used for learning among a plurality of pairs of supervisory data each including an image included in the first image group as one of a pair of images”; Tsukagoshi, [0032] “An image processing apparatus according to a first embodiment uses, as an input, two moving images A and B simultaneously captured by an identical image capturing apparatus”; Tsukagoshi, [0037] “the moving image a and the moving image b are simultaneously captured by an image capturing apparatus including an image sensor”; Tsukagoshi, [0039] “FIG. 3 illustrates an example of a frame configurations of the moving image A and the moving image B. In FIG. 3, the total number of frames of the moving image A is n, and the total number of frames of the moving image B is m”)
Masuda and Tsukagoshi are considered analogous art as they are reasonably pertinent to the same field of endeavor. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Masuda with the invention of Tsukagoshi to make the invention that selects a learned model used for the image processing of a frame to be processed from among a plurality of learned models which differ in the number of frames to be input, based on the number of frames which have been obtained; doing so can generate a high-definition moving image by inputting all obtained frames into the learning model (Tsukagoshi, [0002]); thus, one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 13, Masuda in view of Tsukagoshi teaches:
A non-transitory computer-readable storage medium having stored thereon a program, for causing, when executed by a computer (Masuda, [0067] “a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions”), the computer to execute the method of operation according to claim 12 (see rejection for claim 12 above in step 13).
Allowable Subject Matter
Claims 2-6 and 9-10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20220301718 A1 System, Device, And Method Of Determining Anisomelia Or Leg Length Discrepancy (LLD) Of A Subject By Using Image Analysis And Machine Learning
US 20220180512 A1 METHOD FOR PREDICTING DISEASE BASED ON MEDICAL IMAGE
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAISALI RAO KOPPOLU whose telephone number is (571)270-0273. The examiner can normally be reached Monday - Friday 8:30 - 5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VAISALI RAO KOPPOLU
Examiner
Art Unit 2664
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664