Prosecution Insights
Last updated: April 19, 2026
Application No. 17/731,368

LEARNED MODEL GENERATING METHOD, PROCESSING DEVICE, AND STORAGE MEDIUM

Final Rejection: §101, §103, §112
Filed: Apr 28, 2022
Examiner: PENG, BO JOSEPH
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: GE Precision Healthcare LLC
OA Round: 4 (Final)
Grant Probability: 69% (Favorable)
OA Rounds: 5-6
To Grant: 3y 7m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 69% (above average; 525 granted / 756 resolved; -0.6% vs TC avg)
Interview Lift: +13.0% (moderate; resolved cases with interview)
Typical Timeline: 3y 7m avg prosecution; 33 currently pending
Career History: 789 total applications across all art units

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 40.6% (+0.6% vs TC avg)
§102: 16.9% (-23.1% vs TC avg)
§112: 27.9% (-12.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 756 resolved cases.

Office Action (§101, §103, §112)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 6, 11, 12, 14, and 18 are objected to because of the following informalities: words such as “incldue” and “inclduing” are misspelled; they should be “include” and “including.” Furthermore, “posutres” should be “postures,” and “decubitis” should be “decubitus.” Appropriate correction is required.

Claim Interpretation: 35 USC § 112(f) or 112 (pre-AIA), Sixth Paragraph

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claim limitation “a generating part …” recited in claim 8 and dependent claims thereafter has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses a generic placeholder “part” coupled with the functional language “generates” without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier.

“A deducing part …” recited in claims 8 and 12 and dependent claims thereafter has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, because it uses a generic placeholder “part” coupled with the functional language “deduces” without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier.

“A selecting part …” recited in claim 10 and dependent claims thereafter has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses a generic placeholder “part” coupled with the functional language “selects” without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier.

“A calculating part” recited in claim 12 and dependent claims thereafter has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses a generic placeholder “part” coupled with the functional language “calculates” without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier.

“A reconfiguring part” recited in claim 15 and dependent claims thereafter has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses a generic placeholder “part” coupled with the functional language “reconfigures” without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier.

Since these claim limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, the claims above have been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof. A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, limitations. For “a generating part …,” “a deducing part …,” “a selecting part …,” “a calculating part,” and “a reconfiguring part” recited in claims 8, 10, 12, and 15 and dependent claims thereafter, the specification (paras. 0065, 0066, 0110, 0113) discloses the following:

[0065] The processing part 84 performs an image reconfiguring process and various other operations based on data of the patient 40 acquired by the gantry 2. The processing part 84 has one or more processors, and the one or more processors execute various processes described in the program stored in the storage part 83.

[0066] FIG. 4 is a diagram showing main functional blocks of the processing part 84. The processing part 84 has a generating part 841, a deducing part 842, a confirming part 843, and a reconfiguring part 844.

[0110] FIG. 13 is a diagram showing main functional blocks of the processing part 84 according to embodiment 2. The processing part 84 has a generating part 940, a deducing part 941, a calculating part 942, a confirming part 943, and a reconfiguring part 944.

[0113] The processing part 84 of the console 8 can read the program stored in the storage part 83 and execute the aforementioned operations (b1) to (b5).

Therefore, “a generating part …,” “a deducing part …,” “a selecting part …,” “a calculating part,” and “a reconfiguring part” recited in claims 8, 10, 12, and 15 and dependent claims thereafter have been interpreted as being any processors or any equivalent structures in light of the specification for the purpose of examination.

If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action. If applicant does not intend to have the claim limitations treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, applicant may amend the claims so that they will clearly not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, or present a sufficient showing that the claims recite sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-15 and 17-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Applicant has failed to show that Applicant possesses any mathematical algorithm for “training the model.” A blank box of unknown information does not satisfy the possession requirement, and Applicant has failed to identify the relevant descriptions in the specification. Furthermore, the Examiner cannot find any support in the original specification for the limitation “a camera positioned over the table.” It is unclear what structure would support this limitation, and it is unclear whether the camera 6 in fig. 2 is the support for it.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-15 and 17-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

In re claims 1, 6, and 17: it is unclear what the model is in “training a model.” Applicant has failed to provide any mathematical algorithm for the neural network’s learning model.
A blank box of unknown information fails to explain how this model actually works, and Applicant has failed to identify the relevant descriptions in the specification. Furthermore, it is unclear what the scope of “a camera positioned over the table” is. Would the position of the camera 6 shown in fig. 2 be the only position reading on “a camera positioned over the table,” or would any other camera position that can acquire the patient on the table also be considered a camera positioned over the table? In response to Applicant’s argument: the Examiner is asking about the scope of the limitation, not about support for it. Applicant has NOT claimed that the camera is installed on the ceiling. So would fig. 2 and paras. 0050-0051 be the limiting scope of this claim language? Would a camera not installed on the ceiling but positioned over the table read on the claim? The Examiner needs Applicant to answer the question of scope.

In re claims 2-5: claim 2, for example, refers to “the learned model generating method according to claim 1”; however, claim 1 is now “a method of training a model that outputs … wherein training the model comprises.” It is unclear whether claims 2-5 are still based on claim 1, as “learned model” is no longer mentioned in claim 1. Note also Applicant’s position that “training a model” refers to a model well known to one of ordinary skill in the art (see Arg., page 7, last paragraph).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 and 17-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1, Statutory Category: YES - The claim recites a method (claim 1) or a medical device (claim 6) and, therefore, falls within a statutory category.
Step 2A, Prong 1, Judicial Exception: YES - The claim recites providing learning images, providing a weight from training on these learning images, and calculating BMI (note that doctors today already use a BMI chart with a weight and height table). This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of elements such as generic computer components, or the additional elements of a table and camera. That is, other than reciting those elements, nothing in the claim precludes the steps from practically being performed in the mind. For example, but for the recited elements, the claim encompasses a user looking at these pictures, or at patients on these beds, many times, and then determining an acceptable weight, height, and BMI of the patient in his or her mind. The mere nominal recitation of generic elements does not take the claim limitation out of the mental processes grouping. Thus, the claim recites a mental process.

Step 2A, Prong 2, Integrated into Practical Application: NO - The claim recites additional elements: images of different postures of a patient on a table, and a number of images. The storing step is recited at a high level of generality and amounts to mere data gathering, which is a form of insignificant extra-solution activity. The processing circuitry that performs the comparison step is also recited at a high level of generality and merely automates the comparison step. Each of the additional limitations, alone and in combination, is no more than mere instructions to apply the exception using a generic computer component, or elements such as a generic camera and a table.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.

Step 2B, Inventive Concept: NO - As discussed with respect to Step 2A, Prong 2, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component, or elements such as a generic camera and a table. The same analysis applies here in Step 2B: mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the images of different postures of a patient on a table and the number of images were considered extra-solution activity in Step 2A, and are therefore re-evaluated in Step 2B to determine whether they are more than well-understood, routine, conventional activity in the field. The background of the example does not provide any indication that the processing circuitry and storage are anything other than generic, off-the-shelf computer components, and the Symantec, TLI, and OIP Techs. court decisions cited in MPEP 2106.05(d)(II) indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Accordingly, a conclusion that the collecting and comparing steps are well-understood, routine, conventional activity is supported under Berkheimer Option 2. For these reasons, there is no inventive concept in the claim, and it is thus ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C.
102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-11 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Sa et al. (US 2020/0271507, hereinafter Sa ‘507) in view of Tadayon et al. (US 2014/0079297, hereinafter Tadayon ‘297), and further in view of Tamersoy et al. (US 2020/0297237, hereinafter Tamerosy ‘237).

In re claim 1: Sa ‘507 teaches a method of training a model that outputs a body weight of an imaging subject based on an input image of the imaging subject lying on a table of a medical device, wherein training the model comprises: providing a plurality of learning images generated based on a plurality of camera images of a human lying on a table of a medical device (fig. 7, sensor 77, patient 78, table 79; 0032, 0035, 0068-0070), wherein the plurality of camera images are obtained from a camera positioned over the table of the medical device (fig. 7, sensor 77, patient 78, table 79; 0032, 0035, 0068-0070); and providing a plurality of correct answer data corresponding to the plurality of learning images, where each of the plurality of correct answer data represents a body weight of a human included in a corresponding learning image (0052, 0055-0059).

Sa ‘507 teaches that the surface data is normalized prior to input: the surface data is rescaled, resized, warped, or shifted (e.g., interpolation); see 0038. Sa ‘507 merely fails to teach normalizing the orientation and posture of the subject. Tadayon ‘297 teaches wherein the camera images are pre-processed to normalize orientation (0481) and posture of the subject (0481; note that orienting the head or face of a human at the right angle or direction is a posture of the head or face). It would have been prima facie obvious to one of ordinary skill in the art at the time of invention to modify the method/device of Sa ‘507 to include the features of Tadayon ‘297 in order to increase the accuracy of the recognition.
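The claim-1 training scheme mapped above — learning images paired with body-weight “correct answer data” — is, mechanically, supervised regression on image-label pairs. The sketch below is the editor’s illustration only, not code from the application or the cited references: a least-squares fit on a single image feature stands in for the neural network the specification contemplates, and every name (`image_feature`, `train_weight_model`) is hypothetical.

```python
# Editor's illustrative sketch of the claimed training inputs: learning
# images paired with body-weight labels ("correct answer data"). A toy
# least-squares fit on one image feature stands in for the neural
# network contemplated by the specification; all names are hypothetical.

def image_feature(image):
    """Reduce a camera image (a 2D list of pixel intensities) to a
    single scalar feature, here the mean intensity of the frame."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def train_weight_model(learning_images, correct_answer_data):
    """Fit weight ~ a * feature + b by ordinary least squares and
    return the trained model as a callable."""
    xs = [image_feature(img) for img in learning_images]
    ys = correct_answer_data  # body weight of the human in each image
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return lambda image: a * image_feature(image) + b

# Tiny worked example: larger silhouettes map to heavier subjects.
images = [[[1, 1], [1, 0]], [[2, 2], [2, 2]], [[3, 3], [3, 2]]]
weights = [60.0, 80.0, 95.0]  # "correct answer data", one per image
model = train_weight_model(images, weights)
```

On these toy pairs the fitted model preserves the ordering of the training weights; a real system would replace the scalar feature with a learned representation.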
Sa ‘507 and Tadayon ‘297 fail to teach training a plurality of models, each of the plurality of models corresponding to a different normalized posture of the subject, wherein each model is trained using learning images and correct answer data associated with the corresponding posture. Tamerosy ‘237 teaches training a plurality of models, each corresponding to a different posture of the subject. It would have been prima facie obvious to one of ordinary skill in the art at the time of invention to modify the method/device of Sa ‘507 to include the features of Tadayon ‘297 in order to increase the accuracy of the recognition, and to include the features of Tamerosy ‘237 in order to better control and operate the medical device based on changes in patient pose.

In re claim 2: Sa ‘507 teaches wherein the plurality of learning images includes an image of a human lying on a table in a prescribed posture (fig. 4).

In re claims 3-5: Sa ‘507 teaches that the camera captures the outer surface with the patient in a particular position, such as capturing a front-facing surface as the patient lies in a bed or on a table for treatment or imaging (0033). Sa ‘507 fails to teach the patient in a particular position/posture.
Tamerosy ‘237 teaches an image of the human lying on a table in a posture different from the prescribed posture of claim 3 (0006, 0010, 0011, 0012, 0025, 0032, 0042, 0049, 0052, 0066, 0068, 0069, 0081, 0093, 0094). Specifically, it teaches:

[0052] The body pose network 36 is designed or configured to output a class membership for pose, such as one of four classes (e.g., head-first supine, feet-first supine, head-first prone, and feet-first prone)

which reads on claim 4 (wherein the plurality of learning images includes at least two of: a first learning image of the human lying in a supine position; a second learning image of the human lying in a prone position; a third learning image of the human lying in a left lateral decubitus position; and a fourth learning image of the human lying in a right lateral decubitus position) and on claim 5 (wherein the plurality of learning images include an image of the human lying on a table in a head-first condition and an image of the human lying on a table in a feet-first condition). See also paras. 0011, 0042, etc. It would have been prima facie obvious to one of ordinary skill in the art at the time of invention to modify the method/device of Sa ‘507 and Tadayon ‘297 to include the features of Tamerosy ‘237 in order to better control and operate the medical device based on changes in patient pose.

In re claims 6 and 17: Sa ‘507 teaches a medical device comprising: a table; a camera positioned over the table, the camera to obtain a camera image of an imaging subject positioned on the table (fig. 7, sensor 77, patient 78, table 79; 0032, 0035, 0068-0070); and a processing device that executes instructions stored on a memory of the processing device (fig. 7, memory 74). Sa ‘507 teaches that the surface data is normalized prior to input: the surface data is rescaled, resized, warped, or shifted (e.g., interpolation); see 0038.
Sa ‘507 fails to teach pre-processing the camera image to normalize the orientation and posture of the subject, and selecting, based on the posture of the subject, a trained learned model from a plurality of trained learned models, each trained corresponding to a different posture of the subject. Tadayon ‘297 teaches pre-processing the camera image to normalize orientation (0481) and posture of the subject (0481; note that orienting the head or face of a human at the right angle or direction is a posture of the head or face). It would have been prima facie obvious to one of ordinary skill in the art at the time of invention to modify the method/device of Sa ‘507 to include the features of Tadayon ‘297 in order to increase the accuracy of the recognition. Tamerosy ‘237 teaches selecting, based on the posture of the subject, a trained learned model from a plurality of trained learned models, each trained corresponding to a different posture of the subject (0006, 0010-0011, 0032, 0042, 0066, 0081; note that pre-processing and normalization are taught by Tadayon ‘297), wherein each model is trained using learning images and correct answer data (0040, 0057) associated with the corresponding posture (0053, 0054, 0057, 0074). It would have been prima facie obvious to one of ordinary skill in the art at the time of invention to modify the method/device of Sa ‘507 to include the features of Tadayon ‘297 in order to increase the accuracy of the recognition, and to include the features of Tamerosy ‘237 in order to better control and operate the medical device based on changes in patient pose.

In re claim 7: Sa ‘507 teaches wherein the trained learned model outputs the body weight of the imaging subject when an input image generated based on the camera image is input (figs. 1 and 2; 0052, 0055-0059).
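The “plurality of trained learned models, each … corresponding to a different posture” limitation discussed above amounts to a posture-keyed model lookup at inference time. A minimal sketch follows (the editor’s illustration; the four posture classes mirror those quoted from para 0052, and all function names are hypothetical):

```python
# Editor's sketch of posture-based model selection: train one
# weight-deducing model per posture class, then pick the model matching
# the posture identified in the camera image. Names are hypothetical;
# the posture classes mirror the four cited from para 0052.

POSTURE_CLASSES = ("head_first_supine", "feet_first_supine",
                   "head_first_prone", "feet_first_prone")

def build_model_bank(trainer, data_by_posture):
    """Train one model per posture from (images, weight_labels) pairs."""
    return {posture: trainer(images, labels)
            for posture, (images, labels) in data_by_posture.items()}

def deduce_weight(model_bank, posture, input_image):
    """Select the trained model matching the subject's posture and
    deduce the body weight from the input image."""
    if posture not in model_bank:
        raise KeyError(f"no trained model for posture {posture!r}")
    return model_bank[posture](input_image)

# Toy stand-in trainer: predicts the mean training label and ignores
# the image entirely (a real system would train a neural network).
def mean_trainer(images, labels):
    average = sum(labels) / len(labels)
    return lambda image: average

bank = build_model_bank(mean_trainer, {
    "head_first_supine": ([None, None], [70.0, 90.0]),
    "head_first_prone": ([None], [65.0]),
})
```

Here `deduce_weight(bank, "head_first_supine", image)` routes the image to the supine-trained model; asking for a posture with no trained model raises rather than silently falling back.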
In re claim 8: Sa ‘507 teaches a generating part that generates the input image based on the camera image; and a deducing part that deduces the body weight of the imaging subject by inputting the input image into the trained learned model (figs. 1 and 2; 0052, 0055-0059).

In re claim 9: Sa ‘507 teaches wherein the trained learned model is generated by a neural network executing learning using: a plurality of learning images generated based on a plurality of camera images of a human positioned on a table of a medical device (fig. 7, sensor 77, patient 78, table 79; 0032, 0035, 0068-0070); and a plurality of correct answer data corresponding to the plurality of learning images, where each of the plurality of correct answer data represents a body weight of a human included in a corresponding learning image (0049, 0050, 0058, 0059).

In re claim 10: Sa ‘507 teaches wherein the processing device includes: a selecting part that selects a learned model used for deducing the body weight of the imaging subject from the plurality of learned models corresponding to a plurality of possible postures of the imaging subject during imaging (0040-0044), wherein the deducing part deduces the body weight of the imaging subject using the selected learned model (0040-0044).

In re claim 11: Sa ‘507 teaches wherein the processing device includes a confirming part for confirming with an operator whether or not to update a deduced body weight (0062, 0063, 0077, 0081, 0084).

In re claim 18: note that a lateral decubitus position refers to a body orientation in which an individual lies on their side. Tamerosy ‘237 teaches wherein the different postures of the subject include prone, supine, left lateral decubitus, and right lateral decubitus (0006, 0042).

In re claim 19: Tadayon ‘297 teaches identifying the posture of the subject based on the image (0328, 0347, 0350).

Claims 12-15 are rejected under 35 U.S.C.
103 as being unpatentable over Sa ‘507, Tadayon ‘297, and Tamerosy ‘237 in view of De Brouwer et al. (US 2018/0289334, hereinafter De Brouwer ‘334).

In re claim 12: Sa ‘507 teaches a deducing part that deduces the height of the imaging subject, containing a learned model that outputs the body height of the imaging subject when an input image generated based on the camera image is input (0072), but fails to teach a calculating part that calculates the body weight of the imaging subject based on the height and BMI of the imaging subject. De Brouwer ‘334 teaches calculating the body weight of the imaging subject based on the height and BMI of the imaging subject (0028-0029; note that the equation used to calculate BMI can easily be rearranged to Mass = BMI * height^2). It would have been prima facie obvious to one of ordinary skill in the art at the time of invention to modify the method/device of Sa ‘507 and Tadayon ‘297 to include the features of De Brouwer ‘334 in order to use additional weight-related information as input, allowing the learning model to learn more information and arrive at better answers.

In re claim 13: Sa ‘507 teaches wherein the learned model is generated by a neural network executing learning using: a plurality of learning images generated based on a plurality of camera images of a human lying on a table of a medical device (fig. 7, sensor 77, patient 78, table 79; 0032, 0035, 0068-0070); and a plurality of correct answer data corresponding to the plurality of learning images, where each of the plurality of correct answer data represents a height (0072; note that weight and/or height can both be obtained) of a human included in a corresponding learning image (0049, 0050, 0058, 0059; hence it is obvious that the volume-vs-weight relationship of fig. 5 can be adjusted to volume vs. height per the teaching of para. 0072).

In re claim 14: Sa ‘507 teaches further comprising a generating part that generates the input image based on the camera image (fig.
7, sensor 77, patient 78, table 79; 0032, 0035, 0068-0070).

In re claim 15: De Brouwer ‘334 teaches a reconfiguring part that reconfigures a scout image obtained by scout-scanning the imaging subject, wherein the calculating part calculates the BMI based on the scout image (0027-0029). It would have been prima facie obvious to one of ordinary skill in the art at the time of invention to modify the method/device of Sa ‘507 and Tadayon ‘297 to include the features of De Brouwer ‘334 in order to use additional weight-related information as input, allowing the learning model to learn more information and arrive at better answers.

Response to Arguments

Applicant’s arguments with respect to claims 1-15 and 17-19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

In response to Applicant’s argument that “a user cannot mentally acquire a camera image of a patient lying on a table, generate an input image based on the camera image, and input the image into a learned model to deduce body weight,” the Examiner disagrees. The 101 analyses have identified that the device (i.e., the camera) used to acquire these images is generic and operates at the high level of merely conducting data collection; therefore, it is not integrated into a practical application.

In response to Applicant’s argument that “[t]hese steps require specialized machinery (e.g., a ceiling-mounted camera and a processing system) and cannot be done by a mental process. The camera images depict the external appearance of a patient in a specific posture,” the Examiner disagrees. Applicant has never claimed “a ceiling-mounted camera,” and Applicant has not claimed any special processing system.
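For reference on the claim-12 mapping above (calculating body weight from deduced height and BMI): by the standard definition, BMI is weight in kilograms divided by the square of height in meters, so weight is recovered as BMI * height^2. A one-line sketch (the function name is the editor’s, not from the application):

```python
def weight_from_bmi(bmi, height_m):
    """Standard BMI definition, BMI = weight_kg / height_m**2,
    rearranged to recover body weight from height and a BMI value."""
    return bmi * height_m ** 2

# A BMI of 25.0 at a height of 1.80 m corresponds to about 81 kg.
```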
The argument that “[t]he camera images depict the external appearance of a patient in a specific posture” amounts to mere data collection, where any person can look at the appearance of the patient and identify the specific posture.

In response to Applicant’s argument that “the body weight is derived from machine learning models trained on labeled data. A human cannot perform these steps mentally, nor can they deduce accurate body weight from such images without computational assistance. It is crucial that the body weight be accurate because some medical procedures require accurate patient information to calculate dose or medication,” the Examiner disagrees. Applicant has NEVER claimed any special machine learning model that is not a generic machine learning model. The Examiner issued the 112 rejection to ask Applicant to show any special model Applicant uses that would differ from a generic one; however, Applicant has not shown what the model is. In response to the 112 rejection, Applicant states that “one of skill in the art would understand there are multiple types of neural networks that may be used to train the model.” Hence, the model used for training is generic. Humans have guessed the weight of other humans and other subjects for a long time, and accuracy is not part of the claims; Applicant even admits that “a person may be able to roughly determine the difference between a 100 lb patient and a 200 lb patient.” Applicant does not claim that the invention uses factors such as muscle density, and does not provide any additional method that improves the generic machine learning model. Applicant merely feeds more data into this machine learning model and allows it to output a result that Applicant claims to be useful, yet all the inputs are generic images of a patient (note that pre-processing and normalization are standard practice in image acquisition; see the references above).
Applicant has NOT provided any method that improves the machine learning algorithm. Rather, Applicant merely supplies large amounts of data and expects the same machine learning algorithm (either a black box or a generic one) to output results. Hence, the Examiner has considered the claims as a whole and determined that this is an abstract idea.

In response to Applicant’s argument that the “written description as originally filed provides sufficient support for the claimed training of the model. Paragraphs [0075]-[0079] describe that a plurality of learning images C1-Cn include images of a human (subject) in a first position. The images are normalized so that the craniocaudal directions match (e.g., all head-first or all feet-first). Additionally, the correct answer data G1 to Gn are prepared. The correct answer data includes data indicating a weight of each human (subject) in each of the learning images. Each correct answer data is matched to its corresponding learning image. The neural network learns from the matched data and corresponding learning image (e.g., the prepared image-label pairs). One of skill in the art would understand there are multiple types of neural networks that may be used to train the model, and that the novelty is in the training inputs and using the outputs to determine patient body weight in specific patient positions during an imaging scan,” the Examiner still disagrees with this statement. Does Applicant now admit that the trained or learned model is a generic model when Applicant states that “[o]ne of skill in the art would understand there are multiple types of neural networks that may be used to train the model”? Please answer this question with yes or no. Again, if no, then what is special about this model? Furthermore, the Examiner has shown in the prior art that “the training inputs” are not novel.
In response to Applicant’s argument that “[n]either Sa nor Tadayon teaches or suggests selecting, based on a posture of the subject, a trained learned model from a plurality of trained learned models, each trained model corresponding to a different posture of the subject,” the Examiner has rejected the amended limitation further in view of Tamerosy ‘237, as shown above in this Office Action.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BO JOSEPH PENG, whose telephone number is (571) 270-1792. The examiner can normally be reached Monday through Friday, 8:00 AM-5:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANNE M KOZAK, can be reached at (571) 270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BO JOSEPH PENG/
Primary Examiner, Art Unit 3798
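The training-data preparation quoted in the Response to Arguments (learning images C1-Cn normalized so their craniocaudal directions match, each paired with correct-answer weight data G1-Gn) can be sketched in code. This is an illustrative sketch only: the function names, the flip-based normalization, and the linear regressor standing in for the application's unspecified neural network are assumptions of this write-up, not details from the application itself.

```python
import numpy as np

def normalize_craniocaudal(image, head_first):
    """Flip feet-first images vertically so every learning image is head-first."""
    return image if head_first else np.flipud(image)

def make_pairs(images, orientations, weights_kg):
    """Pair each normalized learning image C_i with its correct-answer weight G_i.

    Returns a design matrix X (one flattened image per row) and label vector y.
    """
    X = np.stack([normalize_craniocaudal(img, hf).ravel()
                  for img, hf in zip(images, orientations)])
    y = np.asarray(weights_kg, dtype=float)
    return X, y

def train_linear_model(X, y, lr=1e-3, epochs=500):
    """Stand-in for the neural network: linear regression fit by gradient descent
    on the prepared image-label pairs."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        pred = X @ w + b          # deduce weight from image pixels
        err = pred - y            # compare against correct-answer data
        w -= lr * (X.T @ err) / n # gradient step on the weights
        b -= lr * err.mean()      # gradient step on the bias
    return w, b
```

In practice the application describes a neural network rather than a linear model, but the data flow is the same: normalize, pair image with label, fit.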

Prosecution Timeline

Apr 28, 2022
Application Filed
Nov 07, 2024
Non-Final Rejection — §101, §103, §112
Jan 02, 2025
Interview Requested
Jan 15, 2025
Applicant Interview (Telephonic)
Jan 19, 2025
Examiner Interview Summary
Feb 12, 2025
Response Filed
Apr 27, 2025
Final Rejection — §101, §103, §112
Jun 19, 2025
Interview Requested
Jul 18, 2025
Interview Requested
Jul 24, 2025
Applicant Interview (Telephonic)
Jul 24, 2025
Examiner Interview Summary
Jul 31, 2025
Request for Continued Examination
Aug 04, 2025
Response after Non-Final Action
Aug 06, 2025
Non-Final Rejection — §101, §103, §112
Oct 17, 2025
Interview Requested
Oct 27, 2025
Applicant Interview (Telephonic)
Oct 30, 2025
Examiner Interview Summary
Nov 07, 2025
Response Filed
Jan 29, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599369
Ultrasound Methods and Systems for Measuring Physiological Properties
2y 5m to grant Granted Apr 14, 2026
Patent 12599356
FETAL HEART RATE MONITORING
2y 5m to grant Granted Apr 14, 2026
Patent 12594057
APPARATUS AND METHOD FOR AUTOMATIC ULTRASOUND SEGMENTATION FOR VISUALIZATION AND MEASUREMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12588819
OCT CATHETER WITH LOW REFRACTIVE INDEX OPTICAL MATERIAL
2y 5m to grant Granted Mar 31, 2026
Patent 12589264
ULTRASONIC TREATMENT DEVICE
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
69%
Grant Probability
82%
With Interview (+13.0%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 756 resolved cases by this examiner. Grant probability derived from career allow rate.
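How these headline figures relate can be shown with a small worked example. The 525 grants out of 756 resolved cases come from this page; the interviewed-case counts below are hypothetical, chosen only to illustrate how an interview "lift" metric of this kind might be computed:

```python
def grant_stats(granted, resolved, granted_iv, resolved_iv):
    """Career allow rate, allow rate with an interview, and the interview lift
    (with-interview rate minus without-interview rate).

    granted/resolved: career totals; granted_iv/resolved_iv: interviewed subset.
    """
    base = granted / resolved
    with_iv = granted_iv / resolved_iv
    without_iv = (granted - granted_iv) / (resolved - resolved_iv)
    return base, with_iv, with_iv - without_iv

# 525/756 from this page; 164/200 interviewed cases are a hypothetical subset.
base, with_iv, lift = grant_stats(525, 756, 164, 200)
```

With these inputs the base rate is about 69%, matching the page; the lift depends entirely on the assumed interviewed-case split.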
