Prosecution Insights
Last updated: April 19, 2026
Application No. 18/110,749

VIDEO CLIP SELECTOR FOR MEDICAL IMAGING AND DIAGNOSIS

Non-Final OA: §101, §102, §112
Filed: Feb 16, 2023
Examiner: POKRZYWA, JOSEPH R
Art Unit: 3992
Tech Center: 3900
Assignee: Caption Health Inc.
OA Round: 1 (Non-Final)
Grant Probability: 34% (At Risk)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 5y 2m
Grant Probability with Interview: 58%

Examiner Intelligence

Career Allow Rate: 34% (16 granted / 47 resolved; -26.0% vs TC avg)
Interview Lift: +24.2% among resolved cases with an interview
Avg Prosecution: 5y 2m; 21 applications currently pending
Career History: 68 total applications across all art units

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§102: 30.5% (-9.5% vs TC avg)
§103: 33.8% (-6.2% vs TC avg)
§112: 24.2% (-15.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 47 resolved cases
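The headline examiner numbers above are internally consistent and can be reproduced from the raw counts. A minimal illustrative sketch follows; the variable names are arbitrary, and the with-interview rate (shown rounded to 58%) is assumed here to be 58.2% so that the lift comes out to the +24.2% shown:

```python
# Illustrative check of the examiner statistics shown above.
# Counts come from this page; 0.582 is an assumption (the page shows 58%).
granted = 16
resolved = 47

career_allow_rate = granted / resolved               # 16/47, about 0.340 (34%)
with_interview = 0.582                               # assumed pre-rounding value
interview_lift = with_interview - career_allow_rate  # about +0.242 (+24.2%)

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Interview lift:    {interview_lift:+.1%}")
```

Nothing in the sketch is predictive; it only shows how the allow rate and interview lift relate arithmetically.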

Office Action

§101 §102 §112
DETAILED ACTION

Brief Summary

This is a non-final Office action regarding reissue U.S. Application 18/110,749 (hereafter the “reissue ‘749 Application”), which is a reissue application of U.S. Patent 11,166,678 (hereafter “the ‘678 Patent”). The reissue ‘749 Application was filed on February 16, 2023, along with a Reissue Declaration, also filed February 16, 2023. The ‘678 Patent originally issued on November 9, 2021, with original claims 1-20, having been filed as U.S. Application 17/075,560 (hereafter “the original ’560 Application”). The original ‘560 Application was filed as a continuation of U.S. Application 16/839,040, filed on April 2, 2020, now U.S. Patent 10,806,402, which is a continuation of U.S. Application 16/016,725, filed on June 25, 2018, now U.S. Patent 10,631,791.

As noted above, the ‘678 Patent issued with claims 1-20. A preliminary amendment was filed with the reissue ‘749 Application on February 16, 2023, which adds new claims 21-50 and keeps claims 1-20 in their original patented form. Thus, with the preliminary amendment dated February 16, 2023, claims 1-50 are pending, with claims 1, 8, 14, 21, 32, 41, and 50 being independent.

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Reissue Applicant is reminded of the continuing obligation under 37 CFR 1.178(b) to timely apprise the Office of any prior or concurrent proceeding in which Patent No. 11,166,678 is or was involved. These proceedings would include interferences, reissues, reexaminations, and litigation. Applicant is further reminded of the continuing obligation under 37 CFR 1.56 to timely apprise the Office of any information which is material to the patentability of the claims under consideration in this reissue application. These obligations rest with each individual associated with the filing and prosecution of this application for reissue.
See also MPEP §§ 1404, 1442.01 and 1442.04.

Non-Compliant Amendment

The preliminary amendment filed February 16, 2023 adds claims 21-50, but shows the new claims with no underlining. In this regard, 37 CFR 1.173(d) states, in part:

(d) Changes shown by markings. Any changes relative to the patent being reissued which are made to the specification, including the claims, upon filing, or by an amendment paper in the reissue application, must include the following markings: (1) The matter to be omitted by reissue must be enclosed in brackets; and (2) The matter to be added by reissue must be underlined, except for amendments submitted on compact discs (§§ 1.96 and 1.821(c))….

With this, the added claims 21-50 do not comply with 37 CFR 1.173(d), as the added matter must be underlined. Therefore, the preliminary amendment filed February 16, 2023 does not comply with 37 CFR 1.173, which sets forth the manner of making amendments in reissue applications. A supplemental paper correctly amending the reissue application is required.

Reissue Declaration and Rejection under 35 USC § 251

Claims 1-50 are rejected as being based upon a defective reissue declaration under 35 U.S.C. 251. See 37 CFR 1.175. The nature of the defect(s) in the declaration is set forth in the discussion below in this Office action. In this regard, 37 CFR 1.175(a) states, in part:

…The inventor’s oath or declaration for a reissue application, in addition to complying with the requirements of § 1.63, § 1.64, or § 1.67, must also specifically identify at least one error pursuant to 35 U.S.C. 
251 being relied upon as the basis for reissue …

With this, the reissue oath/declaration filed on February 16, 2023 is defective (see 37 CFR 1.175 and MPEP § 1414) because the Applicant has not sufficiently identified the error which is relied on to support the reissue application, and has not identified a specific claim that the application seeks to broaden. In this regard, the Applicant’s error statement does state that “Applicants request re-issue … by reason of Applicants claiming less than they had the right to claim in the patent. Applicants have added thirty new claims to pursue additional subject matter not encompassed by the patent claims. …” However, this is merely a general statement, and is not seen to specifically identify a claim that the applicant intends to broaden. In this regard, MPEP 1414(II) states, in part:

The "at least one error" pursuant to 35 U.S.C. 251 which is relied upon to support the reissue application must be specifically identified in the oath/declaration. … For an application filed on or after September 16, 2012 that seeks to enlarge the scope of the claims of the patent, the reissue oath or declaration must also identify a claim that the application seeks to broaden in the identification of the error that is relied upon to support the reissue application. A general statement, e.g., that all claims are broadened, is not sufficient to satisfy this requirement. In specifically identifying the error as required by 37 CFR 1.175(a), it is sufficient that the reissue oath/declaration identify the claim being broadened and a single word, phrase, or expression in the specification or in an original claim, and how it renders the original patent wholly or partly inoperative or invalid. 
With this, the error statement is required to identify a specific claim that is being broadened, and “a single word, phrase, or expression in the specification or in an original claim, and how it renders the original patent wholly or partly inoperative or invalid”. Correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Here, this application includes claim limitations that use the word “means,” modified with functional language, with the term “means” not being modified by structure, material, or acts for performing the claimed function. Thus, these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Such claim limitation(s) is/are found in new independent claim 50. Particularly, independent claim 50 recites: “means for receiving signals representing a set of ultrasound images of the subject”, “means for deriving one or more extracted feature representations from the set of ultrasound images”, “means for determining, based on the derived one or more extracted feature representations, a quality assessment value representing a quality assessment of the set of ultrasound images”, “means for determining, based on the derived one or more extracted feature representations, an image property associated with the set of ultrasound images”, and “means for producing signals representing the quality assessment value and the image property for causing the quality assessment value and the image property to be associated with the set of ultrasound images.”

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

In this regard, the specification of the ‘678 Patent will be reviewed for the corresponding structure for each of these claimed “means-plus-function” elements.

First, regarding the “means for receiving signals representing a set of ultrasound images of the subject”, the ‘678 Patent discusses in col. 4, line 28-col. 5, line 6 that “As shown in FIG. 1, a medical imaging device 100 acquires imagery in an image set 130 of a target organ in a mammalian body. Each image in the image set 130 reflects a specific view of the target organ and has a particular quality. Clip selector logic 150 loads each of the images in the image set 130 and determines both a view reflected by the image and also a determined quality 160. …The host computing system 200 also is coupled to a medical imaging device 250 adapted to acquire medical imagery of target organs, and an image store 240 into which the acquired medical imagery is stored.” Here, the “means for receiving signals” appears to be described as either the “medical imaging device 100”, seen in Fig. 1, the “medical imaging device 250”, seen in Fig. 2, or the “image store 240”, seen in Fig. 2. 
Second, regarding the “means for deriving one or more extracted feature representations from the set of ultrasound images”, the “means for determining, based on the derived one or more extracted feature representations, a quality assessment value representing a quality assessment of the set of ultrasound images”, the “means for determining, based on the derived one or more extracted feature representations, an image property associated with the set of ultrasound images”, and the “means for producing signals representing the quality assessment value and the image property for causing the quality assessment value and the image property to be associated with the set of ultrasound images”, the ‘678 Patent describes that these functions are performed by a processor 210 that executes program code of a clip selection module 300.

Particularly, in col. 4, line 61-col. 6, line 4, the ‘678 Patent states “The operating system 260 supports the execution of program code of a clip selection module 300. … The program code further is enabled during execution to analyze and assign to each image in the image set both a view and a quality of each image. … More specifically, a set of training images each annotated with a known pose utilized to acquire a corresponding one of the training images, and optionally a deviation from an a priori known optimal pose to acquire a highest quality form of the training image, are correlated so that a subsequent image, when compared to the training images, can result in identification of a likely pose variation referred to as an echo distance. The foregoing may be achieved through content-based image retrieval or through a neural network trained with the training images to indicate the echo distance. A quality is then assigned to the subsequent image based upon a correlated echo distance such that a threshold echo distance indicates poorer quality than a smaller echo distance for the subsequent image. 
…Once the program code of the clip selection module 300 has established a computed view and quality for each image in the image set, the program code is further enabled to select a particular rule from a rules-base keyed upon the indicated procedure and to apply the rule to each image in the image set. In this regard, the determined view and quality of each image in the image set is provided as input to the particular rule in order to determine of the view and quality exceeds that required by the particular rule. If so, the image is added to a subset of images in the image store 240.”

With this, with respect to independent claim 50, the ‘678 Patent describes that the element of a “means for receiving signals representing a set of ultrasound images of the subject” appears to be a hardware element (either the imaging device 100 or 250, or the image store 240), while the remaining claimed “means for deriving”, “means for determining”, “means for determining”, and “means for producing signals” appear to be software elements, performed by a processor 210.

Claim Objections

Applicant is advised that should claim 43 be found allowable, claim 44 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m). Here, it appears that claims 43 and 44 recite exactly the same language, with the exception of claim 44 being dependent on claim 43, instead of claim 43 being dependent on claim 42. Claim 44 should be amended to add a limitation to differentiate the claim from claim 43, or perhaps claim 44 should be amended to change the dependency from “claim 43” to “claim 41”.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claims 7 and 20, these claims both recite “the storing occurs during real-time acquisition …”, and depend on their respective independent claims 1 and 14. But independent claims 1 and 14 both recite two separate storing steps, being the step of “storing the video clip imagery” and the step of “storing the subset of video clip imagery”. With respect to dependent claims 7 and 20, it is not clear which of the “storing” steps found in the respective independent claim the limitation “the storing” refers to.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21-30 and 32-48 are rejected under 35 U.S.C. 101 because the claimed inventions are directed to nothing more than abstract ideas. 
In this regard, with respect to claims 21-30, independent claim 21 recites a computer-implemented method that comprises: “receiving signals representing a set of ultrasound images of the subject; deriving one or more extracted feature representations from the set of ultrasound images; determining, based on the derived one or more extracted feature representations, a quality assessment value representing a quality assessment of the set of ultrasound images; determining, based on the derived one or more extracted feature representations, an image property associated with the set of ultrasound images; and producing signals representing the quality assessment value and the image property for causing the quality assessment value and the image property to be associated with the set of ultrasound images.”

With this, independent claim 21 recites a method that performs a combination of abstract ideas, whereby the steps above of “receiving signals representing a set of ultrasound images”, “deriving one or more extracted feature representations”, “determining…a quality assessment”, and “determining …an image property” each appear to be a type of mental process, with each limitation seen as being “an observation, evaluation, judgment, opinion”, while the limitation of “producing signals representing the quality assessment value and the image property” appears to be some sort of a mathematical concept. Here, these limitations appear to be a combination of categories of abstract ideas. See MPEP 2106.04(a)(2). Thus, claim 21 is seen to recite a combination of these abstract ideas.

In this regard, MPEP 2106.04(III)(B) states, in part:

The use of a physical aid (e.g., pencil and paper or a slide rule) to help perform a mental step (e.g., a mathematical calculation) does not negate the mental nature of the limitation, but simply accounts for variations in memory capacity from one person to another. 
For instance, in CyberSource, the court determined that the step of "constructing a map of credit card numbers" was a limitation that was able to be performed "by writing down a list of credit card transactions made from a particular IP address." In making this determination, the court looked to the specification, which explained that the claimed map was nothing more than a listing of several (e.g., four) credit card transactions. The court concluded that this step was able to be performed mentally with a pen and paper, and therefore, it qualified as a mental process. 654 F.3d at 1372-73, 99 USPQ2d at 1695. See also Flook, 437 U.S. at 586, 198 USPQ at 196 (claimed "computations can be made by pencil and paper calculations"); University of Florida Research Foundation, Inc. v. General Electric Co., 916 F.3d 1363, 1367, 129 USPQ2d 1409, 1411-12 (Fed. Cir. 2019) (relying on specification’s description of the claimed analysis and manipulation of data as being performed mentally "‘using pen and paper methodologies, such as flowsheets and patient charts’"); Symantec, 838 F.3d at 1318, 120 USPQ2d at 1360 (although claimed as computer-implemented, steps of screening messages can be "performed by a human, mentally or with pen and paper").

Along this vein, the specification of the ‘678 Patent even discusses that traditional approaches that would appear to comprise the above noted steps are performed by a technician and a physician (see the ‘678 Patent, col. 1, line 62-col. 2, line 23).

With this, these judicial exceptions are not integrated into a practical application because the claim, as a whole, does not contain any additional elements outside of the judicial exceptions. Here, in the limitations, there are no additional claimed elements besides the judicial exceptions noted above, whereby this is insufficient to integrate the judicial exception into a practical application. 
In this regard, MPEP 2106.04(II)(A) states, in part:

Because a judicial exception is not eligible subject matter, Bilski, 561 U.S. at 601, 95 USPQ2d at 1005-06 (quoting Chakrabarty, 447 U.S. at 309, 206 USPQ at 197 (1980)), if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application. See, e.g., RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract"); Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016) (eligibility "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself."). For a claim reciting a judicial exception to be eligible, the additional elements (if any) in the claim must "transform the nature of the claim" into a patent-eligible application of the judicial exception, Alice Corp., 573 U.S. at 217, 110 USPQ2d at 1981, either at Prong Two or in Step 2B. If there are no additional elements in the claim, then it cannot be eligible. In such a case, after making the appropriate rejection (see MPEP § 2106.07 for more information on formulating a rejection for lack of eligibility), it is a best practice for the examiner to recommend an amendment, if possible, that would resolve eligibility of the claim.

Here, these claims do not affirmatively recite any action that is done with the “produced” signals, only that signals are “produced” which represent the quality assessment and the image property. As the claims are currently worded, the claim as a whole merely “produces” or generates a signal after a derivation and determinations. 
Perhaps one way to resolve this issue is to amend the claim to positively recite specific elements or circuits of a computer that perform these functions, so as to differentiate the claim from a general generic computer, as well as some functionality of doing something with the produced signal, such as processing received image data based on a selected rule indicated by the produced signal and then storing the processed image data in an image store. This type of functionality is described in the specification of the ‘678 Patent at col. 5, line 55-col. 6, line 4, which states “Once the program code of the clip selection module 300 has established a computed view and quality for each image in the image set, the program code is further enabled to select a particular rule from a rules-base keyed upon the indicated procedure and to apply the rule to each image in the image set. In this regard, the determined view and quality of each image in the image set is provided as input to the particular rule in order to determine of the view and quality exceeds that required by the particular rule. If so, the image is added to a subset of images in the image store 240.”

Thus, in viewing independent claim 21, as a whole, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Dependent claims 22-30 recite further limitations that also fall within the abstract ideas discussed above. These claims recite additional mental processes and also add limitations related to “a neural network”. Here, the mental process of thinking can be performed in the human mind, such that this thinking utilizes the neural network of one’s brain. As such, these claims are also rejected for being directed to a judicial exception.

In addition, independent claim 41 recites a system comprising at least one processor configured to perform the same exact steps noted above in claim 21. 
With this, claim 41 appears to recite a general generic computer (a “system” with “at least one processor”) that is configured to perform the abstract ideas noted above in claim 21. Here, looking at the specification of the ‘678 Patent, the specification appears to describe a generic computing device (see col. 4, line 61-col. 5, line 6) and explicitly describes the invention as being implemented using one, whereby in col. 6, line 63-col. 7, line 22, the ‘678 Patent states “These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus…”. Because a generic computer is merely used as a tool to implement the mental processes and mathematical concepts, claim 41 is also seen to recite an abstract idea. See MPEP 2106.04(A)(2)(III)(C).

Thus, in viewing independent claim 41, as a whole, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Dependent claims 42-48 recite further limitations that also fall within the abstract ideas discussed above. These claims recite additional mental processes and also add limitations related to “a neural network”. Here, the mental process of thinking can be performed in the human mind, such that this thinking utilizes the neural network of one’s brain. As such, these claims are also rejected for being directed to a judicial exception. 
However, it is noted that dependent claim 31 and dependent claim 49 both recite features that appear to be more than just a judicial exception, and thus appear to be patent eligible. Particularly, claim 31 recites “wherein producing signals representing the quality assessment value and the image property for causing the quality assessment value and the image property to be associated with the set of ultrasound images comprises producing signals for causing a representation of the quality assessment value and a representation of the image property to be displayed by at least one display in association with the set of ultrasound images.” Further, dependent claim 49 recites “wherein the at least one processor is configured to produce signals for causing a representation of the quality assessment value and a representation of the image property to be displayed by at least one display in association with the set of ultrasound images.” With the inclusion of “causing a representation of the quality assessment value and a representation of the image property to be displayed by at least one display” in both claims 31 and 49, these claims appear to add structure, and are seen to distinguish over simply being a judicial exception. 
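For orientation only, the claim-21 method analyzed above can be viewed as a short processing pipeline: receive images, derive features, determine a quality value and an image property, and produce a record associating both with the image set. The sketch below is hypothetical; the toy "mean intensity" feature, the 0.5 threshold, and the view label are illustrative stand-ins, not anything taken from the ‘678 Patent or the claims:

```python
# Hypothetical sketch of the claim-21 steps; all names, the toy
# "mean intensity" feature, the 0.5 threshold, and the view label
# are illustrative assumptions, not content of the '678 Patent.
def extract_features(images):
    # "deriving one or more extracted feature representations"
    return [sum(img) / len(img) for img in images]  # mean intensity per image

def assess(images):
    features = extract_features(images)
    # "determining ... a quality assessment value"
    quality_value = round(sum(features) / len(features), 6)
    # "determining ... an image property" (here, a stand-in view label)
    image_property = "apical-4-chamber" if quality_value > 0.5 else "unknown"
    # "producing signals ... to be associated with the set of ultrasound images"
    return {"quality": quality_value, "property": image_property}

clips = [[0.2, 0.9, 0.7], [0.4, 0.6, 0.8]]  # stand-in pixel data for two images
record = assess(clips)
print(record)
```

At this level of abstraction each step is a simple computation, which mirrors the examiner's characterization of the steps as mental processes and mathematical concepts; the claims themselves recite no particular implementation.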
Continuing, with respect to claims 32-40, independent claim 32 recites a computer-implemented method that comprises: “receiving signals representing a plurality of sets of ultrasound training images; receiving signals representing quality assessment values, each of the quality assessment values associated with one of the sets of ultrasound training images and representing a quality assessment of the associated set of ultrasound training images; receiving signals representing image properties, each of the image properties associated with one of the sets of ultrasound training images; and training a neural network, the training comprising, for each set of the plurality of sets of ultrasound training images, using the set of ultrasound training images as an input to the neural network and using the quality assessment values and the image properties associated with the set of ultrasound training images as desired outputs of the neural network.”

Here, independent claim 32 recites a method that performs an abstract idea, whereby the three steps above of “receiving signals” appear to be a combination of mental processes, with the processes seen as being “an observation, evaluation, judgment, opinion”, while the “training a neural network” appears to be some sort of a mathematical concept. Here, these limitations appear to be a combination of categories of abstract ideas. See MPEP 2106.04(a)(2). Thus, claim 32 is seen to recite a combination of these abstract ideas.

These judicial exceptions are not integrated into a practical application because the claim, as a whole, does not contain any additional elements outside of the judicial exceptions. Here, in the limitations, there are no additional claimed elements besides the judicial exceptions noted above, whereby this is insufficient to integrate the judicial exception into a practical application. 
In this regard, MPEP 2106.04(II)(A), as quoted above with respect to claim 21, again applies: if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application, and if there are no additional elements in the claim, then it cannot be eligible.

Here, claim 32 does not affirmatively recite any action that is done with any “desired outputs” in the “training” step, only that there are desired outputs of the neural network. As currently worded, the claim as a whole merely generates an output after some mathematical process. 
Perhaps one way to resolve the issue is to amend the claims to positively recite specific elements or circuits of a computer that perform these functions, so as to differentiate the claim from a generic, general-purpose computer, and also to recite some functionality that makes use of the produced data, so as to improve the functioning of a computer. This type of functionality is described in the specification of the ‘678 Patent, whereby in col. 4, lines 8-27, the ‘678 Patent states “An intended use of the acquired video clips is then specified, for example, to compute a measurement of the organ in furtherance of the computation of a measurement in respect to a specified diagnostic procedure, and a rule from a rules base retrieved indicating a specific view, modality and quality requirement for the intended use. Optionally, the rule indicates a presentation arrangement of video clips in a viewer. Based upon the indication by the rule of the specific view, modality and quality requirement for the intended use, the acquired video clips are filtered to produce a subset of video clips of the specific view, modality and quality. Finally, the subset of video clips is provided as input to a diagnostic viewer presenting the subset of video clips for viewing by a health care professional. Optionally, the viewer arranges the presentation of the subset of video clips in accordance with the rule. In particular, the arrangement of the presentation of the subset of the video clips may include a re-ordering of the subset of the video clips so that the most relevant ones of the video clips are first presented to the health care professional.”

Dependent claims 33-40 recite further limitations that also fall within the abstract ideas discussed above. These claims recite additional mental processes and also add limitations related to “a neural network”.
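The flow quoted above from the ‘678 Patent's specification, retrieving a rule for the selected procedure and target, filtering clips by view, modality, and quality, and ordering the survivors for presentation, can be sketched as follows. The rules base contents, clip fields, and thresholds are all hypothetical stand-ins:

```python
# Illustrative sketch of rule-based clip selection: look up a rule for the
# (procedure, target) pair, keep only clips meeting its view, modality, and
# quality requirement, and present the most relevant (here, highest-quality)
# clips first. All values below are invented for illustration.
rules_base = {
    ("ejection_fraction", "heart"): {
        "view": "apical_four_chamber",
        "modality": "b_mode",
        "min_quality": 0.5,
    },
}

clips = [
    {"id": 1, "view": "apical_four_chamber", "modality": "b_mode", "quality": 0.8},
    {"id": 2, "view": "parasternal_long_axis", "modality": "b_mode", "quality": 0.9},
    {"id": 3, "view": "apical_four_chamber", "modality": "b_mode", "quality": 0.3},
    {"id": 4, "view": "apical_four_chamber", "modality": "b_mode", "quality": 0.6},
]

def select_clips(procedure, target, clips, rules_base):
    rule = rules_base[(procedure, target)]
    subset = [
        c for c in clips
        if c["view"] == rule["view"]
        and c["modality"] == rule["modality"]
        and c["quality"] >= rule["min_quality"]
    ]
    # Re-order so the most relevant clips are presented first.
    return sorted(subset, key=lambda c: c["quality"], reverse=True)

selected = select_clips("ejection_fraction", "heart", clips, rules_base)
```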
Here, the mental process of thinking can be performed in the human mind, such that this thinking utilizes the neural network of one’s brain. As such, these claims are also rejected for being directed to a judicial exception.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-50 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Patent Application Publication 2019/0130554 to Rothberg et al. (“Rothberg”; cited in the Information Disclosure Statement dated 2/16/2023).

Regarding claim 1, Rothberg discloses a method for clip selection for medical imaging [see Abstract; also see Figs. 22 and 23] comprising: receiving through an interface to a medical imaging device, a selection of a diagnostic procedure and a target portion of a mammalian body [input/output device 2203, seen in Fig. 22; also see paragraphs 0108-0109, wherein “The input/output (I/O) devices 2203 may be configured to facilitate communication with other systems and/or an operator. 
Example I/O devices 2203 that may facilitate communication with an operator include: a keyboard, a mouse, a trackball, a microphone, a touch screen, a printing device, a display screen, a speaker, and a vibration device.”; also see paragraph 0067, wherein “The offline quality may be a quality calculated after the sequence of ultrasound images has been saved to memory. The process 300 begins with act 302. In act 302, the computing device receives a selection to perform an automatic measurement. In some embodiments, the operator may select an option displayed on the display screen of the computing device to perform the automatic measurement. In some embodiments, the operator may select an option to record a sequence of ultrasound images, and after recording the sequence of ultrasound images, the computing device may automatically proceed with performing the automatic measurement.”]; acquiring a multiplicity of video clip imagery of the target portion utilizing the medical imaging device [see paragraphs 0067-0068, wherein “In act 304, the computing device receives a sequence of images. The sequence of images may have been saved to memory, for example, based on the operator selecting an option to record a sequence of ultrasound images. The memory may be, for example, on the computing device or a server in communication with the computing device.”]; storing the video clip imagery in an image store [see paragraph 0068, wherein “In act 304, the computing device receives a sequence of images. The sequence of images may have been saved to memory, for example, based on the operator selecting an option to record a sequence of ultrasound images. The memory may be, for example, on the computing device or a server in communication with the computing device.”; also see paragraphs 0082-0083, wherein “FIG. 4 shows a conceptual illustration of a process for saving images to memory (e.g., non-volatile memory) for further use in accordance with certain embodiments discussed herein. 
For example, the images saved to memory may be used for automatically performing measurements (e.g., automatically calculating ejection fraction). In FIG. 4, consecutive images are represented as boxes with consecutively increasing numbers. In stage 402, the sequence of the ten previously collected images, images 1-10, are saved to a temporary storage buffer (e.g., volatile memory).”; also see paragraph 0116, wherein “The computing device 2302 may be connected to the network 2316 over a wired connection (e.g., via an Ethernet cable) and/or a wireless connection (e.g., over a WiFi network). As shown in FIG. 23, these external devices may include servers 2318, workstations 2320, and/or databases 2322. The computing device 2302 may communicate with these devices to, for example, off-load computationally intensive tasks. For example, the computing device 2302 may send an ultrasound image over the network 2316 to the server 2318 for analysis (e.g., to identify an anatomical feature in the ultrasound) and receive the results of the analysis from the server 2318. Additionally (or alternatively), the computing device 2302 may communicate with these devices to access information that is not available locally and/or update a central information repository. For example, the computing device 2302 may access the medical records of a subject being imaged with the ultrasound imaging device 2314 from a file stored in the database 2322.”]; image processing each video clip of the video clip imagery to determine a view and a quality of each video clip [see paragraphs 0069-0077, wherein “In act 306, the computing device calculates the offline quality of the sequence of images. 
In some embodiments, calculating the offline quality may include calculating a quality of each of the images in the sequence (as discussed above with reference to process 100) and selecting the offline quality as the image quality at a specific quantile (5th percentile, 10th percentile, 15th percentile, 20th percentile, 25th percentile, 30th percentile, 35th percentile, 40th percentile, or any suitable percentile) of the image qualities of all the images in the sequence. In some embodiments, the specific percentile used may depend on the type of images in the sequence. For example, in some embodiments a higher percentile may be used for apical four chamber views of the heart vs. parasternal long axis views of the heart (e.g., 15th percentile for apical four chamber views of the heart and 25th percentile for parasternal long axis views of the heart) while in other embodiments a higher percentile may be used for parasternal long axis views of the heart vs. apical four chamber views of the heart.”]; retrieving a rule from a rules base corresponding to the selected diagnostic procedure and target portion, the rule specifying a requisite view and quality of the video clip imagery and also a presentation arrangement of the video clip imagery in the interface [see paragraphs 0069-0070, wherein “…In some embodiments, the specific percentile used may depend on the type of images in the sequence. For example, in some embodiments a higher percentile may be used for apical four chamber views of the heart vs. parasternal long axis views of the heart (e.g., 15th percentile for apical four chamber views of the heart and 25th percentile for parasternal long axis views of the heart) while in other embodiments a higher percentile may be used for parasternal long axis views of the heart vs. apical four chamber views of the heart.”; also see paragraphs 0116-0125, wherein “The input layer 2404 may be followed by one or more convolution and pooling layers 2410. 
A convolutional layer may include a set of filters that are spatially smaller (e.g., have a smaller width and/or height) than the input to the convolutional layer (e.g., the image 2402). Each of the filters may be convolved with the input to the convolutional layer to produce an activation map (e.g., a 2-dimensional activation map) indicative of the responses of that filter at every spatial position.”]; applying the retrieved rule to the video clip imagery as a filter to produce a subset of video clip imagery satisfying the specified requisite view and quality [see paragraphs 0075-0077, wherein “In act 308, the computing device determines whether the offline quality exceeds (or in some embodiments, exceeds or is equal to) a threshold quality (e.g., 50% on a scale of 0% to 100%). If the offline quality exceeds the threshold quality, the process 300 proceeds from act 308 to act 310. …Act 310 proceeds if the offline quality exceeds the threshold quality. In act 310, the computing device performs the automatic measurement on the sequence of ultrasound images. In some embodiments, the computing device may display the automatic measurement, and the computing device may also display the offline quality (e.g., as a number) of the sequence of ultrasound images on which the automatic measurement was performed.”; also see paragraphs 0116-0125]; and, storing the subset of video clip imagery in the image store along with the specified presentation arrangement [see paragraphs 0082-0083, wherein “In some embodiments, the computing device may automatically save to memory (e.g., non-volatile memory) a sequence of images upon calculating that the live quality of the sequence of images exceeds a threshold quality. For example, the computing device continuously saving in a buffer (e.g., volatile memory) the previous N images collected during imaging. 
The computing device may further remove, when a new image is collected, the oldest image in the buffer (i.e., the image in the buffer that was collected longest ago) from the buffer, adding the new image to the buffer, and calculating the quality of the sequence of image currently in the buffer. The computing may further automatically save to memory the sequence of images to memory if the live quality of the sequence of images exceeds a threshold, or if the live quality of the sequence of images does not exceed a threshold, receive another image to replace the oldest image and repeating the above procedure for the new sequence of images in the buffer.”; also see paragraph 0116, wherein “In this example, the computing device 2302 may also provide one or more captured ultrasound images of the subject to the database 2322 to add to the medical record of the subject.”; also see paragraph 0120, wherein “Once the training data has been created, the training data may be loaded to a database (e.g., an image database) and used to train a neural network using deep learning techniques.”].

Regarding claim 2, Rothberg discloses the method discussed above in claim 1, and further comprising: on condition that a video clip satisfying the specified requisite view and quality in the video clip imagery is determined upon the application of the retrieved rule not to exist in the video clip imagery, generating an alert through the interface of the medical imaging device [see step 216 in Fig. 2, and see step 312 in Fig. 3; also see paragraph 0064; also see paragraph 0077; also see Figs. 11 and 21]. 
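The buffering scheme Rothberg describes in the passages quoted in the claim 1 mapping above, keeping the previous N images in a rolling buffer, evicting the oldest when a new image arrives, and saving the buffered sequence once its live quality exceeds a threshold, can be sketched as follows. The quality function and all scores are placeholders:

```python
from collections import deque

# Illustrative sketch of the rolling-buffer save described by Rothberg:
# keep the last N images, evict the oldest on each new arrival, and copy
# the sequence to persistent storage when its "live quality" clears a
# threshold. Images are represented by placeholder quality scores.
N = 10
THRESHOLD = 0.5

buffer = deque(maxlen=N)   # the oldest image is evicted automatically
saved_sequences = []       # stands in for non-volatile memory

def live_quality(seq):
    # Placeholder: mean of per-image quality scores.
    return sum(seq) / len(seq)

def on_new_image(quality_score):
    buffer.append(quality_score)
    if len(buffer) == N and live_quality(buffer) > THRESHOLD:
        saved_sequences.append(list(buffer))

for score in [0.2, 0.3, 0.4, 0.4, 0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.9]:
    on_new_image(score)
```

With these placeholder scores, the buffer first fills at the tenth image and its mean quality clears the threshold, so the sequence is saved; the eleventh image evicts the oldest score and the refreshed sequence is saved again.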
Regarding claim 3, Rothberg discloses the method discussed above in claim 1, and further teaches wherein the image processing of each video clip comprises submitting each video clip to a neural network trained to generate output indicating a recognized view in a submitted video clip at a specified level of confidence [see paragraphs 0116-0121, wherein “Aspects of the technology described herein relate to the application of automated image processing techniques to analyze images, such as ultrasound images. In some embodiments, the automated image processing techniques may include machine learning techniques such as deep learning techniques. … Deep learning techniques may include those machine learning techniques that employ neural networks to make predictions. Neural networks typically include a collection of neural units (referred to as neurons) that each may be configured to receive one or more inputs and provide an output that is a function of the input. For example, the neuron may sum the inputs and apply a transfer function (sometimes referred to as an “activation function”) to the summed inputs to generate the output. The neuron may apply a weight to each input, for example, to weight some inputs higher than others. Example transfer functions that may be employed include step functions, piecewise linear functions, and sigmoid functions. These neurons may be organized into a plurality of sequential layers that each include one or more neurons. The plurality of sequential layers may include an input layer that receives the input data for the neural network, an output layer that provides the output data for the neural network, and one or more hidden layers connected between the input and output layers.”]. 
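The claim 3 notion of a recognized view “at a specified level of confidence” is commonly realized as a softmax over per-view scores with a confidence cutoff. A minimal sketch, with hard-coded logits standing in for a trained network's output and invented view names:

```python
import math

# Illustrative sketch: a network emits a score (logit) per candidate view;
# the clip's view is "recognized" only when the softmax probability of the
# top view meets the specified confidence level.
VIEWS = ["apical_four_chamber", "parasternal_long_axis", "subcostal"]

def recognize_view(logits, confidence=0.8):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= confidence:
        return VIEWS[best], probs[best]
    return None, probs[best]    # below the specified level of confidence

view, p = recognize_view([4.0, 1.0, 0.5])
```

A clearly dominant logit yields a recognized view; near-uniform logits fall below the cutoff and yield no recognition.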
Regarding claim 4, Rothberg discloses the method discussed above in claim 1, and further teaches wherein the image processing of each video clip comprises submitting each video clip to a content-based image retrieval system adapted to compare a submitted video clip to a data store of known images of particular views so as to indicate a recognized view in the submitted video clip [see paragraphs 0116-0125, wherein “As shown in FIG. 23, these external devices may include servers 2318, workstations 2320, and/or databases 2322. The computing device 2302 may communicate with these devices to, for example, off-load computationally intensive tasks. For example, the computing device 2302 may send an ultrasound image over the network 2316 to the server 2318 for analysis (e.g., to identify an anatomical feature in the ultrasound) and receive the results of the analysis from the server 2318. Additionally (or alternatively), the computing device 2302 may communicate with these devices to access information that is not available locally and/or update a central information repository. For example, the computing device 2302 may access the medical records of a subject being imaged with the ultrasound imaging device 2314 from a file stored in the database 2322. In this example, the computing device 2302 may also provide one or more captured ultrasound images of the subject to the database 2322 to add to the medical record of the subject. …Aspects of the technology described herein relate to the application of automated image processing techniques to analyze images, such as ultrasound images. In some embodiments, the automated image processing techniques may include machine learning techniques such as deep learning techniques. … Deep learning techniques may include those machine learning techniques that employ neural networks to make predictions. 
Neural networks typically include a collection of neural units (referred to as neurons) that each may be configured to receive one or more inputs and provide an output that is a function of the input. For example, the neuron may sum the inputs and apply a transfer function (sometimes referred to as an “activation function”) to the summed inputs to generate the output. The neuron may apply a weight to each input, for example, to weight some inputs higher than others. Example transfer functions that may be employed include step functions, piecewise linear functions, and sigmoid functions.”]. 
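The content-based image retrieval of claim 4, comparing a submitted clip against a data store of known images of particular views, reduces to nearest-neighbor search over feature vectors. A minimal sketch with synthetic features; cosine similarity is one common, assumed choice of comparison:

```python
import numpy as np

# Illustrative sketch of content-based retrieval: compare a submitted clip's
# feature vector against stored features for known views and report the
# best-matching view. All feature vectors here are synthetic.
rng = np.random.default_rng(1)

known = {
    "apical_four_chamber": rng.normal(size=8),
    "parasternal_long_axis": rng.normal(size=8),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_view(query):
    return max(known, key=lambda v: cosine(known[v], query))

# A query close to one stored exemplar should retrieve that view.
query = known["apical_four_chamber"] + 0.05 * rng.normal(size=8)
match = retrieve_view(query)
```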

Prosecution Timeline

Feb 16, 2023
Application Filed
Feb 16, 2023
Response after Non-Final Action
Aug 07, 2025
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent RE50756
Terminal Device and Printer
2y 5m to grant Granted Jan 20, 2026
Patent RE50750
SIGNALING A SYNCHRONIZATION FRAME TRANSMISSION REQUEST
2y 5m to grant Granted Jan 13, 2026
Patent RE50286
INTRA-PERINODULAR TEXTURAL TRANSITION (IPRIS): A THREE DIMENISONAL (3D) DESCRIPTOR FOR NODULE DIAGNOSIS ON LUNG COMPUTED TOMOGRAPHY (CT) IMAGES
2y 5m to grant Granted Jan 28, 2025
Patent RE50114
DEVICE, FINGERPRINT INPUT DEVICE AND MACHINE-READABLE MEDIUM
2y 5m to grant Granted Sep 10, 2024
Patent RE49898
IMAGE FORMING SYSTEM, INFORMATION FORMING APPARATUS, AND COMPUTER READABLE MEDIUM HAVING MANAGEMENT APPARATUS WITH DISTRIBUTED STORAGE
2y 5m to grant Granted Apr 02, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
34%
Grant Probability
58%
With Interview (+24.2%)
5y 2m
Median Time to Grant
Low
PTA Risk
Based on 47 resolved cases by this examiner. Grant probability derived from career allow rate.
