Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is responsive to the patent application filed on 9/5/2023.
This action is made Non-Final.
Claims 1-20 are pending in the case. Claims 1, 10, and 15 are independent claims.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 9/5/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings filed on 9/5/2023 have been accepted by the Examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 5, 6, 8-11, 14-17, 19 and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
In determining whether a claim falls within an excluded category, the Examiner is guided by the Court’s two-part framework, described in Mayo and Alice. Id. at 217-18 (citing Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 75-77 (2012)); Bilski v. Kappos, 561 U.S. 593, 611 (2010); 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (Jan. 7, 2019), and the October 2019 Update of the 2019 Revised Guidance (Oct. 17, 2019).
Step 1
Claims are eligible for patent protection under § 101 if they are in one of the four statutory categories and not directed to a judicial exception to patentability (i.e., laws of nature, natural phenomena, and abstract ideas). Alice Corp. v. CLS Bank Int'l, 573 U.S. ____ (2014). Claim 1 is directed to a statutory category, because a series of steps for analyzing a set of medical images satisfies the requirements of a process (a series of acts). (Step 1: Yes).
Next, the claim is analyzed to determine whether it is directed to a judicial exception.
Step 2A – Prong 1
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of medical image analysis without significantly more. The claim recites:
1. A computer-implemented method comprising: receiving 1) a first medical image depicting an anatomical object at a first time and 2) a second medical image depicting the anatomical object at a second time; encoding the first medical image into a first set of features; encoding the second medical image into a second set of features; encoding the first set of features and the second set of features into a set of longitudinal features; performing a medical imaging analysis task on longitudinal changes depicted in the first medical image and the second medical image using a machine learning based network based on the set of longitudinal features; and outputting results of the medical imaging analysis task.
The limitations of receiving first and second images, encoding the first and second images into first and second sets of features, encoding the first and second sets of features into a set of longitudinal features, performing a medical imaging analysis task on longitudinal changes, and outputting results, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind and/or certain methods of organizing human activity but for the recitation of generic computer components. (Note: the Examiner’s language (e.g., “receiving images,” “encoding images,” etc.) is an abbreviated reference to the detailed claim steps and is not an oversimplification of the claim language; the Examiner employs such shortcuts (that refer to more specific steps) when explaining the rejection.) That is, other than reciting “a machine learning based network,” nothing in the claim element precludes the steps from practically being performed in the mind and/or performed as organized human activity. Aside from the general technological environment (addressed below), the claim covers purely mental concepts and/or certain methods of organizing human activity, and the mere nominal recitation of a generic network appliance (e.g., an interface for inputting or outputting data, or generic network-based storage devices and displays) does not take the claim limitation out of the mental processes and/or certain methods of organizing human activity grouping.
As for the limitations reciting the use of machine learning (ML) technology for data processing, said recitation does not make the claim patent eligible, because said tools are utilized merely for data gathering and are not utilized in the express manipulation and control of functional aspects and/or hardware components/equipment of real-world processes and systems using the output of AI models (e.g., manufacturing processes and equipment, medical treatments, communications processes and systems, logistics systems and hardware, interactive smart phone apps, etc.).
Specifically, as to utilizing statistical tools to process the data and to output the estimated values, said functions could be performed by a human using mental steps or basic critical thinking, which are types of activities that have been found by the courts to represent abstract ideas (e.g., the mental comparison of a sample or test subject to control or target data in Ambry and Myriad CAFC, or diagnosing an abnormal condition by performing clinical tests and thinking about the results in In re Grams, 888 F.2d 835 (Fed. Cir. 1989) (Grams)). In Grams, the recited functions required obtaining data or patient information (from sensors) and analyzing that data to ascertain the existence and identity of an abnormality or estimated responses, and possible causes thereof. While said functions are performed by a computer, they are in essence a mathematical algorithm, in that they represent "[a] procedure for solving a given type of mathematical problem." Gottschalk v. Benson, 409 U.S. 63, 65, 93 S.Ct. 253, 254, 34 L.Ed.2d 273 (1972). Moreover, the Federal Circuit has held, “without additional limitations, a process that employs mathematical algorithms to manipulate existing information to generate additional information is not patent eligible.” Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1351 (Fed. Cir. 2014). Further, “analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, [are] essentially mental processes within the abstract-idea category.” Elec. Power, 830 F.3d at 1354; see also Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1146 (Fed. Cir. 2016). “[T]he fact that the required calculations could be performed more efficiently via a computer does not materially alter the patent eligibility of the claimed subject matter.” Bancorp Servs., L.L.C. v. Sun Life Assurance Co. of Can. (U.S.), 687 F.3d 1266, 1278 (Fed. Cir. 2012).
The claimed concept is similar to other abstract ideas held to be non-statutory by the courts. See also Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363 (Fed. Cir. 2015) (tailoring sales information presented to a user based on, e.g., user data and time data); Electric Power Grp., LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016) (collecting information, analyzing it, and displaying certain results of the collection and analysis, where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind); DataTreasury Corp. v. Fidelity National Information Services, 669 Fed. Appx. 572 (Fed. Cir. 2016) (remote image capture with centralized processing and storage).
Further, regarding parsing and extracting data, in Content Extraction & Transmission LLC v. Wells Fargo Bank, National Ass’n, Nos. 13-1588, -1589, 14-1112, -1687 (Fed. Cir. Dec. 23, 2014), the Federal Circuit affirmed that such limitations were generally directed to “the abstract idea of 1) collecting data, 2) recognizing certain data within the collected data set, and 3) storing that recognized data in a memory.” The Court explained that “[t]he concept of data collection, recognition, and storage is undisputedly well-known,” and noted that “humans have always performed these functions.” Id. The Court then rejected CET’s argument that the claims were patent eligible because they required hardware to perform functions that humans cannot, such as processing and recognizing the stream of bits output by the scanner. Comparing the asserted claims to “the computer-implemented claims in Alice,” the Court concluded that the claims were “drawn to the basic concept of data recognition and storage,” even though they recited a scanner. Id. at 8. Mental processes, e.g., parsing and extracting, as recited in claim 1, remain unpatentable even when automated to reduce the burden on the user of what once could have been done with pen and paper. CyberSource Corp. at 1375 (“That purely mental processes can be unpatentable, even when performed by a computer, was precisely the holding of the Supreme Court in Gottschalk v. Benson, [409 U.S. 63 (1972)].”).
As per receiving, storing and outputting data limitations, it has been held that “As many cases make clear, even if a process of collecting and analyzing information is ‘limited to particular content’ or a particular ‘source,’ that limitation does not make the collection and analysis other than abstract.” SAP Am., Inc. v. InvestPic, LLC, 898 F.3d 1161, 1168 (Fed. Cir. 2018) (citation omitted); see also In re Jobin, 811 F. App’x 633, 637 (Fed. Cir. 2020) (claims to collecting, organizing, grouping, and storing data using techniques such as conducting a survey or crowdsourcing recited a method of organizing human activity, which is a hallmark of abstract ideas).
All these cases describe the significant aspects of the claimed invention, albeit at another level of abstraction. See Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1240-41 (Fed. Cir. 2016) ("An abstract idea can generally be described at different levels of abstraction. As the Board has done, the claimed abstract idea could be described as generating menus on a computer, or generating a second menu from a first menu and sending the second menu to another location. It could be described in other ways, including, as indicated in the specification, taking orders from restaurant customers on a computer.").
Therefore, if a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes”, and/or “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. (Step 2A – Prong 1: Yes).
Step 2A – Prong 2
In Prong Two, the Examiner determines whether claim 1, as a whole, recites additional elements that integrate the judicial exception into a practical application of the exception, i.e., whether the additional elements apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is no more than a drafting effort designed to monopolize the judicial exception. See Guidance, 84 Fed. Reg. at 54-55. If the additional elements do not integrate the judicial exception into a practical application, then the claim is directed to the judicial exception. See id., 84 Fed. Reg. at 54. “An additional element [that] reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field” is indicative of integrating a judicial exception into a practical application. See Guidance, 84 Fed. Reg. at 55.
The Examiner determined that this judicial exception is not integrated into a practical application, because there are no meaningful limitations that transform the exception into a patent eligible application. In particular, the claim recites additional elements – a computer-implemented method to perform the steps of receiving first and second images, encoding the first and second images into first and second sets of features, encoding the first and second sets of features into a set of longitudinal features, performing a medical imaging analysis task on longitudinal changes, and outputting results. However, the computer in each step is recited (or implied) at a high level of generality, i.e., as a generic computer performing generic computer functions of processing data, including receiving, extracting and encoding, analyzing, and outputting data. This generic computer limitation is no more than mere instructions to apply the exception using a generic computer component. The processor that performs the recited steps merely automates steps which can be done mentally or manually. Thus, while the additional elements have and execute instructions to perform the abstract idea itself, this also does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to "apply it." The claim only manipulates abstract data elements into another form, and does not set forth improvements to another technological field or the functioning of the computer itself; instead, it uses computer elements as tools in a conventional way to implement the abstract idea identified above. As for the limitations reciting the use of ML technology for data processing, said steps are nothing more than an attempt to recycle preexisting artificial intelligence or machine-learning (AI/ML) technologies by applying them to medical image analysis applications.
There are no improvements in said ML techniques, such as advances in the field of computer science itself, or designing a new neural network, and there is no controlling of a technological process using the outcome of said AI/ML operations.
Further, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually; there is no indication that the combination of elements improves the functioning of a computer or improves any other technology, including AI/ML technology; their collective functions merely provide conventional computer implementation. None of the additional elements "offers a meaningful limitation beyond generally linking 'the use of the [method] to a particular technological environment,' that is, implementation via computers." Alice Corp., slip op. at 16 (citing Bilski v. Kappos, 561 U.S. 593, 610-11 (2010)).
Also, the recited steps do not control or improve operation of a machine (MPEP 2106.05(a)), do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)), and do not apply the judicial exception with, or by use of, a particular machine (MPEP 2106.05(b)); instead, they require receiving, extracting and encoding, analyzing, and outputting data.
As for the receiving first and second images and outputting results limitations, these recitations amount to mere data gathering and/or outputting, constitute insignificant post-solution or extra-solution activity, and represent a nominal recitation of technology. Insignificant “post-solution” or “extra-solution” activity means activity that is not central to the purpose of the method invented by the applicant. See also: “(c) Whether its involvement is extra-solution activity or a field-of-use, i.e., the extent to which (or how) the machine or apparatus imposes meaningful limits on the execution of the claimed method steps. Use of a machine or apparatus that contributes only nominally or insignificantly to the execution of the claimed method (e.g., in a data gathering step or in a field-of-use limitation) would weigh against eligibility”. See Bilski, 130 S. Ct. at 3230 (citing Parker v. Flook, 437 U.S. 584, 590, 198 USPQ 193, ___ (1978)). Thus, claim drafting strategies that attempt to circumvent the basic exceptions to § 101 using, for example, highly stylized language, hollow field-of-use limitations, or the recitation of token post-solution activity should not be credited. See Bilski, 130 S. Ct. at 3230.
Therefore, the method as a whole outputs only a data structure; everything remains in the form of code stored in the computer memory. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea. (Step 2A – Prong 2: No).
Step 2B
If a claim has been determined to be directed to a judicial exception under revised Step 2A, examiners should then evaluate the additional elements individually and in combination under Step 2B to determine whether they provide an inventive concept (i.e., whether the additional elements amount to significantly more than the exception itself).
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a computer to perform the recited steps amounts to no more than mere instructions to apply the exception using a generic computer component. The claim’s data receiving and outputting steps were considered to be extra-solution activity in Step 2A, and thus they are re-evaluated in Step 2B to determine if they are more than what is well-understood, routine, conventional activity in the field.
The system would require a processor and memory in order to perform basic computer functions of receiving medical images, encoding the received images for further analysis, and outputting results of the analysis. These components are not explicitly recited and therefore must be construed at the highest level of generality. Based on the Specification, the invention utilizes existing, conventional communication networks and generic processors (such as can be found in mobile devices or desktop computers), conventional memory and display devices, and conventional AI/ML techniques, and the functions performed by said generic computer elements are basic functions of a computer – performing a mathematical operation and receiving, storing and outputting data – that have been recognized by the courts as routine and conventional activity.
Here, the Examiner notes that the use of AI/ML techniques in various fields of research and development is very common and is “well known in the art,” given that Donald Hebb created a model of brain cell interaction and described it in his book titled “The Organization of Behavior” in 1949. Hebb’s model involves altering the relationships between artificial neurons/nodes and the changes to individual neurons, wherein the relationship between two neurons/nodes strengthens if the two neurons/nodes are activated at the same time and weakens if they are activated separately, and wherein nodes/neurons tending to be both positive or both negative are described as having strong positive weights, and those nodes tending to have opposite weights develop strong negative weights. And Markov decision processes (MDPs) were known at least as early as the 1950s; a core body of research on Markov decision processes resulted from Ronald Howard’s 1960 book, “Dynamic Programming and Markov Processes.” They are used in many disciplines, including robotics, automatic control, economics and manufacturing.
Further, the recited functions do not improve the functioning of the computer itself, including the processor(s) or the network elements. There are no physical improvements in the claim, such as a faster processor or more efficient memory, and there is no operational improvement, such as a mathematical computation that improves the functioning of the computer. Applicant did not invent a new type of computer; Applicant, like everyone else, programs a computer to perform functions. The Supreme Court in Alice indicated that an abstract claim might be statutory if it improved another technology or the computer processing itself. Using a (programmed) computer to implement an otherwise abstract process does neither. The Federal Circuit has recognized that "an invocation of already-available computers that are not themselves plausibly asserted to be an advance, for use in carrying out improved mathematical calculations, amounts to a recitation of what is 'well-understood, routine, [and] conventional.'" SAP Am., Inc. v. InvestPic, LLC, 890 F.3d 1016, 1023 (Fed. Cir. 2018) (alteration in original) (citing Mayo v. Prometheus, 566 U.S. 66, 73 (2012)). Apart from the instructions to implement the abstract idea, the additional elements only serve to perform well-understood functions (e.g., receiving, encoding, analyzing, and outputting data; see the Specification, as well as Alice Corp.; Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307 (Fed. Cir. 2016); and Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334 (Fed. Cir. 2015), covering the well-known nature of these computer functions). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually; there is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
“However, it is not apparent how appellant’s programmed digital computer can produce any synergistic result. Instead, the computer will simply do the job it is instructed to do. Where is there any surprising or unexpected result? The unlikelihood of any such result is merely one more reason why patents should not be granted in situations where the only novelty is in the programming of general purpose digital computers.” See Sakraida v. Ag Pro, Inc., 425 U.S. 273 [96 S.Ct. 1532, 47 L.Ed.2d 784], 189 USPQ 449 (1976), and A&P Tea Co. v. Supermarket Corp., 340 U.S. 147 [71 S.Ct. 127, 95 L.Ed. 162], 87 USPQ 303 (1950).
Furthermore, there is no transformation recited in the claim as understood in view of 35 U.S.C. 101. The steps of receiving first and second images, encoding the first and second images into first and second sets of features, encoding the first and second sets of features into a set of longitudinal features, performing a medical imaging analysis task on longitudinal changes, and outputting results merely represent abstract ideas, which cannot meet the transformation test because they are not physical objects or substances. Bilski, 545 F.3d at 963. Said steps are nothing more than the mere manipulation or reorganization of data, which does not satisfy the transformation prong. It is further noted that the underlying idea of the recited steps could be performed via pen and paper or in a person’s mind. Moreover, “We agree with the district court that the claimed process manipulates data to organize it in a logical way such that additional fraud tests may be performed. The mere manipulation or reorganization of data, however, does not satisfy the transformation prong.” And “Abele made clear that the basic character of a process claim drawn to an abstract idea is not changed by claiming only its performance by computers, or by claiming the process embodied in program instructions on a computer readable medium. Thus, merely claiming a software implementation of a purely mental process that could otherwise be performed without the use of a computer does not satisfy the machine prong of the machine-or-transformation test.” CyberSource, 659 F.3d 1057, 100 U.S.P.Q.2d 1492 (Fed. Cir. 2011).
Therefore, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, because, when considered separately and in combination, the claim elements do not add significantly more to the exception. Considered separately and as an ordered combination, the claim elements do not provide an improvement to another technology or technical field; do not provide an improvement to the functioning of the computer itself; do not apply the judicial exception by use of a particular machine; do not effect a transformation or reduce a particular article to a different state or thing; and do not add a specific limitation other than what is well-understood, routine and conventional in the operation of a generic computer. None of the hardware recited "offers a meaningful limitation beyond generally linking 'the use of the [method] to a particular technological environment,' that is, implementation via computers." Alice Corp., slip op. at 16 (citing Bilski v. Kappos, 561 U.S. 593, 610-11 (2010)). As for the “… automatic electronic health record documentation” recitations, these limitations do not add significantly more because they are simply an attempt to limit the abstract idea to a particular technological environment, that is, implementation via computers. Id. Limiting the claims to the particular technological environment is, without more, insufficient to transform the claim into a patent-eligible application of the abstract idea at its core.
Accordingly, claim 1 is not directed to significantly more than the exception itself, and is not eligible subject matter under § 101. (Step 2B: No).
Further, although the Examiner takes the steps recited in the independent claims as exemplary, the Examiner points out that limitations recited in dependent claims 2, 5, 6, 8 and 9 further narrow the abstract idea but do not make the claims any less abstract. Dependent claims 2, 5, 6, 8 and 9 each merely add further details of the abstract steps recited in claim 1 without including an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of an abstract idea to a particular technological environment. These claims "add nothing of practical significance to the underlying idea," and thus do not transform the claimed abstract idea into patentable subject matter. Ultramercial, 772 F.3d at 716. Therefore, dependent claims 2, 5, 6, 8 and 9 are also directed to non-statutory subject matter.
Because Applicant’s apparatus claims 10, 11 and 14 and CRM claims 15-17, 19 and 20 add nothing of substance to the underlying abstract idea, they too are patent ineligible under § 101.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claims 10-14 invoke 35 U.S.C. 112(f). For example, claim 10 uses the term “means,” satisfying prong (A). The “means” is linked by the transition word “for” to functional language, satisfying prong (B). The phrase is also not modified by sufficient structure or material for performing the claimed function, satisfying prong (C). (If, for instance, the claim recited that the means is a memory, the limitation would fail prong (C) and would not invoke 35 U.S.C. 112(f).)
Since these claim limitations invoke 35 U.S.C. 112(f), claims 10-14 have been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 5, 6, 8-11, 14-17, 19, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kang (USPUB 20200357120 A1).
Claim 1:
Kang discloses A computer-implemented method comprising: receiving 1) a first medical image depicting an anatomical object at a first time and 2) a second medical image depicting the anatomical object at a second time; encoding the first medical image into a first set of features; encoding the second medical image into a second set of features; encoding the first set of features and the second set of features into a set of longitudinal features; performing a medical imaging analysis task on longitudinal changes depicted in the first medical image and the second medical image using a machine learning based network based on the set of longitudinal features; and outputting results of the medical imaging analysis task (0017: According to an exemplary embodiment, a method for predicting brain disease state change, as performed by a brain disease prediction apparatus includes acquiring, by the brain disease prediction apparatus, a plurality of test images, which comprise images obtained by capturing at least a portion of a human brain at a predetermined time interval, performing, by the brain disease prediction apparatus, a pre-processing procedure of converting the plurality of test images into test voxels configured to be processed for image analysis, wherein a respective test voxel of the test voxels is data composed of three-dimensional voxel units, mapping, by the brain disease prediction apparatus, first and second test voxels selected from the test voxels acquired from a patient, with each other on a three-dimensional voxel unit, wherein the first test voxel is acquired at a first time-point and the second test voxel is acquired at a second time-point, in which a predetermined time has elapsed from the first time-point, generating, by the brain disease prediction apparatus, a voxel-based data-set based on the mapped first and second test voxels, extracting, by the brain disease prediction apparatus, a change in the test voxels using a deep neural network, and generating, by the brain disease prediction apparatus, a state change probability model based on the change in the test voxels).
Claim 2:
Kang discloses encoding the first medical image into a first set of features comprises encoding the first medical image with first spatial information to generate the first set of features, and encoding the second medical image into a second set of features comprises encoding the second medical image with second spatial information to generate the second set of features (0017: According to an exemplary embodiment, a method for predicting brain disease state change, as performed by a brain disease prediction apparatus includes acquiring, by the brain disease prediction apparatus, a plurality of test images, which comprise images obtained by capturing at least a portion of a human brain at a predetermined time interval, performing, by the brain disease prediction apparatus, a pre-processing procedure of converting the plurality of test images into test voxels configured to be processed for image analysis, wherein a respective test voxel of the test voxels is data composed of three-dimensional voxel units, mapping, by the brain disease prediction apparatus, first and second test voxels selected from the test voxels acquired from a patient, with each other on a three-dimensional voxel unit, wherein the first test voxel is acquired at a first time-point and the second test voxel is acquired at a second time-point, in which a predetermined time has elapsed from the first time-point, generating, by the brain disease prediction apparatus, a voxel-based data-set based on the mapped first and second test voxels, extracting, by the brain disease prediction apparatus, a change in the test voxels using a deep neural network, and generating, by the brain disease prediction apparatus, a state change probability model based on the change in the test voxels; emphasis added).
Claim 5:
Kang discloses encoding the first medical image into a first set of features comprises combining features representing the first medical image with temporal information associated with the first medical image to generate the first set of features, and encoding the second medical image into a second set of features comprises combining features representing the second medical image with temporal information associated with the second medical image to generate the second set of features (0017: According to an exemplary embodiment, a method for predicting brain disease state change, as performed by a brain disease prediction apparatus includes acquiring, by the brain disease prediction apparatus, a plurality of test images, which comprise images obtained by capturing at least a portion of a human brain at a predetermined time interval, performing, by the brain disease prediction apparatus, a pre-processing procedure of converting the plurality of test images into test voxels configured to be processed for image analysis, wherein a respective test voxel of the test voxels is data composed of three-dimensional voxel units, mapping, by the brain disease prediction apparatus, first and second test voxels selected from the test voxels acquired from a patient, with each other on a three-dimensional voxel unit, wherein the first test voxel is acquired at a first time-point and the second test voxel is acquired at a second time-point, in which a predetermined time has elapsed from the first time-point, generating, by the brain disease prediction apparatus, a voxel-based data-set based on the mapped first and second test voxels, extracting, by the brain disease prediction apparatus, a change in the test voxels using a deep neural network, and generating, by the brain disease prediction apparatus, a state change probability model based on the change in the test voxels; emphasis added).
Claim 6:
Kang discloses encoding the first medical image into a first set of features comprises combining features representing the first medical image with patient demographic information associated with the first medical image to generate the first set of features, and encoding the second medical image into a second set of features comprises combining features representing the second medical image with patient demographic information associated with the second medical image to generate the second set of features (0111, 0130 and 0133).
Claim 8:
Kang teaches the medical imaging analysis task comprises classification of the longitudinal changes depicted in the first medical image and the second medical image (0105-106: the brain disease prediction apparatus 30 in some embodiments compares first feature information obtained from the first test voxel in the first test image acquired at the first time-point and second feature information acquired from the second test voxel in the second test image acquired at the second time-point with each other and performs machine learning of the comparison result, together with considering a time duration elapsed from the first time-point to the second time-point... the test voxel includes one state information among “normal”, “cerebral infarction”, “cerebral hemorrhage”. The change in the test voxel means a change from one state among the normal, cerebral infarction, and cerebral hemorrhage states to another state among them. For example, the test voxel in some embodiments includes information indicating that a state of a first position (a position of a specific voxel) of a brain at the first time-point is “normal” and then a state of the first position of the brain at the second time-point is “cerebral infarction”, and then a state of the first position of the brain at a third time-point is “cerebral hemorrhage”).
Claim 9:
Kang discloses wherein the anatomical object comprises one or more lesions in a brain of a patient (0069, 0100-101).
Claim 10:
Kang discloses An apparatus comprising: means for receiving 1) a first medical image depicting an anatomical object at a first time and 2) a second medical image depicting the anatomical object at a second time; means for encoding the first medical image into a first set of features; means for encoding the second medical image into a second set of features; means for encoding the first set of features and the second set of features into a set of longitudinal features; means for performing a medical imaging analysis task on longitudinal changes depicted in the first medical image and the second medical image using a machine learning based network based on the set of longitudinal features; and means for outputting results of the medical imaging analysis task (0017: According to an exemplary embodiment, a method for predicting brain disease state change, as performed by a brain disease prediction apparatus includes acquiring, by the brain disease prediction apparatus, a plurality of test images, which comprise images obtained by capturing at least a portion of a human brain at a predetermined time interval, performing, by the brain disease prediction apparatus, a pre-processing procedure of converting the plurality of test images into test voxels configured to be processed for image analysis, wherein a respective test voxel of the test voxels is data composed of three-dimensional voxel units, mapping, by the brain disease prediction apparatus, first and second test voxels selected from the test voxels acquired from a patient, with each other on a three-dimensional voxel unit, wherein the first test voxel is acquired at a first time-point and the second test voxel is acquired at a second time-point, in which a predetermined time has elapsed from the first time-point, generating, by the brain disease prediction apparatus, a voxel-based data-set based on the mapped first and second test voxels, extracting, by the brain disease prediction apparatus, a change in the test 
voxels using a deep neural network, and generating, by the brain disease prediction apparatus, a state change probability model based on the change in the test voxels).
Claim 11:
Kang discloses the means for encoding the first medical image into a first set of features comprises means for encoding the first medical image with first spatial information to generate the first set of features, and the means for encoding the second medical image into a second set of features comprises means for encoding the second medical image with second spatial information to generate the second set of features (0017: According to an exemplary embodiment, a method for predicting brain disease state change, as performed by a brain disease prediction apparatus includes acquiring, by the brain disease prediction apparatus, a plurality of test images, which comprise images obtained by capturing at least a portion of a human brain at a predetermined time interval, performing, by the brain disease prediction apparatus, a pre-processing procedure of converting the plurality of test images into test voxels configured to be processed for image analysis, wherein a respective test voxel of the test voxels is data composed of three-dimensional voxel units, mapping, by the brain disease prediction apparatus, first and second test voxels selected from the test voxels acquired from a patient, with each other on a three-dimensional voxel unit, wherein the first test voxel is acquired at a first time-point and the second test voxel is acquired at a second time-point, in which a predetermined time has elapsed from the first time-point, generating, by the brain disease prediction apparatus, a voxel-based data-set based on the mapped first and second test voxels, extracting, by the brain disease prediction apparatus, a change in the test voxels using a deep neural network, and generating, by the brain disease prediction apparatus, a state change probability model based on the change in the test voxels; emphasis added).
Claim 14:
Kang discloses the means for encoding the first medical image into a first set of features comprises means for combining features representing the first medical image with temporal information associated with the first medical image to generate the first set of features, and the means for encoding the second medical image into a second set of features comprises means for combining features representing the second medical image with temporal information associated with the second medical image to generate the second set of features (0017: According to an exemplary embodiment, a method for predicting brain disease state change, as performed by a brain disease prediction apparatus includes acquiring, by the brain disease prediction apparatus, a plurality of test images, which comprise images obtained by capturing at least a portion of a human brain at a predetermined time interval, performing, by the brain disease prediction apparatus, a pre-processing procedure of converting the plurality of test images into test voxels configured to be processed for image analysis, wherein a respective test voxel of the test voxels is data composed of three-dimensional voxel units, mapping, by the brain disease prediction apparatus, first and second test voxels selected from the test voxels acquired from a patient, with each other on a three-dimensional voxel unit, wherein the first test voxel is acquired at a first time-point and the second test voxel is acquired at a second time-point, in which a predetermined time has elapsed from the first time-point, generating, by the brain disease prediction apparatus, a voxel-based data-set based on the mapped first and second test voxels, extracting, by the brain disease prediction apparatus, a change in the test voxels using a deep neural network, and generating, by the brain disease prediction apparatus, a state change probability model based on the change in the test voxels; emphasis added).
Claim 15:
Kang discloses A non-transitory computer readable medium storing computer program instructions (0148), the computer program instructions when executed by a processor cause the processor to perform operations comprising: receiving 1) a first medical image depicting an anatomical object at a first time and 2) a second medical image depicting the anatomical object at a second time; encoding the first medical image into a first set of features; encoding the second medical image into a second set of features; encoding the first set of features and the second set of features into a set of longitudinal features; performing a medical imaging analysis task on longitudinal changes depicted in the first medical image and the second medical image using a machine learning based network based on the set of longitudinal features; and outputting results of the medical imaging analysis task (0017: According to an exemplary embodiment, a method for predicting brain disease state change, as performed by a brain disease prediction apparatus includes acquiring, by the brain disease prediction apparatus, a plurality of test images, which comprise images obtained by capturing at least a portion of a human brain at a predetermined time interval, performing, by the brain disease prediction apparatus, a pre-processing procedure of converting the plurality of test images into test voxels configured to be processed for image analysis, wherein a respective test voxel of the test voxels is data composed of three-dimensional voxel units, mapping, by the brain disease prediction apparatus, first and second test voxels selected from the test voxels acquired from a patient, with each other on a three-dimensional voxel unit, wherein the first test voxel is acquired at a first time-point and the second test voxel is acquired at a second time-point, in which a predetermined time has elapsed from the first time-point, generating, by the brain disease prediction apparatus, a voxel-based data-set based 
on the mapped first and second test voxels, extracting, by the brain disease prediction apparatus, a change in the test voxels using a deep neural network, and generating, by the brain disease prediction apparatus, a state change probability model based on the change in the test voxels).
Claim 16:
Kang discloses encoding the first medical image into a first set of features comprises encoding the first medical image with first spatial information to generate the first set of features, and encoding the second medical image into a second set of features comprises encoding the second medical image with second spatial information to generate the second set of features (0017: According to an exemplary embodiment, a method for predicting brain disease state change, as performed by a brain disease prediction apparatus includes acquiring, by the brain disease prediction apparatus, a plurality of test images, which comprise images obtained by capturing at least a portion of a human brain at a predetermined time interval, performing, by the brain disease prediction apparatus, a pre-processing procedure of converting the plurality of test images into test voxels configured to be processed for image analysis, wherein a respective test voxel of the test voxels is data composed of three-dimensional voxel units, mapping, by the brain disease prediction apparatus, first and second test voxels selected from the test voxels acquired from a patient, with each other on a three-dimensional voxel unit, wherein the first test voxel is acquired at a first time-point and the second test voxel is acquired at a second time-point, in which a predetermined time has elapsed from the first time-point, generating, by the brain disease prediction apparatus, a voxel-based data-set based on the mapped first and second test voxels, extracting, by the brain disease prediction apparatus, a change in the test voxels using a deep neural network, and generating, by the brain disease prediction apparatus, a state change probability model based on the change in the test voxels; emphasis added).
Claim 17:
Kang discloses encoding the first medical image into a first set of features comprises combining features representing the first medical image with patient demographic information associated with the first medical image to generate the first set of features, and encoding the second medical image into a second set of features comprises combining features representing the second medical image with patient demographic information associated with the second medical image to generate the second set of features (0111, 0130 and 0133).
Claim 19:
Kang teaches the medical imaging analysis task comprises classification of the longitudinal changes depicted in the first medical image and the second medical image (0105-106: the brain disease prediction apparatus 30 in some embodiments compares first feature information obtained from the first test voxel in the first test image acquired at the first time-point and second feature information acquired from the second test voxel in the second test image acquired at the second time-point with each other and performs machine learning of the comparison result, together with considering a time duration elapsed from the first time-point to the second time-point... the test voxel includes one state information among “normal”, “cerebral infarction”, “cerebral hemorrhage”. The change in the test voxel means a change from one state among the normal, cerebral infarction, and cerebral hemorrhage states to another state among them. For example, the test voxel in some embodiments includes information indicating that a state of a first position (a position of a specific voxel) of a brain at the first time-point is “normal” and then a state of the first position of the brain at the second time-point is “cerebral infarction”, and then a state of the first position of the brain at a third time-point is “cerebral hemorrhage”).
Claim 20:
Kang discloses wherein the anatomical object comprises one or more lesions in a brain of a patient (0069, 0100-101).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3, 4, 7, 12, 13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kang in view of Chartrand (USPUB 20250285265 A1).
Claim 3:
Kang discloses every feature of claim 1.
Kang, at 0141-0143, further discusses: "The feature extraction layer has a structure in which a convolution layer for applying a plurality of filters to each region of the image to create a feature map, and a pooling layer for pooling feature maps spatially to extract a feature invariant relative to change in a position or a rotation are repeated alternately with each other multiple times... The convolution layer applies a non-linear activation function to a dot product between a filter and a local receptive field for each patch of an input image to obtain the feature map... The pooling layer (or a sub-sampling layer) creates a new feature map by utilizing local information of the feature map obtained from the previous convolution layer. In general, the feature map newly created by the pooling layer is reduced to a smaller size than a size of an original feature map. A typical pooling method includes a max pooling method which selects a maximum value of a corresponding region in the feature map, and an average pooling method which calculates an average of a corresponding region in the feature map. The feature map of the pooling layer is generally less affected by a location of any structure or pattern in the input image than a feature map of the previous layer is. That is, the pooling layer may extract a feature that is more robust to a regional change such as noise or distortion in the input image or the previous feature map. This may play an important role in classification performance."
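As a purely illustrative aside (not part of Kang's disclosure or this record), the max pooling and average pooling operations described in the passage above can be sketched as follows; the function names `pool` and `mean` are hypothetical and chosen only for this example:

```python
# Illustrative sketch of the pooling operations Kang describes at 0141-0143:
# a pooling layer shrinks a feature map by replacing each local region with
# its maximum (max pooling) or its mean (average pooling).

def pool(feature_map, size, op):
    """Apply non-overlapping `size` x `size` pooling, combining each region with `op`."""
    rows, cols = len(feature_map), len(feature_map[0])
    pooled = []
    for r in range(0, rows, size):
        row = []
        for c in range(0, cols, size):
            region = [feature_map[r + i][c + j]
                      for i in range(size) for j in range(size)]
            row.append(op(region))
        pooled.append(row)
    return pooled

def mean(values):
    return sum(values) / len(values)

feature_map = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 8],
]

max_pooled = pool(feature_map, 2, max)   # [[4, 2], [2, 8]]
avg_pooled = pool(feature_map, 2, mean)  # [[2.5, 1.0], [1.25, 6.5]]
```

As the passage notes, the pooled map is smaller than the input and less sensitive to the exact position of a pattern, since a small shift within a pooling region leaves the region's maximum (or mean) largely unchanged.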
Kang, by itself, does not expressly teach encoding the first medical image with first spatial information to generate the first set of features comprises: encoding the first medical image with one or more first coordinate maps, and resampling the first medical image and the one or more first coordinate maps to a common resolution; and encoding the second medical image with second spatial information to generate the second set of features comprises: encoding the second medical image with one or more second coordinate maps, and resampling the second medical image and the one or more second coordinate maps to the common resolution.
The Examiner maintains that these features were previously well-known as taught by Chartrand.
Chartrand teaches encoding the first medical image with first spatial information to generate the first set of features comprises: encoding the first medical image with one or more first coordinate maps, and resampling the first medical image and the one or more first coordinate maps to a common resolution; and encoding the second medical image with second spatial information to generate the second set of features comprises: encoding the second medical image with one or more second coordinate maps, and resampling the second medical image and the one or more second coordinate maps to the common resolution (0085: The coordinates corresponding to a voxel within the contour of a structure of interest, which can be referred to as foreground voxels, can be assigned a positive value, while coordinates corresponding to a voxel not in a structure of interest, which can be referred to as background voxels, can be assigned a zero value. Furthermore, in embodiments where the ground truth contour definitions 520 are not represented by signed distance maps 580, they can be converted to one by a distance transform module 540 implementing for instance an interpolation algorithm, such as a radial basis function interpolation algorithm, or a signed distance transform. A resampling module 220 can thereafter be used to resample the generated maps 550, 560, 570 and 580 to the same configurable resolution used for instance in the exemplary detection and contouring system 100, for instance an isotropic resolution where each voxel side corresponds to a width of 1 mm).
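As a purely illustrative aside (hypothetical code, not Chartrand's actual implementation), the two ideas in the cited paragraph — a per-pixel coordinate map tied to a reference coordinate system, and resampling image and map to one common resolution — can be sketched as follows; the names `coordinate_map` and `resample_nearest` are invented for this example:

```python
# Illustrative sketch: build a map giving each pixel's physical location
# relative to a reference origin, then resample both the image and the map
# to a common resolution by nearest-neighbor sampling.

def coordinate_map(rows, cols, origin=(0.0, 0.0), spacing=(1.0, 1.0)):
    """Physical (y, x) coordinates of each pixel relative to the reference origin."""
    return [[(origin[0] + r * spacing[0], origin[1] + c * spacing[1])
             for c in range(cols)] for r in range(rows)]

def resample_nearest(grid, new_rows, new_cols):
    """Resample a 2-D grid (an image or a coordinate map) to a target resolution."""
    rows, cols = len(grid), len(grid[0])
    return [[grid[min(rows - 1, round(r * rows / new_rows))]
                 [min(cols - 1, round(c * cols / new_cols))]
             for c in range(new_cols)] for r in range(new_rows)]

image = [[10, 20], [30, 40]]                      # 2 x 2 image, 2 mm pixels
coords = coordinate_map(2, 2, spacing=(2.0, 2.0))  # each pixel's location in mm

# Resample both to a shared 4 x 4 grid (e.g., 1 mm isotropic resolution),
# so the features derived from them stay spatially aligned.
image_rs = resample_nearest(image, 4, 4)
coords_rs = resample_nearest(coords, 4, 4)
```

The point of resampling both grids together, as the cited paragraph suggests, is that every resampled pixel keeps an explicit location in the shared reference frame, so images acquired at different resolutions can be compared pixel-for-pixel.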
Kang and Chartrand are analogous art because they are from the same field of endeavor: detecting and analyzing the content of an image.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Kang and Chartrand before him or her, to combine the teachings of Kang and Chartrand. The rationale for doing so would have been to more accurately identify points of interest in an image.
Therefore, it would have been obvious to combine Kang and Chartrand to obtain the invention as specified in the instant claim(s).
Claim 4:
Kang, by itself, does not expressly teach the one or more first coordinate maps define a location of each pixel in the first medical image relative to a reference coordinate system and the one or more second coordinate maps define a location of each pixel in the second medical image relative to the reference coordinate system.
The Examiner maintains that these features were previously well-known as taught by Chartrand.
Chartrand teaches the one or more first coordinate maps define a location of each pixel in the first medical image relative to a reference coordinate system and the one or more second coordinate maps define a location of each pixel in the second medical image relative to the reference coordinate system (0085: The coordinates corresponding to a voxel within the contour of a structure of interest, which can be referred to as foreground voxels, can be assigned a positive value, while coordinates corresponding to a voxel not in a structure of interest, which can be referred to as background voxels, can be assigned a zero value. Furthermore, in embodiments where the ground truth contour definitions 520 are not represented by signed distance maps 580, they can be converted to one by a distance transform module 540 implementing for instance an interpolation algorithm, such as a radial basis function interpolation algorithm, or a signed distance transform. A resampling module 220 can thereafter be used to resample the generated maps 550, 560, 570 and 580 to the same configurable resolution used for instance in the exemplary detection and contouring system 100, for instance an isotropic resolution where each voxel side corresponds to a width of 1 mm).
Kang and Chartrand are analogous art because they are from the same field of endeavor: detecting and analyzing the content of an image.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Kang and Chartrand before him or her, to combine the teachings of Kang and Chartrand. The rationale for doing so would have been to more accurately identify points of interest in an image.
Therefore, it would have been obvious to combine Kang and Chartrand to obtain the invention as specified in the instant claim(s).
Claim 7:
Kang teaches encoding the first medical image into a first set of features comprises encoding the first medical image using a feature extraction network, encoding the second medical image into a second set of features comprises encoding the second medical image using the feature extraction network (0017: a voxel-based data-set based on the mapped first and second test voxels, extracting, by the brain disease prediction apparatus, a change in the test voxels using a deep neural network).
Kang, by itself, does not expressly teach and wherein the feature extraction network is trained to perform a plurality of unsupervised medical imaging analysis tasks.
The Examiner maintains that these features were previously well-known as taught by Chartrand.
Chartrand teaches and wherein the feature extraction network is trained to perform a plurality of unsupervised medical imaging analysis tasks (0081).
Kang and Chartrand are analogous art because they are from the same field of endeavor: detecting and analyzing the content of an image.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Kang and Chartrand before him or her, to combine the teachings of Kang and Chartrand. The rationale for doing so would have been to more accurately identify points of interest in an image.
Therefore, it would have been obvious to combine Kang and Chartrand to obtain the invention as specified in the instant claim(s).
Claim 12:
Kang discloses every feature of claim 10.
Kang, at 0141-0143, further discusses: "The feature extraction layer has a structure in which a convolution layer for applying a plurality of filters to each region of the image to create a feature map, and a pooling layer for pooling feature maps spatially to extract a feature invariant relative to change in a position or a rotation are repeated alternately with each other multiple times... The convolution layer applies a non-linear activation function to a dot product between a filter and a local receptive field for each patch of an input image to obtain the feature map... The pooling layer (or a sub-sampling layer) creates a new feature map by utilizing local information of the feature map obtained from the previous convolution layer. In general, the feature map newly created by the pooling layer is reduced to a smaller size than a size of an original feature map. A typical pooling method includes a max pooling method which selects a maximum value of a corresponding region in the feature map, and an average pooling method which calculates an average of a corresponding region in the feature map. The feature map of the pooling layer is generally less affected by a location of any structure or pattern in the input image than a feature map of the previous layer is. That is, the pooling layer may extract a feature that is more robust to a regional change such as noise or distortion in the input image or the previous feature map. This may play an important role in classification performance."
Kang, by itself, does not expressly teach the means for encoding the first medical image with first spatial information to generate the first set of features comprises: means for encoding the first medical image with one or more first coordinate maps, and means for resampling the first medical image and the one or more first coordinate maps to a common resolution; and the means for encoding the second medical image with second spatial information to generate the second set of features comprises: means for encoding the second medical image with one or more second coordinate maps, and means for resampling the second medical image and the one or more second coordinate maps to the common resolution.
The Examiner maintains that these features were previously well-known as taught by Chartrand.
Chartrand teaches the means for encoding the first medical image with first spatial information to generate the first set of features comprises: means for encoding the first medical image with one or more first coordinate maps, and means for resampling the first medical image and the one or more first coordinate maps to a common resolution; and the means for encoding the second medical image with second spatial information to generate the second set of features comprises: means for encoding the second medical image with one or more second coordinate maps, and means for resampling the second medical image and the one or more second coordinate maps to the common resolution (0085: The coordinates corresponding to a voxel within the contour of a structure of interest, which can be referred to as foreground voxels, can be assigned a positive value, while coordinates corresponding to a voxel not in a structure of interest, which can be referred to as background voxels, can be assigned a zero value. Furthermore, in embodiments where the ground truth contour definitions 520 are not represented by signed distance maps 580, they can be converted to one by a distance transform module 540 implementing for instance an interpolation algorithm, such as a radial basis function interpolation algorithm, or a signed distance transform. A resampling module 220 can thereafter be used to resample the generated maps 550, 560, 570 and 580 to the same configurable resolution used for instance in the exemplary detection and contouring system 100, for instance an isotropic resolution where each voxel side corresponds to a width of 1 mm).
Kang and Chartrand are analogous art because they are from the same field of endeavor: detecting and analyzing the content of an image.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Kang and Chartrand before him or her, to combine the teachings of Kang and Chartrand. The rationale for doing so would have been to more accurately identify points of interest in an image.
Therefore, it would have been obvious to combine Kang and Chartrand to obtain the invention as specified in the instant claim(s).
Claim 13:
Kang, by itself, does not expressly teach the one or more first coordinate maps define a location of each pixel in the first medical image relative to a reference coordinate system and the one or more second coordinate maps define a location of each pixel in the second medical image relative to the reference coordinate system.
The Examiner maintains that these features were previously well-known as taught by Chartrand.
Chartrand teaches the one or more first coordinate maps define a location of each pixel in the first medical image relative to a reference coordinate system and the one or more second coordinate maps define a location of each pixel in the second medical image relative to the reference coordinate system (0085: The coordinates corresponding to a voxel within the contour of a structure of interest, which can be referred to as foreground voxels, can be assigned a positive value, while coordinates corresponding to a voxel not in a structure of interest, which can be referred to as background voxels, can be assigned a zero value. Furthermore, in embodiments where the ground truth contour definitions 520 are not represented by signed distance maps 580, they can be converted to one by a distance transform module 540 implementing for instance an interpolation algorithm, such as a radial basis function interpolation algorithm, or a signed distance transform. A resampling module 220 can thereafter be used to resample the generated maps 550, 560, 570 and 580 to the same configurable resolution used for instance in the exemplary detection and contouring system 100, for instance an isotropic resolution where each voxel side corresponds to a width of 1 mm).
Kang and Chartrand are analogous art because they are from the same problem-solving area, detecting and analyzing content of an image.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Kang and Chartrand before him or her, to combine the teachings of Kang and Chartrand. The rationale for doing so would have been to more accurately identify points of interest in an image.
Therefore, it would have been obvious to combine Kang and Chartrand to obtain the invention as specified in the instant claim(s).
Claim 18:
Kang teaches that encoding the first medical image into a first set of features comprises encoding the first medical image using a feature extraction network, and that encoding the second medical image into a second set of features comprises encoding the second medical image using the feature extraction network (0017: a voxel-based data-set based on the mapped first and second test voxels, extracting, by the brain disease prediction apparatus, a change in the test voxels using a deep neural network).
Kang, by itself, does not appear to explicitly teach wherein the feature extraction network is trained to perform a plurality of unsupervised medical imaging analysis tasks.
The Examiner maintains that these features were previously well-known as taught by Chartrand.
Chartrand teaches wherein the feature extraction network is trained to perform a plurality of unsupervised medical imaging analysis tasks (0081).
Kang and Chartrand are analogous art because they are from the same problem-solving area, detecting and analyzing content of an image.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Kang and Chartrand before him or her, to combine the teachings of Kang and Chartrand. The rationale for doing so would have been to more accurately identify points of interest in an image.
Therefore, it would have been obvious to combine Kang and Chartrand to obtain the invention as specified in the instant claim(s).
Note
The Examiner cites particular columns, line numbers and/or paragraph numbers in the references as applied to the claims below for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2123.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed in the attached PTOL-892 form.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED-IBRAHIM ZUBERI, whose telephone number is (571) 270-7761. The examiner can normally be reached Monday through Thursday, 8:00-6:00, and Friday, 7:00-12:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Steph Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED H ZUBERI/ Primary Examiner, Art Unit 2178