DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The applicant’s claim of priority to Japanese Patent Application No. 2011-105514, filed 5/10/2011, is acknowledged.
Information Disclosure Statement
The applicant filed an information disclosure statement (IDS) on 2/5/2024. It has been annotated and considered.
Claim Objections
Claims 2-12 are objected to because of the following informalities:
Regarding claim 2 (and similarly claims 3 and 5), the receiving step should be amended to indicate that it refers to the “receiving a vector” step of claim 1 for clarity.
Regarding claim 3 (and similarly claims 6-11), the generating step could be made clearer and more concise by reciting “generating the action” rather than “generating comprises”. The “receiving comprises receiving the vector” limitation could be amended similarly.
Regarding claim 4, the “receiving the image” step should recite “receiving the vector as an image” for clarity.
Regarding claim 12, the “outputting” step should recite “outputting the action” for clarity and conciseness.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 19-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 19, the limitation “and a processor configured to output an action to move the medical sensor towards the goal based on input of the goal and a current state of the medical sensor to the policy network, which policy network outputs the action in response to the input” is indefinite because it is grammatically incorrect as claimed; the syntax of the clause “which policy network outputs the action in response to the input” leaves the metes and bounds of the claim unclear.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-9, 11-12, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mino et al. (US 20240197403, hereinafter Mino) in view of Wang et al. (US 20240087147, hereinafter Wang).
Regarding claim 1 (and similarly 19), Mino teaches a method for navigation of an ultrasound imaging transducer (See at least: [0066] FIG. 1 is a schematic diagram illustrating an example of an echoendoscopy system 100 for use in endoscopic ultrasound (EUS) procedures for diagnostic or treatment purposes, such as EUS-guided tissue acquisition. The echoendoscopy system 100 comprises an ultrasound endoscope, also referred to as an echoendoscope 120, a light source apparatus 130, a video processor 140, a first monitor 150 for displaying an optical image, an ultrasound observation apparatus 160, and a second monitor 170 for displaying an ultrasound image.), the method comprising:
receiving a vector representing a goal (Note: the Specification states that vectors refer to images);
generating an action to reposition the ultrasound imaging transducer based on the vector, the action generated by a processor by input of the vector to a contrastive reinforcement learned policy network, the contrastive reinforcement learned policy network outputting the action in response to the input of the vector (See at least: Figs. 7A-7D; [0116] FIGS. 7A-7D are diagrams illustrating examples of training an ML model and using the trained ML model to generate a EUS-TA plan for endoscopically collecting tissue from a biliary ductal stricture. FIG. 7A illustrates an ML model training (or learning) phase during which an ML model 741 may be trained using training data comprising a plurality of images 710 of respective anatomical target 711 from past endoscopic tissue acquisition procedures performed on a plurality of patients. The training data may also include annotated procedure data 720 including information about the tissue acquisition devices used in each of the procedures, such as biopsy forceps of particular size and characteristics. The tool information may include type, size, operational data associated with the use of such tools in the past endoscopic tissue acquisition procedures. The training data may also include procedure outcome, such as success/failure assessment of the procedure, total procedure time, procedure difficulty and skills requirement, etc. The ML model 741 can be trained using supervised learning, unsupervised learning, or reinforcement leaning. Examples of ML model architectures and algorithms may include, for example, decision trees, neural networks, support vector machines, or a deep-learning networks, etc. Examples of deep-learning networks include a convolutional neural network (CNN), a recurrent neural network (RNN), a deep belief network (DBN), or a hybrid neural network comprising two or more neural network models of different types or different model configurations.; [0119]; Note: A contrastive reinforcement learned policy is taught because Mino navigates by comparing current images to past images (i.e., contrastive) in combination with reinforcement learning.); and
outputting, by an output interface, the action (See at least: [0065] via “The processor can receive images including one or more EUS images converted from the ultrasound scans of the anatomical target, apply the received images to at least one trained ML model to generate an EUS-guided tissue acquisition (EUS-TA) plan including a recommended tissue acquisition device, and recommended values of operational parameters for manipulating the tissue acquisition device, navigating the steerable elongate instrument, or positioning the EUS probe. The EUS-TA plan can be presented to a user, or provided to a robotic endoscopy system to facilitate robot-assisted tissue acquisition.”).
but fails to explicitly teach receiving a vector representing a goal.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mino in view of Wang to teach receiving a vector representing a goal.
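Note (illustration only; not evidence of record): the following is a minimal sketch, assuming a goal-conditioned policy network as characterized in the treatment of claim 1 above, of how input of a goal vector and a current-state vector could yield an action to reposition the transducer. All names and dimensions (PolicyNetwork, goal_dim, state_dim, action_dim) are hypothetical and are not drawn from Mino, Wang, or the claims.

import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    """Hypothetical goal-conditioned policy: maps (goal, state) to an action."""
    def __init__(self, goal_dim=128, state_dim=6, action_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(goal_dim + state_dim, 256),
            nn.ReLU(),
            nn.Linear(256, action_dim),  # e.g., translation/rotation of the transducer
        )

    def forward(self, goal_vec, state_vec):
        # The policy network outputs the action in response to the input vectors.
        return self.net(torch.cat([goal_vec, state_vec], dim=-1))

policy = PolicyNetwork()
goal = torch.randn(1, 128)    # vector representing the goal (e.g., an encoded image)
state = torch.randn(1, 6)     # current position/orientation of the transducer
action = policy(goal, state)  # action to reposition the ultrasound imaging transducer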
Regarding claim 2, Mino teaches wherein receiving comprises receiving as a user input (See at least: [0073] An image pickup section for acquiring an optical image inside a subject, and an illumination section and an ultrasound transducer section (see FIGS. 2A and 2B) for acquiring an ultrasound tomographic image inside the subject are provided at the distal end portion 121 of the echoendoscope 120. This allows the operator to insert the echoendoscope 120 into the subject and causes the monitors 150 and 170 to display an optical image and an ultrasound tomographic image inside the subject at a desired position in the subject respectively.).
Regarding claim 3, Mino teaches wherein receiving comprises receiving the vector as an image of anatomy and further comprising receiving another vector representing a current position of the ultrasound imaging transducer, wherein generating comprises generating the action by input of the vector representing the goal and the vector representing the current position to the reinforcement learned policy network (See at least: Figs. 7A-7D; [0065]; [0116]).
Regarding claim 4, Mino teaches wherein receiving the image comprises receiving the image of the anatomy as a standard view (See at least: Figs. 7A-7D; [0065]; [0116]).
Regarding claim 5, Mino teaches wherein receiving comprises receiving the vector as a location and orientation of the ultrasound imaging transducer (See at least: [0073]).
Regarding claim 6, Mino teaches wherein generating comprises generating with the contrastive reinforcement learned policy network comprising a neural network trained with an actor loss based on a critic network outputting a reward (See at least: [0116], reproduced in the treatment of claim 1 above; Note: An actor and a critic are taught because Mino teaches a reinforcement learning network that decides which action to take (i.e., the actor) and judges how good the action was (i.e., the critic).).
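Note (illustration only; not evidence of record): the following is a minimal sketch of the generic actor-critic pattern characterized above, in which an actor loss is computed from a critic network's reward estimate. This is not Mino's disclosed implementation; the Critic class and all dimensions are hypothetical.

import torch
import torch.nn as nn

class Critic(nn.Module):
    """Hypothetical critic: scores a (state, action) pair with a reward estimate."""
    def __init__(self, state_dim=6, action_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

critic = Critic()
state = torch.randn(32, 6)
action = torch.randn(32, 6, requires_grad=True)  # stand-in for the actor's output
actor_loss = -critic(state, action).mean()       # actor loss based on the critic's reward output
actor_loss.backward()                            # gradients flow to the actor's action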
Regarding claim 7, Mino teaches wherein generating comprises generating with the contrastive reinforcement learning policy network having been trained with the reward being a probability of reaching a desired result (See at least: [0116] via “…such as success/failure assessment of the procedure…”).
Regarding claim 8, Mino teaches wherein generating comprises generating with the contrastive reinforcement learning policy network having been trained with the critic network comprising first and second encoders configured to receive states sampled from just two different patients to form a matrix for critic loss (See at least: [0105] via “The training data may include procedure data acquired during respective endoscopic procedures performed on a plurality of patients.” Note: a plurality of patients can be two patients. Refer at least to claims 1 and 6 for reasoning and rationale.).
Regarding claim 9, Mino teaches wherein generating comprises generating with the contrastive reinforcement learning policy network having been trained with the critic network comprising first and second encoders configured to receive states from different trajectories for a same patient (See at least: [0105] via “The training data may include procedure data acquired during respective endoscopic procedures performed on a plurality of patients.” Note: The ability to receive data from a plurality of patients can teach receiving data from a single patient. Refer at least to claims 1 and 6 for reasoning and rationale.).
Regarding claim 11, Mino teaches wherein generating comprises generating with the contrastive reinforcement learning policy network having been trained with the critic network having a critic loss to maximize similarity between state-action pairs and goals and minimize similarity between goals from different trajectories (See at least: [0094] via “In addition to images of various modalities or from various sources, the input interface 630 may receive other information… to effectively and efficiently sample biopsy tissue… In some examples, the input interface 630 may receive physician/patient information, such as operating physician's habits or preference of using a steerable elongate instrument (e.g., preferred approach for cannulation and endoscope navigation) or past procedures of the similar type to the present procedure performed by the physician and the corresponding procedure outcome (e.g., success/failure assessment, procedure time, prognosis and complications), or patient information including patient demographics (e.g., age, gender, race), medical history such as prior endoscopic procedures and images or data associated therewith, etc.”).
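Note (illustration only; not evidence of record): one common form that such a contrastive critic loss could take is an InfoNCE-style objective over the similarity matrix formed by two encoders, here denoted \(\phi\) for state-action pairs and \(\psi\) for goals. This is a generic formulation consistent with the claim language, not a formulation disclosed by Mino:

\mathcal{L}_{\text{critic}} = -\sum_{i} \log \frac{\exp\left(\phi(s_i, a_i)^{\top} \psi(g_i)\right)}{\sum_{j} \exp\left(\phi(s_i, a_i)^{\top} \psi(g_j)\right)}

Minimizing this loss maximizes similarity between each state-action pair and the goal from its own trajectory (the diagonal entries of the matrix) while minimizing similarity to goals from different trajectories (the off-diagonal entries), consistent with the limitation addressed above.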
Regarding claim 12, Mino teaches wherein outputting comprises outputting the action comprising instructions to move the ultrasound imaging transducer (See at least: [0065]; refer at least to claim 1 for reasoning and rationale.).
Regarding claim 20, Mino teaches wherein the policy network was trained using trajectories with inputs sampled from different patients for a same iteration in optimization of a critic network of the contrastive reinforcement learning framework, the trajectories created from simulation (Refer at least to the treatment of claims 1 and 6 above for reasoning and rationale.).
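Note (illustration only; not evidence of record): a minimal sketch, assuming hypothetical per-patient trajectory data, of sampling inputs from different patients within a single iteration of critic optimization, as the claim is characterized above. The function and data layout are hypothetical.

import random

def sample_batch(trajectories_by_patient, batch_size=32):
    """Sample (state, action, goal) tuples from different patients for one iteration.

    trajectories_by_patient: dict mapping patient_id -> list of trajectories,
    where each trajectory is a list of (state, action, goal) tuples,
    e.g., trajectories created from simulation.
    """
    patients = random.sample(list(trajectories_by_patient),
                             k=min(batch_size, len(trajectories_by_patient)))
    batch = []
    for pid in patients:
        trajectory = random.choice(trajectories_by_patient[pid])
        batch.append(random.choice(trajectory))  # one sample per patient this iteration
    return batch

Drawing each element of the batch from a different patient ensures that, in a contrastive critic update over this batch, the off-diagonal goals come from different patients' trajectories.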
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Mino in view of Wang, and further in view of Goldenberg et al. (US 20110071380, hereinafter Goldenberg).
Regarding claim 10, Mino fails to teach the following limitation, but Goldenberg teaches wherein generating comprises generating with the contrastive reinforcement learning policy network and critic network having been trained using trajectories from simulation from computed tomography or magnetic resonance imaging.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mino, as modified by Wang, further in view of Goldenberg to teach wherein generating comprises generating with the contrastive reinforcement learning policy network and critic network having been trained using trajectories from simulation from computed tomography or magnetic resonance imaging, so that a trajectory can be pre-planned from beginning to end by providing an operator a complete image of the area being worked on prior to the procedure.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Harry Oh whose telephone number is (571)270-5912. The examiner can normally be reached on Monday-Thursday, 9:00-3:00.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin, can be reached at (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HARRY Y OH/Primary Examiner, Art Unit 3657