DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the anatomy of the patient" in line 4. There is insufficient antecedent basis for this limitation in the claim.
Claim 13 recites the limitation "the anatomy of the patient" in line 4. There is insufficient antecedent basis for this limitation in the claim.
Claim 20 recites the limitation "the anatomy of the patient" in line 4. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-8 and 11-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Patel et al. (US20230218146A1; hereinafter referred to as Patel).
Regarding Claim 1, Patel discloses a method for tracking a location of interest of anatomy of a patient in medical imaging comprising (“This disclosure addresses these compromises and solves the technological problems noted above by enabling various systems, apparatuses, and methods for endoscopy, whether for medical (e.g., prevention, forecasting, diagnosis, amelioration, monitoring, or treatment of medical conditions in various mammalian pathology)” [0008], “The portable system 10 aims to minimize implantation errors by identifying and tracking various anatomical features 40, 42, 44, 60 in the ROI 16 and determining optimal treatment sites” [0056]), at a computing system:
receiving a series of medical imaging video frames captured by a medical imager imaging the anatomy of the patient (“The imaging unit comprises a housing and a display integrated into the housing. An imaging coupler is configured for receiving imaging information from an imaging assembly of an endoscope having a field of view (FoV) comprising of at least a portion of an end effector and a portion of a region of interest (ROI). An imaging processor is configured with instructions to process the received imaging information into pixel values representing an image of a time series and to display the image in real-time on the display” [0009]);
analyzing at least one frame of the series of medical imaging video frames to determine a position of the location of interest relative to the medical imager (“the imaging unit comprises a detection processing unit (DPU) configured with instructions to: classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion; and display, concurrently with the corresponding image, the classification of the at least one anatomical feature, the determined confidence metric, and the determined motion vector.” [0009]);
determining at least one estimate of relative motion between the medical imager and the anatomy of the patient (“The imaging unit comprises a detection processing unit (DPU) configured with instructions to: classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion” [0009], “the detection processing unit is configured to display the displacement vector relative to one or more classified anatomical features.” [0012]);
and tracking the position of the location of interest relative to the medical imager based on the position of the location of interest determined from the at least one frame and the at least one estimate of relative motion between the medical imager and the anatomy of the patient (“the imaging unit comprises a detection processing unit (DPU) configured with instructions to: classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion; and display, concurrently with the corresponding image, the classification of the at least one anatomical feature, the determined confidence metric, and the determined motion vector.” [0009]).
Regarding Claim 2, Patel discloses receiving medical imager motion data associated with motion of the medical imager and determining the at least one estimate of relative motion between the medical imager and the anatomy of the patient based on the medical imager motion data (“the practitioner may rely on a relative motion vector 146 such as, for example, from the bladder neck 60 to apply treatment to the proximal treatment sites 56 a, 56 b. The practitioner will introduce the distal tip 14 and end effector 32 into the ROI 16 till bladder 54 and bladder neck 60 is displayed as the classified anatomical feature 140 in real-time or substantially in real-time as, for example, a textual indicator indicating the corresponding anatomical feature and the displayed confidence metric 144 meets the practitioner's expectations as illustrated in FIG. 5A. The practitioner then may interact with the touchscreen 78 of the display 24 to initiate a relative motion vector 146 therefrom and retract the distal tip 14 and/or end effector 32 until the relative motion vector 146 displays an adequate displacement and/or rotation to locate an optimal location for the proximal treatment sites 64 a, 64 b as illustrated in FIG. 5B.” [0071]).
Regarding Claim 3, Patel discloses that the medical imager is an endoscopic imager and the medical imager motion data comprises data from a motion sensor system mounted to an endoscope of the endoscopic imager (“an imaging unit for an endoscopic procedure is presented. The imaging unit comprises a housing and a display integrated into the housing. An imaging coupler is configured for receiving imaging information from an imaging assembly of an endoscope having a field of view (FoV) comprising of at least a portion of an end effector and a portion of a region of interest (ROI). An imaging processor is configured with instructions to process the received imaging information into pixel values representing an image of a time series and to display the image in real-time on the display, while a motion sensor is configured to detect a motion of the housing during the time series.” [0009]).
Regarding Claim 4, Patel discloses that the motion sensor system is mounted to a light post of the endoscope (“The region of interest 16 is illuminated by an external light source 18 which directs incident light along an illumination pathway, such as an optical fiber that extends along a tube of the endoscope 12 to an illumination lens at a distal tip 14.” [0048]).
Regarding Claim 5, Patel discloses that the motion sensor system is configured to generate electrical energy from light directed through the light post (“The region of interest 16 is illuminated by an external light source 18 which directs incident light along an illumination pathway, such as an optical fiber that extends along a tube of the endoscope 12 to an illumination lens at a distal tip 14. The illuminated region of interest 16 reflects the incident light back to an imaging lens at the distal tip 14 to convey the reflected light along an imaging pathway, such as an optical fiber to an observation port 20, such as an eyepiece. The reflected light is received by a wireless imaging unit (WIU) 22 via the observation port 18. The WIU 22 may include a digital imaging sensor that converts the reflected light into imaging data which can then be processed and displayed on a display 24.” [0048]).
Regarding Claim 6, Patel discloses that the medical imager motion data comprises data from a camera that captures images of at least one tracking object associated with the medical imager (“an imaging unit for an endoscopic procedure is presented. The imaging unit comprises a housing and a display integrated into the housing. An imaging coupler is configured for receiving imaging information from an imaging assembly of an endoscope having a field of view (FoV) comprising of at least a portion of an end effector and a portion of a region of interest (ROI).” [0009]).
Regarding Claim 7, Patel discloses that the at least one estimate of relative motion between the medical imager and the anatomy of the patient is determined by analyzing a plurality of frames of the series of medical imaging video frames (“an imaging unit for an endoscopic procedure is presented. The imaging unit comprises a housing and a display integrated into the housing. An imaging coupler is configured for receiving imaging information from an imaging assembly of an endoscope having a field of view (FoV) comprising of at least a portion of an end effector and a portion of a region of interest (ROI). An imaging processor is configured with instructions to process the received imaging information into pixel values representing an image of a time series and to display the image in real-time on the display, while a motion sensor is configured to detect a motion of the housing during the time series.” [0009], “The imaging unit comprises a detection processing unit (DPU) configured with instructions to: classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion” [0009]).
Regarding Claim 8, Patel discloses that analyzing the at least one frame of the series of medical imaging video frames to determine the position of a location of interest comprises locating the location of interest in the at least one frame based on a position of at least a portion of a tool in the at least one frame (“the practitioner may rely on a relative motion vector 146 such as, for example, from the bladder neck 60 to apply treatment to the proximal treatment sites 56 a, 56 b. The practitioner will introduce the distal tip 14 and end effector 32 into the ROI 16 till bladder 54 and bladder neck 60 is displayed as the classified anatomical feature 140 in real-time or substantially in real-time as, for example, a textual indicator indicating the corresponding anatomical feature and the displayed confidence metric 144 meets the practitioner's expectations as illustrated in FIG. 5A. The practitioner then may interact with the touchscreen 78 of the display 24 to initiate a relative motion vector 146 therefrom and retract the distal tip 14 and/or end effector 32 until the relative motion vector 146 displays an adequate displacement and/or rotation to locate an optimal location for the proximal treatment sites 64 a, 64 b as illustrated in FIG. 5B.” [0071]).
Regarding Claim 11, Patel discloses displaying a visualization comprising a graphical indication of the location of interest in association with the medical imaging video frames (“and display, concurrently with the corresponding image, the classification of the at least one anatomical feature, the determined confidence metric, and the determined motion vector.” [0009]).
Regarding Claim 12, Patel discloses that determining the at least one estimate of relative motion between the medical imager and the anatomy of the patient comprises combining data from different medical imager motion tracking algorithms (“An imaging processor is configured with instructions to process the received imaging information into pixel values representing an image of a time series and to display the image in real-time on the display, while a motion sensor is configured to detect a motion of the housing during the time series. The imaging unit comprises a detection processing unit (DPU) configured with instructions to: classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion; and display, concurrently with the corresponding image, the classification of the at least one anatomical feature, the determined confidence metric, and the determined motion vector.” [0009], “the motion sensor includes at least a gyroscope configured to generate a gyroscopic signal and an accelerometer configured to generate acceleration signals, the detection processing unit further configured to determine a displacement vector based on at least the gyroscopic signal and the acceleration signal.” [0010], “the detection processing unit is configured to display the displacement vector relative to one or more classified anatomical features.” [0012]).
Regarding Claim 13, Patel discloses a system comprising one or more processors, memory, and one or more programs stored in the memory for execution by the one or more processors, the one or more programs including instructions for (“This disclosure addresses these compromises and solves the technological problems noted above by enabling various systems, apparatuses, and methods for endoscopy, whether for medical (e.g., prevention, forecasting, diagnosis, amelioration, monitoring, or treatment of medical conditions in various mammalian pathology)” [0008], “An imaging processor is configured with instructions to process the received imaging information into pixel values representing an image of a time series and to display the image in real-time on the display,” [0009]):
receiving a series of medical imaging video frames captured by a medical imager imaging the anatomy of the patient (“The imaging unit comprises a housing and a display integrated into the housing. An imaging coupler is configured for receiving imaging information from an imaging assembly of an endoscope having a field of view (FoV) comprising of at least a portion of an end effector and a portion of a region of interest (ROI). An imaging processor is configured with instructions to process the received imaging information into pixel values representing an image of a time series and to display the image in real-time on the display” [0009]);
analyzing at least one frame of the series of medical imaging video frames to determine a position of the location of interest relative to the medical imager (“the imaging unit comprises a detection processing unit (DPU) configured with instructions to: classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion; and display, concurrently with the corresponding image, the classification of the at least one anatomical feature, the determined confidence metric, and the determined motion vector.” [0009]);
determining at least one estimate of relative motion between the medical imager and the anatomy of the patient (“The imaging unit comprises a detection processing unit (DPU) configured with instructions to: classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion” [0009], “the detection processing unit is configured to display the displacement vector relative to one or more classified anatomical features.” [0012]);
and tracking the position of the location of interest relative to the medical imager based on the position of the location of interest determined from the at least one frame and the at least one estimate of relative motion between the medical imager and the anatomy of the patient (“the imaging unit comprises a detection processing unit (DPU) configured with instructions to: classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion; and display, concurrently with the corresponding image, the classification of the at least one anatomical feature, the determined confidence metric, and the determined motion vector.” [0009]).
Regarding Claim 14, Patel discloses that the one or more programs include instructions for receiving medical imager motion data associated with motion of the medical imager and determining the at least one estimate of relative motion between the medical imager and the anatomy of the patient based on the medical imager motion data (“the practitioner may rely on a relative motion vector 146 such as, for example, from the bladder neck 60 to apply treatment to the proximal treatment sites 56 a, 56 b. The practitioner will introduce the distal tip 14 and end effector 32 into the ROI 16 till bladder 54 and bladder neck 60 is displayed as the classified anatomical feature 140 in real-time or substantially in real-time as, for example, a textual indicator indicating the corresponding anatomical feature and the displayed confidence metric 144 meets the practitioner's expectations as illustrated in FIG. 5A. The practitioner then may interact with the touchscreen 78 of the display 24 to initiate a relative motion vector 146 therefrom and retract the distal tip 14 and/or end effector 32 until the relative motion vector 146 displays an adequate displacement and/or rotation to locate an optimal location for the proximal treatment sites 64 a, 64 b as illustrated in FIG. 5B.” [0071]).
Regarding Claim 15, Patel discloses that the medical imager is an endoscopic imager and the medical imager motion data comprises data from a motion sensor system mounted to an endoscope of the endoscopic imager (“an imaging unit for an endoscopic procedure is presented. The imaging unit comprises a housing and a display integrated into the housing. An imaging coupler is configured for receiving imaging information from an imaging assembly of an endoscope having a field of view (FoV) comprising of at least a portion of an end effector and a portion of a region of interest (ROI). An imaging processor is configured with instructions to process the received imaging information into pixel values representing an image of a time series and to display the image in real-time on the display, while a motion sensor is configured to detect a motion of the housing during the time series.” [0009]).
Regarding Claim 16, Patel discloses that the motion sensor system is mounted to a light post of the endoscope (“The region of interest 16 is illuminated by an external light source 18 which directs incident light along an illumination pathway, such as an optical fiber that extends along a tube of the endoscope 12 to an illumination lens at a distal tip 14.” [0048]).
Regarding Claim 17, Patel discloses that the motion sensor system is configured to generate electrical energy from light directed through the light post (“The region of interest 16 is illuminated by an external light source 18 which directs incident light along an illumination pathway, such as an optical fiber that extends along a tube of the endoscope 12 to an illumination lens at a distal tip 14. The illuminated region of interest 16 reflects the incident light back to an imaging lens at the distal tip 14 to convey the reflected light along an imaging pathway, such as an optical fiber to an observation port 20, such as an eyepiece. The reflected light is received by a wireless imaging unit (WIU) 22 via the observation port 18. The WIU 22 may include a digital imaging sensor that converts the reflected light into imaging data which can then be processed and displayed on a display 24.” [0048]).
Regarding Claim 18, Patel discloses that the medical imager motion data comprises data from a camera that captures images of at least one tracking object associated with the medical imager (“an imaging unit for an endoscopic procedure is presented. The imaging unit comprises a housing and a display integrated into the housing. An imaging coupler is configured for receiving imaging information from an imaging assembly of an endoscope having a field of view (FoV) comprising of at least a portion of an end effector and a portion of a region of interest (ROI).” [0009]).
Regarding Claim 19, Patel discloses that the at least one estimate of relative motion between the medical imager and the anatomy of the patient is determined by analyzing a plurality of frames of the series of medical imaging video frames (“an imaging unit for an endoscopic procedure is presented. The imaging unit comprises a housing and a display integrated into the housing. An imaging coupler is configured for receiving imaging information from an imaging assembly of an endoscope having a field of view (FoV) comprising of at least a portion of an end effector and a portion of a region of interest (ROI). An imaging processor is configured with instructions to process the received imaging information into pixel values representing an image of a time series and to display the image in real-time on the display, while a motion sensor is configured to detect a motion of the housing during the time series.” [0009], “The imaging unit comprises a detection processing unit (DPU) configured with instructions to: classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion” [0009]).
Regarding Claim 20, Patel discloses a non-transitory computer readable medium storing one or more programs for execution by the one or more processors of a computing system, the one or more programs including instructions for (“This disclosure addresses these compromises and solves the technological problems noted above by enabling various systems, apparatuses, and methods for endoscopy, whether for medical (e.g., prevention, forecasting, diagnosis, amelioration, monitoring, or treatment of medical conditions in various mammalian pathology)” [0008], “An imaging processor is configured with instructions to process the received imaging information into pixel values representing an image of a time series and to display the image in real-time on the display,” [0009]):
receiving a series of medical imaging video frames captured by a medical imager imaging the anatomy of the patient (“The imaging unit comprises a housing and a display integrated into the housing. An imaging coupler is configured for receiving imaging information from an imaging assembly of an endoscope having a field of view (FoV) comprising of at least a portion of an end effector and a portion of a region of interest (ROI). An imaging processor is configured with instructions to process the received imaging information into pixel values representing an image of a time series and to display the image in real-time on the display” [0009]);
analyzing at least one frame of the series of medical imaging video frames to determine a position of the location of interest relative to the medical imager (“the imaging unit comprises a detection processing unit (DPU) configured with instructions to: classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion; and display, concurrently with the corresponding image, the classification of the at least one anatomical feature, the determined confidence metric, and the determined motion vector.” [0009]);
determining at least one estimate of relative motion between the medical imager and the anatomy of the patient (“The imaging unit comprises a detection processing unit (DPU) configured with instructions to: classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion” [0009], “the detection processing unit is configured to display the displacement vector relative to one or more classified anatomical features.” [0012]);
and tracking the position of the location of interest relative to the medical imager based on the position of the location of interest determined from the at least one frame and the at least one estimate of relative motion between the medical imager and the anatomy of the patient (“the imaging unit comprises a detection processing unit (DPU) configured with instructions to: classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion; and display, concurrently with the corresponding image, the classification of the at least one anatomical feature, the determined confidence metric, and the determined motion vector.” [0009]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Patel in view of Nosato et al. (US20240038391A1; hereinafter referred to as Nosato).
Regarding Claim 9, Patel discloses all limitations noted above except that the at least one estimate of relative motion between the medical imager and the anatomy of the patient is determined using a first motion estimate at a rate of every M frames, wherein M is greater than one.
However, in a similar field of endeavor, Nosato teaches an endoscopic diagnosis support method whereby an examined area and an unexamined area can be clearly discriminated [Abstract].
Nosato also teaches that the at least one estimate of relative motion between the medical imager and the anatomy of the patient is determined using a first motion estimate at a rate of every M frames, wherein M is greater than one (“Here, the determined position data includes absolute position information using the center of the observation canvas as an origin and the frame number. This is because relative intervals of plural following frames existing between two key frame positions are determined when the first key-frame position data and the next key-frame position data are determined. The temporary position data of the following frames includes relative position information with respect to the first key-frame position data and frame numbers.” [0019]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Patel as outlined above such that the at least one estimate of relative motion between the medical imager and the anatomy of the patient is determined using a first motion estimate at a rate of every M frames, wherein M is greater than one, as taught by Nosato, because doing so is capable of increasing diagnostic accuracy (Nosato, [0013]).
Regarding Claim 10, Patel discloses all limitations noted above except that the at least one estimate of relative motion between the medical imager and the anatomy of the patient is determined using a second motion estimate at a rate of every N frames, wherein N is less than M.
However, in a similar field of endeavor, Nosato teaches that the at least one estimate of relative motion between the medical imager and the anatomy of the patient is determined using a second motion estimate at a rate of every N frames, wherein N is less than M (“a displacement amount between the preceding frame and the following frame is calculated on the basis of the coordinates of the three or more key points in the endoscopic image.” [0018]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Patel as outlined above such that the at least one estimate of relative motion between the medical imager and the anatomy of the patient is determined using a second motion estimate at a rate of every N frames, wherein N is less than M, as taught by Nosato, because doing so is capable of increasing diagnostic accuracy (Nosato, [0013]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN MALDONADO whose telephone number is 703-756-1421. The examiner can normally be reached 8:00 am-4:00 pm PST M-Th.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Koharski, can be reached at (571) 272-7230.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Steven Maldonado/
Patent Examiner, Art Unit 3797
/CHRISTOPHER KOHARSKI/Supervisory Patent Examiner, Art Unit 3797