DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims Pending
Claims 1-19 and 28 are currently under examination.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 11-13 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 11 recites the limitation “performing foreign matter detection on the scout image of the subject using at least one foreign matter detection model”, which lacks sufficient detail within the applicant’s specification with regard to the structure of the foreign matter detection model. The applicant’s specification does state “A foreign subject detection model may be a trained model (e.g., a trained machine learning model) configured to receive the scout image of the subject as an input, and output a result of foreign matter detection (referred to as a foreign matter detection result for brevity).” (Par. 173 of applicant’s spec.) and “the foreign matter detection model may include a linear regression model, a ridge regression model, a support vector regression model, a support vector machine model, a decision tree model, a fully connected neural network model, a deep learning model, etc. Exemplary deep learning models may include a deep neural network (DNN) model, a convolutional neural network (CNN) model…” (Par. 174 of applicant’s spec.), and further recites “The preliminary model may include one or more model parameters, such as the number (or count) of layers, the number (or count) of nodes, a loss function, or the like, or any combination thereof. Before training, the preliminary model may have one or more initial parameter values of the model parameter(s).” (Par. 181 of applicant’s spec.). However, simply reciting an exemplary model type and the existence of layers does not amount to sufficient support. For example, the applicant has not provided sufficient detail regarding the specific weights, biases, or layers used for the model itself. As such, the claim is rejected.
Claims 12-13 are dependent on claim 11, and as such are also rejected.
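For context, the distinction drawn above between architectural parameters the specification enumerates (counts of layers and nodes, a loss function) and learned parameters it does not disclose (specific weights and biases) can be illustrated with a minimal, entirely hypothetical sketch of a fully connected model. None of the names, sizes, or values below come from the application; they are illustrative only.

```python
import random

# Architectural parameters of the kind the specification enumerates:
# a count of layers and a count of nodes per layer (hypothetical values).
LAYER_SIZES = [4, 8, 1]  # 4 inputs, one hidden layer of 8 nodes, 1 output

# Learned parameters of the kind the rejection notes are not disclosed:
# the specific weights and biases, which exist only after training.
# Here they are simply randomized for illustration.
random.seed(0)
weights = [
    [[random.uniform(-1.0, 1.0) for _ in range(n_in)] for _ in range(n_out)]
    for n_in, n_out in zip(LAYER_SIZES, LAYER_SIZES[1:])
]
biases = [[0.0] * n_out for n_out in LAYER_SIZES[1:]]

def forward(x):
    """Propagate an input through the fully connected layers (ReLU on hidden layers)."""
    for layer_idx, (w, b) in enumerate(zip(weights, biases)):
        x = [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]
        if layer_idx < len(weights) - 1:  # no activation on the output layer
            x = [max(0.0, v) for v in x]
    return x
```

The sketch shows that naming a model family and layer/node counts fixes only the architecture; the behavior of the model is determined by the specific weight and bias values.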
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 11-13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 11 recites the limitation “performing foreign matter detection on the scout image of the subject using at least one foreign matter detection model”, which fails to effectively define the metes and bounds of the claim as to the structure of the “foreign matter detection model” that is used for the indicated function. The applicant’s specification does state “A foreign subject detection model may be a trained model (e.g., a trained machine learning model) configured to receive the scout image of the subject as an input, and output a result of foreign matter detection (referred to as a foreign matter detection result for brevity).” (Par. 173 of applicant’s spec.) and “the foreign matter detection model may include a linear regression model, a ridge regression model, a support vector regression model, a support vector machine model, a decision tree model, a fully connected neural network model, a deep learning model, etc. Exemplary deep learning models may include a deep neural network (DNN) model, a convolutional neural network (CNN) model…” (Par. 174 of applicant’s spec.), and further recites “The preliminary model may include one or more model parameters, such as the number (or count) of layers, the number (or count) of nodes, a loss function, or the like, or any combination thereof. Before training, the preliminary model may have one or more initial parameter values of the model parameter(s).” (Par. 181 of applicant’s spec.). However, simply reciting an exemplary model type and reciting the existence of layers does not amount to sufficient support. For example, the applicant has not provided sufficient detail regarding the specific weights, biases, or layers used for the model itself. As such, the claim is indefinite as the applicant has failed to effectively define the metes and bounds of the claim. For examination purposes, this will be interpreted as any generic algorithm.
Claims 12 and 13 are dependent on claim 11, and as such are also rejected.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-7, 14-19, and 28 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. These claims do not include additional elements that are sufficient to integrate the judicial exception into a practical application or that amount to significantly more than the judicial exception.
Step 1 of the subject matter eligibility test
Claims 1, 14, and 28 are directed to a system, a method, and a non-transitory computer readable medium, respectively, each of which falls within one of the four statutory categories of patentable subject matter.
Step 2A of the subject matter eligibility test
Prong 1: Claims 1, 14, and 28 recite the abstract idea of a mental process as follows: “determining a respiratory amplitude of a respiratory motion of a subject during a medical scan based on a respiratory signal relating to the respiratory motion”, “obtaining surface information of the target region”, and “correcting the respiratory amplitude based on the surface information of the target region”.
The steps of determining a respiratory amplitude of a respiratory motion of a subject during a medical scan based on a respiratory signal relating to the respiratory motion, obtaining surface information of the target region, and correcting the respiratory amplitude based on the surface information of the target region can each be practically performed in the human mind, or with the aid of pen and paper, but for the recitation of a generic processor, a computer environment, or the mere use of the computer as a tool to perform the steps.
A person of ordinary skill in the art could reasonably determine a respiratory amplitude of a respiratory motion of a subject during a medical scan when handed a piece of paper showing a respiratory signal. A person of ordinary skill in the art could reasonably obtain surface information of a target region from a respiratory signal. A person of ordinary skill in the art could reasonably correct a respiratory amplitude when handed a piece of paper showing the surface information of the target region.
There is currently nothing to suggest an undue level of complexity in the determining, obtaining, or correcting steps. Therefore, a person would practically be able to perform the determining, obtaining, and correcting steps mentally or with the aid of pen and paper.
Prong 2: Claims 1, 14, and 28 do not recite additional elements that integrate the mental process into a practical application. Therefore, the claims are “directed to” the mental process. The additional elements merely:
Recite the words “apply it” (or an equivalent) with the judicial exception, include instructions to implement the abstract idea on a computer, or merely use the computer as a tool to perform the abstract idea (e.g., a storage with instructions (Claim 1), a processor communicating with a storage device (Claim 1), a computing device with a storage device and processor (Claim 14), and “A non-transitory computer readable medium, comprising at least one set of instructions, wherein when executed by one or more processors of a computing device” (Claim 28)); and
Add insignificant extra-solution activity, namely the pre-solution activity of using generic data-gathering components (e.g., “a respiratory motion detector”).
For claims 1, 14, and 28, the additional elements merely serve to gather data to be used by the abstract idea. The processor, storage, instructions, and computing device are merely used in a pre-solution step of necessary data gathering for the abstract idea. The respiratory motion detector is merely used as an additional form of data gathering. There is no practical application because the abstract idea is not applied, relied on, or used in a meaningful way. The processing that is performed remains in the abstract realm, i.e., the gathered data is not used for a treatment or other meaningful purpose. Additionally, there is no overall improvement to existing technology. The mental process merely runs on generic computer elements that do not change the functionality of the device itself. Therefore, the additional elements, alone or in combination, do not integrate the abstract idea into a practical application.
Step 2B of the subject matter eligibility test for Claims 1, 14, and 28
Per the Berkheimer requirement, the additional elements are well-understood, routine, and conventional. For example,
A respiratory motion detector, as disclosed by Murdock (US Pub. No. 20180065017), hereinafter Murdock: “A multifunction physiological processor and the multifunction sound processor work in similar fashion. The multifunction physiological processor wirelessly secures, cures and processes, analyzes heart rate, respiration rate, brain waves and many other physiological functions, simultaneously on and off the Internet or Cloud using a variety of well-known and established technologies…” and “…biosensor, ultrasound sensor, accelerometer sensor, lidar sensor, sonar sensor, video camera sensor including video streaming, piezo sensor including electric and resistive, eye sensor, infrared sensor…”; and by Bohan (US Pub. No. 20060047188), hereinafter Bohan: “the present invention anticipates using any suitable instrument for detecting physiological data including a sphygmomanometer, an infrared sensor array, a means for detecting pulse rate, a means for detecting blood oxygen levels, a means for detecting body temperature, a means to measure respiratory rate and combinations thereof such as are already well known” (Par. 80); and
A processor, storage, instructions, and computing device, as disclosed by Miao (US Pub. No. 20140355855), hereinafter Miao: “The above-described methods for MRI-based motion correction for PET images can be implemented on a computer using well-known computer processors, memory units, storage devices, computer software, and other components…” (Par. 27) and by Sanger (US Pub. No. 20160007871), hereinafter Sanger: “Systems, apparatus, and methods described herein may be implemented using digital circuitry, or using hardware using well known processors, memory units, storage devices, computer software, and other components. Typically, hardware includes a processor for executing instructions and one or more memories for storing instructions and data. A computer may also include, or be coupled to, one or more storage devices, such as one or more magnetic disks, internal hard disks and removable disks, optical disks, etc.” (Par. 61)
are all well-understood, routine, and conventional.
Claims 2-7 and 15-19 do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception (i.e., an inventive concept), as all of the elements are directed to further describing the abstract idea, pre-solution activities, and computer implementation.
The dependent claims merely further define the abstract idea and are therefore directed to an abstract idea for similar reasons. The limitations that further describe the abstract idea include:
wherein the corrected respiratory amplitude reflects an intensity of the respiratory motion of the subject along a standard direction (Claims 2 and 15),
acquiring a three-dimensional (3D) optical image of the subject (Claims 3 and 16) (Examiner's Note: A person of ordinary skill in the art could reasonably acquire a 3D image with a generic computer),
determining, based on the 3D optical image of the subject, the surface information of the target region (Claims 3 and 16),
determining, based on the surface information of the target region, a surface profile of the target region (Claims 4 and 17),
dividing the surface profile into a plurality of subsections (Claims 4 and 17) (Examiner's Note: A person of ordinary skill in the art could reasonably divide a surface profile into subsections mentally or with a pen and paper),
for each of the plurality of subsections, determining a correction factor corresponding to the subsection (Claims 4 and 17) (Examiner's Note: A person of ordinary skill in the art could reasonably determine a correction factor corresponding to a subsection with a pen and paper),
correcting, based on the plurality of correction factors corresponding to the plurality of subsections, the respiratory amplitude of the subject (Claims 4 and 17) (Examiner's Note: A person of ordinary skill in the art could reasonably correct a respiratory amplitude based on a correction factor based on having a piece of paper with respiratory amplitudes and correction factors),
obtaining an installation angle relative to a reference direction (Claims 5 and 18),
determining an included angle between the subsection and the reference direction (Claims 5 and 18) (Examiner's Note: A person of ordinary skill in the art could reasonably determine an angle based on having reference angle information),
the determining a respiratory amplitude of a respiratory motion of a subject comprises determining a plurality of respiratory amplitudes of the respiratory motion at a plurality of time points during the medical scan based on the respiratory signal (Claims 6 and 19),
the obtaining surface information of the target region comprises obtaining sets of surface information of the target region, each of the sets of surface information corresponding to one of the plurality of time points (Claims 6 and 19),
the correcting the respiratory amplitude comprises, for each of the plurality of time points, correcting the respiratory amplitude at the time point based on the surface information corresponding to the time point (Claims 6 and 19),
obtaining scan data of the subject (Claim 7),
processing the scan data of the subject based on the corrected respiratory amplitudes corresponding to the plurality of time points (Claim 7).
Further describe the pre-solution activity (or structure used for such activity):
An image acquisition device (Claims 3 and 16),
A scanner (Claim 7).
Per the Berkheimer requirement, the additional elements are well-understood, routine, and conventional. For example,
An image acquisition device as disclosed by Addison (US Pub. No. 20190209046) hereinafter Addison “The camera 214 generates a sequence of images over time. The camera 214 may be a depth sensing camera, such as a Kinect camera from Microsoft Corp. (Redmond, Wash.). A depth sensing camera can detect a distance between the camera and objects in its field of view. Such information can be used, as disclosed herein” (Par. 83) (Examiner's Note: Kinect is commercially available) and Liang (US Pub. No. 20030208116) hereinafter Liang “Generating such a 3D image representation generally involves acquiring a sequential series of 2D slice images, such as from a spiral computed tomography (CT) scanner, magnetic resonance imaging (MRI) scanner or ultrasound scanner (US) and transforming this 2D image data into a volumetric data set which provides a 3D representation of the region on a 2D display, such as a computer monitor. Such a technique is well known in the art” (Par. 31),
A scanner as disclosed by Addison (US Pub. No. 20190209046) hereinafter Addison “The camera 214 generates a sequence of images over time. The camera 214 may be a depth sensing camera, such as a Kinect camera from Microsoft Corp. (Redmond, Wash.). A depth sensing camera can detect a distance between the camera and objects in its field of view. Such information can be used, as disclosed herein” (Par. 83) (Examiner's Note: Kinect is commercially available) and Liang (US Pub. No. 20030208116) hereinafter Liang “Generating such a 3D image representation generally involves acquiring a sequential series of 2D slice images, such as from a spiral computed tomography (CT) scanner, magnetic resonance imaging (MRI) scanner or ultrasound scanner (US) and transforming this 2D image data into a volumetric data set which provides a 3D representation of the region on a 2D display, such as a computer monitor. Such a technique is well known in the art” (Par. 31).
are all well-understood, routine, and conventional.
Taken alone or in combination, the additional elements do not integrate the judicial exception into a practical application, at least because the abstract idea is not applied, relied on, or used in a meaningful way. The additional elements do not add anything significantly more than the abstract idea. The collective functions of the additional elements merely provide computer/electronic implementation and processing, and data gathering, with no additional elements beyond those of the abstract idea. There is no indication that the combination of elements improves the functioning of a computer or any other technology or technical field. Therefore, the claims are rejected as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The claims are generally directed to a system. The system includes a processor configured to direct the system to perform operations that include determining a respiratory amplitude of a subject during a medical scan, wherein the respiratory signal is collected with a respiratory motion detector by emitting signals toward the subject. The operations further include obtaining surface information of the target region and correcting the respiratory amplitude based on the surface information.
Claims 1-2, 4-7, 14-15, 17-19, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Addison (US Pub. No. 20190209046), hereinafter Addison.
Regarding claim 1, Addison discloses A system (Fig. 1, Fig. 2, Par. 82 (patient based monitoring system)), comprising:
at least one storage device including a set of instructions (Par. 89, “a processor 315 that is coupled to a memory 305. The processor 315 can store and recall data and applications in the memory 305”); and
at least one processor configured to communicate with the at least one storage device (Par. 89, “a processor 315 that is coupled to a memory 305. The processor 315 can store and recall data and applications in the memory 305…”), wherein when executing the set of instructions, the at least one processor is configured to direct the system to perform operations including (Par. 89, “a processor 315 that is coupled to a memory 305. The processor 315 can store and recall data and applications in the memory 305, including applications that process information and send commands/signals according to any of the methods disclosed herein…”):
determining a respiratory amplitude of a respiratory motion of a subject during a medical scan based on a respiratory signal relating to the respiratory motion (Par. 127, “Respiratory displacements of the chest are shown. These respiratory displacements are denoted as d.sub.i,j, where i and j are the indices along the vertical and horizontal plane of chest.”), wherein the respiratory signal is collected using a respiratory motion detector (Par. 82, “The system 200 includes a non-contact detector 210 placed remote from the patient 212. In this embodiment, the detector 210 includes a camera 214, such as a video camera. The camera 214 is remote from the patient, in that it is spaced apart from and does not contact the patient 212. The camera 214 includes a detector exposed to a field of view 216 that encompasses at least a portion of the patient 212.”) (Par. 89 (image capture device – 385)) toward a target region of the subject (Par. 127, “FIG. 30 is a diagram showing a representation of a patient at an angle to a line of sight of a camera from above according to various embodiments described herein”);
obtaining surface information of the target region (Fig. 26-30, Par. 127, “FIG. 30 shows the patient sitting at an angle (θ) to the line of sight. In this case, the displacements along the line of sight of the camera d*.sub.i,j will be less than the actual displacements orthogonal to the chest wall” (angle of incidence)); and
correcting the respiratory amplitude based on the surface information of the target region (Par. 127, “We may correct these displacements by dividing by the cosine of the angle θ as follows in Equation 7…” (equation 7)) (Par. 128, “The embodiments described above with respect to FIGS. 29 and 30 assume that the volume change of the ROI is solely in a direction orthogonal to the plane of the chest wall. Additional correction factors may be used to take account of the breathing which expands the torso in lateral directions. These correction factors may be applied irrespective of a position or orientation of the chest to the camera.”).
Addison fails to explicitly disclose a respiratory motion detector by emitting detecting signals toward a target region of the subject.
However, Addison does teach in an example a respiratory motion detector by emitting detecting signals toward a target region of the subject (Par. 89 (image capture device – 385)) (Par. 90, “For example, backscatter x-ray or millimeter wave scanning technology may be utilized to scan a patient, which can be used to define an ROI and monitor movement for tidal volume calculations. Advantageously, such technologies may be able to “see” through clothing, bedding, or other materials while giving an accurate representation of the patient's skin. This may allow for more accurate tidal wave measurements, particularly if the patient is wearing baggy clothing or is under bedding.”).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Addison with an example of Addison to explicitly include using a respiratory motion detector by emitting detecting signals toward a target region of the subject through the combination of examples as it would have yielded the predictable result of monitoring patient parameters through clothing (Addison (Par. 90)).
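For context, the correction quoted from Addison (Par. 127) divides the displacement observed along the camera's line of sight by the cosine of the viewing angle θ to recover the displacement orthogonal to the chest wall. A short sketch of that arithmetic follows; the function name and sample values are hypothetical and not taken from Addison.

```python
import math

def correct_displacement(observed, angle_deg):
    """Recover the displacement orthogonal to the chest wall from the
    displacement observed along the camera's line of sight, i.e., invert
    the relationship d* = d * cos(theta) described in Addison (Par. 127)."""
    return observed / math.cos(math.radians(angle_deg))

# At a 0-degree angle the camera sees the full displacement; at larger
# angles the observed displacement understates the true one, so the
# correction scales it back up.
corrected = correct_displacement(8.0, 30.0)  # larger than the observed 8.0
```

The sketch reflects only the single cosine factor of Equation 7; Addison (Par. 128) notes that additional correction factors may account for lateral expansion of the torso.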
Regarding claim 14, Addison discloses A method (Fig. 1, Fig. 2, Par. 82 (patient based monitoring system)) (abstract (method and system)), the method being implemented on a computing device having at least one storage device and at least one processor (Par. 89, “a processor 315 that is coupled to a memory 305. The processor 315 can store and recall data and applications in the memory 305, including applications that process information and send commands/signals according to any of the methods disclosed herein…”), the method comprising:
determining a respiratory amplitude of a respiratory motion of a subject during a medical scan based on a respiratory signal relating to the respiratory motion (Par. 127, “Respiratory displacements of the chest are shown. These respiratory displacements are denoted as d.sub.i,j, where i and j are the indices along the vertical and horizontal plane of chest.”), wherein the respiratory signal is collected using a respiratory motion detector (Par. 82, “The system 200 includes a non-contact detector 210 placed remote from the patient 212. In this embodiment, the detector 210 includes a camera 214, such as a video camera. The camera 214 is remote from the patient, in that it is spaced apart from and does not contact the patient 212. The camera 214 includes a detector exposed to a field of view 216 that encompasses at least a portion of the patient 212.”) (Par. 89 (image capture device – 385)) toward a target region of the subject (Par. 127, “FIG. 30 is a diagram showing a representation of a patient at an angle to a line of sight of a camera from above according to various embodiments described herein”);
obtaining surface information of the target region (Fig. 26-30, Par. 127, “FIG. 30 shows the patient sitting at an angle (θ) to the line of sight. In this case, the displacements along the line of sight of the camera d*.sub.i,j will be less than the actual displacements orthogonal to the chest wall” (angle of incidence)); and
correcting the respiratory amplitude based on the surface information of the target region (Par. 127, “We may correct these displacements by dividing by the cosine of the angle θ as follows in Equation 7…” (equation 7)) (Par. 128, “The embodiments described above with respect to FIGS. 29 and 30 assume that the volume change of the ROI is solely in a direction orthogonal to the plane of the chest wall. Additional correction factors may be used to take account of the breathing which expands the torso in lateral directions. These correction factors may be applied irrespective of a position or orientation of the chest to the camera.”).
Addison fails to explicitly disclose a respiratory motion detector by emitting detecting signals toward a target region of the subject.
However, Addison does teach in an example a respiratory motion detector by emitting detecting signals toward a target region of the subject (Par. 89 (image capture device – 385)) (Par. 90, “For example, backscatter x-ray or millimeter wave scanning technology may be utilized to scan a patient, which can be used to define an ROI and monitor movement for tidal volume calculations. Advantageously, such technologies may be able to “see” through clothing, bedding, or other materials while giving an accurate representation of the patient's skin. This may allow for more accurate tidal wave measurements, particularly if the patient is wearing baggy clothing or is under bedding.”).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the method of Addison with an example of Addison to explicitly include using a respiratory motion detector by emitting detecting signals toward a target region of the subject through the combination of examples as it would have yielded the predictable result of monitoring patient parameters through clothing (Addison (Par. 90)).
Regarding claim 28, Addison discloses A non-transitory computer readable medium, comprising at least one set of instructions (Par. 89, “a processor 315 that is coupled to a memory 305. The processor 315 can store and recall data and applications in the memory 305, including applications that process information and send commands/signals according to any of the methods disclosed herein…”) (Fig. 1, Fig. 2, Par. 82 (patient based monitoring system)), wherein when executed by one or more processors of a computing device (Par. 89, “a processor 315 that is coupled to a memory 305. The processor 315 can store and recall data and applications in the memory 305, including applications that process information and send commands/signals according to any of the methods disclosed herein…”)(Fig. 1, Fig. 2, Par. 82 (patient based monitoring system)), the at least one set of instructions causes the computing device to perform a method (Fig. 1, Fig. 2, Par. 82 (patient based monitoring system)) (abstract (method and system)), the method comprising:
determining a respiratory amplitude of a respiratory motion of a subject during a medical scan based on a respiratory signal relating to the respiratory motion (Par. 127, “Respiratory displacements of the chest are shown. These respiratory displacements are denoted as d.sub.i,j, where i and j are the indices along the vertical and horizontal plane of chest.”), wherein the respiratory signal is collected using a respiratory motion detector (Par. 82, “The system 200 includes a non-contact detector 210 placed remote from the patient 212. In this embodiment, the detector 210 includes a camera 214, such as a video camera. The camera 214 is remote from the patient, in that it is spaced apart from and does not contact the patient 212. The camera 214 includes a detector exposed to a field of view 216 that encompasses at least a portion of the patient 212.”) (Par. 89 (image capture device – 385)) toward a target region of the subject (Par. 127, “FIG. 30 is a diagram showing a representation of a patient at an angle to a line of sight of a camera from above according to various embodiments described herein”);
obtaining surface information of the target region (Fig. 26-30, Par. 127, “FIG. 30 shows the patient sitting at an angle (θ) to the line of sight. In this case, the displacements along the line of sight of the camera d*.sub.i,j will be less than the actual displacements orthogonal to the chest wall” (angle of incidence)); and
correcting the respiratory amplitude based on the surface information of the target region (Par. 127, “We may correct these displacements by dividing by the cosine of the angle θ as follows in Equation 7…” (equation 7)) (Par. 128, “The embodiments described above with respect to FIGS. 29 and 30 assume that the volume change of the ROI is solely in a direction orthogonal to the plane of the chest wall. Additional correction factors may be used to take account of the breathing which expands the torso in lateral directions. These correction factors may be applied irrespective of a position or orientation of the chest to the camera.”).
Addison fails to explicitly disclose that the respiratory motion detector collects the respiratory signal by emitting detecting signals toward a target region of the subject.
However, Addison does teach in an example a respiratory motion detector by emitting detecting signals toward a target region of the subject (Par. 89 (image capture device – 385)) (Par. 90, “For example, backscatter x-ray or millimeter wave scanning technology may be utilized to scan a patient, which can be used to define an ROI and monitor movement for tidal volume calculations. Advantageously, such technologies may be able to “see” through clothing, bedding, or other materials while giving an accurate representation of the patient's skin. This may allow for more accurate tidal wave measurements, particularly if the patient is wearing baggy clothing or is under bedding.”).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the method of Addison with an example of Addison to explicitly include using a respiratory motion detector by emitting detecting signals toward a target region of the subject through the combination of examples as it would have yielded the predictable result of monitoring patient parameters through clothing (Addison (Par. 90)).
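The angle correction Addison describes in Par. 127 (Equation 7), in which the displacement measured along the camera's line of sight is divided by the cosine of the angle θ between the chest plane and that line of sight, can be sketched as follows. This is an illustrative Python sketch only; the function name and the 60-degree example angle are assumptions rather than material from the application or the reference.

```python
import math

def correct_displacement(measured_displacement: float, angle_rad: float) -> float:
    """Recover the displacement orthogonal to the chest wall from the
    displacement measured along the camera's line of sight.

    Per Addison's Par. 127 (Equation 7), when the chest plane sits at an
    angle theta to the line of sight, the measured displacement d* is
    divided by cos(theta) to estimate the true displacement d.
    """
    return measured_displacement / math.cos(angle_rad)

# Example: a 4 mm line-of-sight displacement viewed at 60 degrees
d_true = correct_displacement(4.0, math.radians(60.0))
print(round(d_true, 2))  # 8.0
```

Since cos(60°) = 0.5, the measured displacement is doubled, matching the intuition that an obliquely viewed chest wall appears to move less than it actually does.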
Regarding claim 2, modified Addison further discloses wherein the corrected respiratory amplitude reflects an intensity of the respiratory motion of the subject along a standard direction (Par. 127, “a non-orthogonal angle of the plane of the chest to the line of sight of the camera. FIG. 29 is a diagram showing a representation of a patient from above according to various embodiments described herein. FIG. 30 is a diagram showing a representation of a patient at an angle to a line of sight of a camera from above according to various embodiments described herein…”).
Regarding claim 15, modified Addison teaches the system of claim 2 above, whose limitations correspond to those of the method of claim 15. As the claims are similar, claim 15 is rejected in the same manner as claim 2.
Regarding claim 4, modified Addison further discloses wherein the correcting the respiratory amplitude based on the surface information of the target region includes (as indicated above):
determining, based on the surface information of the target region, a surface profile of the target region (Par. 127, “FIG. 29 is a diagram showing a representation of a patient from above according to various embodiments described herein. FIG. 30 is a diagram showing a representation of a patient at an angle to a line of sight of a camera from above according to various embodiments described herein…”);
dividing the surface profile into a plurality of subsections (Par. 110-112 (breaking down the image into differing regions)).
Modified Addison fails to explicitly disclose for each of the plurality of subsections, determining a correction factor corresponding to the subsection; and correcting, based on the plurality of correction factors corresponding to the plurality of subsections, the respiratory amplitude of the subject.
However, Addison does teach in examples for each of the plurality of subsections, determining a correction factor corresponding to the subsection (Par. 128, “Additional correction factors may be used to take account of the breathing which expands the torso in lateral directions. These correction factors may be applied irrespective of a position or orientation of the chest to the camera.”); and
correcting, based on the plurality of correction factors corresponding to the plurality of subsections, the respiratory amplitude of the subject (Par. 128, “Additional correction factors may be used to take account of the breathing which expands the torso in lateral directions. These correction factors may be applied irrespective of a position or orientation of the chest to the camera.”) (Par. 130-131 (flood field depth range increased based on using skeletal angle)).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Addison with an example of Addison to explicitly include for each of the plurality of subsections, determining a correction factor corresponding to the subsection; and correcting, based on the plurality of correction factors corresponding to the plurality of subsections, the respiratory amplitude of the subject through the combination of examples as it would have yielded the predictable result of factoring in respiratory movements in differing directions (Addison (Par. 128)).
Regarding claim 17, modified Addison teaches the system of claim 4 above, whose limitations correspond to those of the method of claim 17. As the claims are similar, claim 17 is rejected in the same manner as claim 4.
Regarding claim 5, modified Addison fails to explicitly disclose the limitations of the claim.
However, Addison does teach in examples wherein, for each of the plurality of subsections, the determining a correction factor corresponding to the subsection includes (as indicated in claim 4 above):
obtaining an installation angle of the respiratory motion detector relative to a reference direction (Fig. 31-33) (Par. 129 (angle of camera))(Par. 128, “Additional correction factors may be used to take account of the breathing which expands the torso in lateral directions. These correction factors may be applied irrespective of a position or orientation of the chest to the camera.”);
determining an included angle between the subsection and the reference direction (Fig. 31-33) (Par. 129 (angle of camera relative to user))(Par. 128, “Additional correction factors may be used to take account of the breathing which expands the torso in lateral directions. These correction factors may be applied irrespective of a position or orientation of the chest to the camera.”); and
determining, based on the installation angle and the included angle, the correction factor corresponding to the subsection (Par. 130, “FIG. 33 is a diagram showing an angle at which a patient's ROI is not orthogonal to a line of sight of a camera according to various embodiments described herein…”) (Par. 129, “FIG. 31 is a diagram showing apparent movement of an ROI of a patient orthogonal to a line of sight of a camera according to various embodiments described herein. In other words, the surface of the patient's chest is oriented orthogonal to the line of sight of the camera, and the movement shown is movement, as seen by the camera, of the chest of the orthogonally oriented patient as that patient breathes…”) (Fig. 31-33) (Par. 128, “Additional correction factors may be used to take account of the breathing which expands the torso in lateral directions. These correction factors may be applied irrespective of a position or orientation of the chest to the camera.”).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Addison with an example of Addison to explicitly include wherein, for each of the plurality of subsections, the determining a correction factor corresponding to the subsection includes: obtaining an installation angle of the respiratory motion detector relative to a reference direction; determining an included angle between the subsection and the reference direction; and determining, based on the installation angle and the included angle, the correction factor corresponding to the subsection through the combination of examples, as it would have yielded the predictable result of factoring in respiratory movements in differing directions (Addison (Par. 128)).
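Addison does not give a closed-form expression for a per-subsection correction factor, so the following Python sketch is purely hypothetical: it assumes the effective viewing angle of each subsection is the difference between the detector's installation angle and the subsection's included angle (both measured from the same reference direction), and it extends the single-angle cosine correction of Par. 127 to each subsection. All function names and the numerical angles are assumptions.

```python
import math

def subsection_correction_factors(installation_angle, included_angles):
    # Hypothetical: the effective viewing angle of each subsection is the
    # difference between the detector's installation angle and that
    # subsection's included angle; the correction factor is then 1/cos of
    # the effective angle, by analogy to Addison's single-angle correction.
    return [1.0 / math.cos(installation_angle - a) for a in included_angles]

def correct_amplitude(raw_amplitudes, factors):
    # Apply each subsection's factor and sum the contributions to obtain
    # the corrected overall respiratory amplitude.
    return sum(d * f for d, f in zip(raw_amplitudes, factors))

# Detector installed at 30 degrees; two subsections at 0 and 30 degrees
factors = subsection_correction_factors(math.radians(30), [0.0, math.radians(30)])
print([round(f, 3) for f in factors])  # [1.155, 1.0]
```

A subsection facing the detector head-on (effective angle 0) is left unchanged, while an obliquely viewed subsection is scaled up, consistent with the cited teaching that correction factors may vary with orientation.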
Regarding claim 18, modified Addison teaches the system of claim 5 above, whose limitations correspond to those of the method of claim 18. As the claims are similar, claim 18 is rejected in the same manner as claim 5.
Regarding claim 6, modified Addison further discloses the determining a respiratory amplitude of a respiratory motion of a subject comprises determining a plurality of respiratory amplitudes of the respiratory motion at a plurality of time points during the medical scan based on the respiratory signal (Par. 127, “Respiratory displacements of the chest are shown. These respiratory displacements are denoted as d.sub.i,j, where i and j are the indices along the vertical and horizontal plane of chest.”) (Par. 127 (displacements integrated across the ROI)) (Par. 83 (over time)),
the obtaining surface information of the target region comprises obtaining sets of surface information of the target region, each of the sets of surface information corresponding to one of the plurality of time points (Fig. 26-30, Par. 127, “FIG. 30 shows the patient sitting at an angle (θ) to the line of sight. In this case, the displacements along the line of sight of the camera d*.sub.i,j will be less than the actual displacements orthogonal to the chest wall” (angle of incidence)) (Par. 127 (displacements integrated across the ROI)) (Par. 83 (over time)), and
the correcting the respiratory amplitude comprises, for each of the plurality of time points, correcting the respiratory amplitude at the time point based on the surface information corresponding to the time point (Par. 127, “We may correct these displacements by dividing by the cosine of the angle θ as follows in Equation 7…” (equation 7)) (Par. 128, “The embodiments described above with respect to FIGS. 29 and 30 assume that the volume change of the ROI is solely in a direction orthogonal to the plane of the chest wall. Additional correction factors may be used to take account of the breathing which expands the torso in lateral directions. These correction factors may be applied irrespective of a position or orientation of the chest to the camera.”) (Par. 127 (displacements integrated across the ROI)) (Par. 83 (over time)).
Regarding claim 19, modified Addison teaches the system of claim 6 above, whose limitations correspond to those of the method of claim 19. As the claims are similar, claim 19 is rejected in the same manner as claim 6.
Regarding claim 7, modified Addison fails to explicitly disclose the limitations of the claim.
However, Addison does teach in an alternate embodiment obtaining scan data of the subject collected by medical scan (Fig. 46 (video signal 4605))(Fig. 45 (multiple ROI))(Par. 149, “In various embodiments, a multiple ROI method using a single camera may also be used…”); and processing the scan data (Fig. 46, (step 4615 data)) of the subject based on the corrected respiratory amplitudes corresponding to the plurality of time points (Par. 151, “a video signal 4605, from which a larger ROI is determined at 4610 and a smaller chest ROI is determined at 4615. The method 4600 further includes filtering the chest ROI at 4620. At 4625, the tidal volume of the patient is output.”) (Par. 152, (filtering signals)).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Addison with an example of Addison to include obtaining scan data of the subject collected by medical scan; and processing the scan data of the subject based on the corrected respiratory amplitudes corresponding to the plurality of time points through the combination of embodiments as it would have yielded the predictable result of improving signal quality (Addison (Par. 152)).
Claims 3, 8, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Addison as applied to claims 1, 6, and 14 above, and further in view of Rehouma (US Pub. No. 20220378321), hereinafter Rehouma.
Addison teaches the system of claim 1 above.
Regarding claim 3, modified Addison fails to explicitly disclose the limitations of the claim.
However, Addison does disclose wherein the obtaining surface information of the target region includes:
acquiring, using an image acquisition device, an image of the subject (Fig. 26-30, Par. 127, “FIG. 30 shows the patient sitting at an angle (θ) to the line of sight. In this case, the displacements along the line of sight of the camera d*.sub.i,j will be less than the actual displacements orthogonal to the chest wall” (camera image)) (Par. 89-90 (image capture device 385)) (Par. 124 (image capture)); and
determining, based on the image of the subject, the surface information of the target region (Fig. 26-30, Par. 127, “FIG. 30 shows the patient sitting at an angle (θ) to the line of sight. In this case, the displacements along the line of sight of the camera d*.sub.i,j will be less than the actual displacements orthogonal to the chest wall” (angle of incidence)).
Addison does teach depth sensing cameras (Par. 83, “camera 214 may be a depth sensing camera, such as a Kinect camera from Microsoft Corp. (Redmond, Wash.)”).
Rehouma teaches acquiring, using an image acquisition device, a three-dimensional (3D) optical image of the subject (Par. 99, “As shown, at step 302, the 3D camera 102 generates a 3D image encompassing at least the thoraco-abdominal region of the patient, and more specifically the thorax region 14 and the abdomen region 16 of the patient 10 in this example. The 3D image can be stored on the memory 204, or stored on a remote memory as desired. The 3D image can also be communicated to a remote network for further processing and/or storing.”) (Fig. 3, method – 300); and
determining, based on the 3D optical image of the subject, the surface information of the target region (Par. 101, “At step 306, the computer 104 identifies first coordinates indicating coordinates of at least a first point of the thoraco-abdominal region of the patient 10 in the 3D image…”)(Par. 102, “At step 308, the computer 104 identifies second coordinates indicating coordinates of at least a different, second point of the thoraco-abdominal region of the patient 10 in the 3D image…”) (Par. 103, “At step 310, the computer 104 determines a distance based on the first and second coordinates. For instance, in embodiments where the first and second coordinates correspond to thorax and abdominal coordinates, respectively, the determined distance can correspond to a thoraco-abdominal distance....”).
Addison and Rehouma are considered to be analogous art to the claimed invention, as each involves imaging of a patient.
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Addison with that of Rehouma to explicitly include wherein the obtaining surface information of the target region includes: acquiring, using an image acquisition device, the three-dimensional (3D) optical image of Rehouma of the subject; and determining, based on the 3D optical image of Rehouma, the surface information of the target region through the combination of references, as differing cameras are known (Rehouma (Par. 89)) (Addison (Par. 90, 94)) and the combination would have yielded the predictable result of providing additional depth information (Addison (Par. 83)).
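Rehouma's steps 306-310, as quoted above, identify two points of the thoraco-abdominal region in the 3D image and determine a distance from their coordinates. A minimal sketch, assuming Cartesian (x, y, z) coordinates in consistent units and a Euclidean metric (the function name and the metric are assumptions, as Rehouma's quoted passages do not specify them):

```python
import math

def thoraco_abdominal_distance(point_a, point_b):
    # Euclidean distance between two (x, y, z) coordinates identified in
    # the 3D image, e.g. a thorax point and an abdomen point (cf. Rehouma,
    # steps 306-310).
    return math.dist(point_a, point_b)

# Example with made-up coordinates (units are whatever the 3D camera reports)
print(thoraco_abdominal_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # 5.0
```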
Regarding claim 16, modified Addison teaches the system of claim 3 above, whose limitations correspond to those of the method of claim 16. As the claims are similar, claim 16 is rejected in the same manner as claim 3.
Modified Addison teaches the system of claim 6 above.
Regarding claim 8, modified Addison fails to explicitly disclose the limitations of the claim.
However, Addison does teach in an alternate embodiment determining, based on at least one of respiratory motion data or posture data, motion data of the subject, wherein the respiratory motion data includes the corrected respiratory amplitude values corresponding to the plurality of time points and the posture data is collected over a time period including the plurality of time points (Par. 150, “Depth sensing camera data may be used to determine that the patient is eating, for example through movement of the jaw similar to chewing, neck movement indicating swallowing, hands moving periodically to the mouth to feed, appearance of a straw-like shape in front of the patient's face, etc. By identifying instances where irregular breathing is likely, the system can filter out data collected during those periods so as not to affect tidal volume measurements, averages, or other calculations. Additionally, the determinations of scenarios like eating and talking where breathing is expected to be irregular may also be beneficial for alarm conditions.”);
determining, based on the motion data of the subject, whether the subject has an obvious motion in the time period (Par. 150, “Depth sensing camera data may be used to determine that the patient is eating, for example through movement of the jaw similar to chewing, neck movement indicating swallowing, hands moving periodically to the mouth to feed, appearance of a straw-like shape in front of the patient's face, etc…”); and
in response to determining that the subject has an obvious motion in the time period (Par. 150, “Depth sensing camera data may be used to determine that the patient is eating, for example through movement of the jaw similar to chewing, neck movement indicating swallowing, hands moving periodically to the mouth to feed, appearance of a straw-like shape in front of the patient's face, etc. By identifying instances where irregular breathing is likely, the system can filter out data collected during those periods so as not to affect tidal volume measurements, averages, or other calculations. Additionally, the determinations of scenarios like eating and talking where breathing is expected to be irregular may also be beneficial for alarm conditions. For example, in a scenario when a patient is talking, any alarm related to a tidal volume measurement may be suppressed by the system”), controlling a device to perform a target operation (Par. 150, “Depth sensing camera data may be used to determine that the patient is eating, for example through movement of the jaw similar to chewing, neck movement indicating swallowing, hands moving periodically to the mouth to feed, appearance of a straw-like shape in front of the patient's face, etc. By identifying instances where irregular breathing is likely, the system can filter out data collected during those periods so as not to affect tidal volume measurements, averages, or other calculations. Additionally, the determinations of scenarios like eating and talking where breathing is expected to be irregular may also be beneficial for alarm conditions. For example, in a scenario when a patient is talking, any alarm related to a tidal volume measurement may be suppressed by the system”).
Rehouma discloses controlling a display device to perform a target operation (Par. 129, “the respiratory parameter, which may differ from one embodiment to another, may be monitored over time. As such, alert(s) may be generated when the respiratory rate exceeds a given threshold, when the tidal volume is below a given threshold and/or the retraction distance is above a given distance. Such alerts may be displayed on a display screen or acoustically emitted near the patient's bed.”).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Addison with the examples of Addison and the display device of Rehouma to include determining, based on at least one of respiratory motion data or posture data, motion data of the subject, wherein the respiratory motion data includes the corrected respiratory amplitude values corresponding to the plurality of time points and the posture data is collected over a time period including the plurality of time points; determining, based on the motion data of the subject, whether the subject has an obvious motion in the time period; and, in response to determining that the subject has an obvious motion in the time period, controlling a display device of Rehouma to perform a target operation through the combination of embodiments and references, as it would have yielded the predictable result of providing direct feedback regarding the measured data (Rehouma (Par. 129)).
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Addison in view of Rehouma as applied to claim 8 above, and further in view of Ellerau (Ellerau, et al., “Feasibility Study of a novel MRI-safe and interactive respiratory biofeedback system.”, 2019), hereinafter Ellerau.
Addison and Rehouma teach the system of claim 8 above.
Regarding claim 9, modified Addison fails to explicitly disclose the limitations of the claim.
However, Ellerau teaches wherein the display device includes a projector disposed in a scanning tunnel of a medical scanner that performs the medical scan (Fig. 4, (mirror inside of MRI tunnel with feedback unit display with LED stripes)) (Page 5478, col. 2, “The third unit of the system is the biofeedback unit. Based on the given feedback information, the proband will be able to control the respiration interactively when it is detected that the actual breathing is not according to the desired one. The feedback is created with different-colored LED-stripes.”) (Page 5479, col. 1, “feedback is given with a green, blue or white light-signal since these colors are not absorbed from the laser protection glasses. The white light is shown before and after the breath-hold phase. The green light is activated when the actual respiration pattern is within a tolerance range during the breath-hold phase, and the blue light symbolizes the crossing of the tolerance thresholds.”).
Addison, Rehouma, and Ellerau are considered to be analogous art to the claimed invention, as each involves imaging of a patient.
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Addison and Rehouma with that of Ellerau to include wherein the display device includes a projector disposed in a scanning tunnel of a medical scanner that performs the medical scan through the combination of references as it would have yielded the predictable result of adapting for MRI measurements (Ellerau (abstract)).
Regarding claim 10, modified Addison fails to explicitly disclose the limitations of the claim.
However, Ellerau further teaches wherein the projector is configured to project a virtual character in a first status on an inside wall of the scanning tunnel (Fig. 4, (mirror inside of MRI tunnel with feedback unit display with LED stripes)), and the controlling a display device to perform a target operation includes:
controlling the projector to change the projected virtual character from the first status to a second status (Fig. 4) (Page 5478, col. 2, “The third unit of the system is the biofeedback unit. Based on the given feedback information, the proband will be able to control the respiration interactively when it is detected that the actual breathing is not according to the desired one. The feedback is created with different-colored LED-stripes.”) (Page 5479, col. 1, “feedback is given with a green, blue or white light-signal since these colors are not absorbed from the laser protection glasses. The white light is shown before and after the breath-hold phase. The green light is activated when the actual respiration pattern is within a tolerance range during the breath-hold phase, and the blue light symbolizes the crossing of the tolerance thresholds.” (changing the LED)).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Addison, Rehouma, and Ellerau with that of Ellerau to include wherein the projector is configured to project a virtual character in a first status on an inside wall of the scanning tunnel, and the controlling a display device to perform a target operation includes: controlling the projector to change the projected virtual character from the first status to a second status through the combination of references as it would have yielded the predictable result of providing the user with feedback during an MRI scan (Ellerau (abstract)).
Claims 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Addison as applied to claim 1 above, and further in view of Buelow (US Pub. No. 20230298745), hereinafter Buelow.
Addison teaches the system of claim 1 above.
Regarding claim 11, modified Addison fails to explicitly disclose the limitations of the claim.
However, Addison does teach in an example wherein the operations further comprise:
obtaining a scout image of the subject collected by a scout scan, the scout scan being performed on the subject before the medical scan (Par. 135, “using a three-dimensional (3D) calibration procedure prior to real-time monitoring of tidal volume using a depth sensor camera system”);
performing foreign matter detection on the scout image of the subject using at least one foreign matter detection model (Par. 134, “…FIGS. 38 and 39 show the depth data obtained using a depth camera sensor as disclosed herein, showing the ROI without any obstruction in FIG. 38 and with partial obstruction of the ROI in FIG. 39.”) (Par. 135, “… This calibrated 3D surface profile is used to estimate a portion of an ROI that has been obscured. The obscured region is identified, and the 3D profile is used to estimate the contribution to the tidal volume of the obscured region according to various embodiments discussed below.”).
Addison, however, does not explicitly disclose performing the foreign matter detection using at least one foreign matter detection model, or determining, based on a result of the detection, whether the medical scan can be started. Buelow teaches wherein the operations further comprise:
obtaining a scout image of the subject collected by a scout scan, the scout scan being performed on the subject before the medical scan (Fig. 2, step 102-104) (Par. 35, “At an operation 104, the electronic processing device 20 is programmed to extract the preview image 12 from the received live video feed 17. The extracted preview image 12 is displayed in the preview image viewport 9. To do so, the at least one electronic processor 20 in some embodiments is programmed to determine at least one of a modality of the imaging device 2 and/or an anatomy...”);
performing foreign matter detection on the scout image of the subject using at least one foreign matter detection model (Par. 37, “At an operation 106, the electronic processing device 20 is programmed to perform an image analysis 38 on the extracted preview image 12 to detect or extract one or more image features 44 indicative of one or more potential problems associated with a medical imaging examination performed with the medical imaging device…”) (Fig. 2, step 106) (Par. 38, “… a potentially obscuring object present in the preview image (e.g., based on, for example, high Hounsfield Unit (HU) values, or detecting foreign objects by ML algorithms, and so forth). The potentially obscuring object can be, for example, jewelry, metal parts in clothing, and so forth). In another example, the electronic processing device 20 is programmed to perform the image analysis 38 to identify on the extracted features 44 a disease condition of the patient. These are merely non-limiting examples.”); and
determining, based on a result of the foreign matter detection, whether the medical scan can be started (Par. 43, “an operation 108, the electronic processing device 20 is programmed to output the alert 30 when one or more potential problems associated with the medical imaging examination is detected from the one or more image features 44. The alert 30 can indicate, for example, the presence of a potentially obscuring object in the preview image 12, an identification of the misplacement of the body part to be imaged present in the preview image, an identification of the misplacement of the position of the table 13, an identification of the disease condition of the patient, and so forth.”) (Par. 45, “The alert 30 can be textual messages (e.g., “remove jewelry” when the preview image 12 includes jewelry) In addition, the alert 30 can include advice for the technologist to resolve the issues (e.g., “consider re-position the patient” or “consider moving the table”).”).
Addison and Buelow are considered to be analogous art to the claimed invention as they are involved with imaging of a user.
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Addison with that of Buelow to include obtaining a scout image of the subject collected by a scout scan, the scout scan being performed on the subject before the medical scan; performing foreign matter detection on the scout image of the subject using at least one foreign matter detection model; and determining, based on a result of the foreign matter detection, whether the medical scan can be started through the combination of references as it would have yielded the predictable result of ensuring that the user removes any potentially obscuring objects that would negatively impact the scan (Buelow (Par. 43)).
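The gating step mapped from Buelow above, in which the medical scan proceeds only when the foreign matter detection result is clear and otherwise an alert such as “remove jewelry” is raised (Buelow, Par. 43, 45), can be sketched as follows. This is a hypothetical illustration; the function name, the detection-result format, and the message wording are assumptions.

```python
def can_start_scan(detected_objects):
    # Hypothetical gate modeled on Buelow's alert logic (Par. 43, 45):
    # hold the scan whenever foreign matter detection reports one or more
    # potentially obscuring objects, and produce alert messages for each.
    if detected_objects:
        return False, [f"remove {obj}" for obj in detected_objects]
    return True, []

print(can_start_scan(["jewelry"]))  # (False, ['remove jewelry'])
print(can_start_scan([]))           # (True, [])
```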
Regarding claim 12, modified Addison fails to explicitly disclose the limitations of the claim.
However, Buelow further teaches in response to a result of the foreign matter detection that non-iatrogenic foreign matter is disposed on or within the subject, generating first prompt information for requiring the subject to take off the non-iatrogenic foreign matter (Buelow (Par. 43, “an operation 108, the electronic processing device 20 is programmed to output the alert 30 when one or more potential problems associated with the medical imaging examination is detected from the one or more image features 44. The alert 30 can indicate, for example, the presence of a potentially obscuring object in the preview image 12, an identification of the misplacement of the body part to be imaged present in the preview image, an identification of the misplacement of the position of the table 13, an identification of the disease condition of the patient, and so forth.”) (Par. 45, “The alert 30 can be textual messages (e.g., “remove jewelry” when the preview image 12 includes jewelry) In addition, the alert 30 can include advice for the technologist to resolve the issues (e.g., “consider re-position the patient” or “consider moving the table”).”)).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Addison and Buelow with that of Buelow to include, in response to a result of the foreign matter detection that non-iatrogenic foreign matter is disposed on or within the subject, generating first prompt information for requiring the subject to take off the non-iatrogenic foreign matter, for the reasoning indicated in claim 11 above.
Regarding claim 13, modified Addison fails to explicitly disclose the limitations of the claim.
However, Buelow further teaches in response to a result of the foreign matter detection that iatrogenic foreign matter is disposed on or within the subject, generating second prompt information for reminding that artifact correction needs to be performed on the medical scan (Buelow (Par. 43, “an operation 108, the electronic processing device 20 is programmed to output the alert 30 when one or more potential problems associated with the medical imaging examination is detected from the one or more image features 44. The alert 30 can indicate, for example, the presence of a potentially obscuring object in the preview image 12, an identification of the misplacement of the body part to be imaged present in the preview image, an identification of the misplacement of the position of the table 13, an identification of the disease condition of the patient, and so forth.”) (Par. 45, “The alert 30 can be textual messages (e.g., “remove jewelry” when the preview image 12 includes jewelry) In addition, the alert 30 can include advice for the technologist to resolve the issues (e.g., “consider re-position the patient” or “consider moving the table”).”) (Par. 27, “While the imaging technician is expected to use the preview image to check for various possible issues before acquiring the clinical images, there is a possibility that the technician may fail to notice a problem shown in the preview image. For example, the technician may fail to notice incorrect patient positioning, or presence of a metal artifact in the patient…”)).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Addison and Buelow with that of Buelow to include, in response to a result of the foreign matter detection that iatrogenic foreign matter is disposed on or within the subject, generating second prompt information for reminding that artifact correction needs to be performed on the medical scan through the combination of references, as it would have yielded the predictable result of improving image capture quality (Buelow (Par. 27)).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARI SINGH KANE PADDA whose telephone number is (571)272-7228. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Sims can be reached at (571) 272-7540. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ARI S PADDA/Examiner, Art Unit 3791
/JASON M SIMS/Supervisory Patent Examiner, Art Unit 3791