DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1-4, 6-15, and 17-20 remain pending; claims 5 and 16 have been cancelled. Applicant’s amendments to the specification and drawings have overcome all previous objections to the specification, drawings, and claims, except that reference character 458 is still not included in the description. Applicant’s amendments have also overcome the previous rejections under 35 U.S.C. § 112(b) and (d).
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character not mentioned in the description: 458 (see Figure 30B).
Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 6 recites the limitation “The system of claim 5…”. Because claim 5 has been cancelled, there is insufficient antecedent basis for this limitation in the claim.
Claim 17 recites the limitation “The method of claim 16…”. Because claim 16 has been cancelled, there is insufficient antecedent basis for this limitation in the claim.
Claim Interpretation
Claim 6 recites the limitation “The system of claim 5…”. Because claim 5 has been cancelled, claim 6 is interpreted as being dependent on claim 1.
Claim 17 recites the limitation “The method of claim 16…”. Because claim 16 has been cancelled, claim 17 is interpreted as being dependent on claim 12.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Huo (Huo, Hong, et al. "Endoscopic upper airway evaluation in obstructive sleep apnea: Mueller’s maneuver versus simulation of snoring." Sleep and Breathing 19 (2015): 661-667.) in view of Ariyoshi (WO 2018230130A1), Zur (US 20200387706 A1), and Campanini (Campanini, A et al. “Awake versus sleep endoscopy: personal experience in 250 OSAHS patients.” Acta otorhinolaryngologica Italica : organo ufficiale della Societa italiana di otorinolaringologia e chirurgia cervico-facciale vol. 30,2 (2010): 73-7.).
With respect to claim 1, Huo teaches a system for diagnosing obstructive sleep apnea comprising an endoscope configured to capture images (“endoscope” page 3 col. 1 line 4). Huo also teaches receiving one or more images from the endoscope, wherein the one or more images are of the palatal and throat areas of a patient (“The entire UA was examined”, upper airway includes palate and throat areas page 3 col.1 lines 5-6) and are captured while the patient is awake (“…FNMM and FNSS were reliable and easy methods of evaluating the UA when patients were awake…” page 6 col.2 paragraph 2 lines 8-10); identifying an anatomical structure within the one or more images, wherein the anatomical structure is related to obstructive sleep apnea (“Retropalatal obstruction was detected in all of the participants by FNMM and FNSS.” Page 4 col.1 paragraph 3 lines 4-5); using a predictive image analysis function, create a simulated sleep dataset based upon the anatomical structure within the one or more images (Collapsibility as predictive image analysis function, see Huo table 1 and Examiner Figure 1), wherein the simulated sleep dataset describes or projects visual characteristics of the anatomical structure onto or within the one or more images simulating the anatomical structure in a sleep state with sleep apnea (“The entire UA was examined”, upper airway includes palate and throat areas page 3 col.1 lines 5-6 and “Retropalatal obstruction was detected in all of the participants by FNMM and FNSS.” Page 4 col.1 paragraph 3 lines 4-5).
Huo does not teach a display, and a processor configured to receive one or more images from the endoscope, wherein the one or more images are of the nasal area of a patient, a predictive image analysis function that comprises a machine learning function, a training dataset comprising: a first plurality of images of the anatomical structure previously captured from a plurality of patients while awake, a second plurality of images of the anatomical structure previously captured from the plurality of patients while sleep apnea is induced, and a dataset that correlates the first plurality of images to the second plurality of images and identifies the anatomical structure within each image, and presenting a graphical user interface via the display based on the simulated sleep dataset.
Ariyoshi teaches a display (“display unit” page 3 paragraph 2 line 2), and a processor (“video processor” page 3 paragraph 2 line 1) configured to receive one or more images from the endoscope, wherein the one or more images are of the nasal area of a patient (“acquired by imaging the nasal sinuses” page 5 paragraph 12 lines 2-3). Ariyoshi also teaches an annotated dataset (“In the display image generation method of one embodiment of the present invention, a subject is captured by an imaging unit to acquire a subject image, and an index indicating the degree of abnormality of the subject is calculated according to a color included in the subject image by a calculation unit. Then, a display image in which the index is identified and displayed is generated according to a predetermined threshold set independently of the subject image by the image processing unit.” Page 2 paragraph 6).
Ariyoshi is analogous art in the same field of endeavor as the claimed invention. Ariyoshi is directed towards an “endoscope apparatus and a display image generation method” (see page 2 line 1). A person of ordinary skill in the art before the effective filing date of the claimed invention could have reasoned that combining the systematic approach of Huo with the system of Ariyoshi, by incorporating Ariyoshi’s nasal endoscopic imagery and abnormality indicator with Huo’s upper airway imagery and predictive collapse function (creating a simulated sleep dataset that covers the nose and upper airway, that could later be annotated, with predictably obstructive features detected) could lead to users being able to quantify the degree to which anatomy is abnormal (“Therefore, an object of the present invention is to provide an endoscope apparatus and a display image generation method that can quantitatively indicate the degree of abnormality such as inflammation of a subject.” Ariyoshi Page 3 Background-Art paragraph 4), which by providing information concerning obstructions can be helpful to surgeons regarding surgical technique decisions (“These techniques yield reliable information on the dynamic anatomy and physiology of UA obstruction in patients with OSAHS. FNSS may provide some different information regarding retroglossal obstruction and patterns of collapse from FNMM. Both techniques can help the surgeons to make decisions regarding the surgical technique in individual patients.” Huo page 6 Conclusions lines 2-8). 
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the systematic approach of Huo with the system of Ariyoshi, by incorporating Ariyoshi’s nasal endoscopic imagery and abnormality indicator with Huo’s upper airway imagery and predictive collapse function (creating a simulated sleep dataset that covers the nose and upper airway, that could later be annotated, with predictably obstructive features detected) with the expectation that doing so would lead to users being able to quantify the degree to which anatomy is abnormal (see Ariyoshi Page 3 Background-Art paragraph 4), which by providing information concerning obstructions can be helpful to surgeons regarding surgical technique decisions (see Huo page 6 Conclusions lines 2-8).
Zur teaches a display (“presenting and updating a colon map that displays 2D and/or 3D locations of identified polyps” paragraph 0061 lines 11-13) and a processor (“…code instructions (i.e., stored on a memory and executable by one or more hardware processors for generating instructions for presenting a graphical user interface (GUI) for dynamically tracking one or more polyps in two dimensional (2D)…” paragraph 0047 lines 3-7) configured to identify an anatomical structure within the one or more images, wherein the anatomical structure is related to obstructive sleep apnea (“the plurality of endoscopic image, feeding, into a detection neural network, the processed sequential sub-set of the plurality of endoscopic images, outputting by the detection neural network, a current region depicting the at least one polyp for the respective endoscopic image” paragraph 0010 lines 8-13, polyps as anatomical structure) , that contains a predictive image analysis function that comprises a machine learning function that is configured based upon a training dataset (“a detection neural network that is fed the 2D image(s) and trained for segmenting polyps in 2D images.” Paragraph 0058 lines 11-13), and present a graphical user interface via a display based on anatomical images of obstructions (“presenting within the GUI, the colon map, wherein the colon map is dynamically updated with locations of new detected polyps.” Lines 7-9).
Zur is analogous art in the same field of endeavor as the claimed invention. Zur is directed towards a system for visualizing anatomical abnormalities on a display screen based on images taken by an endoscope (“The present invention, in some embodiments thereof, relates to colonoscopy and, more specifically, but not exclusively, to systems and methods for processing colon images and video, and/or processing colon polyps automatically detected during a colonoscopy procedure.” Paragraph 0001). A person of ordinary skill in the art before the effective filing date of the claimed invention could have reasoned that combining the system of Huo and Ariyoshi with the system of Zur, by utilizing the processor, display, visualization and GUI of Zur, could allow for the visualization of obstructions and thus represent an improvement in obstruction detection (with the utilization of machine learning) (“In particular, at least some implementations of the systems, methods, apparatus, and/or code instructions described herein improve the technology of image processing and/or the technology of GUI, by code that analyzes the captured images, and/or the GUI that is used by the operator to help increase the polyp identification and/or detection rate.” Paragraph 0062 lines 4-10). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Huo and Ariyoshi with the system of Zur, by utilizing the processor, display, visualization and GUI of Zur, with the expectation that doing so would allow for the visualization of obstructions and thus represent an improvement in obstruction detection (with the utilization of machine learning) (see Paragraph 0062 lines 4-10).
Campanini teaches a dataset comprising a first plurality of images of the anatomical structure previously captured from a plurality of patients while awake (“We have retrospectively analyzed 250 cases in order to compare the pharyngolaryngeal endoscopic findings detected in the awake state while in a supine position, with those obtained under drug-induced sedation.” Page 2 col.1 lines 8-12), a second plurality of images of the anatomical structure previously captured from the plurality of patients while sleep apnea is induced (“We have retrospectively analyzed 250 cases in order to compare the pharyngolaryngeal endoscopic findings detected in the awake state while in a supine position, with those obtained under drug-induced sedation.” Page 2 col.1 lines 8-12 and “During wakefulness, collapse of the upper airways can be prevented by a high pharyngeal neuromuscular tone. Due to a reduction of this neurophysiologic phenomenon, sleep onset results in a progressive upper airways muscular hypotonia, that is greater in OSAHS patients than in normal subjects” page 3 col.1 Discussion lines 4-9), and a dataset that correlates the first plurality of images to the second plurality of images (“The predictive value of the obstructive frameworks as detected in the awake vs sedation state has been shown to be extremely different: 76% (190/250) of overall dissonances (oropharyngeal and/or hypopharyngeal sites) (Fig. 1). On the other hand, endoscopic findings, in comparison with the two states of observation described, have been quite similar only in 24% (60/250) of cases.” Page 2 col.2 lines 7-13) and identifies the anatomical structure within each image (“The analysis of the obstructing pattern has shown 49% (123/250) discrepancy: during sedation, the most remarkable events concerned a change from a transversal to a circular (48/250 = 19%) or to an anteroposterior (33/250 = 13%) collapsing shape.” page 3 lines 8-12).
Campanini is analogous art in the same field of endeavor as the claimed invention. Campanini is directed towards a systematic approach for studying the effect of imagery taken while sleep apnea patients are awake or asleep (“In our experience, on 250 cases retrospectively analysed between November 2005 and July 2008, the predictive value of the obstructive frameworks, as detected in the awake state or in sedation…” page 3 col.2 lines 41-45). A person of ordinary skill in the art before the effective filing date of the claimed invention could have reasoned that combining the system of Huo, Ariyoshi, and Zur, with the teachings of Campanini, by incorporating the systematic approach of Campanini’s sleep and awake imagery accompanied by Campanini’s disclosed comparison and obstruction data and methodology, applying it to the set of data from Huo, Ariyoshi, and Zur, could lead to better patient outcomes by giving more complete data regarding anatomical abnormalities like obstructions (“During wakefulness, collapse of the upper airways can be prevented by a high pharyngeal neuromuscular tone. Due to a reduction of this neurophysiologic phenomenon, sleep onset results in a progressive upper airways muscular hypotonia, that is greater in OSAHS patients than in normal subjects. The described process contributes to a partial or complete airways obstruction in SDB patients 23. An anatomic-based methodological approach during sleep may be crucial to guide surgical treatment decision making” page 3 Discussion lines 4-12 and “Indeed, the awake state findings may differ quite dramatically from the sleep breathing situation 13, and inaccurate information may lead to inappropriate surgery” page 1 col. 2 lines 8-11).
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Huo, Ariyoshi, and Zur, with the teachings of Campanini, by incorporating the systematic approach of Campanini’s sleep and awake imagery accompanied by Campanini’s disclosed comparison and obstruction data and methodology, applying it to the set of data from Huo, Ariyoshi, and Zur, with the expectation that doing so would lead to better patient outcomes by giving more complete data regarding anatomical abnormalities like obstructions (see Campanini page 3 Discussion lines 4-12 and page 1 col. 2 lines 8-11).
[Image: media_image1.png (greyscale)]
Examiner Figure 1: A screenshot of Huo, page 3, col. 2.
With respect to claim 2, Huo, Ariyoshi, Zur, and Campanini teach the system of claim 1. Zur further teaches that the processor is configured to, when identifying the anatomical structure within the one or more images, use an image recognition function to identify the anatomical structure within the one or more images based upon the visual characteristics of the anatomical structure depicted by the one or more images (“the plurality of endoscopic image, feeding, into a detection neural network, the processed sequential sub-set of the plurality of endoscopic images, outputting by the detection neural network, a current region depicting the at least one polyp for the respective endoscopic image” paragraph 0010 lines 8-13).
With respect to claim 3, Huo, Ariyoshi, Zur, and Campanini teach the system of claim 2. Ariyoshi teaches a plurality of annotations describing images (“In the display image generation method of one embodiment of the present invention, a subject is captured by an imaging unit to acquire a subject image, and an index indicating the degree of abnormality of the subject is calculated according to a color included in the subject image by a calculation unit. Then, a display image in which the index is identified and displayed is generated according to a predetermined threshold set independently of the subject image by the image processing unit.” Page 2 paragraph 6).
Zur further teaches the system of claim 2, wherein the image recognition function comprises a first machine learning function that is configured based upon a first training dataset (“a detection neural network that is fed the 2D image(s) and trained for segmenting polyps in 2D images.” Paragraph 0058 lines 11-13), wherein the first training dataset comprises a plurality of historic images of the anatomical structure (“a detection neural network that is fed the 2D image(s) and trained for segmenting polyps in 2D images.” Paragraph 0058 lines 11-13).
Ariyoshi is analogous art in the same field of endeavor as the claimed invention. Ariyoshi is directed towards an “endoscope apparatus and a display image generation method” (see page 2 line 1). A person of ordinary skill in the art before the effective filing date of the claimed invention could have reasoned that combining the systematic approach of Huo with the system of Ariyoshi, by incorporating Ariyoshi’s nasal endoscopic imagery and abnormality indicator with Huo’s upper airway imagery and predictive collapse function (creating a simulated sleep dataset that covers the nose and upper airway, that could later be annotated, with predictably obstructive features detected) could lead to users being able to quantify the degree to which anatomy is abnormal (“Therefore, an object of the present invention is to provide an endoscope apparatus and a display image generation method that can quantitatively indicate the degree of abnormality such as inflammation of a subject.” Ariyoshi Page 3 Background-Art paragraph 4), which by providing information concerning obstructions can be helpful to surgeons regarding surgical technique decisions (“These techniques yield reliable information on the dynamic anatomy and physiology of UA obstruction in patients with OSAHS. FNSS may provide some different information regarding retroglossal obstruction and patterns of collapse from FNMM. Both techniques can help the surgeons to make decisions regarding the surgical technique in individual patients.” Huo page 6 Conclusions lines 2-8). 
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the systematic approach of Huo with the system of Ariyoshi, by incorporating Ariyoshi’s nasal endoscopic imagery and abnormality indicator with Huo’s upper airway imagery and predictive collapse function (creating a simulated sleep dataset that covers the nose and upper airway, that could later be annotated, with predictably obstructive features detected) with the expectation that doing so would lead to users being able to quantify the degree to which anatomy is abnormal (see Ariyoshi Page 3 Background-Art paragraph 4), which by providing information concerning obstructions can be helpful to surgeons regarding surgical technique decisions (see Huo page 6 Conclusions lines 2-8).
Zur is analogous art in the same field of endeavor as the claimed invention. Zur is directed towards a system for visualizing anatomical abnormalities on a display screen based on images taken by an endoscope (“The present invention, in some embodiments thereof, relates to colonoscopy and, more specifically, but not exclusively, to systems and methods for processing colon images and video, and/or processing colon polyps automatically detected during a colonoscopy procedure.” Paragraph 0001). A person of ordinary skill in the art before the effective filing date of the claimed invention could have reasoned that combining the system of Huo and Ariyoshi with the system of Zur, by utilizing the processor, display, visualization and GUI of Zur, could allow for the visualization of obstructions and thus represent an improvement in obstruction detection (with the utilization of machine learning) (“In particular, at least some implementations of the systems, methods, apparatus, and/or code instructions described herein improve the technology of image processing and/or the technology of GUI, by code that analyzes the captured images, and/or the GUI that is used by the operator to help increase the polyp identification and/or detection rate.” Paragraph 0062 lines 4-10). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Huo and Ariyoshi with the system of Zur, by utilizing the processor, display, visualization and GUI of Zur, with the expectation that doing so would allow for the visualization of obstructions and thus represent an improvement in obstruction detection (with the utilization of machine learning) (see Paragraph 0062 lines 4-10).
With respect to claim 4, Huo, Ariyoshi, Zur and Campanini teach the system of claim 3. Zur further teaches the system of claim 3, wherein the image recognition function comprises a plurality of machine learning functions (“detection neural network that is fed the 2D image(s) and trained for segmenting polyps in 2D images. The 2D image is fed into a 3D reconstruction neural network that outputs 3D coordinates for the pixels of the 2D image.” Paragraph 0058 lines 11-15) each having a corresponding training dataset (“a detection neural network that is fed the 2D image(s)” paragraph 0058 lines 11-12 and “the 3D reconstruction neural network is trained by a training dataset of pairs of 2D endoscopic images defining input images corresponding 3D coordinate values computed for pixels of the 2D endoscopic images computed by a 3D reconstruction process defining ground truth” paragraph 0020 lines 2-6), the plurality of machine learning functions including at least the first machine learning function and a second machine learning function (“detection neural network that is fed the 2D image(s) and trained for segmenting polyps in 2D images. The 2D image is fed into a 3D reconstruction neural network that outputs 3D coordinates for the pixels of the 2D image.” Paragraph 0058 lines 11-15), wherein the processor is further configured to, when identifying the anatomical structure:
(a) use each of the plurality of the machine learning functions to identify the anatomical structure and produce a plurality of identifications of the anatomical structure (“detection neural network that is fed the 2D image(s) and trained for segmenting polyps in 2D images. The 2D image is fed into a 3D reconstruction neural network that outputs 3D coordinates for the pixels of the 2D image.” Paragraph 0058 lines 11-15);
(b) present, via the display, the one or more images and a plurality of visual indicators that are based on the plurality of identifications of the anatomical structure (“presenting and updating a colon map that displays 2D and/or 3D locations of identified polyps” paragraph 0061 lines 11-13); and
(c) receive a user selection of a selected visual indicator from the plurality of visual indicators (“Optionally, treated polyps are marked, for example, manually by the physician (e.g., making a selection using the GUI, by pressing a “polyp removed” icon), and/or automatically by code (e.g., detects movement of the surgical excision device). Marked treated polyps may be tracked and/or presented on the colon map presented in the GUI, as described herein.” Paragraph 0098) and, in response, select one of the plurality of identifications of the anatomical structure that corresponds to the selected visual indicator as the identified anatomical structure and cease displaying the plurality of visual indicators that were not selected (“Optionally, treated polyps are marked, for example, manually by the physician (e.g., making a selection using the GUI, by pressing a “polyp removed” icon), and/or automatically by code (e.g., detects movement of the surgical excision device). Marked treated polyps may be tracked and/or presented on the colon map presented in the GUI, as described herein.” Paragraph 0098).
With respect to claim 6, Huo, Ariyoshi, Zur, and Campanini teach the system of claim 1. Huo teaches using the predictive analysis function to create a dataset, based on anatomical structures within images (Collapsibility as predictive image analysis function, see Huo table 1 and Examiner Figure 1), wherein the dataset describes visual characteristics of the anatomical structure within the image as if it had been captured while the patient is asleep (“SS and MM were repeated two or three times to ensure that stable images were recorded.”, SS being simulated snoring page 3 col.1 lines 14-15).
Campanini further teaches the system of claim 5, wherein the processor is further configured to select a first validation image from the first plurality of images, wherein the first validation image is associated with a first patient (“We have retrospectively analyzed 250 cases in order to compare the pharyngolaryngeal endoscopic findings detected in the awake state while in a supine position, with those obtained under drug-induced sedation.” Page 2 col.1 lines 8-12 and “The data reported refer to 250 SDB patients…” page 2 Patients and methods line 1, awake state images), select a second validation image from the second plurality of images, wherein the second validation image is also associated with the first patient (“We have retrospectively analyzed 250 cases in order to compare the pharyngolaryngeal endoscopic findings detected in the awake state while in a supine position, with those obtained under drug-induced sedation.” Page 2 col.1 lines 8-12 and “The data reported refer to 250 SDB patients…” page 2 Patients and methods line 1, sedation images), using the predictive image analysis function, create a validation dataset based upon the anatomical structure within the first validation image (“Endoscopic observations have been classified according to the sites of collapse (nasopharyngeal; oropharyngeal; hypopharyngeal or laryngeal). 
The minimal sectional area (Müller manoeuvre) has been classified in 4 obstructing grades…” page 2 col.1 Patients and methods lines 31-35), wherein the validation dataset describes visual characteristics of the anatomical structure within the first validation image simulating a patient sleep state (“We have retrospectively analyzed 250 cases in order to compare the pharyngolaryngeal endoscopic findings detected in the awake state while in a supine position, with those obtained under drug-induced sedation.” Page 2 col.1 lines 8-12 and “The data reported refer to 250 SDB patients…” page 2 Patients and methods line 1), and provide a validation comparison based on the second validation image and the validation dataset (“The predictive value of the obstructive frameworks as detected in the awake vs sedation state has been shown to be extremely different: 76% (190/250) of overall dissonances (oropharyngeal and/or hypopharyngeal sites) (Fig. 1). On the other hand, endoscopic findings, in comparison with the two states of observation described, have been quite similar only in 24% (60/250) of cases.” Page 2 col.2 lines 7-13).
With respect to claim 7, Huo, Ariyoshi, Zur, and Campanini teach the system of claim 6. Zur teaches a dataset comprising simulated image of anatomical structures (“presenting and updating a colon map that displays 2D and/or 3D locations of identified polyps” paragraph 0061 lines 11-13, 3D images, Fig. 15) and presenting an image and simulated image via the display (“presenting and updating a colon map that displays 2D and/or 3D locations of identified polyps” paragraph 0061 lines 11-13, Figs. 1 and 15).
Campanini further teaches the validation dataset (“We have retrospectively analyzed 250 cases in order to compare the pharyngolaryngeal endoscopic findings detected in the awake state while in a supine position, with those obtained under drug-induced sedation.” Page 2 col.1 lines 8-12 and “The data reported refer to 250 SDB patients…” page 2 Patients and methods line 1) and the system of claim 6, wherein the processor is further configured to provide the validation comparison (“The predictive value of the obstructive frameworks as detected in the awake vs sedation state has been shown to be extremely different: 76% (190/250) of overall dissonances (oropharyngeal and/or hypopharyngeal sites) (Fig. 1). On the other hand, endoscopic findings, in comparison with the two states of observation described, have been quite similar only in 24% (60/250) of cases.” Page 2 col.2 lines 7-13).
Campanini is analogous art in the same field of endeavor as the claimed invention. Campanini is directed towards a systematic approach for studying the effect of imagery taken while sleep apnea patients are awake or asleep (“In our experience, on 250 cases retrospectively analysed between November 2005 and July 2008, the predictive value of the obstructive frameworks, as detected in the awake state or in sedation…” page 3 col.2 lines 41-45). A person of ordinary skill in the art before the effective filing date of the claimed invention could have reasoned that combining the system of Huo, Ariyoshi, and Zur, with the teachings of Campanini, by incorporating the systematic approach of Campanini’s sleep and awake imagery accompanied by Campanini’s disclosed comparison and obstruction data and methodology, applying it to the set of data from Huo, Ariyoshi, and Zur, could lead to better patient outcomes by giving more complete data regarding anatomical abnormalities like obstructions (“During wakefulness, collapse of the upper airways can be prevented by a high pharyngeal neuromuscular tone. Due to a reduction of this neurophysiologic phenomenon, sleep onset results in a progressive upper airways muscular hypotonia, that is greater in OSAHS patients than in normal subjects. The described process contributes to a partial or complete airways obstruction in SDB patients 23. An anatomic-based methodological approach during sleep may be crucial to guide surgical treatment decision making” page 3 Discussion lines 4-12 and “Indeed, the awake state findings may differ quite dramatically from the sleep breathing situation 13, and inaccurate information may lead to inappropriate surgery” page 1 col. 2 lines 8-11).
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Huo, Ariyoshi, and Zur, with the teachings of Campanini, by incorporating the systematic approach of Campanini’s sleep and awake imagery, accompanied by Campanini’s disclosed comparison and obstruction data and methodology, and applying it to the set of data from Huo, Ariyoshi, and Zur, with the expectation that doing so would lead to better patient outcomes by giving more complete data regarding anatomical abnormalities like obstructions (see Campanini page 3 Discussion lines 4-12 and page 1 col. 2 lines 8-11).
With respect to claim 8, Huo, Ariyoshi, Zur and Campanini teach the system of claim 1. Huo teaches determining a simulated boundary of the anatomical structure based upon the simulated sleep dataset (Collapsibility as predictive image analysis function, see Huo table 1 and Examiner Figure 1). Zur further teaches the system of claim 1, wherein the processor is further configured to:
(a) determine a boundary of the anatomical structure based upon identification of the anatomical structure within the one or more images (“The detection neural network may include a segmentation process that identifies the location of the detected polyp in the image, for example, by generating a boundary box and/or other contour that delineates the polyp in the 2D frame” paragraph 0099 lines 3-7, contour);
(b) determine a simulated boundary of the anatomical structure (“The detection neural network may include a segmentation process that identifies the location of the detected polyp in the image, for example, by generating a boundary box and/or other contour that delineates the polyp in the 2D frame” paragraph 0099 lines 3-7, 3D bounding box, and Figs. 1 and 15);
(c) cause the graphical user interface to present the one or more images (“presenting and updating a colon map that displays 2D and/or 3D locations of identified polyps” paragraph 0061 lines 11-13), a first visual depiction of the boundary, and a second visual depiction of the simulated boundary (“The detection neural network may include a segmentation process that identifies the location of the detected polyp in the image, for example, by generating a boundary box and/or other contour that delineates the polyp in the 2D frame” paragraph 0099 lines 3-7, 2D contour and 3D bounding box as boundaries (first and second depictions, respectively), and Figs. 1 and 15).
With respect to claim 9, Huo, Ariyoshi, Zur and Campanini teach the system of claim 6. Zur teaches a dataset that comprises a simulated image of the anatomical structure (“presenting and updating a colon map that displays 2D and/or 3D locations of identified polyps” paragraph 0061 lines 11-13, 3D images and Fig. 1), and wherein the processor is further configured to determine a boundary of the anatomical structure based upon identification of the anatomical structure within the one or more images (“The detection neural network may include a segmentation process that identifies the location of the detected polyp in the image, for example, by generating a boundary box and/or other contour that delineates the polyp in the 2D frame” paragraph 0099 lines 3-7, contour as boundary and Fig. 1), determine a simulated boundary of the anatomical structure based upon the simulated image (“The detection neural network may include a segmentation process that identifies the location of the detected polyp in the image, for example, by generating a boundary box and/or other contour that delineates the polyp in the 2D frame” paragraph 0099 lines 3-7, boundary as 3D bounding box, and Figs. 1 and 15), cause the graphical user interface to present the one or more images (“presenting and updating a colon map that displays 2D and/or 3D locations of identified polyps” paragraph 0061 lines 11-13 and Fig.1) including a first visual depiction of the boundary (“The detection neural network may include a segmentation process that identifies the location of the detected polyp in the image, for example, by generating a boundary box and/or other contour that delineates the polyp in the 2D frame” paragraph 0099 lines 3-7, contour, and Fig. 1), and cause the graphical user interface to present the simulated image (“presenting and updating a colon map that displays 2D and/or 3D locations of identified polyps” paragraph 0061 lines 11-13, 3D image, and Figs. 1 and 15) including a second visual depiction of the simulated boundary (“The detection neural network may include a segmentation process that identifies the location of the detected polyp in the image, for example, by generating a boundary box and/or other contour that delineates the polyp in the 2D frame” paragraph 0099 lines 3-7, 3D bounding box, and Figs. 1 and 15).
Huo (Collapsibility as predictive image analysis function, see Huo table 1 and Examiner Figure 1, and “SS and MM were repeated two or three times to ensure that stable images were recorded.”, SS being simulated snoring page 3 col.1 lines 14-15) and Campanini teach the validation dataset of claims 6 and 7 (“We have retrospectively analyzed 250 cases in order to compare the pharyngolaryngeal endoscopic findings detected in the awake state while in a supine position, with those obtained under drug-induced sedation.” Page 2 col.1 lines 8-12 and “The data reported refer to 250 SDB patients…” page 2 Patients and methods line 1).
Campanini is analogous art in the same field of endeavor as the claimed invention. Campanini is directed towards a systematic approach for studying the effect of imagery taken while sleep apnea patients are awake or asleep (“In our experience, on 250 cases retrospectively analysed between November 2005 and July 2008, the predictive value of the obstructive frameworks, as detected in the awake state or in sedation…” page 3 col.2 lines 41-45). A person of ordinary skill in the art before the effective filing date of the claimed invention could have reasoned that combining the system of Huo, Ariyoshi, and Zur, with the teachings of Campanini, by incorporating the systematic approach of Campanini’s sleep and awake imagery, accompanied by Campanini’s disclosed comparison and obstruction data and methodology, and applying it to the set of data from Huo, Ariyoshi, and Zur, could lead to better patient outcomes by giving more complete data regarding anatomical abnormalities like obstructions (“During wakefulness, collapse of the upper airways can be prevented by a high pharyngeal neuromuscular tone. Due to a reduction of this neurophysiologic phenomenon, sleep onset results in a progressive upper airways muscular hypotonia, that is greater in OSAHS patients than in normal subjects. The described process contributes to a partial or complete airways obstruction in SDB patients 23. An anatomic-based methodological approach during sleep may be crucial to guide surgical treatment decision making” page 3 Discussion lines 4-12 and “Indeed, the awake state findings may differ quite dramatically from the sleep breathing situation 13, and inaccurate information may lead to inappropriate surgery” page 1 col. 2 lines 8-11).
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Huo, Ariyoshi, and Zur, with the teachings of Campanini, by incorporating the systematic approach of Campanini’s sleep and awake imagery, accompanied by Campanini’s disclosed comparison and obstruction data and methodology, and applying it to the set of data from Huo, Ariyoshi, and Zur, with the expectation that doing so would lead to better patient outcomes by giving more complete data regarding anatomical abnormalities like obstructions (see Campanini page 3 Discussion lines 4-12 and page 1 col. 2 lines 8-11).
With respect to claim 10, Huo, Ariyoshi, Zur and Campanini teach the system of claim 1. Huo teaches the system of claim 1, further comprising a dataset associated with identifying the anatomical structure and creating the simulated sleep dataset (Collapsibility as predictive image analysis function, see Huo table 1 and Examiner Figure 1), wherein the dataset comprises images from a plurality of patients that depict an epiglottis (“A lubricated endoscope was inserted through a nostril and advanced until the epiglottis was visible. The entire UA was examined, with emphasis on the retropalatal and retroglossal levels. The uvula and tip of the epiglottis were used as landmarks for retropalatal and retroglossal levels.” Page 3 col.1 lines 3-8).
Ariyoshi teaches a dataset that comprises annotated images (“In the display image generation method of one embodiment of the present invention, a subject is captured by an imaging unit to acquire a subject image, and an index indicating the degree of abnormality of the subject is calculated according to a color included in the subject image by a calculation unit. Then, a display image in which the index is identified and displayed is generated according to a predetermined threshold set independently of the subject image by the image processing unit.” Page 2 paragraph 6).
Zur further teaches the system of claim 1, further comprising a memory configured to store a training dataset associated with identifying the anatomical structure (“a detection neural network that is fed the 2D image(s) and trained for segmenting polyps in 2D images.” Paragraph 0058 lines 11-13).
With respect to claim 11, Huo, Ariyoshi, Zur and Campanini teach the system of claim 1. Zur further teaches the system of claim 1, wherein the processor comprises two or more processors that are in communication with each other over a network (“The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.” Paragraph 0069 lines 11-16), a wireless data connection (“In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN)” paragraph 0069 lines 16-19), or a wired data connection (“In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN)” paragraph 0069 lines 16-19).
With respect to claim 12, Huo teaches a method for diagnosing obstructive sleep apnea comprising an endoscope configured to capture images (“endoscope” page 3 col. 1 line 4). Huo also teaches receiving one or more images from the endoscope, wherein the one or more images are of the palatal and throat areas of a patient (“The entire UA was examined”, upper airway includes palate and throat areas page 3 col.1 lines 5-6) and are captured while the patient is awake (“…FNMM and FNSS were reliable and easy methods of evaluating the UA when patients were awake…” page 6 col.2 paragraph 2 lines 8-10); identifying an anatomical structure within the one or more images, wherein the anatomical structure is related to obstructive sleep apnea (“Retropalatal obstruction was detected in all of the participants by FNMM and FNSS.” Page 4 col.1 paragraph 3 lines 4-5); and, using a predictive image analysis function, creating a simulated sleep dataset based upon the anatomical structure within the one or more images (Collapsibility as predictive image analysis function, see Huo table 1 and Examiner Figure 1), wherein the simulated sleep dataset describes or projects visual characteristics of the anatomical structure onto or within the one or more images simulating the anatomical structure in a sleep state with sleep apnea (“The entire UA was examined”, upper airway includes palate and throat areas page 3 col.1 lines 5-6 and “Retropalatal obstruction was detected in all of the participants by FNMM and FNSS.” Page 4 col.1 paragraph 3 lines 4-5).
Huo does not teach a display, and a processor configured to receive one or more images from the endoscope, wherein the one or more images are of the nasal area of a patient, a predictive image analysis function that comprises a machine learning function, a training dataset comprising: a first plurality of images of the anatomical structure previously captured from a plurality of patients while awake, a second plurality of images of the anatomical structure previously captured from the plurality of patients while sleep apnea is induced, and a dataset that correlates the first plurality of images to the second plurality of images and identifies the anatomical structure within each image, and presenting a graphical user interface via the display based on the simulated sleep dataset.
Ariyoshi teaches a display (“display unit” page 3 paragraph 2 line 2), and a processor (“video processor” page 3 paragraph 2 line 1) configured to receive one or more images from the endoscope, wherein the one or more images are of the nasal area of a patient (“acquired by imaging the nasal sinuses” page 5 paragraph 12 lines 2-3). Ariyoshi also teaches an annotated dataset (“In the display image generation method of one embodiment of the present invention, a subject is captured by an imaging unit to acquire a subject image, and an index indicating the degree of abnormality of the subject is calculated according to a color included in the subject image by a calculation unit. Then, a display image in which the index is identified and displayed is generated according to a predetermined threshold set independently of the subject image by the image processing unit.” Page 2 paragraph 6).
Ariyoshi is analogous art in the same field of endeavor as the claimed invention. Ariyoshi is directed towards an “endoscope apparatus and a display image generation method” (see page 2 line 1). A person of ordinary skill in the art before the effective filing date of the claimed invention could have reasoned that combining the systematic approach of Huo with the system of Ariyoshi, by incorporating Ariyoshi’s nasal endoscopic imagery and abnormality indicator with Huo’s upper airway imagery and predictive collapse function (creating a simulated sleep dataset that covers the nose and upper airway, that could later be annotated, with predictably obstructive features detected) could lead to users being able to quantify the degree to which anatomy is abnormal (“Therefore, an object of the present invention is to provide an endoscope apparatus and a display image generation method that can quantitatively indicate the degree of abnormality such as inflammation of a subject.” Ariyoshi Page 3 Background-Art paragraph 4), which by providing information concerning obstructions can be helpful to surgeons regarding surgical technique decisions (“These techniques yield reliable information on the dynamic anatomy and physiology of UA obstruction in patients with OSAHS. FNSS may provide some different information regarding retroglossal obstruction and patterns of collapse from FNMM. Both techniques can help the surgeons to make decisions regarding the surgical technique in individual patients.” Huo page 6 Conclusions lines 2-8). 
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the systematic approach of Huo with the system of Ariyoshi, by incorporating Ariyoshi’s nasal endoscopic imagery and abnormality indicator with Huo’s upper airway imagery and predictive collapse function (creating a simulated sleep dataset that covers the nose and upper airway, that could later be annotated, with predictably obstructive features detected) with the expectation that doing so would lead to users being able to quantify the degree to which anatomy is abnormal (see Ariyoshi Page 3 Background-Art paragraph 4), which by providing information concerning obstructions can be helpful to surgeons regarding surgical technique decisions (see Huo page 6 Conclusions lines 2-8).
Zur teaches a display (“presenting and updating a colon map that displays 2D and/or 3D locations of identified polyps” paragraph 0061 lines 11-13) and a processor (“…code instructions (i.e., stored on a memory and executable by one or more hardware processors for g