DETAILED ACTION
Applicant’s arguments, filed 01/26/2026, have been fully considered. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application. Applicant has canceled claims 6 and 12 and added claim 15.
Claims 1-2, 4-5, 7-8, 10-11, and 13-15 are currently under examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The claimed elements of “graph image generator”, “a split image generator”, and “examination units” continue to be interpreted as described in the Office Action filed 03/12/2025.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 15 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 15, it is unclear what is required to meet the limitation of “the sleep state being read and displayed in advance by professional examination personnel.” This limitation does not appear to require any structural change to the polysomnography device; thus, any device that meets the limitations of claims 1 and 15 will also meet this limitation.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 4-5, 7-8, 10-11, 13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 20210045676 – previously cited) and Ha (KR 102267105 – previously cited).
Regarding claims 1 and 15, Lee discloses a polysomnography device comprising:
a graph image generator configured to obtain polysomnography raw data measured in time series, and convert the polysomnography raw data into a graph with respect to time to generate a graph image comprising a plurality of individual line graphs (Paragraph 0071-0075, wherein polysomnography data is obtained; Paragraph 0035, wherein the signal data processor extracts feature data and transforms it to input into the classification model; Paragraph 0148),
the polysomnography raw data comprising a plurality of pieces of biometric data of a user obtained by a plurality of examination units (Paragraph 0035, EEG/EOG/EMG),
the graph comprising a first axis representing time and a second axis that intersects the first axis (Paragraph 0028; Paragraph 0080, “The signal data processor 110 may collect and process the entire sleep data that is measured based on a unit, for example, epoch, of 30 seconds”; Fig. 4, x-axis representing time with y-axis intersecting x-axis),
the graph image generator configured to:
convert the plurality of pieces of biometric data respectively into the plurality of individual line graphs with respect to time (Paragraph 0035; Fig. 4), and
generate the graph image by arranging the plurality of individual line graphs along the second axis and aligning time values on the first axis with respect to data values of the plurality of individual line graphs on the second axis such that the plurality of individual line graphs are temporally aligned and visually integrated into a single two-dimensional image (Paragraph 0148, “A sleep stage analysis using only a CNN may be used to construct an AI network by using 30 seconds as a single epoch. In this case, a sleep stage may be classified by constructing the CNN into consideration of EEG, EOG and chin-EMG signals as a single image”),
a learning processor configured to train a sleep state reading model via machine learning by performing image-based training directly on the plurality of individual line graphs of the graph image (Paragraph 0150-0152), such that the sleep state reading model learns to interpret a sleep state of the user from the graph image (Paragraph 0039; Paragraph 0076; Paragraph 0203, “software that may analyze a sleep stage using a deep learning and read a result of analyzing the sleep stage more quickly and accurately than a person”).
While Lee discloses using the graph image comprising data from multiple channels (Paragraph 0031), Lee fails to explicitly disclose the limitations of claims 1 and 15 wherein the graph image comprises labeled data, a plurality of groups of line graphs, and a sub-bounding box and a dedicated label indicating a sleep state.
However, Ha teaches a deep learning-based polysomnography examination wherein a data conversion unit divides the data into time units, labels the data (Page 4, paragraph 8), and groups the data into a plurality of line graphs such that the line graphs have similar characteristics to other lines within the group (Figs. 2 and 5A-B, wherein the line graphs are grouped by similar characteristics). While Ha does not explicitly teach a sub-bounding box labeling a sleep state of the data, Ha teaches that data is labeled with a sleep state (Page 4, paragraph 10), and one of ordinary skill could apply the label onto the graph image. Ha states that setting up the graph image this way is useful to convert the data into a format applicable for input into the artificial neural network (Page 4, paragraph 8). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of Lee to incorporate the teachings of grouping and labeling the line graphs of Ha to input the data into an artificial neural network.
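For clarity of the record, the conversion described above (rendering each biometric channel as a line graph and stacking the channels, time-aligned, into a single two-dimensional image) may be sketched as follows. This sketch is illustrative only, is not taken from either reference, and all names and dimensions are hypothetical.

```python
import numpy as np

def signals_to_graph_image(signals, height_per_channel=32):
    """Render each 1-D biometric signal (e.g. EEG, EOG, EMG) as a simple
    line graph and stack the graphs vertically, sharing one time axis,
    producing a single 2-D image (rows = amplitude, columns = time)."""
    n_samples = len(signals[0])
    bands = []
    for sig in signals:
        sig = np.asarray(sig, dtype=float)
        lo, hi = sig.min(), sig.max()
        if hi == lo:
            # Flat signal: draw a horizontal line at the bottom row.
            scaled = np.zeros(n_samples, dtype=int)
        else:
            # Normalize amplitude into the channel's pixel rows.
            scaled = ((sig - lo) / (hi - lo) * (height_per_channel - 1)).astype(int)
        band = np.zeros((height_per_channel, n_samples))
        # One lit pixel per time column traces the line graph.
        band[height_per_channel - 1 - scaled, np.arange(n_samples)] = 1.0
        bands.append(band)
    # Stacking along axis 0 arranges the individual line graphs along
    # the second (y) axis while keeping the first (time) axis aligned.
    return np.vstack(bands)
```

With three input channels of 300 samples each and the default band height of 32 rows, the result is a single 96 x 300 binary image suitable as input to an image classifier.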
Regarding claim 2, Lee as modified further discloses a split image generator configured to generate a plurality of images by splitting the graph image in units of a preset time, wherein the learning processor trains the sleep state reading model based on the plurality of images obtained by the splitting of the graph image (Paragraph 0080, “The signal data processor 110 may collect and process the entire sleep data that is measured based on a unit, for example, epoch, of 30 seconds”; Paragraph 0161, wherein the data may be 50 epochs).
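The splitting of the graph image into units of a preset time (e.g. Lee's 30-second epochs) may likewise be sketched as follows. This is an illustrative sketch only, not taken from the references; the function name and parameters are hypothetical.

```python
import numpy as np

def split_into_epochs(graph_image, samples_per_epoch):
    """Split a full-night graph image into fixed-width sub-images
    (e.g. 30-second epochs) along the time axis.  Trailing columns
    that do not fill a whole epoch are dropped."""
    n_cols = graph_image.shape[1]
    n_epochs = n_cols // samples_per_epoch
    return [graph_image[:, i * samples_per_epoch:(i + 1) * samples_per_epoch]
            for i in range(n_epochs)]
```

Each resulting sub-image retains all channel rows and covers one epoch of time, so the plurality of images may be used as per-epoch training samples.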
Regarding claim 4, Lee as modified further discloses wherein the plurality of pieces of biometric data comprise biometric data obtained using two or more of the plurality of examination units among an Electroencephalogram (EEG) sensor, an Electrooculography (EOG) sensor, and an Electromyogram (EMG) sensor (Paragraph 0035).
Regarding claim 5, Lee as modified further discloses wherein the graph image generator is configured to generate the graph image by matching times of the plurality of pieces of biometric data (Fig. 4).
Regarding claim 7, Lee discloses a computer-implemented examination method of a polysomnography device, the method comprising:
obtaining time-serially measured polysomnography data (Paragraph 0071-0075, wherein polysomnography data is obtained);
converting the polysomnography data into a graph with respect to time to generate a graph image comprising a plurality of individual line graphs (Paragraph 0035, wherein the signal data processor extracts feature data and transforms it to input into the classification model; Paragraph 0148),
the polysomnography data comprising a plurality of pieces of biometric data of a user obtained by a plurality of examination units (Paragraph 0035, EEG/EOG/EMG),
the graph comprising a first axis representing time and a second axis that intersects the first axis (Paragraph 0028; Paragraph 0080, “The signal data processor 110 may collect and process the entire sleep data that is measured based on a unit, for example, epoch, of 30 seconds”; Fig. 4, x-axis representing time with y-axis intersecting x-axis),
the converting comprising:
converting the plurality of pieces of biometric data respectively into the plurality of individual line graphs with respect to time (Paragraph 0035; Fig. 4), and
generating the graph image by arranging the plurality of individual line graphs along the second axis and aligning time values on the first axis with respect to data values of the plurality of individual line graphs on the second axis such that the plurality of individual line graphs are temporally aligned and visually integrated into a single two-dimensional image (Paragraph 0148, “A sleep stage analysis using only a CNN may be used to construct an AI network by using 30 seconds as a single epoch. In this case, a sleep stage may be classified by constructing the CNN into consideration of EEG, EOG and chin-EMG signals as a single image”);
training a sleep state reading model via machine learning by performing image-based training directly on the plurality of individual line graphs of the graph image (Paragraph 0150-0152), such that the sleep state reading model learns to interpret a sleep state of the user from the graph image (Paragraph 0039; Paragraph 0076; Paragraph 0203, “software that may analyze a sleep stage using a deep learning and read a result of analyzing the sleep stage more quickly and accurately than a person”).
While Lee discloses using the graph image comprising data from multiple channels (Paragraph 0031), Lee fails to explicitly disclose the graph image comprising labeled data and a plurality of groups of line graphs.
However, Ha teaches a deep learning-based polysomnography examination wherein a data conversion unit divides the data into time units, labels the data (Page 4, paragraph 8), and groups the data into a plurality of line graphs such that the line graphs have similar characteristics to other lines within the group (Figs. 2 and 5A-B, wherein the line graphs are grouped by similar characteristics). Ha states this is useful to convert the data into a format applicable for input into the artificial neural network (Page 4, paragraph 8). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of Lee to incorporate the teachings of grouping and labeling the line graphs of Ha to input the data into an artificial neural network.
Regarding claim 8, Lee as modified further discloses generating a plurality of images comprising the plurality of individual line graphs by splitting the graph image in units of a preset time, wherein the training comprises training the sleep state reading model using the plurality of images as the training data (Paragraph 0080, “The signal data processor 110 may collect and process the entire sleep data that is measured based on a unit, for example, epoch, of 30 seconds”; Paragraph 0161, wherein the data may be 50 epochs).
Regarding claim 10, Lee as modified further discloses wherein the plurality of pieces of biometric data comprise biometric data obtained using two or more of the plurality of examination units among an Electroencephalogram (EEG) sensor, an Electrooculography (EOG) sensor, and an Electromyogram (EMG) sensor (Paragraph 0035).
Regarding claim 11, Lee as modified further discloses wherein the graph image generator is configured to generate the graph image by matching times of the plurality of pieces of biometric data (Fig. 4).
Regarding claim 13, Lee as modified further discloses wherein each of the plurality of individual line graphs shows respective data values continuously changing over time along the second axis (Fig. 4).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Lee and Ha as applied to claim 1 above, and further in view of Tripathi (“Technical notes for digital polysomnography recording in sleep medicine practice” – previously cited).
Regarding claim 14, while Lee as modified discloses obtaining polysomnography raw data measured in time series from multiple examination units and incorporating the data into a single graph image as discussed above, Lee fails to explicitly disclose converting the raw data into sets of numbers that are arranged along the first axis and displayed as numbers in the graph image.
However, Tripathi discusses digital polysomnography recordings wherein the polysomnography graph image shows multiple channels, two of which show sets of numbers arranged along the first axis (Fig. 6). It is within the technical grasp of one of ordinary skill in the art to be able to display measured data as a line graph or as number values. Examiner notes that, due to the individual line graphs of the graph image being used as training data as recited in claim 1, the sets of numbers displayed on the graph image are not required to be used as training data.
Lee and Tripathi are considered analogous to the claimed invention because they are in the same field of taking polysomnography readings. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lee to incorporate the teachings of Tripathi.
Response to Arguments
Applicant’s arguments, see page 7, filed 01/26/2026, with respect to the rejection of claims 1 and 7 under 35 U.S.C. §102(a)(1) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Ha as described above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NOAH MICHAEL HEALY whose telephone number is (703)756-5534. The examiner can normally be reached Monday - Friday 8:30am - 5:30pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Sims can be reached at (571)272-7540. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NOAH M HEALY/Examiner, Art Unit 3791
/JASON M SIMS/Supervisory Patent Examiner, Art Unit 3791