Prosecution Insights
Last updated: April 19, 2026
Application No. 18/715,683

METHOD FOR GENERATING TRAINING IMAGE USED TO TRAIN IMAGE-BASED ARTIFICIAL INTELLIGENCE MODEL FOR ANALYZING IMAGES OBTAINED FROM MULTI-CHANNEL ONE-DIMENSIONAL SIGNALS, AND DEVICE PERFORMING SAME

Non-Final OA (§101, §103)
Filed: Aug 02, 2024
Examiner: GE, JIN
Art Unit: 2619
Tech Center: 2600 (Communications)
Assignee: Seoul National University Hospital
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 80% (416 granted / 520 resolved), +18.0% vs TC avg (above average)
Interview Lift: +18.0% higher allowance among resolved cases with an interview
Avg Prosecution: 2y 9m (typical timeline); 38 applications currently pending
Career History: 558 total applications across all art units

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)

Tech Center averages are estimates, based on career data from 520 resolved cases.
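For a quick sanity check, the panel figures reduce to simple ratios and percentage-point deltas. A minimal sketch of that arithmetic (variable names are illustrative and not part of any real analytics API; the interpretation of each "vs TC avg" figure as a percentage-point delta is an assumption based on the panel labels):

```python
# Figures reported in the panels above.
granted, resolved = 416, 520

# Career allow rate: share of resolved cases that granted.
allow_rate = granted / resolved              # 0.80 -> shown as "80%"

# Treating "+18.0% vs TC avg" as a percentage-point delta, the implied
# Tech Center average allow rate is the rate minus the delta.
tc_avg_allow = allow_rate - 0.18             # ~0.62 -> "62%"

# Statute-specific rates work the same way: subtracting each
# "vs TC avg" delta recovers the TC average estimate per statute.
statutes = {
    "101": (0.090, -0.310),
    "103": (0.602, +0.202),
    "102": (0.120, -0.280),
    "112": (0.110, -0.290),
}
tc_estimates = {s: round(rate - delta, 2) for s, (rate, delta) in statutes.items()}
# Every statute implies a TC average of roughly 0.40, consistent with a
# single underlying Tech Center baseline behind all four deltas.

print(f"allow rate: {allow_rate:.0%}, implied TC avg: {tc_avg_allow:.0%}")
# prints: allow rate: 80%, implied TC avg: 62%
```

The same subtraction applied to the interview figures (98% with interview against the 80% career rate) reproduces the reported +18.0% interview lift.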

Office Action

Rejections: §101, §103
DETAILED ACTION

Claims 1-26 are pending in the present application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of Korea patent application number KR10-2022-0166146, filed on 12/01/2022, has been received and made of record.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 06/06/2024 and 08/02/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 25 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 25 specifies a computer program combined with hardware and stored in a medium. Given the broadest reasonable interpretation consistent with the specification and the state of the art at the time of invention, the full scope of “computer readable storage medium” covers both non-transitory tangible media (e.g., RAM, ROM, hard drive) and transitory propagating signals (e.g., carrier waves, signals) per se. Transitory propagating signals do not fall within the definition of a process, machine, manufacture, or composition of matter and therefore must be rejected under 35 U.S.C. 101 as covering non-statutory subject matter (see In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. § 101, Aug. 24, 2009, p. 2). The examiner suggests amending the claim to exclude transitory propagating signals by adding a modifier, such as “non-transitory,” to the claimed medium.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 12, 19-20, and 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2023/0293079 to Chen et al. in view of U.S. PGPub 2004/0051721 to Ramseth.

Regarding claim 1, Chen et al. teach a method for generating a training image (abstract, par 0038, par 0045) that is performed by a computing device including a processor and a memory (par 0034), comprising: generating a training signal on the basis of source signal information (par 0038, “the data set needs to be trained. FIG.
4 illustrates a sample data in the training data set according to some embodiments of the present application.”, par 0041, “ Based on the ECG image training data set and the corresponding image label, some embodiments of the present application are directed to extract a distinguishable section from the ECG image, which indicate a potential abnormal section, and then perform adaptive weight learning based on the extracted distinguishable section to make the abnormal section more prominent, and then perform weighted fusion of features and one-dimensional processing, so as to perform multiple label classification”); selecting at least one output format from among a plurality of preset output formats to determine an output format of the training image (par 0038, “Each recording is stored as a 12-lead ECG image in which four waveforms were presented in the ECG image. Each of the first three waveforms includes four lead signals with a 10-second duration and a 2.5-second duration per lead. The fourth waveform is a 10-second duration signal for lead II. Different leads provide different signal amplitudes and intervals”, par 0043, “ an ECG image is received. The input ECG image is commonly in an image format of jpg, svg, png, pdf, etc. Both the format and size of the image may be set as required. For example, the ECG image may be uniformly converted into 150*300 png images as input”); determining an output section for each channel of the training signal on the basis of a length of a time section of a waveform of the determined output format (par 0038, “Before explaining the ECG image processing method according to the present application, the data set needs to be trained. FIG. 4 illustrates a sample data in the training data set according to some embodiments of the present application. …... Each recording is stored as a 12-lead ECG image in which four waveforms were presented in the ECG image. 
Each of the first three waveforms includes four lead signals with a 10-second duration and a 2.5-second duration per lead. The fourth waveform is a 10-second duration signal for lead II. Different leads provide different signal amplitudes and intervals. The 12-lead ECG image is the most widely used ECG image recording technique in clinical practice. The 12 leads include six anterior leads (V1, V2, V3, V4, V5, V6), three limb leads (I, II, iii), and three enhanced Limb leads (aVR, aVL, aVF). Each lead views the heart from a different angle. All images are stored in Portable Network Graphics (PNG) format “, par 0045, “based on the backbone framework inception-v3, features are aggregated from the input raw ECG image to learn the feature map of the ECG image by representation learning. The ECG image feature map is represented as F∈R.sup.H×W×M, where F represents the feature map, R represents the ECG image, H represents the height dimension of the ECG image, W represents the width dimension of the ECG image dimension, M represents a number of channels of the ECG image”); drawing a grid pattern on a two-dimensional plane in accordance with the determined grid scale (Fig 4, par 0038, “Before explaining the ECG image processing method according to the present application, the data set needs to be trained. FIG. 4 illustrates a sample data in the training data set according to some embodiments of the present application. …... Each recording is stored as a 12-lead ECG image in which four waveforms were presented in the ECG image. Each of the first three waveforms includes four lead signals with a 10-second duration and a 2.5-second duration per lead. The fourth waveform is a 10-second duration signal for lead II. Different leads provide different signal amplitudes and intervals. The 12-lead ECG image is the most widely used ECG image recording technique in clinical practice. 
The 12 leads include six anterior leads (V1, V2, V3, V4, V5, V6), three limb leads (I, II, iii), and three enhanced Limb leads (aVR, aVL, aVF). Each lead views the heart from a different angle. All images are stored in Portable Network Graphics (PNG) format”); setting a reference position of a waveform content of the training signal on the basis of at least one of the determined output section for each channel and the determined grid scale (Fig 4, par 0038; the reference position is seen as the position where the channels/leads start being drawn to generate the waveform signal); and drawing the waveform content of the training image and a signal marker on the two-dimensional plane with the grid pattern drawn thereon (Fig 4, par 0038, “Before explaining the ECG image processing method according to the present application, the data set needs to be trained. FIG. 4 illustrates a sample data in the training data set according to some embodiments of the present application. … The 12 leads include six anterior leads (V1, V2, V3, V4, V5, V6), three limb leads (I, II, iii), and three enhanced Limb leads (aVR, aVL, aVF). Each lead views the heart from a different angle. All images are stored in Portable Network Graphics (PNG) format”). Chen et al., however, do not teach determining a grid scale of the training image by selecting a per-axis scale. In a related endeavor, Ramseth teaches determining a grid scale of the training image by selecting a per-axis scale, and drawing a grid pattern on a two-dimensional plane in accordance with the determined grid scale (par 0008, “data from a standard ECG recording may be displayed as traces overlaid on a millimeter grid reference background. By displaying ECG data in much the same format as that of a conventional ECG paper printout, the system capitalizes on the pattern recognition skills developed by cardiologists through years of training and experience.
Moreover, the interactive display system maintains the aspect ratio, thereby preserving the pattern recognition advantages, during electronic magnification (zoom) operations. By preserving the aspect ratio in this manner, doubling, for example, both the horizontal and vertical scales for both the ECG waveform and the reference grid for a 2.times. magnified view of an ECG, a cardiologist may make precise, highly-refined ECG measurements, even while viewing an undistorted representation of the ECG”, par 0040, “data from a standard ECG recording may be displayed as traces overlaid on a millimeter grid reference background. By displaying ECG data in much the same format as that of a conventional ECG paper printout, the system capitalizes on the pattern recognition skills developed by cardiologists through years of training and experience. Moreover, the interactive display system maintains the aspect ratio, thereby preserving the pattern-recognition advantages, during electronic magnification (zoom) operations. By preserving the aspect ratio in this manner, doubling, for example, both the horizontal and vertical scales for both the ECG waveform and the reference grid for a 2.times. magnified view of an ECG, a cardiologist may make precise, highly-refined ECG measurements, even while viewing an undistorted representation of the ECG “, par 0042, “data from a 12 lead resting ECG may be displayed as waveform traces overlaid on a millimeter grid background. By displaying ECG data in much the same format as that of a conventional ECG paper printout, the system capitalizes on the pattern recognition skills developed by cardiologists through years of training and experience”, par 0045, “the ECG data are displayed as waveform traces overlaid on a millimeter grid background. In a display mode that may be used as a default, each grid division, or cell, represents 40 milliseconds along the abscissa and 0.1 millivolt along the ordinate. 
Multi-grid divisions may be "set off" by employing heavier grid lines every fifth division, for example, to form 200 millisecond by 0.5 millivolt "super-cells." A 1280.times.1024 pixel display that employs an 1220.times.690 pixel area for the display of waveforms may employ one pixel for every eight milliseconds (five pixels per millimeter) in displaying a standard 1 mm.times.1 mm, 40 ms.times.0.1 mV grid. Various "zoom" schemes may be employed to increase or decrease the resolution of the display by increasing or decreasing the number of pixels dedicated to each millisecond and/or millivolt. A user may select one or more features of interest by manipulating one or more markers”, par 0053, “the interactive display supports zoom operations that allow an operator to place markers with greater precision than a full-screen twelve-lead display might otherwise permit. The number of pixels between grid lines varies in the process of zooming and 1 millimeter grid lines may be added during a "zoom in" in order to provide the standard recognizable grid background for visual reference. Conversely, 1 millimeter grid lines may be deleted during a "zoom out" operation in order to avoid cluttering the display. Five millimeter grid lines will always be present (unless the grid line display option is turned off). In an illustrative embodiment, a five-pixel threshold is employed whereby 1 mm grid lines are added to the display when five or more pixels are required to display a distance equal to one mm”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Chen et al. to include determining a grid scale of the training image by selecting a per-axis scale, as taught by Ramseth, in order to display a standard ECG recording as traces overlaid on a millimeter grid reference background and allow a user to gain insights that might otherwise be overlooked.

Regarding claim 12, Chen et al. as modified by Ramseth teach all the limitations of claim 1, and further teach wherein the determining of an output section for each channel of the training signal on the basis of a length of a time section of a waveform of the determined output format includes: selecting at least one point from a start point and an end point of an output section for each channel to be output on the training image of an entire length of a received source signal; and calculating the section for each channel on the basis of the length of a time section of a waveform of format elements of a selected output format and the selected point (Chen et al.: par 0025, “present application may be directed to three types of ECG: static ECG, dynamic ECG and exercise ECG. When applied to longer duration dynamic ECG and exercise ECG, slices may be made for a specific length of time (e.g. 5 s, 10 s), and each slice may be used as an input in a form of an image”, Fig 4, par 0038-0041, “Before explaining the ECG image processing method according to the present application, the data set needs to be trained. FIG. 4 illustrates a sample data in the training data set according to some embodiments of the present application. … Each of the first three waveforms includes four lead signals with a 10-second duration and a 2.5-second duration per lead. The fourth waveform is a 10-second duration signal for lead II. Different leads provide different signal amplitudes and intervals. The 12-lead ECG image is the most widely used ECG image recording technique in clinical practice. The 12 leads include six anterior leads (V1, V2, V3, V4, V5, V6), three limb leads (I, II, iii), and three enhanced Limb leads (aVR, aVL, aVF) … On basis of this dataset, data augmentation is used to generate a more robust training dataset. Image data augmentation is a technique that may be used to artificially scale the size of the training dataset by creating modified versions of the images to build a better deep learning model.
It is worth noting that the enhanced data cannot be directly added to the training set in most prior ECG image processing because they are very sensitive to distortions in the temporal digital data, which can significantly degrade performances in the test set. However, since the input in the present application is a two-dimensional ECG image, overfitting can be effectively reduced and a balanced distribution between classes can be maintained by modifying the image with appropriate enhancements”, Ramseth: par 0007, “a time interval may be marked off by the manipulation of one or more markers, and the controller may respond, not only by positioning the markers according to user input, but also by computing the time and/or other values (e.g., average signal level over the interval). Additionally, modifications to the position of the marker may be reflected by modifications to information displayed, with, for example, the coordinates of one or more markers displayed and updated "on the fly" as a marker is repositioned on the display. The coordinates may represent a multi-dimensional space in which dimensions are devoted to signal level, time, event number, or other variables”, par 0040, “data from a standard ECG recording may be displayed as traces overlaid on a millimeter grid reference background. By displaying ECG data in much the same format as that of a conventional ECG paper printout, the system capitalizes on the pattern recognition skills developed by cardiologists through years of training and experience. Moreover, the interactive display system maintains the aspect ratio, thereby preserving the pattern-recognition advantages, during electronic magnification (zoom) operations. By preserving the aspect ratio in this manner, doubling, for example, both the horizontal and vertical scales for both the ECG waveform and the reference grid for a 2.times. 
magnified view of an ECG, a cardiologist may make precise, highly-refined ECG measurements, even while viewing an undistorted representation of the ECG”, par 0044, “a time interval may be marked off by the manipulation of one or more markers, and the engine 204 may respond, not only by positioning the markers according to user input, but also by computing the time and/or other values (e.g., average signal level over the interval). Additionally, modifications to the position of the marker may be reflected by modifications to information displayed, with, for example, the coordinates of one or more markers displayed and updated "on the fly" as a marker is repositioned on the display. The coordinates may represent a multi-dimensional space in which dimensions are devoted to signal level, time, event number, or other variables”, par 0050-0051, “the start time, resolution of the display, and the number of leads for which data are displayed are respectively displayed in windows 304, 306, and 308. The windows may include up/down arrows, such as arrows 310, to allow an operator to select different resolutions or number of leads displayed, for example.”).

Regarding claim 19, Chen et al. as modified by Ramseth teach all the limitations of claim 1, and further teach wherein a reference position of the waveform content of the training signal includes coordinates of a measurement value of the training signal that are calculated as coordinate values based on the grid pattern (Chen et al.: par 0013, “processing the ECG image in an end-to-end training manner, i.e. from image input to final formation of interpretation results, during which no human intervention is required”, Fig 4, par 0038-0041, “Before explaining the ECG image processing method according to the present application, the data set needs to be trained. FIG. 4 illustrates a sample data in the training data set according to some embodiments of the present application. …
Each of the first three waveforms includes four lead signals with a 10-second duration and a 2.5-second duration per lead. The fourth waveform is a 10-second duration signal for lead II. Different leads provide different signal amplitudes and intervals. The 12-lead ECG image is the most widely used ECG image recording technique in clinical practice. The 12 leads include six anterior leads (V1, V2, V3, V4, V5, V6), three limb leads (I, II, iii), and three enhanced Limb leads (aVR, aVL, aVF) ….On basis of this dataset, data augmentation is used to generate a more robust training dataset. Image data augmentation is a technique that may be used to artificially scale the size of the training dataset by creating modified versions of the images to build a better deep learning model. It is worth noting that the enhanced data cannot be directly added to the training set in most prior ECG image processing because they are very sensitive to distortions in the temporal digital data, which can significantly degrade performances in the test set. However, since the input in the present application is a two-dimensional ECG image, overfitting can be effectively reduced and a balanced distribution between classes can be maintained by modifying the image with appropriate enhancements”, Ramseth: par 0040, “data from a standard ECG recording may be displayed as traces overlaid on a millimeter grid reference background. By displaying ECG data in much the same format as that of a conventional ECG paper printout, the system capitalizes on the pattern recognition skills developed by cardiologists through years of training and experience. Moreover, the interactive display system maintains the aspect ratio, thereby preserving the pattern-recognition advantages, during electronic magnification (zoom) operations. By preserving the aspect ratio in this manner, doubling, for example, both the horizontal and vertical scales for both the ECG waveform and the reference grid for a 2.times. 
magnified view of an ECG, a cardiologist may make precise, highly-refined ECG measurements, even while viewing an undistorted representation of the ECG”, par 0042, “data from a 12 lead resting ECG may be displayed as waveform traces overlaid on a millimeter grid background. By displaying ECG data in much the same format as that of a conventional ECG paper printout, the system capitalizes on the pattern recognition skills developed by cardiologists through years of training and experience”).

Regarding claim 20, Chen et al. as modified by Ramseth teach all the limitations of claim 1, and further teach wherein the drawing of the waveform content of the training image and a signal marker on the two-dimensional plane with the grid pattern drawn thereon includes: defining a drawing function for drawing the waveform content of the training signal for each channel as a graphic on the basis of a reference position; and drawing the waveform content of the training signal and the signal marker using the defined drawing function, and the drawing function is based on reference coordinates of a training signal and one or more drawing elements, and the drawing element defines a design of a waveform or a design of a signal marker (Chen et al.: par 0013, “processing the ECG image in an end-to-end training manner, i.e. from image input to final formation of interpretation results, during which no human intervention is required”, Fig 4, par 0038-0040, “Before explaining the ECG image processing method according to the present application, the data set needs to be trained. FIG. 4 illustrates a sample data in the training data set according to some embodiments of the present application. … Each of the first three waveforms includes four lead signals with a 10-second duration and a 2.5-second duration per lead. The fourth waveform is a 10-second duration signal for lead II. Different leads provide different signal amplitudes and intervals.
The 12-lead ECG image is the most widely used ECG image recording technique in clinical practice. The 12 leads include six anterior leads (V1, V2, V3, V4, V5, V6), three limb leads (I, II, iii), and three enhanced Limb leads (aVR, aVL, aVF) ….On basis of this dataset, data augmentation is used to generate a more robust training dataset. Image data augmentation is a technique that may be used to artificially scale the size of the training dataset by creating modified versions of the images to build a better deep learning model. It is worth noting that the enhanced data cannot be directly added to the training set in most prior ECG image processing because they are very sensitive to distortions in the temporal digital data, which can significantly degrade performances in the test set. However, since the input in the present application is a two-dimensional ECG image, overfitting can be effectively reduced and a balanced distribution between classes can be maintained by modifying the image with appropriate enhancements”, Ramseth: par 0040, “data from a standard ECG recording may be displayed as traces overlaid on a millimeter grid reference background. By displaying ECG data in much the same format as that of a conventional ECG paper printout, the system capitalizes on the pattern recognition skills developed by cardiologists through years of training and experience. Moreover, the interactive display system maintains the aspect ratio, thereby preserving the pattern-recognition advantages, during electronic magnification (zoom) operations. By preserving the aspect ratio in this manner, doubling, for example, both the horizontal and vertical scales for both the ECG waveform and the reference grid for a 2.times. 
magnified view of an ECG, a cardiologist may make precise, highly-refined ECG measurements, even while viewing an undistorted representation of the ECG”, par 0042, “data from a 12 lead resting ECG may be displayed as waveform traces overlaid on a millimeter grid background. By displaying ECG data in much the same format as that of a conventional ECG paper printout, the system capitalizes on the pattern recognition skills developed by cardiologists through years of training and experience”).

Regarding claim 24, Chen et al. as modified by Ramseth teach all the limitations of claim 1, and Chen et al. further teach wherein the method is used to train an image-based artificial intelligence model analyzing images obtained from multi-channel one-dimensional signals (Chen et al.: par 0013, “processing the ECG image in an end-to-end training manner, i.e. from image input to final formation of interpretation results, during which no human intervention is required”, Fig 4, par 0038-0041, “Before explaining the ECG image processing method according to the present application, the data set needs to be trained. FIG. 4 illustrates a sample data in the training data set according to some embodiments of the present application. … Each of the first three waveforms includes four lead signals with a 10-second duration and a 2.5-second duration per lead. The fourth waveform is a 10-second duration signal for lead II. Different leads provide different signal amplitudes and intervals. The 12-lead ECG image is the most widely used ECG image recording technique in clinical practice. The 12 leads include six anterior leads (V1, V2, V3, V4, V5, V6), three limb leads (I, II, iii), and three enhanced Limb leads (aVR, aVL, aVF) … On basis of this dataset, data augmentation is used to generate a more robust training dataset. Image data augmentation is a technique that may be used to artificially scale the size of the training dataset by creating modified versions of the images to build a better deep learning model. It is worth noting that the enhanced data cannot be directly added to the training set in most prior ECG image processing because they are very sensitive to distortions in the temporal digital data, which can significantly degrade performances in the test set. However, since the input in the present application is a two-dimensional ECG image, overfitting can be effectively reduced and a balanced distribution between classes can be maintained by modifying the image with appropriate enhancements”).

Regarding claim 25, Chen et al. teach a computer program combined with hardware and stored in a medium to perform the method for generating a training image of claim 1 (par 0011). The remaining limitations of the claim are similar in scope to claim 1 and rejected under the same rationale.

Regarding claim 26, Chen et al. teach a device for generating a training image, comprising: an obtaining unit configured to obtain a source signal; and an image generating unit including a processor and a memory, wherein the image generating unit receives source signal information received by the obtaining unit and performs the method for generating a training image of claim 1 (abstract, par 0034). The remaining limitations of the claim are similar in scope to claim 1 and rejected under the same rationale.

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2023/0293079 to Chen et al. in view of U.S. PGPub 2004/0051721 to Ramseth, further in view of CN 112329609 to Zhang et al.

Regarding claim 2, Chen et al.
as modified by Ramseth teach all the limitation of claim 1, but keep silent for teaching wherein the generating of a training signal on the basis of source signal information includes: selecting at least one transform function from among a plurality of preset transform functions; and transforming the source signal into a training signal using the selected transform function, and the transform function is composed of one or more transform elements indicating transforming a signal processing attribute of an input signal, and the transform elements each correspond to the signal processing attribute. In related endeavor, Zhang et al. teach wherein the generating of a training signal on the basis of source signal information includes: selecting at least one transform function from among a plurality of preset transform functions; and transforming the source signal into a training signal using the selected transform function, and the transform function is composed of one or more transform elements indicating transforming a signal processing attribute of an input signal, and the transform elements each correspond to the signal processing attribute (abstract, “reconstructing a target electrocardiosignal from one dimension into a two-dimensional electrocardiosignal; a feature extraction module configured to: respectively extracting a first characteristic and a second characteristic from the two-dimensional electrocardiosignals; a feature fusion module configured to: fusing the first characteristic and the second characteristic; a classification module configured to: and inputting the fused features into a trained classifier, and outputting arrhythmia classification results corresponding to the current target electrocardiosignals”, par 0006, “Mashrur et al convert one-dimensional electrocardiosignals into a two-dimensional time-frequency graph by wavelet transform and learn and identify atrial fibrillation by using an AlexNet convolutional neural network, however, the signals 
obtained by wavelet transform are still narrow-band signals, and single frequency information cannot be accurately obtained”, par 0011-0016, “a reconstruction module configured to: reconstructing a target electrocardiosignal from one dimension into a two-dimensional electrocardiosignal; a feature extraction module configured to: respectively extracting a first characteristic and a second characteristic from the two-dimensional electrocardiosignals; a feature fusion module configured to: fusing the first characteristic and the second characteristic; a classification module configured to: and inputting the fused features into a trained classifier, and outputting arrhythmia classification results corresponding to the current target electrocardiosignals“, par 0043-0045, “a reconstruction module configured to: reconstructing a target electrocardiosignal from one dimension into a two-dimensional electrocardiosignal; a feature extraction module configured to: respectively extracting a first characteristic and a second characteristic from the two-dimensional electrocardiosignals; a feature fusion module configured to: fusing the first characteristic and the second characteristic”). It would have been obvious to a person of ordinary skill in the art at the time before the effective filing data of the claimed invention to modified Chen et al. as modified by Ramseth to include wherein the generating of a training signal on the basis of source signal information includes: selecting at least one transform function from among a plurality of preset transform functions; and transforming the source signal into a training signal using the selected transform function, and the transform function is composed of one or more transform elements indicating transforming a signal processing attribute of an input signal, and the transform elements each correspond to the signal processing attribute as taught by Zhang et al. 
to convert one-dimensional time series into two-dimensional images for classification and recognition to provide more accurate results through training by a neural network (par 0018, “the essence of the signal is comprehensively reflected, and the precision is improved”).

Regarding claim 3, Chen et al. as modified by Ramseth and Zhang et al. teach all the limitations of claim 2, and Zhang et al. further teach wherein the transforming the signal processing attribute includes processing, changing, or removing one or more of a size, a waveform, a frequency range, frequency distribution, a start point, and a time range of a signal, and the transforming the signal processing attribute is performed for each channel, performed for a channel group, or performed for all of channels (par 0011-0016, “a reconstruction module configured to: reconstructing a target electrocardiosignal from one dimension into a two-dimensional electrocardiosignal; a feature extraction module configured to: respectively extracting a first characteristic and a second characteristic from the two-dimensional electrocardiosignals; a feature fusion module configured to: fusing the first characteristic and the second characteristic; a classification module configured to: and inputting the fused features into a trained classifier, and outputting arrhythmia classification results corresponding to the current target electrocardiosignals”, par 0018, “the network architecture has the function of acquiring general image features in a two-dimensional heart beating time-frequency diagram, the other is an 8-layer CNN feature extraction layer trained by a two-dimensional ECG time-frequency analysis diagram, the network architecture has the function of acquiring special domain features of ECG in the two-dimensional ECG time-frequency diagram, the special domain features are general time-frequency features common to image textures, shapes, colors and the like compared with general image features extracted by ResNet-101 trained by an ImageNet image library, due to the limitation of the ImageNet image library (images in the library are natural scene images, such as plants, animals and the like), and the two-dimensional ECG analysis diagram is a spectrum image different from natural scenes, therefore, the features extracted by the 8CNN feature extraction layers are different from the common image features, and can reflect the electrocardio time-frequency features of electrocardio energy and frequency distribution”). This would be obvious for the same reason given in the rejection of claim 2.

Allowable Subject Matter

Claims 4-11, 13-18 and 21-23 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: The cited prior art fails to teach the combination of elements recited in claim 4, including "wherein the selecting of at least one transform function from among a plurality of transform functions is selecting any one transform function from among a plurality of transform functions or selecting two or more transform functions from among a plurality of transform functions, depending on a 1-1 probability distribution set in advance for a set of the plurality of transform functions, when one transform function is selected from N pieces, the 1-1 probability distribution is a multinomial distribution, and when two or more transform functions are selected from N pieces, the 1-1 probability distribution is a binomial distribution".
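The claim 2 and claim 4 limitations above describe an algorithmic scheme: pick transform functions from a preset pool (exactly one via a multinomial/categorical draw, or several via independent binomial draws) and apply the selection to turn a source signal into a training signal. The sketch below is illustrative only; the transform names, bodies, and distribution values are assumptions, not drawn from the application.

```python
import random

rng = random.Random(0)

# Hypothetical pool of preset transform functions; each one alters a
# signal processing attribute (size, waveform, noise content).
TRANSFORMS = {
    "scale_amplitude": lambda s: [x * 1.1 for x in s],                  # size
    "shift_baseline":  lambda s: [x + 0.05 for x in s],                 # waveform
    "add_noise":       lambda s: [x + rng.gauss(0.0, 0.01) for x in s], # noise
}

def select_transforms(p_single=None, p_each=None, pick_one=True):
    """Select transform functions per the claimed probability scheme.

    pick_one=True  -> one function drawn from a multinomial (categorical)
                      distribution p_single over the N presets.
    pick_one=False -> each function kept independently with its own
                      binomial (Bernoulli) probability p_each[name].
    """
    names = list(TRANSFORMS)
    if pick_one:
        return [rng.choices(names, weights=p_single, k=1)[0]]
    return [n for n in names if rng.random() < p_each[n]]

def make_training_signal(source):
    # Apply the selected transform function(s) in sequence to produce
    # a training signal from the source signal.
    out = source
    for name in select_transforms(p_single=[1, 1, 1]):
        out = TRANSFORMS[name](out)
    return out
```

Seeding the generator makes the augmentation reproducible; in practice the per-function probabilities would come from the preset 1-1 distribution rather than the placeholder weights used here.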
The following is a statement of reasons for the indication of allowable subject matter: The cited prior art fails to teach the combination of elements recited in claim 5, including "wherein the transforming of the source signal into a training signal using the selected transform function includes: changing at least one transform element value of transform elements constituting the selected transform function into a new value; and generating a signal, to which a signal attribute changed in accordance with the changed transform element value has been applied, into a training signal; wherein the new value is a value selected in accordance with a 1-2 probability distribution set in advance for the transform element".

The following is a statement of reasons for the indication of allowable subject matter: The cited prior art fails to teach the combination of elements recited in claim 8, including "wherein the selecting at least one output format from a plurality of preset output formats to determine an output format of the training image includes selecting any one output format or selecting two or more output format from a plurality of output formats, depending on a 2-1 probability distribution set in advance for a set of the plurality of preset output formats, when one output format is selected from N pieces, the 2-1 probability distribution is a multinomial distribution, and when two or more transform functions are selected from N pieces, the 2-1 probability distribution is a binomial distribution".
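Claim 5's mechanism, replacing a transform element's value with a new one drawn from a preset "1-2" distribution and then applying the changed attribute, can be sketched as follows. The single "gain" element and the uniform distribution are stand-ins chosen for illustration, not details from the application.

```python
import random

rng = random.Random(0)

def apply_with_new_element_value(samples, low=0.8, high=1.2):
    """Draw a new value for one transform element ("gain", standing in
    for a size attribute; uniform stands in for the 1-2 distribution)
    and apply it to produce a training signal."""
    gain = rng.uniform(low, high)            # new transform-element value
    training = [x * gain for x in samples]   # changed attribute applied
    return training, gain
```

Returning the drawn value alongside the signal lets the caller log which element value produced each training example.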
The following is a statement of reasons for the indication of allowable subject matter: The cited prior art fails to teach the combination of elements recited in claim 13, including "wherein the start point or the end point is a point selected in accordance with a preset third probability distribution from a selection range of each point, the third probability distribution defines a probability distribution in which an individual value will be designated in a selection range for each of the start point and the end point, the selection range of the start point is a range from a first point of a training signal to a point extending from a final point of the training signal in a negative time direction by a time section of a determined waveform, and the selection range of the end point is a range from a final point of a training signal to a point extending from a first point of the training signal in a positive time direction by the time section of the determined waveform".

The following is a statement of reasons for the indication of allowable subject matter: The cited prior art fails to teach the combination of elements recited in claim 14, including "wherein the determining of a grid scale of the training image includes: selecting any one horizontal axis unit scale from a plurality of horizontal axis unit scale in accordance with a 4-1 probability distribution set in advance for all of a plurality of preset horizontal axis unit scales; or selecting any one vertical axis unit scale from a plurality of vertical axis unit scale in accordance with a 4-2 probability distribution set in advance for all of a plurality of preset vertical axis unit scales, and the 4-1 and 4-2 probability distributions have a multinomial distribution".
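Claim 14's grid-scale step amounts to two independent categorical (multinomial, n = 1) draws, one per axis. A minimal sketch under assumed preset scales; the ECG-style values and the 4-1/4-2 weights below are placeholders, not figures from the application.

```python
import random

rng = random.Random(0)

# Illustrative preset unit scales for the training-image grid.
H_SCALES = [0.04, 0.2]   # horizontal axis, e.g. seconds per grid box
V_SCALES = [0.1, 0.5]    # vertical axis, e.g. mV per grid box

def choose_grid_scale(p_h, p_v):
    """Pick one unit scale per axis via categorical draws, with p_h and
    p_v standing in for the preset 4-1 and 4-2 distributions."""
    h = rng.choices(H_SCALES, weights=p_h, k=1)[0]  # 4-1 draw
    v = rng.choices(V_SCALES, weights=p_v, k=1)[0]  # 4-2 draw
    return h, v
```

Varying the grid scale per generated image forces the downstream model to read waveforms against different grid geometries rather than memorizing one layout.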
The following is a statement of reasons for the indication of allowable subject matter: The cited prior art fails to teach the combination of elements recited in claim 15, including "wherein the drawing of a grid pattern on a two-dimensional plane in accordance with the determined grid scale includes selecting any one grid pattern format from a plurality of grid pattern formats in accordance with a 5-1 probability distribution set in advance for a set of the plurality of preset grid pattern formats, and the 5-1 probability distribution is a multinomial distribution".

The following is a statement of reasons for the indication of allowable subject matter: The cited prior art fails to teach the combination of elements recited in claim 21, including "wherein the drawing of the waveform content of the training signal and the signal marker using the defined drawing function includes: adjusting a value of a drawing element of the drawing function; and drawing the waveform content of the training signal for each channel on a two-dimensional plane with the grid pattern drawn thereon by applying the adjusted drawing element value, the adjusted value is a value selected in accordance with a sixth probability distribution set in advance for the drawing element".

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge, whose telephone number is (571)272-5556. The examiner can normally be reached 8:00 to 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571)272-3022.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIN GE/
Primary Examiner, Art Unit 2619

Prosecution Timeline

Aug 02, 2024
Application Filed
Jan 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this examiner involving similar technology

Patent 12592024
QUANTIFICATION OF SENSOR COVERAGE USING SYNTHETIC MODELING AND USES OF THE QUANTIFICATION
2y 5m to grant Granted Mar 31, 2026
Patent 12586296
METHODS AND PROCESSORS FOR RENDERING A 3D OBJECT USING MULTI-CAMERA IMAGE INPUTS
2y 5m to grant Granted Mar 24, 2026
Patent 12579704
VIDEO GENERATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12573164
DESIGN DEVICE, PRODUCTION METHOD, AND STORAGE MEDIUM STORING DESIGN PROGRAM
2y 5m to grant Granted Mar 10, 2026
Patent 12573151
PERSONALIZED DEFORMABLE MESH BY FINETUNING ON PERSONALIZED TEXTURE
2y 5m to grant Granted Mar 10, 2026


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
98%
With Interview (+18.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 520 resolved cases by this examiner. Grant probability derived from career allow rate.
