DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07/03/2023 is being considered by the examiner.
Drawings
The drawings are objected to because the replacement sheet drawings submitted on 06/21/2023 are not in English. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claims 1 and 16 are objected to because of the following informalities:
In claim 1, line 2, the term “and comprising the steps of:” should be changed to “and comprising:” in order to avoid a typographical and/or clarity issue and to prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA).
In claim 16, line 2, the term “GPU” should be changed to “Graphics Processing Unit (GPU)” in order to avoid a typographical and/or clarity issue.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over SCHIMERT (US 20170193372 A1), hereinafter referenced as SCHIMERT, in view of KLUZNER et al. (US 20130064471 A1), hereinafter referenced as KLUZNER.
Regarding claim 1, SCHIMERT explicitly teaches a partitioning method (Fig. 14, step #1402 transforms time series data into a plurality of segments. Paragraph [0156].), implemented in a processing unit (Fig. 16, #1604 called a processor unit. Paragraph [0173]-SCHIMERT discloses processor unit 1604 is configured to execute instructions for software to perform a number of operations. Processor unit 1604 may comprise a number of processors, a multi-processor core, and/or some other type of processor, depending on the implementation. In some cases, processor unit 1604 may take the form of a hardware unit, such as a circuit system, an application specific integrated circuit (ASIC), a programmable logic device, or some other suitable type of hardware unit.) and comprising the steps of:
acquiring an observation matrix (Fig. 2, #221 called a prognostic distance matrix. Paragraph [0067].) produced from a database comprising observations of parameters made on at least one piece of equipment (Fig. 2. Paragraph [0039]-SCHIMERT discloses during flight, sensor data may be collected by, for example, without limitation, flight recorder 120. The sensor data collected by flight recorder 120 may be retrieved and stored in server system 122 (wherein data collected forms a database and wherein data collected by the sensor is observations of parameters). Further in paragraph [0042]-SCHIMERT discloses the illustrative embodiments provide a health management system that is capable of processing and transforming one or more time series for aircraft 100 into segments. Each of these segments may correspond to a different selected state of interest for aircraft 100. Distances are then computed based on these segments. These distances may be used to create a prognostic distance matrix (wherein the observations are distance measures and the matrix is the prognostic distance matrix). Further in paragraph [0057]-SCHIMERT discloses time series data 210 is used to manage health 211 of component 214 in aircraft 202 (wherein the component #214 is a piece of equipment).), the observation matrix (Fig. 8, #800 called a prognostic distance matrix.) comprising time series (Xi, X2,..., Xp) each comprising elements which are observations of a parameter (Fig. 8, illustrates an observation matrix with time series data in the form of a list of flights over time. Paragraph [0122]-SCHIMERT discloses for each of flights 802, submatrix 810 includes a distance computed for one of P parameters with respect to State 1 806. 
For example, each column of submatrix 810 comprises, for each one of flights 802, a distance between a nominal segment and a segment extracted from time series data for a particular parameter during State 1 806 for the corresponding flight (wherein distance is an observation of a parameter).);
for each time series (Fig. 3. Paragraph [0095]-SCHIMERT discloses processing set of time series 300 generated over multiple flights by segmenting set of time series 300 and computing distances in the manner described in FIG. 2 above may produce effective features for later prognostic activities and better represent information in the data to match analysis goals (wherein processing is conducted for each segment of a time series and wherein a segment of a time series is a time series).):
calculating a distance matrix (Fig. 2, #221 called a prognostic distance matrix.) comprising distance values between the elements of the time series (Fig. 2. Paragraph [0067]-SCHIMERT discloses distance generator 206 transforms plurality of segments 216 into prognostic distance matrix 221. Prognostic distance matrix 221 may be a matrix that comprises distances computed based on pairings. Further in paragraph [0069]-SCHIMERT discloses segment-segment pairing 225 is selected such that distance 222 computed based on segment-segment pairing 225 may help distinguish between nominal performance and non-nominal performance (wherein segments are elements of the time series).), then generating a primary image (Fig. 6, #600 called a plot of distances.) on the basis of said distance matrix (Fig. 6, illustrates an examples of a primary image generated based on information in the prognostic distance matrix. Paragraph [0111]-SCHIMERT discloses FIG. 6, an illustration of a plot of distances over time is depicted in accordance with an illustrative embodiment. In this illustrative example, plot 600 of distances 601 for various pairings is shown with respect to flight number axis 602.),
implementing a learning algorithm (Fig. 6. Paragraph [0042]-SCHIMERT the prognostic distance matrix may be input into a set of machine learning algorithms. This set of machine learning algorithms may evaluate the health of aircraft 100 or one or more components of aircraft 100 using the prognostic distance matrix to generate a digital prognosis about the health of aircraft 100.) for segmenting the primary image (Fig. 6, #600 called a plot of distances) so as to obtain a segmented image comprising patterns delimited by boundaries (Fig. 6. Paragraph [0113]-SCHIMERT discloses second distances 616 represent the distances computed between segments for a second sensor in sensor pair 604 and nominal segments. Further in paragraph [0114]-SCHIMERT discloses portion 618 of first distances 614 diverge from second distances 616, thereby indicating a problem with the first sensor. This problem increases before line 620, which marks a maintenance event (wherein distances #614, #616 are patterns delimited by boundaries and wherein the primary image is segmented to segments #604, #606, #608, #610, and #612.).);
a primary boundary signal (Fig. 6, #614, 616 called distances.) representative of the boundaries (Fig. 6. Paragraph [0114]-SCHIMERT discloses portion 618 of first distances 614 diverge from second distances 616, thereby indicating a problem with the first sensor. This problem increases before line 620, which marks a maintenance event. The maintenance event was the replacement of the first sensor. Portion 622 of first distances 614 represents the flights after the maintenance event and shows that the replacement sensor began operating nominally after the maintenance event (wherein distances #614, #616 are the primary boundary signals representative of a boundary and wherein the boundary is section #604.).);
merging the primary boundary signals (Fig. 6, #614, #616 called distances.) to obtain a global boundary signal (Fig. 6. Paragraph [0114]-SCHIMERT discloses portion 618 of first distances 614 diverge from second distances 616, thereby indicating a problem with the first sensor. This problem increases before line 620, which marks a maintenance event. The maintenance event was the replacement of the first sensor. Portion 622 of first distances 614 represents the flights after the maintenance event and shows that the replacement sensor began operating nominally after the maintenance event (wherein distances #614, #616 are primary boundary signals, #620 is a global boundary signal, and section #622 is a result of merging the signals #614, #616).), and defining classes from the global boundary signal (Fig. 6. Paragraph [0112]-SCHIMERT discloses distances 601 are shown for sensor pair 604, sensor pair 606, sensor pair 608, sensor pair 610, and sensor pair 612 (wherein sensor pairs are classes defined from the global boundary signal).).
SCHIMERT fails to explicitly teach comprising pixels having levels representative of the distance values of the distance matrix; and producing a segmented matrix comprising levels of pixels of the segmented image; defining from the segmented matrix.
However, KLUZNER explicitly teaches comprising pixels having levels representative of the distance values of the distance matrix (Fig. 4C. Paragraph [0029]-KLUZNER discloses FIG. 4C is a pictorial representation of second distance matrix 44 for the second binary image. Further in paragraph [0033]-KLUZNER discloses in FIG. 4C, the intensity of a given element of second distance matrix 44 is directly proportional to a distance between a corresponding pixel in second binary image 26 and contour 90. Black pixels in FIG. 3C indicate pixels either on or within contour 70, and black pixels in FIG. 4C indicate pixels either on or within contour 90.);
and producing a segmented matrix (Fig. 4C, #100 called second distance values of a corresponding subset of the second distance matrix.) comprising levels of pixels of the segmented image (FIG. 4A-C. Paragraph [0018]-KLUZNER discloses image warping application 34 is also configured to convert a second image 42 to second binary image 26, and to calculate a second distance matrix 44 and a second gradient matrix 46 from the second binary image (wherein the segmented image is second binary image #26).);
defining from the segmented matrix (FIG. 4A-C. Paragraph [0018]-KLUZNER discloses image warping application 34 is also configured to convert a second image 42 to second binary image 26, and to calculate a second distance matrix 44 and a second gradient matrix 46 from the second binary image (wherein the segmented matrix is a second distance matrix #44).),
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of SCHIMERT of a partitioning method, implemented in a processing unit and comprising the steps of: acquiring an observation matrix produced from a database comprising observations of parameters made on at least one piece of equipment, the observation matrix comprising time series (X1, X2, ..., Xp) each comprising elements which are observations of a parameter; for each time series: calculating a distance matrix comprising distance values between the elements of the time series, then generating a primary image on the basis of said distance matrix, with the teachings of KLUZNER of comprising pixels having levels representative of the distance values of the distance matrix; and producing a segmented matrix comprising levels of pixels of the segmented image; defining from the segmented matrix.
The combination would result in SCHIMERT’s aircraft health management system comprising pixels having levels representative of the distance values of the distance matrix; producing a segmented matrix comprising levels of pixels of the segmented image; and defining from the segmented matrix.
The motivation behind the modification would have been to obtain an aircraft health management system that enhances the accuracy and efficiency of diagnosing potential issues with components of an aircraft. Both SCHIMERT and KLUZNER calculate distance matrices: in SCHIMERT, it is desirable to have a computer-based system that is capable of evaluating the health of a component in a complex system using time series data, while KLUZNER provides an accurate level of image warping even when there are no predefined features for the two binary images and when the two binary images do not overlap one another. Please see SCHIMERT (US 20170193372 A1), Paragraph [0032], and KLUZNER et al. (US 20130064471 A1), Paragraph [0015].
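Purely as a technical illustration of the claim 1 steps mapped above (not part of the record of either reference), transforming a time series into a distance matrix and then into a grayscale "primary image" can be sketched in Python as follows; the absolute-difference metric and the 8-bit scaling are assumptions chosen for illustration, not features disclosed by SCHIMERT or KLUZNER:

```python
import numpy as np

def distance_matrix(series):
    """Distance values between all pairs of elements of one time series
    (absolute difference is an illustrative choice of metric)."""
    x = np.asarray(series, dtype=float)
    return np.abs(x[:, None] - x[None, :])

def to_primary_image(dist):
    """Map distance values to 8-bit pixel levels (0-255 grayscale)."""
    span = dist.max() - dist.min()
    if span == 0:
        return np.zeros(dist.shape, dtype=np.uint8)
    return ((dist - dist.min()) / span * 255).astype(np.uint8)

series = [1.0, 1.1, 5.0, 5.2]
img = to_primary_image(distance_matrix(series))
```

Each pixel level is proportional to a distance value, which parallels the role of KLUZNER's intensity-proportional distance matrix in the combination.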
Regarding claim 15, SCHIMERT in view of KLUZNER explicitly teach the partitioning method according to claim 1,
SCHIMERT further explicitly teaches a processing unit (Fig. 16, #1604 called processor unit.) comprising at least one processing component (Fig. 16, #1604 called a processor unit. Paragraph [0173]-SCHIMERT discloses processor unit 1604 is configured to execute instructions for software to perform a number of operations. Processor unit 1604 may comprise a number of processors, a multi-processor core, and/or some other type of processor, depending on the implementation. In some cases, processor unit 1604 may take the form of a hardware unit, such as a circuit system, an application specific integrated circuit (ASIC), a programmable logic device, or some other suitable type of hardware unit.), wherein the partitioning method according to claim 1 is implemented (Fig. 16. Paragraph [0174]-SCHIMERT discloses instructions for the operating system, applications, and/or programs run by processor unit 1604 may be located in storage devices 1606. Storage devices 1606 may be in communication with processor unit 1604 through communications framework 1602. As used herein, a storage device, also referred to as a computer readable storage device, is any piece of hardware capable of storing information on a temporary and/or permanent basis. This information may include, but is not limited to, data, program code, and/or other information.).
Regarding Claim 18, SCHIMERT in view of KLUZNER explicitly teach the partitioning method according to claim 1,
SCHIMERT further explicitly teaches a non-transitory computer-readable medium (Fig. 16, #1606 called storage devices. Paragraph [0175]-SCHIMERT discloses memory 1614 and persistent storage 1616 are examples of storage devices 1606. Memory 1614 may take the form of, for example, a random access memory or some type of volatile or non-volatile storage device. Persistent storage 1616 may comprise any number of components or devices. For example, persistent storage 1616 may comprise a hard drive, a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above. The media used by persistent storage 1616 may or may not be removable.), on which a computer program comprising instructions which make a processing unit execute the steps is recorded (Fig. 16. Paragraph [0174]-SCHIMERT discloses instructions for the operating system, applications, and/or programs run by processor unit 1604 may be located in storage devices 1606. Storage devices 1606 may be in communication with processor unit 1604 through communications framework 1602. As used herein, a storage device, also referred to as a computer readable storage device, is any piece of hardware capable of storing information on a temporary and/or permanent basis. This information may include, but is not limited to, data, program code, and/or other information.).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over SCHIMERT (US 20170193372 A1), hereinafter referenced as SCHIMERT, in view of KLUZNER et al. (US 20130064471 A1), hereinafter referenced as KLUZNER, and further in view of BADRINARAYANAN et al. (US 20210182554 A1), hereinafter referenced as BADRINARAYANAN.
Regarding claim 4, SCHIMERT in view of KLUZNER explicitly teach the partitioning method according to claim 1,
SCHIMERT in view of KLUZNER fail to explicitly teach wherein the segmentation of the primary image consists of determining a probability of belonging of the pixels of the primary image to a first class representing the patterns or to a second class representing a background of the primary image.
However, BADRINARAYANAN explicitly teaches wherein the segmentation of the primary image (Fig. 6, called input image I(x,y,c).) consists of determining a probability of belonging of the pixels of the primary image to a first class representing the patterns (Fig. 6. Paragraph [0040]- BADRINARAYANAN discloses eye segmentation data 276 includes an assignment of every pixel of input image I(x,y,c) to a set of classes including background, sclera, pupil, and iris (wherein sclera, pupil, and iris are patterns), which may, in some embodiments, be obtained by taking the last layer of (decoder) neural network 256 and upsampling it to the same resolution as input image I(x,y,c) using deconvolution, which is in turn fed into a softmax cross-entropy loss across feature channels where each feature channel represents the probability of pixels belonging to a certain class.) or to a second class representing a background of the primary image (Fig. 6. Paragraph [0040]- BADRINARAYANAN discloses eye segmentation data 276 includes an assignment of every pixel of input image I(x,y,c) to a set of classes including background, sclera, pupil, and iris).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of SCHIMERT in view of KLUZNER of a partitioning method, implemented in a processing unit and comprising the steps of: acquiring an observation matrix produced from a database comprising observations of parameters made on at least one piece of equipment, the observation matrix comprising time series (X1, X2, ..., Xp) each comprising elements which are observations of a parameter; for each time series: calculating a distance matrix comprising distance values between the elements of the time series, then generating a primary image on the basis of said distance matrix, with the teachings of BADRINARAYANAN of wherein the segmentation of the primary image consists of determining a probability of belonging of the pixels of the primary image to a first class representing the patterns or to a second class representing a background of the primary image.
The combination would result in SCHIMERT’s aircraft health management system wherein the segmentation of the primary image consists of determining a probability of belonging of the pixels of the primary image to a first class representing the patterns or to a second class representing a background of the primary image.
The motivation behind the modification would have been to obtain an aircraft health management system that enhances the accuracy and efficiency of diagnosing potential issues with components of an aircraft. Both SCHIMERT and BADRINARAYANAN relate to the use of machine learning and to analyzing data through a learning algorithm: in SCHIMERT, it is desirable to have a computer-based system that is capable of evaluating the health of a component in a complex system using time series data, while BADRINARAYANAN notes that, whereas errors in end-to-end deep networks can be hard to interpret, intermediate estimates made in each stage using the trained network help with interpretability. Please see SCHIMERT (US 20170193372 A1), Paragraph [0032], and BADRINARAYANAN et al. (US 20210182554 A1), Paragraph [0010].
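Purely as a technical illustration of the claim 4 limitation (not part of the record), per-pixel class probabilities of the kind BADRINARAYANAN describes can be sketched with a softmax over class channels; the two-class layout (pattern vs. background) is an assumption chosen for illustration:

```python
import numpy as np

def pixel_class_probabilities(logits):
    """Softmax over the last axis: logits of shape (H, W, C) become
    per-pixel probabilities of belonging to each of C classes."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Channel 0: "pattern" class; channel 1: "background" (illustrative labels)
logits = np.array([[[2.0, 0.0],
                    [0.0, 2.0]]])  # shape (1, 2, 2): one row, two pixels
probs = pixel_class_probabilities(logits)
```

Each pixel's probabilities sum to one, so assigning the pixel to its highest-probability class yields a segmentation into pattern and background regions.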
Claims 5-6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over SCHIMERT (US 20170193372 A1), hereinafter referenced as SCHIMERT, in view of KLUZNER et al. (US 20130064471 A1), hereinafter referenced as KLUZNER, and further in view of KAMIGUCHI (US 20210003651 A1), hereinafter referenced as KAMIGUCHI.
Regarding claim 5, SCHIMERT in view of KLUZNER explicitly teach the partitioning method according to claim 1,
SCHIMERT in view of KLUZNER fail to explicitly teach wherein the learning algorithm is a U-NET-type convolutional neural network.
However, KAMIGUCHI explicitly teaches wherein the learning algorithm is a U-NET-type convolutional neural network (Fig. 1. Paragraph [0169]-KAMIGUCHI discloses through the optimization function 23, the processing circuitry 2 applies the trained model (U-net) to the sparse low-rank image set of each slice, thereby generating a full low-rank image set of each slice.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of SCHIMERT in view of KLUZNER of a partitioning method, implemented in a processing unit and comprising the steps of: acquiring an observation matrix produced from a database comprising observations of parameters made on at least one piece of equipment, the observation matrix comprising time series (X1, X2, ..., Xp) each comprising elements which are observations of a parameter; for each time series: calculating a distance matrix comprising distance values between the elements of the time series, then generating a primary image on the basis of said distance matrix, with the teachings of KAMIGUCHI of wherein the learning algorithm is a U-NET-type convolutional neural network.
The combination would result in SCHIMERT’s aircraft health management system wherein the learning algorithm is a U-NET-type convolutional neural network.
The motivation behind the modification would have been to obtain an aircraft health management system that enhances the accuracy and efficiency of diagnosing potential issues with components of an aircraft. Both SCHIMERT and KAMIGUCHI relate to data processing involving learning models and health systems: in SCHIMERT, it is desirable to have a computer-based system that is capable of evaluating the health of a component in a complex system using time series data, while KAMIGUCHI provides a method of deep learning to improve the accuracy of analysis. Please see SCHIMERT (US 20170193372 A1), Paragraph [0032], and KAMIGUCHI (US 20210003651 A1), Paragraphs [0004-0005].
Regarding claim 6, SCHIMERT in view of KLUZNER explicitly teach the partitioning method according to claim 1,
SCHIMERT in view of KLUZNER fail to explicitly teach further comprising the step of generating artificial primary images from simulation functions, and of training the learning algorithm by using the artificial primary images.
However, KAMIGUCHI explicitly teaches further comprising the step of generating artificial primary images (Fig. 9, #90 called training data generation apparatus.) from simulation functions (Fig. 9. Paragraph [0096]-KAMIGUCHI discloses the training data generation apparatus 90 generates a pair of a full low-rank image set and a sparse low-rank image set through Bloch imaging simulation using a numerical phantom. The full low-rank image set may be generated directly from the Bloch simulation in real space instead of performing imaging simulation (wherein Bloch imaging simulation using a numerical phantom involves the use of simulation functions and wherein artificial primary images are low-rank images).), and of training the learning algorithm by using the artificial primary images (Fig. 9. Paragraph [0098]-KAMIGUCHI discloses the training data storage apparatus can store training data created by actually performing full sampling imaging and sparse sampling imaging in addition to the training data generated through simulation. Thus, it is possible to perform training using the training data generated through simulation or the training data obtained through actual imaging, or both. The model training apparatus 94 generates a trained model by training a machine learning model based on training data stored in the training data storage apparatus 92, according to a model training program (wherein artificial primary images are training data generated through simulation).).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of SCHIMERT in view of KLUZNER of a partitioning method, implemented in a processing unit and comprising the steps of: acquiring an observation matrix produced from a database comprising observations of parameters made on at least one piece of equipment, the observation matrix comprising time series (X1, X2, ..., Xp) each comprising elements which are observations of a parameter; for each time series: calculating a distance matrix comprising distance values between the elements of the time series, then generating a primary image on the basis of said distance matrix, with the teachings of KAMIGUCHI of further comprising the step of generating artificial primary images from simulation functions, and of training the learning algorithm by using the artificial primary images.
The combination would result in SCHIMERT’s aircraft health management system further comprising the step of generating artificial primary images from simulation functions, and of training the learning algorithm by using the artificial primary images.
The motivation behind the modification would have been to obtain an aircraft health management system that enhances the accuracy and efficiency of diagnosing potential issues with components of an aircraft. Both SCHIMERT and KAMIGUCHI relate to data processing involving learning models and health systems: in SCHIMERT, it is desirable to have a computer-based system that is capable of evaluating the health of a component in a complex system using time series data, while KAMIGUCHI provides a method of deep learning to improve the accuracy of analysis. Please see SCHIMERT (US 20170193372 A1), Paragraph [0032], and KAMIGUCHI (US 20210003651 A1), Paragraphs [0004-0005].
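Purely as a technical illustration of the claim 6 limitation (not part of the record), artificial primary images can be generated from simple simulation functions; the piecewise-constant form below is an assumption chosen to echo the simulation-generated training data KAMIGUCHI describes, not a form disclosed by either reference:

```python
import numpy as np

def piecewise_constant(n, breakpoints, levels):
    """Simulated time series that holds a constant level on each segment."""
    x = np.empty(n)
    start = 0
    for end, level in zip(list(breakpoints) + [n], levels):
        x[start:end] = level
        start = end
    return x

def artificial_primary_image(series):
    """Distance matrix of the simulated series, usable as a training image."""
    s = np.asarray(series, dtype=float)
    return np.abs(s[:, None] - s[None, :])

# One change-point at index 3: level 0.0 then level 1.0 over six samples
img = artificial_primary_image(piecewise_constant(6, [3], [0.0, 1.0]))
```

The known change-point locations of the simulated series give ground-truth boundaries for training the segmentation algorithm.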
Regarding claim 16, SCHIMERT in view of KLUZNER explicitly teach the processing unit according to claim 15,
SCHIMERT in view of KLUZNER fail to explicitly teach comprising a GPU, wherein at least the learning algorithm is implemented to segment the primary images.
However, KAMIGUCHI explicitly teaches comprising a GPU (Fig. 9, #94 called model training apparatus.), wherein at least the learning algorithm is implemented to segment the primary images (Paragraph [0098]-KAMIGUCHI discloses another model training apparatus 94 may be a computer, such as a workstation, including a general purpose processor such as a central processing unit (CPU) or a graphics processing unit (GPU), or including a processor exclusively configured for machine learning.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of SCHIMERT in view of KLUZNER of a partitioning method, implemented in a processing unit and comprising the steps of: acquiring an observation matrix produced from a database comprising observations of parameters made on at least one piece of equipment, the observation matrix comprising time series (X1, X2, ..., Xp) each comprising elements which are observations of a parameter; for each time series: calculating a distance matrix comprising distance values between the elements of the time series, then generating a primary image on the basis of said distance matrix, with the teachings of KAMIGUCHI of comprising a GPU, wherein at least the learning algorithm is implemented to segment the primary images.
The combination would result in SCHIMERT’s aircraft health management system comprising a GPU, wherein at least the learning algorithm is implemented to segment the primary images.
The motivation behind the modification would have been to obtain an aircraft health management system that enhances the accuracy and efficiency of diagnosing potential issues with components of an aircraft. Both SCHIMERT and KAMIGUCHI relate to data processing involving learning models and health systems: in SCHIMERT, it is desirable to have a computer-based system that is capable of evaluating the health of a component in a complex system using time series data, while KAMIGUCHI provides a method of deep learning to improve the accuracy of analysis. Please see SCHIMERT (US 20170193372 A1), Paragraph [0032], and KAMIGUCHI (US 20210003651 A1), Paragraphs [0004-0005].
Allowable Subject Matter
Claims 2, 7-8, and 10, along with their dependent claims 9 and 11-14, are objected to as being dependent upon a rejected base claim (claim 1), but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, once the drawing and claim objections are overcome.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 2, the prior art fails to explicitly teach wherein the distance matrix is a Gram matrix, such that:

[Gram matrix formula reproduced as an image (media_image1.png) in the original record]

where X_i is a time series, the (x_k^i), 1 ≤ k ≤ n, are the elements of said time series, and where d(x_l^i, x_m^i) is a distance value between x_l^i and x_m^i, with 1 ≤ l ≤ n and 1 ≤ m ≤ n, as claimed in claim 2.
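Purely as a technical illustration of the claim 2 limitation (not part of the reasons for allowance), a Gram-style matrix G[l, m] = d(x_l, x_m) over the elements of one time series can be sketched as follows; the distance function d is a parameter, with absolute difference used only as an illustrative default:

```python
import numpy as np

def gram_distance_matrix(x, d=lambda a, b: abs(a - b)):
    """G[l, m] = d(x_l, x_m) for all pairs of elements of the series."""
    n = len(x)
    return np.array([[d(x[l], x[m]) for m in range(n)] for l in range(n)])

G = gram_distance_matrix([0.0, 2.0, 3.0])
```

For any metric d, the resulting matrix is symmetric with a zero diagonal.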
Regarding claim 7, the prior art fails to explicitly teach wherein the simulation functions are piecewise continuous constant functions or multi-slope increasing functions, as claimed in claim 7.
Regarding claim 8, the prior art fails to explicitly teach wherein the definition of the primary boundary signal comprises the step of calculating a statistical function on sets of elements which each comprise elements of one of the anti-diagonals of the segmented matrix, as claimed in claim 8.
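Purely as a technical illustration of the claim 8 limitation (not part of the reasons for allowance), a statistical function applied over the anti-diagonals of a square matrix can be sketched as follows; the mean is an illustrative choice of statistic:

```python
import numpy as np

def antidiagonal_statistic(matrix, stat=np.mean):
    """Apply a statistic to each anti-diagonal of a square matrix.
    Flipping left-right turns anti-diagonals into ordinary diagonals."""
    m = np.asarray(matrix, dtype=float)
    flipped = np.fliplr(m)
    n = m.shape[0]
    return [float(stat(np.diagonal(flipped, offset=k)))
            for k in range(n - 1, -n, -1)]

signal = antidiagonal_statistic([[1, 2],
                                 [3, 4]])
```

Each entry of the resulting signal summarizes one anti-diagonal of the segmented matrix, producing a one-dimensional boundary signal from the two-dimensional matrix.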
Regarding claim 10, the prior art fails to explicitly teach wherein the merging of the primary boundary signals consists of summing the primary boundary signals to obtain a summed boundary signal, then of estimating an empirical density function of the summed boundary signal to produce the global boundary signal, as claimed in claim 10.
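Purely as a technical illustration of the claim 10 limitation (not part of the reasons for allowance), summing primary boundary signals and then estimating an empirical density of the summed signal can be sketched as follows; the Gaussian-kernel smoothing is an illustrative choice of density estimator:

```python
import numpy as np

def merge_boundary_signals(signals, bandwidth=1.0):
    """Sum the primary boundary signals, then kernel-smooth and normalize
    the summed signal into an empirical-density-style global signal."""
    summed = np.sum(signals, axis=0)
    t = np.arange(len(summed), dtype=float)
    kernel = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
    density = kernel @ summed
    total = density.sum()
    return density / total if total else density

primary = [np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0])]
global_signal = merge_boundary_signals(primary)
```

Peaks of the normalized global signal mark positions where several primary boundary signals agree on a boundary.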
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure and is listed below.
PATTNAIK et al. (US 20210090239 A1) – Disclosed herein are methods and systems that determine a microfacies or a microfacies characteristic of a sample of a subterranean formation based on a segmented image of a petrographic image of the sample, wherein the segmented image is derived from the petrographic image using a machine-learning algorithm.
Narabu (US 8817121 B2) – An imaging apparatus includes: an image sensor in which plural pixels having a photoelectric conversion function are arranged; a light guiding unit including plural optical system windows that guide light from an object to the respective pixels of the image sensor; and a signal processing unit that performs signal processing based on imaging information of the image sensor, wherein the signal processing unit obtains distance information of the object based on the imaging information of the image sensor and generates an image in response to a distance of the object based on the distance information.
FAN et al. (US 20230222654 A1) – Machine learning systems and methods are disclosed for prediction of wound healing, such as for diabetic foot ulcers or other wounds, and for assessment implementations such as segmentation of images into wound regions and non-wound regions. Systems for assessing or predicting wound healing can include a light detection element configured to collect light of at least a first wavelength reflected from a tissue region including a wound, and one or more processors configured to generate an image based on a signal from the light detection element having pixels depicting the tissue region, determine reflectance intensity values for at least a subset of the pixels, determine one or more quantitative features of the subset of the plurality of pixels based on the reflectance intensity values, and generate a predicted or assessed healing parameter associated with the wound over a predetermined time interval.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ETHAN N WOLFSON whose telephone number is (571)272-1898. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ETHAN N WOLFSON/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673