DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/19/2026 has been entered.
Response to Arguments
Applicant’s arguments, see Remarks filed 03/19/2026, with respect to the rejection(s) of claim 1 under 35 U.S.C. 103 have been fully considered and are persuasive. The combination of Guzman (US 20190310207 A1), Yamada (US 20220196615 A1), and Vitry (Encoding Time Series as Images; 2018) does not teach converting the multivariate series data to a two-dimensional line plot or a Gramian angular summation field. Therefore, the rejection has been withdrawn. However, Guzman does teach a plurality of detectors, as Para. [0049] describes an array detector containing multiple diodes in an array, which is viewed as a plurality of detectors. Furthermore, Para. [0049] teaches that the detector generates frequency-shift and intensity data points (x, y), where frequency and intensity are viewed as two variables; the data generated is therefore multivariate. Yamada similarly teaches an array detector that outputs time-series data, as well as a tandem mass spectrometer, which can be viewed as two detectors that output multivariate data because they output mass-to-charge ratios and intensity values. As a result, upon further consideration, a new ground(s) of rejection is made in view of Guzman (US 20190310207 A1), Yamada (US 20220196615 A1), and Yang (Multivariate Time Series Data Transformation for Convolutional Neural Network; 2019).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4, 10, 12, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Guzman (US 20190310207 A1) as modified by Yamada (US 20220196615 A1), and Yang (Multivariate Time Series Data Transformation for Convolutional Neural Network; 2019).
Regarding claim 1,
Guzman teaches,
A method, comprising:
analytically characterizing a chemical process or a product generated by the chemical process wherein analytically characterizing the product comprises at least one of a group of analytical characterizations including liquid chromatography, gas chromatography, thermal gradient chromatography, size-exclusion chromatography, calorimetry, rheology, optical spectroscopy, mass spectroscopy, viscometry, particle sizing, and nuclear magnetic resonance spectroscopy; (Abstract teaches “The methods involve Raman spectroscopy and artificial intelligence to compute polymer properties and/or features.” (i.e. Raman spectroscopy is analogous to optical spectroscopy, and polymers are generated by a chemical process.))
receiving the prediction of the property of the product from the ANN; (Para. [0021] teaches “Instead, the disclosed method uses artificial intelligence, more specifically machine learning techniques, to develop a model capable of predicting relevant polymer properties/features from the Raman spectrum. Furthermore, by employing an online non-destructive fingerprinting method and artificial intelligence, real-time estimations of product specifications can be obtained, thus reducing the time and costs associated with using the conventional laboratory equipment for quality control.”)
wherein the system includes a plurality of detectors and wherein the series data comprises multivariate data corresponding to the plurality of detectors. (Para. [0049] teaches “The dispersed Raman scattering is imaged onto a detector. The choice of detector is easily made by one skilled in the art, taking into account various factors such as resolution, sensitivity to the appropriate frequency range, and response time. Typical detectors include array detectors generally used with fixed-dispersive monochromators, such as diode arrays or charge coupled devices (CCDs), or single element detectors generally used with scanning-dispersive monochromators, such as lead sulfide detectors and indium-gallium-arsenide detectors. In the case of array detectors, the detector is calibrated such that the frequency (wavelength) corresponding to each detector element is known.” Array detectors contain multiple detectors. Para. [0027] further teaches “many possible variables are generated”)
and adjusting the chemical process or rejecting the product based on the prediction of the property of the product. (Para. [0039] teaches “Accordingly, the parameters for the polymer production process can be adjusted by the polymer property computing device to achieve the polymer with the desired properties or features.” Para. [0003] teaches “If the polymer does not meet the specifications, the manufacturing lot is rejected, and the process engineers take corrective actions.”)
Guzman does not explicitly teach,
with a plurality of detectors thereby generating multivariate series data;
converting the multivariate series data to an image, wherein converting the series data to the image comprises converting the series data to a two-dimensional line plot or a Gramian angular summation field;
inputting the image to an artificial neural network (ANN), comprising a two-dimensional image input network, trained to predict a property of the product based on the image;
Nevertheless, Yamada teaches,
with a plurality of detectors thereby generating multivariate series data; (Para. [0046] teaches “The image generation unit 21 creates a chromatogram based on the chromatogram waveform data as a time-series signal,” Para. [0116] teaches “With the analyzer according to the seventh and the eighth aspects, the predetermined analysis corresponds to, typically, a chromatograph analysis using a photodiode array detector capable of detecting multiple wavelengths simultaneously as a detector, or a chromatograph analysis using a tandem mass spectrometer as a detector.”)
converting series data to an image; (Para. [0046] teaches “The image generation unit 21 creates a chromatogram based on the chromatogram waveform data as a time-series signal, and converts the chromatogram waveform (chromatogram curve) indicating a change in signal intensity over time into a two-dimensional image having a pixel,”)
inputting the image to an artificial neural network (ANN), comprising a two-dimensional image input network, trained to predict a property of the product based on the image; (Para. [0043] teaches “Schematically speaking, the peak detection processing unit 120 converts the chromatogram waveform (a chromatogram curve) into a two-dimensional image, and based on a deep learning method as a method of machine learning to detect a category and a position of an object seen in the image, detects the positions of the start point and the end point of each of the peak or peaks.” (i.e. predicting the position of peaks to determine the compounds in the sample; also see Figs. 10-13.) Para. [0049] teaches “the machine learning by using a plurality of images generated from the chromatogram waveforms as the learning data.” (i.e. the learning data is viewed as training the neural network.))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Guzman with a plurality of detectors thereby generating multivariate series data; converting the series data to an image; and inputting the image to an artificial neural network (ANN), comprising a two-dimensional image input network, trained to predict a property of the product based on the image, such as that of Yamada.
One of ordinary skill would have been motivated to modify Guzman because according to Para. [0021] of Yamada “With this configuration, it is possible to reduce the workload required of the operator with regard to the qualitative analysis or the quantitative analysis in the simultaneous analysis of the multiple components, and thus to efficiently perform the analysis.”
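For context on the waveform-to-image conversion Yamada describes (rendering a time-series signal as a two-dimensional image of pixels), the following is a minimal sketch. It is not taken from any cited reference; the helper name and the nearest-pixel rasterization scheme are assumptions for illustration only.

```python
import numpy as np

def series_to_line_image(series, height=64):
    """Rasterize a 1-D time series into a simple 2-D line-plot image.

    Each time step becomes one column; the pixel row corresponding to
    the normalized signal intensity is set to 1 (row 0 is the top).
    """
    x = np.asarray(series, dtype=float)
    # Normalize intensities to [0, 1], guarding against a flat series.
    rng = x.max() - x.min()
    x = (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    rows = ((height - 1) * (1 - x)).round().astype(int)
    img = np.zeros((height, len(x)))
    img[rows, np.arange(len(x))] = 1.0
    return img
```

Under this scheme, a multivariate signal from a plurality of detectors could be rendered as one such image per channel before being input to a two-dimensional image input network.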
The combination of Guzman and Yamada does not explicitly teach,
converting the multivariate series data to an image, wherein converting the series data to the image comprises converting the series data to a two-dimensional line plot or a Gramian angular summation field.
Yang teaches,
converting the multivariate series data to an image, wherein converting the series data to the image comprises converting the series data to a two-dimensional line plot or a Gramian angular summation field; (Abstract teaches “In this research, Gramian Angular Summation Field (GASF) and Gramian Angular Difference Field (GADF) were applied to encode time series into images. The proposed image aggregation method which appends multiple images into a single image is suggested. After transformation and aggregation, the 2-D images passed through a convolutional neural network (CNN), which is outstanding in solving computer vision problems, for classification.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Guzman and Yamada with converting the multivariate series data to an image, wherein converting the series data to the image comprises converting the series data to a two-dimensional line plot or a Gramian angular summation field such as that of Yang.
One of ordinary skill would have been motivated to modify the combination of Guzman and Yamada because Gramian angular summation fields preserve the time information in the data, allowing for a more comprehensive analysis of the sample.
Regarding claim 2,
Guzman further teaches,
the method of claim 1, further comprising adjusting the chemical process and rejecting the product based on the prediction of the property of the product. (Para. [0039] teaches “Accordingly, the parameters for the polymer production process can be adjusted by the polymer property computing device to achieve the polymer with the desired properties or features.” Para. [0003] teaches “If the polymer does not meet the specifications, the manufacturing lot is rejected, and the process engineers take corrective actions.”)
Regarding claim 4,
Guzman further teaches,
the method of claim 1,
wherein receiving the prediction of the property of the product comprises receiving the prediction of one of a group of properties including molecular weight, density, quality, performance, and identification. (Para. [0039] teaches “These parameters include, but are not limited to, the amount/concentration of the reactants (e.g., propylene, ethylene, hydrogen), additives, and polymerization catalyst; temperature; and pressure.” (i.e. amount and concentration are viewed as identification.))
Regarding claim 10,
Guzman teaches,
a plurality of detectors configured to: (Para. [0049] teaches “The dispersed Raman scattering is imaged onto a detector. The choice of detector is easily made by one skilled in the art, taking into account various factors such as resolution, sensitivity to the appropriate frequency range, and response time. Typical detectors include array detectors generally used with fixed-dispersive monochromators, such as diode arrays”)
analytically characterize a product generated by a chemical process; wherein analytically characterizing the product comprises at least one of a group of analytical characterizations including liquid chromatography, gas chromatography, thermal gradient chromatography, size exclusion chromatography, calorimetry, rheology, optical spectroscopy, mass spectroscopy, viscometry, particle sizing, and nuclear magnetic resonance spectroscopy; (Abstract teaches “The methods involve Raman spectroscopy and artificial intelligence to compute polymer properties and/or features.” (i.e. Raman spectroscopy is analogous to optical spectroscopy, and polymers are generated by a chemical process.))
and a controller coupled to the detector and to the ANN, wherein the controller is configured to: (Para. [0045] teaches “The dispersed Raman scattered light is then imaged onto a detector and subsequently processed in by the polymer property computing device, as further described below.” Para. [0024] teaches “algorithms for computing one or more polymer properties or features include, but are not limited to, Logistic Regression, Naive Bayes, Neural Networks,”)
receive the prediction of the property of the product from the ANN; (Para. [0021] teaches “Instead, the disclosed method uses artificial intelligence, more specifically machine learning techniques, to develop a model capable of predicting relevant polymer properties/features from the Raman spectrum. Furthermore, by employing an online non-destructive fingerprinting method and artificial intelligence, real-time estimations of product specifications can be obtained, thus reducing the time and costs associated with using the conventional laboratory equipment for quality control.”)
and adjust the chemical process or reject the product. (Para. [0039] teaches “Accordingly, the parameters for the polymer production process can be adjusted by the polymer property computing device to achieve the polymer with the desired properties or features.” Para. [0003] teaches “If the polymer does not meet the specifications, the manufacturing lot is rejected, and the process engineers take corrective actions.”)
Guzman does not explicitly teach,
generate series data from the analytical characterization;
an artificial neural network (ANN), comprising a two-dimensional image input network, trained with a plurality of images of converted series data from prior products generated by the chemical process to predict a property of the product based on an image converted from the series data;
wherein the plurality of detectors are thereby configured to generate multivariate series data from the analytical characterization;
convert the series data to the image, wherein converting the series data to the image comprises converting the series data to a two-dimensional line plot or a Gramian angular summation field;
input the image to the ANN.
Nevertheless, Yamada teaches,
wherein the plurality of detectors are thereby configured to generate multivariate series data from the analytical characterization; (Para. [0046] teaches “The image generation unit 21 creates a chromatogram based on the chromatogram waveform data as a time-series signal,” Para. [0116] teaches “With the analyzer according to the seventh and the eighth aspects, the predetermined analysis corresponds to, typically, a chromatograph analysis using a photodiode array detector capable of detecting multiple wavelengths simultaneously as a detector, or a chromatograph analysis using a tandem mass spectrometer as a detector.”)
an artificial neural network (ANN), comprising a two-dimensional image input network, trained with a plurality of images of converted series data from prior products generated by the chemical process to predict a property of the product based on an image converted from the series data; (Para. [0043] teaches “Schematically speaking, the peak detection processing unit 120 converts the chromatogram waveform (a chromatogram curve) into a two-dimensional image, and based on a deep learning method as a method of machine learning to detect a category and a position of an object seen in the image, detects the positions of the start point and the end point of each of the peak or peaks.” (i.e. predicting the position of peaks to determine the compounds in the sample; see Figs. 10-13.) Para. [0049] teaches “the machine learning by using a plurality of images generated from the chromatogram waveforms as the learning data.” (i.e. the learning data is viewed as prior products.))
convert the series data to the image; (Para. [0046] teaches “The image generation unit 21 creates a chromatogram based on the chromatogram waveform data as a time-series signal, and converts the chromatogram waveform (chromatogram curve) indicating a change in signal intensity over time into a two-dimensional image having a pixel,”)
input the image to the ANN; (Para. teaches “peak position presumption unit 122 applies the learned model stored in the learned model storage unit 123 to the pixel value of each of the pixels in the image generated, so as to acquire the five-dimensional information for each of the 120 segments. In other words, the peak position presumption unit 122 acquires the information regarding the pixel locations estimated as the start point and the end point of each of the peak or peaks, together with the confidence for detecting the corresponding peak”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Guzman with generate series data from the analytical characterization; an artificial neural network (ANN), comprising a two-dimensional image input network, trained with a plurality of images of converted series data from prior products generated by the chemical process to predict a property of the product based on an image converted from the series data; convert the series data to the image, and input the image to the ANN such as that of Yamada.
One of ordinary skill would have been motivated to modify Guzman because according to Para. [0021] of Yamada “With this configuration, it is possible to reduce the workload required of the operator with regard to the qualitative analysis or the quantitative analysis in the simultaneous analysis of the multiple components, and thus to efficiently perform the analysis.”
The combination of Guzman and Yamada does not explicitly teach,
convert the multivariate series data to the image, wherein converting the multivariate series data to the image comprises converting the multivariate series data to a two-dimensional line plot or a Gramian angular summation field.
Yang teaches,
convert the multivariate series data to the image, wherein converting the multivariate series data to the image comprises converting the multivariate series data to a two-dimensional line plot or a Gramian angular summation field; (Abstract teaches “In this research, Gramian Angular Summation Field (GASF) and Gramian Angular Difference Field (GADF) were applied to encode time series into images. The proposed image aggregation method which appends multiple images into a single image is suggested. After transformation and aggregation, the 2-D images passed through a convolutional neural network (CNN), which is outstanding in solving computer vision problems, for classification.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Guzman and Yamada to convert the multivariate series data to the image, wherein converting the multivariate series data to the image comprises converting the multivariate series data to a two-dimensional line plot or a Gramian angular summation field, such as that of Yang.
One of ordinary skill would have been motivated to modify the combination of Guzman and Yamada because Gramian angular summation fields preserve the time information in the data, allowing for a more comprehensive analysis of the sample.
Regarding claim 12,
Guzman further teaches,
the system of claim 10, wherein the detector comprises one of a group of detectors including a concentration sensitive detector, a molecular weight sensitive detector, a composition sensitive detector, and combinations thereof. (Para. [0060] further teaches “The polymer property can be any property relating to the polymer that one skilled in the art can measure analytically through Raman spectroscopy, including molecular weight, melt flow rate, lamellar thickness, crystallinity, xylene solubles, mechanical properties (e.g., tensile or compressive properties), and combinations thereof.”)
Regarding claim 13,
Guzman further teaches,
the system of claim 10,
wherein the controller is configured to adjust the chemical process and reject the product. (Para. [0039] teaches “Accordingly, the parameters for the polymer production process can be adjusted by the polymer property computing device to achieve the polymer with the desired properties or features.” Para. [0003] teaches “If the polymer does not meet the specifications, the manufacturing lot is rejected, and the process engineers take corrective actions.”)
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Guzman (US 20190310207 A1), Yamada (US 20220196615 A1), and Yang (Multivariate Time Series Data Transformation for Convolutional Neural Network; 2019) as applied to claim 1 above, and further in view of Colby (US 20200176087 A1).
Regarding claim 3,
Guzman does not explicitly teach,
The method of claim 1, wherein the ANN is pretrained to identify a feature in any image; and wherein the method further comprises training the ANN via transfer learning with a plurality of images of converted series data from prior products generated by the chemical process such that the feature comprises the property of the product.
Yamada further teaches,
wherein the ANN is pretrained to identify a feature in any image; (Para. [0050] teaches “The SSD method uses a convolutional neural network (CNN) that is most widely used in the deep learning, and currently represents an algorithm capable of the image recognition at highest speed and at highest accuracy.” (i.e. the algorithm is widely used and is therefore seen as capable of identifying any feature in any image.))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Guzman, Yamada, and Yang wherein the ANN is pretrained to identify a feature in any image such as that of Yamada.
One of ordinary skill would have been motivated to modify the combination of Guzman, Yamada, and Yang because, according to Para. [0050] of Yamada, the neural network used is capable of high speed and high accuracy.
The combination of Guzman, Yamada, and Yang does not explicitly teach,
and wherein the method further comprises training the ANN via transfer learning with a plurality of images of converted series data from prior products generated by the chemical process such that the feature comprises the property of the product.
Colby further teaches,
and wherein the method further comprises training the ANN via transfer learning with a plurality of images of converted series data from prior products generated by the chemical process such that the feature comprises the property of the product. (Para. [0009] teaches “Training includes processing a cascade of transfer learning iterations comprising: a first dataset of unlabeled structures, a second dataset of properties calculated in silico and a third dataset of limited experimental data for fine tuning”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Guzman, Yamada, and Yang and wherein the method further comprises training the ANN via transfer learning with a plurality of images of converted series data from prior products generated by the chemical process such that the feature comprises the property of the product such as that of Colby.
One of ordinary skill would have been motivated to modify the combination in view of Colby because, according to Para. [0025] of Colby, “Through a cascade of transfer learning iterations, a network is able to learn as much as possible from each dataset, enabling success with progressively smaller datasets without overfitting.”
Claims 8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Guzman (US 20190310207 A1), Yamada (US 20220196615 A1), and Yang (Multivariate Time Series Data Transformation for Convolutional Neural Network; 2019) as applied to claims 1 and 10 above, and further in view of Sharma (DeepInsight: A methodology to transform a non-image data to an image for convolution neural network architecture; 2019).
Regarding claim 8,
The combination of Guzman, Yamada, and Yang does not explicitly teach,
the method of claim 1, wherein converting the series data to the image comprises converting the series data without preprocessing the series data.
Nevertheless, Sharma further teaches,
wherein converting the series data to the image comprises converting the series data without preprocessing the series data. (The Abstract teaches “Here we propose, DeepInsight, which converts non-image samples into a well-organized image-form” Sharma does not state that the data is preprocessed before applying DeepInsight, which is their conversion method, except when the data dimensionality is extremely large and difficult to handle.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Guzman, Yamada, and Yang wherein converting the series data to the image comprises converting the series data without preprocessing the series data such as that of Sharma.
One of ordinary skill would have been motivated to modify the combination of Guzman, Yamada, and Yang, because not preprocessing the data would streamline the system and increase its efficiency as there would be one less step before converting the series data to the image data.
Regarding claim 14,
The combination of Guzman, Yamada, and Yang does not explicitly teach,
the system of claim 10, wherein the controller is configured to convert the series data to the image without preprocessing the series data.
Nevertheless, Sharma teaches,
wherein the controller is configured to convert the series data to the image without preprocessing the series data. (The Abstract teaches “Here we propose, DeepInsight, which converts non-image samples into a well-organized image-form” Sharma does not state that the data is preprocessed before applying DeepInsight, which is their conversion method, except when the data dimensionality is extremely large and difficult to handle.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Guzman, Yamada, and Yang such that the controller is configured to convert the series data to the image without preprocessing the series data, such as that of Sharma.
One of ordinary skill would have been motivated to modify the combination of Guzman, Yamada, and Yang, because not preprocessing the data would streamline the system and increase its efficiency as there would be one less step before converting the series data to the image data.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA L FORRISTALL whose telephone number is 703-756-4554. The examiner can normally be reached Monday-Friday, 8:30 AM-5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Schechter, can be reached at 571-272-2302. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSHUA L FORRISTALL/Examiner, Art Unit 2857
/ANDREW SCHECHTER/Supervisory Patent Examiner, Art Unit 2857